Linguistics is the scholarly or scientific study of language. There are many different schools or flavours of linguistics, but what they all have in common is that they try to identify how language(s) work(s), how they can be learnt, and, perhaps most importantly, how they can be investigated and described.
Throughout your studies you will encounter a multitude of different terms used to achieve these purposes, and you will find that different linguists often use these in different and sometimes confusing ways. In some cases a good linguistics dictionary, such as David Crystal’s A Dictionary of Linguistics and Phonetics, will help you to eliminate some of this confusion, but certainly not all of it. One of the explicit aims of this course, however, is to give you appropriate means of handling this dilemma by introducing you to all the core areas of linguistics, along with some relatively common terminology associated with them. Another important aim is that you should learn ‘the tools of the trade’, i.e. the means for analysing language in a structured and systematic way. Once you have achieved these two aims, we can move on to the final one, which is to enable you to reflect critically upon both the terminology and the methodology you have learnt, and to apply them in an independent and productive way.
Another aspect of this course is that you will be introduced, as much as possible, to the idea of working with realistic language data. This is perhaps one of the key features of modern-day linguistics, as opposed to old-fashioned, pure ‘armchair linguistics’, where linguists used only to invent their own examples rather than making recourse to whatever corpora (sing. corpus) of naturally occurring language existed in their time. Of course, this approach also means that you will often have to understand how you can gain access to this type of (usually electronic) data, which is where we shall try to establish a strong link between language and computers. To some of you this idea may appear a bit scary at first, but ‘thinking’ like a computer, or rather understanding how computers may be able to process language, will not only enable you to understand the complexity of human language better, but also give you an edge in terms of the practical use you will be able to draw from a degree in linguistics (or at least one that involves linguistics, if you only take it as a minor). After all, categorising the seemingly endless flood of language data from the World Wide Web and other sources in terms of its content is perhaps one of the most important challenges of our times, and many improvements to search engines, spell-checkers, dictionaries or language-learning programs are only conceivable if we can learn to incorporate a significant amount of knowledge about language (i.e. linguistics) into them.
The following (very brief) list is intended to give you a very general idea of which areas of linguistics can be assumed to provide the basis for all linguistic description, and what their particular significance is. Although many linguists specialise in only one field, in order to achieve a full analysis of a particular phenomenon it is often necessary, and usually advisable, to consider multiple core areas, so that a problem can be investigated from all possible directions, since, more often than not, what happens on one level will also influence what happens on another.
Crystal, D. (2008). A Dictionary of Linguistics and Phonetics (6th ed.). Oxford: Blackwell.