Phonetics is essentially concerned with the physical aspects of spoken interaction: the way in which sounds are produced and transmitted from speaker to hearer. As such, it deals with how sounds are articulated and filtered by different media before they arrive at the hearer’s ear, as well as how they are received and decoded. It thus tries to investigate and represent the physical reality of speech sounds through exact measurement and precise ways of representing their features. To achieve this, it often employs techniques that make speech visible, such as waveforms and spectrograms. We’ll find out more about these as we go along, especially where they help to explain how certain sounds are produced or manipulated, or to establish exactly which sound occurs in a given environment when this is difficult to determine by ear alone.
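To make the idea of such visualisation techniques concrete, here is a minimal sketch of how a spectrogram is computed from a waveform. It uses a synthetic signal rather than real speech, and the choice of the `scipy` library, the sampling rate, and the component frequencies (a rough stand-in for a fundamental plus formant-like overtones) are all illustrative assumptions, not anything prescribed by the text.

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative assumption: a 1-second synthetic "vowel-like" waveform
# sampled at 16 kHz, built from a 120 Hz fundamental plus two weaker
# overtones standing in for formant energy.
fs = 16000
t = np.linspace(0, 1, fs, endpoint=False)
signal = (np.sin(2 * np.pi * 120 * t)
          + 0.5 * np.sin(2 * np.pi * 720 * t)
          + 0.3 * np.sin(2 * np.pi * 1200 * t))

# The spectrogram slices the waveform into short overlapping windows
# and measures the energy at each frequency within each window.
freqs, times, power = spectrogram(signal, fs=fs, nperseg=512)

# power[i, j] is the energy at freqs[i] Hz in the window centred at
# times[j]; averaged over time, the strongest band sits near 120 Hz.
peak_hz = freqs[power.mean(axis=1).argmax()]
```

Plotted as an image (time on the x-axis, frequency on the y-axis, energy as darkness), this `power` array is exactly the kind of spectrogram display we will meet later in the book.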
Phonology, on the other hand, is concerned with the regularities in the sound patterns that speakers of particular languages produce in order to communicate effectively. It works with more abstract models of human speech and language, and tries to ignore the non-functional elements that accompany the production of sounds. It often attempts to capture the functional elements through more or less complex rules, explaining why certain patterns are used and how different rules interact with one another.
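To give a flavour of what such a rule might look like, here is a toy illustration (my own, not taken from the text): the well-known "flapping" rule of many North American accents, by which /t/ between vowels is realised as the flap [ɾ], expressed as a context-sensitive rewrite. The vowel inventory and the function name are simplifying assumptions for the sketch.

```python
import re

# Illustrative, deliberately incomplete set of vowel symbols.
VOWELS = "aeiouæɑɒʌɪʊə"

def apply_flapping(transcription: str) -> str:
    """Rewrite t as the flap ɾ whenever it is flanked by vowels,
    as in North American pronunciations of 'water'."""
    return re.sub(f"(?<=[{VOWELS}])t(?=[{VOWELS}])", "ɾ", transcription)

print(apply_flapping("wɑtə"))   # flanked by vowels: rule applies
print(apply_flapping("stɑp"))   # preceded by /s/: rule does not apply
```

The point of the sketch is the shape of the statement, not its coverage: a phonological rule names a target sound, a change, and the environment in which the change applies.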
We’ll begin our exploration by looking at some of the physical aspects of speech, and will then gradually move on to understanding how different language or accent systems work, which features they treat as essential and which they neglect, and how this may affect issues like mutual intelligibility between speakers of different accents of English.