My earliest formal training in linguistics was an undergraduate-level course in Cognitive Linguistics through Northwestern University’s School of Continuing Studies. Since it wasn’t included in last week’s breakdown of areas of study, I’ll summarize it here:
Cognitive Linguistics studies how the brain processes, recognizes, and learns languages.
This week’s post is about recent work by scientists at the University of California, San Francisco, and their research into how the brain recognizes the individual sound features (phonetic features) that make up words. The work was briefly profiled on NPR (thanks to my mom for bringing it to my attention!), and you can listen to the overview from All Things Considered.
The published article, in the journal Science, is a bit more complex, as you can tell from the title: “Phonetic Feature Encoding in Human Superior Temporal Gyrus.”
The NPR version does a wonderful job of explaining how complex language recognition really is, describing the methodology of the UCSF study, and discussing how the results might help scientists better understand disorders like dyslexia, or improve computer programs that rely on language to function (like Siri).
The one point I wanted to add is that phonemic features are language-dependent. The NPR summary talks about the difference between “dad” and “bad,” and this makes sense to English speakers, because “d” and “b” change the meaning of the word. In other languages, different features change the meaning of a word. In Thai, for example, whether or not a “p” is aspirated changes the meaning of the word; similarly, in Mandarin, a word made up of the same sounds can have a different meaning depending on its tone.