How People Tawk Affects How Well You Listen


People from different places speak differently – that we all know. Some dialects and accents are considered glamorous or authoritative, while others carry a definite social stigma. Speakers with a New York City dialect have even been known to enroll in speech therapy to lessen their ‘accent’ and avoid prejudice. Recent research indicates that they have good reason to be worried. It now appears that the prestige of people’s dialects can fundamentally affect how you process and remember what they say.

Meghan Sumner is a psycholinguist (not to be confused with a psycho linguist) at Stanford who studies the interaction between talker variation and speech perception. Together with Reiko Kataoka, she recently published a fascinating if troubling paper in the Journal of the Acoustical Society of America. The team conducted two separate experiments with undergraduates who speak standard American English (what you hear from anchors on the national nightly news). They had the undergraduates listen to words spoken by female speakers of 1) standard American English, 2) standard British English, or 3) the New York City dialect. Standard American English is a rhotic dialect, which means that its speakers pronounce the final –r in words like finger. Both speakers of British English and the New York City dialect drop that final –r sound, but one is a standard dialect that’s considered prestigious and the other is not. I bet you can guess which is which.

In their first experiment, Sumner and Kataoka tested how the dialect of spoken words affected semantic priming, an indication of how deeply the undergraduate listeners processed the words. The listeners first heard a word ending in –er (e.g., slender) pronounced by one of the three female speakers. After a very brief pause, they saw a written word (say, thin) and had to make a quick judgment about it. If they had processed the spoken word deeply, it should have brought related words to mind and sped up their responses to related written words. The results? The listeners showed semantic priming for words spoken in standard American English but not in the New York City dialect. That’s not too surprising. The listeners might have been thrown off by the dropped r or simply by the fact that the word was spoken in a less familiar dialect than their own. But here’s the wild part: the listeners showed as much semantic priming for standard British English as they did for standard American English. Clearly, there’s something more to this story than a missing r.
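If you like to think in code, here’s a minimal sketch of how a semantic-priming effect is typically scored: responses after a related prime should be faster than after an unrelated one. The trials and numbers below are invented for illustration; they are not the study’s data.

```python
# Sketch of scoring a semantic-priming effect. All data are invented;
# they are NOT from Sumner & Kataoka (2013).
from statistics import mean

# Each trial: (dialect of the spoken prime, prime-target relation, response time in ms)
trials = [
    ("American", "related", 520), ("American", "unrelated", 575),
    ("British",  "related", 525), ("British",  "unrelated", 570),
    ("NYC",      "related", 560), ("NYC",      "unrelated", 565),
    # ...a real experiment has many trials per listener
]

def priming_effect(trials, dialect):
    """Priming = mean RT(unrelated) - mean RT(related); bigger = deeper processing."""
    rt = lambda relation: mean(t[2] for t in trials
                               if t[0] == dialect and t[1] == relation)
    return rt("unrelated") - rt("related")

for dialect in ("American", "British", "NYC"):
    print(f"{dialect:8s} priming: {priming_effect(trials, dialect):+.0f} ms")
```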

In their second experiment, a new set of undergraduates with a standard American English dialect listened to sets of related words, each set read by a speaker of one of the same three dialects: standard American, British, or NYC. Each set of words (say, rest, bed, dream, etc.) excluded a key related word (in this case, sleep). The listeners were then asked to list all of the words they remembered hearing. This is a classic memory task (the Deese-Roediger-McDermott, or DRM, paradigm) that reliably generates false memories: people tend to remember hearing the related lure (sleep) even though it wasn’t in the original set. In this experiment, listeners remembered about the same number of actual words from the sets regardless of dialect, indicating that they heard and understood the words irrespective of speaker. Yet they falsely recalled more lures from the word sets read by the NYC speaker than from those read by either the standard American or British speakers.


Figure from Sumner & Kataoka (2013) showing more false recalls from lists spoken in a NYC dialect than from those spoken in standard American or British dialects.
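Scoring this kind of free-recall task boils down to two numbers per list: how many studied words came back, and whether the unstudied lure crept in. Here’s a toy sketch; the study list and recalled set are invented, not the study’s materials.

```python
# Sketch of scoring a DRM-style free-recall trial. Words are invented
# for illustration; they are not the study's actual materials.
study_list = {"rest", "bed", "dream", "pillow", "night"}  # 'sleep' deliberately absent
lure = "sleep"

recalled = {"bed", "dream", "sleep", "night"}  # one listener's reported words

veridical = recalled & study_list              # words actually heard
false_lure = lure in recalled                  # false memory of the missing word

print(f"Veridical recall: {len(veridical)}/{len(study_list)}")
print(f"Falsely recalled the lure '{lure}': {false_lure}")
```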

The authors offer an explanation that ties the two findings together: on some level, the listeners are paying less attention to the words spoken with a NYC dialect. In fact, decreased attention has been shown both to reduce semantic priming and to increase the generation of false memories in similar tasks. In another paper, Sumner and her colleague Arthur Samuel showed that people with a standard American dialect, as well as those with a NYC dialect, later remembered –er words better when they had originally heard them in a standard American dialect rather than in a NYC dialect. These results also fit with the idea that speakers of standard American English (and even speakers with a NYC dialect) do not pay as much attention to words spoken with a NYC dialect.

In fact, Sumner and colleagues recently published a review article laying out a comprehensive theory built on a string of their findings. They suggest that we process the social features of speech sounds at the very earliest stages of speech perception and that we rapidly and automatically determine how deeply we will process the input according to its ‘social weight’ (read: the prestige of the speaker’s dialect). They present this theory in neutral, scientific terms, but it essentially means that we access our biases and prejudices toward certain dialects as soon as we listen to speech, and we use this information to at least partially ‘tune out’ people who speak in a stigmatized way.
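The review makes its case verbally, but the core idea is easy to caricature in code. Below is a deliberately crude toy model, my construction rather than the authors’, in which a single attention weight tied to a dialect’s perceived prestige simultaneously shrinks semantic priming and inflates false recall. Every number here is an arbitrary placeholder.

```python
# Toy model (my construction, not the authors'): one hypothetical attention
# weight per dialect drives both effects reported above. Numbers are arbitrary.
attention = {"standard American": 1.0, "standard British": 1.0, "NYC": 0.6}

for dialect, w in attention.items():
    priming_ms = 50 * w                   # deeper encoding -> stronger priming
    p_false_lure = 0.2 + 0.4 * (1 - w)    # shallower encoding -> more gist-based intrusions
    print(f"{dialect:17s}  priming ~{priming_ms:3.0f} ms   P(false lure) ~{p_false_lure:.2f}")
```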

If true, this theory could apply to other dialects that are associated with low socioeconomic status or groups that face discrimination. Here in the United States, we may automatically devalue or pay less attention to people who speak with an African American Vernacular dialect, a Boston dialect, or a Southern drawl. It’s a troubling thought for a nation founded on democracy, regional diversity, and freedom of speech. Heck, it’s just a troubling thought.

_____

Photo credit: Melvin Gaal, used via Creative Commons license

Sumner M, & Kataoka R (2013). Effects of phonetically-cued talker variation on semantic encoding. Journal of the Acoustical Society of America. DOI: 10.1121/1.4826151

Sumner M, Kim S K, King E, & McGowan K B (2014). The socially weighted encoding of spoken words: a dual-route approach to speech perception. Frontiers in Psychology. DOI: 10.3389/fpsyg.2013.01015

Can You Name That Scent?


We are great at identifying a color as blue versus yellow or a surface as scratchy, soft, or bumpy. But how do we do with a scent? Not so well, it turns out. If I presented you with several everyday scents, ranging from chocolate to sawdust to tea, you would probably name fewer than half of them correctly.

This sort of dismal performance is often chalked up to idiosyncrasies of the human brain. Compared to many mammals, we have paltry neural smelling machinery. To smell something, the smell receptors in your upper nasal cavity must detect a molecule and pass this information on to twin nubs that stick out of the brain. These nubs are called olfactory bulbs and they carry out the earliest steps of scent processing in the brain. While the size of the human brain is impressive with respect to our bodies, the human olfactory bulbs are nothing to brag about. Below, take a look at the honking olfactory bulbs (relative to overall brain size) on the dog. Compared to theirs, our bulbs look like a practical joke.


Human (upper) and dog (lower) brain photos indicating the olfactory bulb and tract. From International Journal of Morphology (Kavoi & Jameela, 2011).

It’s easy to blame our bulbs for our smelling deficiencies. Indeed, many scientists have offered brain-based explanations for our shortcomings in the smell department. But could there be more to the story? What if we are hamstrung by a lackluster odor vocabulary? After all, we use abstract, categorical words to identify colors (e.g., blue), shapes (round), and textures (rough), but we generally identify odors by their specific sources. You might say: “This smells like coffee,” or “I detect a hint of cinnamon,” or offer up a subjective judgment like, “That smells gross.” We lack a descriptive, abstract vocabulary for scents. Could this fact account for some of our smell shortcomings?

Linguists Asifa Majid and Niclas Burenhult tackled this question by studying a group of people with a smell vocabulary quite unlike our own. The Jahai are a relatively small group of hunter-gatherers who live in Malaysia and Thailand. They use their sense of smell often in everyday life and their native language (also called Jahai) includes many abstract words for odors. Check out the first two columns in the table below for several examples of abstract odor words in Jahai.


Table from Majid & Burenhult (2014) in Cognition providing Jahai odor and color words, as well as their rough translations into English.

Majid and Burenhult tested whether Jahai speakers and speakers of American English could effectively and consistently name scents in their respective native languages. They stacked the deck in favor of the Americans by using odors that are all commonplace for Americans, while many are unfamiliar to the Jahai. The scents were: cinnamon, turpentine, lemon, smoke, chocolate, rose, paint thinner, banana, pineapple, gasoline, soap, and onion. For a comparison, they also asked both groups to name a range of color swatches.

The researchers published their findings in a recent issue of Cognition. As expected, English speakers used abstract descriptions for colors but largely source-based descriptions for scents. Their responses differed substantially from one person to the next on the odor task, while they were relatively consistent on the color task. Their answers were also nearly five times longer for the odor task than for the color task, because the English speakers struggled and often tried to describe a single scent in more than one way. For example, here’s how one English speaker struggled to describe the cinnamon scent:

“I don’t know how to say that, sweet, yeah; I have tasted that gum like Big Red or something tastes like, what do I want to say? I can’t get the word. Jesus it’s like that gum smell like something like Big Red. Can I say that? Ok. Big Red. Big Red gum.”


Figure from Majid & Burenhult (2014) comparing the “codability” (consistency) and abstract versus source-based responses from Americans and Jahai.

Now compare that with Jahai speakers, who gave slightly shorter responses to name odors than to name colors and used abstract descriptors 99% of the time for both tasks. They were equally consistent at naming both colors and scents. And, if anything, this study probably underestimated the odor-naming consistency of the Jahai because many of the scents used in the test were unfamiliar to them.
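The “codability” in the figure above is a naming-agreement statistic. One standard choice in this literature is Simpson’s diversity index, the probability that two randomly chosen speakers produced the same name for a stimulus, though I can’t vouch that it is the exact statistic computed in this paper. A quick sketch, with invented responses:

```python
# Simpson's diversity index as a naming-agreement ("codability") measure:
# the probability that two randomly drawn speakers gave the same response.
# The responses below are invented for illustration.
from collections import Counter

def simpson_codability(responses):
    counts = Counter(responses).values()
    n = len(responses)
    return sum(c * (c - 1) for c in counts) / (n * (n - 1))

cinnamon_english = ["spicy", "Big Red", "sweet", "gum", "cinnamon", "candle"]
fragrant_jahai = ["itpit", "itpit", "itpit", "itpit", "itpit", "harum"]

print(f"English speakers, cinnamon: {simpson_codability(cinnamon_english):.2f}")  # 0.00
print(f"Jahai speakers, fragrant:   {simpson_codability(fragrant_jahai):.2f}")    # ~0.67
```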

The performance of the Jahai shows that odors are not inherently inexpressible, whether by virtue of their diversity or of the human brain’s inability to do them justice. As the authors state in the paper’s abstract and title, odors are expressible in language, as long as you speak the right language.

Yet this discovery is not the final word either. The differences between Americans and Jahai don’t end with their vocabularies. The Jahai participants in the study use their sense of smell every day for foraging (their primary source of income). Presumably, their language contains a wealth of odor words because of the integral role this sense plays in their lives. While Americans and other westerners are surrounded by smells, few of us rely on them for our livelihood, safety, or well-being. Thanks to the adaptive nature of brain organization, there may be major differences in how Americans and Jahai represent odors in the brain. In fact, I’d wager that there are. Neuroscience studies have shown time and again that training and experience have very real effects on how the brain represents information from the senses.

As with all scientific discoveries, answers raise new questions. Is it the Jahai vocabulary that allows the Jahai to consistently identify and categorize odors? Or is it their lifelong experience and expertise that gave rise to their vocabulary and, separately, trained their brains in ways that alter their experience of odor? If someone magically endowed English speakers with the power to speak Jahai, would they have the smelling abilities to put its abstract odor words to use?

Would a rose by any other name smell as Itpit? The answer awaits the linguist, neuroscientist, or psychologist who is brave and clever enough to sniff it out.

Majid A, & Burenhult N (2014). Odors are expressible in language, as long as you speak the right language. Cognition, 130(2), 266-270. DOI: 10.1016/j.cognition.2013.11.004

Photograph by Dennis Wong, used via Creative Commons license

Say What?!

Although I grew up outside of Chicago, I’ve spent the last decade split between the East and West Coasts. Now, after 5 years in Los Angeles, my husband and I are settling into life as Michiganders. Aside from the longer days and lower cost of living, the biggest differences I’ve noticed are linguistic. People speak differently here, and for me it’s like coming home. After a decade away, I am back in a state where people drink pop instead of soda. And, at long last, I’ve returned to the land of the Northern Cities Vowel Shift.

Speech is constantly in flux, whether or not we are aware of it. Regional dialects diverge, giving us the drawls of the South and the dropped r’s of the Northeast. More recently, cities in a large swath of the northern Midwest are reinventing their vowels, especially the short vowels in ben, bin, and ban. From Syracuse to Minneapolis, Green Bay to Cleveland, these vowels have been changing among Caucasian native English speakers. The vowels are now pronounced with a different positioning of the tongue, in some cases dramatically altering the sound of the vowel. A wonderful NPR interview on the subject is available online in audio form and includes examples of these vowel changes.
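Tongue position leaves a measurable fingerprint on the signal: the first formant (F1) falls as the tongue rises, and the second (F2) rises as the tongue moves forward, so a shifted vowel literally shows up as shifted resonances. Here’s a rough sketch of textbook LPC formant estimation, assuming a mono WAV of a sustained vowel; the filename, the 0.97 pre-emphasis constant, and the LPC-order rule of thumb are my choices, not anything from the interview.

```python
# Sketch: estimate the first two formants (F1, F2) of a sustained vowel
# using linear predictive coding (LPC). Assumes 'vowel.wav' is a mono
# recording; the constants below are conventional choices, not from the post.
import numpy as np
from scipy.io import wavfile
from scipy.linalg import solve_toeplitz

rate, samples = wavfile.read("vowel.wav")
x = samples.astype(float)
x = np.append(x[0], x[1:] - 0.97 * x[:-1])   # pre-emphasis boosts high frequencies
x = x * np.hamming(len(x))                   # window to reduce edge effects

order = int(2 + rate / 1000)                 # common rule of thumb for LPC order
r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
a = solve_toeplitz((r[:-1], r[:-1]), r[1:])  # autocorrelation-method LPC coefficients

# Roots of the prediction polynomial near the unit circle mark vocal-tract resonances.
roots = [z for z in np.roots(np.r_[1, -a]) if z.imag > 0]
freqs = sorted(np.angle(z) * rate / (2 * np.pi) for z in roots)
f1, f2 = [f for f in freqs if f > 90][:2]    # drop near-DC roots, keep the lowest two
print(f"F1 ~ {f1:.0f} Hz, F2 ~ {f2:.0f} Hz")
```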

I must have picked up the Northern Cities vowels growing up near Chicago. When I arrived in Boston for graduate school, friends poked fun at my subtle accent. They loved to hear me talk about my can-tact lenses. And I can’t blame them for teasing me. The dialect can sound pretty absurd, especially when pushed to the extreme. It was probably best parodied by George Wendt and the SNL cast in the long-running Superfans sketch.

I have long been in love with the field of phonetics and phonology, or how we produce and perceive speech sounds. Creating and understanding speech are two truly impressive (and often underappreciated) feats. Each time we speak, we must move our tongues, lips, teeth and vocal folds in precise and dynamic ways to produce complex acoustical resonances. And whenever we listen, we must deconstruct the multifaceted spectral signatures of speech sounds to translate them into what we perceive as simple vowels, consonants, syllables. We do all of that without a single conscious thought – leaving our minds free to focus on the informational content of our conversations, be they about astrophysics or Tom and Katie’s breakup.

Experiences in the first couple years of life are critical for our phonetic and phonological development. Details of the local dialect are incorporated into our speech patterns early in life and can be hard to change later on. As a result, everyone’s speech is littered with telltale signs of their regional origins. My mother and aunt spent their early years in a region of Kansas where the vowels in pen and pin were pronounced the same. To this day, they neither pronounce those words differently nor hear them as different. Imagine the trouble my mother had when she worked with both a Jenny and a Ginny. I’ve noticed major differences between my husband’s dialect and my own as well. My husband, a native Angeleno, pronounces the word dew as dyoo, while I pronounce it as doo because in Chicago the vowels yoo and oo have merged.

These days I’m watching phonetic development from a front-row seat. My baby has been babbling for a while, and I’ve watched as she practiced using her new little vocal tract. She would vocalize as she moved her tongue all around her open mouth, presumably learning how the sound changed as she did. From shrieks to gasps to blowing raspberries, she tested the range of noises her vocal tract could create. And as she homes in on the spoken sounds she hears, her babbling has become remarkably speech-like. The consonants and vowels are mixed up in haphazard combinations, but they are English consonants and vowels all right. Through months of experimentation, mimicry, and practice, she has learned where to put her tongue, how far to open her mouth, and how to shape her lips to create the sounds that are the building blocks of our language. And just as she was figuring it out, we went and moved her smack into a different dialect region. She will have to muddle through and learn to speak all the same. And once that happens, it will be interesting to see where her sweet little vowels end up.
