In the Blink of an Eye


It takes around 150 milliseconds (or about one sixth of a second) to blink your eyes. In other words, not long. That’s why you say something happened “in the blink of an eye” when an event passed so quickly that you were barely aware of it. Yet a new study shows that humans can process pictures at speeds that make an eye blink seem like a screening of Titanic. What’s more, these results challenge a popular theory about how the brain creates your conscious experience of what you see.

To start, imagine your eyes and brain as a flight of stairs. I know, I know, but hear me out. Each step represents a stage in visual processing. At the bottom of the stairs you have the parts of the visual system that deal with the spots of darkness and light that make up whatever you’re looking at (let’s say an old family photograph). As you stare at the photograph, information about light and dark starts out at the bottom of the stairs in what neuroscientists call “low-level” visual areas like the retinas in your eyes and a swath of tissue tucked away at the very back of your brain called primary visual cortex, or V1.

Now imagine that the information about the photograph begins to climb our metaphorical neural staircase. Each time the information reaches a new step (a.k.a. visual brain area) it is transformed in ways that discard the details of light and dark and replace them with meaningful information about the picture. At one step, say, an area of your brain detects a face in the photograph. Higher up the flight, other areas might identify the face as your great-aunt Betsy’s, discern that her expression is sad, or note that she is gazing off to her right. By the time we reach the top of the stairs, the image is, in essence, a concept with personal significance. After it first strikes your eyes, it only takes visual information 100-150 milliseconds to climb to the top of the stairs, yet in that time your brain has translated a pattern of light and dark into meaning.
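The staircase can be sketched as a chain of processing stages, each discarding raw detail and adding more abstract information. This is a purely illustrative toy, not an actual model of the visual system; the stage names and their outputs are invented:

```python
# Toy sketch of the "visual staircase": each stage transforms the signal,
# trading raw light/dark detail for meaning. Illustrative only.

def retina(image):
    return {"light_dark_pattern": image}           # raw contrast

def v1(signal):
    signal["edges"] = "oriented edges and contrast"  # low-level features
    return signal

def face_area(signal):
    signal["face_detected"] = True                 # mid-level: a face
    return signal

def top_of_stairs(signal):
    signal["meaning"] = "sad photo of Aunt Betsy"  # high-level concept
    return signal

stages = [retina, v1, face_area, top_of_stairs]
percept = "old family photograph"
for stage in stages:
    percept = stage(percept)

print(percept["meaning"])
```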

For many years, neuroscientists and psychologists believed that vision was essentially a sprint up this flight of stairs. You see something, you process it as the information moves to higher areas, and somewhere near the top of the stairs you become consciously aware of what you’re seeing. Yet intriguing results from patients with blindsight, along with other studies, seemed to suggest that visual awareness happens somewhere on the bottom of the stairs rather than at the top.

New, compelling demonstrations came from studies using transcranial magnetic stimulation, a method that can temporarily disrupt brain activity at a specific point in time. In one experiment, scientists used this technique to disrupt activity in V1 about 100 milliseconds after subjects looked at an image. At this point (100 milliseconds in), information about the image should already be near the top of the stairs, yet zapping lowly V1 at the bottom of the stairs interfered with the subjects’ ability to consciously perceive the image. From this and other studies, a new theory was born. In order to consciously see an image, visual information from the image that reaches the top of the stairs must return to the bottom and combine with ongoing activity in V1. This magical mixture of nitty-gritty visual details and extracted meaning somehow creates what we experience as visual awareness.

In order for this model of visual processing to work, you would have to look at the photo of Aunt Betsy for at least 100 milliseconds in order to be consciously aware of it (since that’s how long it takes for the information to sprint up and down the metaphorical flight of stairs). But what would happen if you saw Aunt Betsy’s photo for less than 100 milliseconds and then immediately saw a picture of your old dog, Sparky? Once Aunt Betsy made it to the top of the stairs, she wouldn’t be able to return to the bottom of the stairs because Sparky has taken her place. Unable to return to V1, Aunt Betsy would never make it to your conscious awareness. In theory, you wouldn’t know that you’d seen her at all.

Mary Potter and colleagues at MIT tested this prediction and recently published their results in the journal Attention, Perception, & Psychophysics. They showed subjects brief pictures of complex scenes including people and objects in a style called rapid serial visual presentation (RSVP). You can find an example of an RSVP image stream here, although the images in the demo are more racy and are shown for longer than the pictures in the Potter study.

The RSVP image streams in the Potter study were strings of six photographs shown in quick succession. In some image streams, pictures were each shown for 80 milliseconds (or about half the time it takes to blink). Pictures in other streams were shown for 53, 27, or 13 milliseconds each. To give you a sense of scale, 13 milliseconds is about one tenth of an eye blink, or one hundredth of a second. It is also far less time than Aunt Betsy would need to sprint to the top of the stairs, much less to return to the bottom.
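A quick back-of-the-envelope check of these durations against a roughly 150-millisecond blink:

```python
BLINK_MS = 150  # approximate duration of an eye blink, per the article

# Presentation durations used in the Potter study, in milliseconds
for duration_ms in (80, 53, 27, 13):
    fraction_of_blink = duration_ms / BLINK_MS
    print(f"{duration_ms:>2} ms = {fraction_of_blink:.2f} of a blink")

# 80 ms is about half a blink; 13 ms is roughly a tenth of a blink
# and about one hundredth of a second (0.013 s)
```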

At such short timescales, people can’t remember and report all of the pictures they see in an image stream. But are they aware of them at all? To test this, the scientists gave their subjects a written description of a target picture from the image stream (say, flowers) either just before the stream began or just after it ended. In either case, once the stream was over, the subject had to indicate whether an image fitting that description appeared in the stream. If it did appear, subjects had to pick which of two pictures fitting the description actually appeared in the stream.

Considering how quickly these pictures are shown, the task should be hard for people to do even when they know what they’re looking for. Why? Because “flowers” could describe an infinite number of photographs with different arrangements, shapes, and colors. Even when the subject is tipped off with the description in advance, he or she must process each photo in the stream well enough to recognize the meaning of the picture and compare it to the description. On top of that, this experiment effectively jams the metaphorical visual staircase full of images, leaving no room for visual info to return to V1 and create a conscious experience.

The situation is even more dire when people get the description of the target only after they’ve viewed the entire image stream. To answer correctly, subjects have to process and remember as many of the pictures from the stream as possible. None of this would be impressive under ordinary circumstances but, again, we’re talking 13 milliseconds here.

Sensitivity (computed from subject performance) on the RSVP image streams with 6 images. From Potter et al., 2013.


How did the subjects do? Surprisingly well. In all cases, they performed better than if they were randomly guessing – even when tested on the pictures shown for 13 milliseconds. In general, they scored higher when the pictures were shown longer. And as any test-taker could tell you, people do better when they know the test questions in advance. This pattern held up even when the scientists repeated the experiment with 12-image streams. As you might imagine, that makes for a very crowded visual staircase.
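The “sensitivity” plotted in the figure is a standard signal-detection measure, typically d' = z(hit rate) - z(false-alarm rate), where chance performance gives d' = 0. A minimal sketch using Python’s standard library (the rates below are invented for illustration, not Potter et al.’s data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates, not the study's data:
print(d_prime(0.50, 0.50))  # guessing -> zero sensitivity
print(d_prime(0.60, 0.40))  # modest but above-chance sensitivity
print(d_prime(0.85, 0.15))  # stronger sensitivity, as at longer durations
```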

These results challenge the idea that visual awareness happens when information from the top of the stairs returns to V1. Still, they are by no means the theory’s death knell. It’s possible that the stairs are wider than we thought and that V1 is able (at least to some degree) to represent more than one image at a time. Another possibility is that the subjects in the study answered the questions using a vague sense of familiarity – one that might arise even if they were never overtly conscious of seeing the images. This is a particularly compelling explanation because there’s evidence that people process visual information like color and line orientation without awareness when late activity in V1 is disrupted. The subjects in the Potter study may have used this type of information to guide their responses.

However things ultimately shake out with the theory of visual awareness, I love that these intriguing results didn’t come from a fancy brain scanner or from the coils of a transcranial magnetic stimulation device. With a handful of pictures, a computer screen, and some good-old-fashioned thinking, the authors addressed a potentially high-tech question in a low-tech way. It’s a reminder that fancy, expensive techniques aren’t the only way – or even necessarily the best way – to tackle questions about the brain. It also shows that findings don’t need colorful brain pictures or glow-in-the-dark mice in order to be cool. You can see in less than one-tenth of a blink of an eye. How frickin’ cool is that?

Photo credit: Ivan Clow on Flickr, used via Creative Commons license

Potter MC, Wyble B, Hagmann CE, & McCourt ES (2013). Detecting meaning in RSVP at 13 ms per picture. Attention, Perception, & Psychophysics. PMID: 24374558

The Changing Face of Science: Part Two

In my last post, I wrote about how scientists are beginning to engage with the public, particularly via social media and blogs. Here, I will use my recent experiences at the AAAS conference to illustrate how social media are changing the business of science itself.

The AAAS conference was the first science meeting I’ve attended as an active tweeter. The experience opened my eyes. Throughout the event, scientists and science writers were tweeting interesting talks or points made in various sessions. Essentially, this gave me ears and eyes throughout the conference. For instance, during a slow moment in the session I was attending, I checked out the #AAAS hashtag on Twitter and saw several intriguing tweets from people in another session.

These tweets drew my attention to a talk that I would otherwise have missed completely. I could then decide if I wanted to switch to the other session or learn more about the speaker and her work later on. Even if I did neither, I’d learned a few interesting facts with minimal effort.

Twitter can be a very useful tool for scientists. Aside from its usefulness at conferences, it’s a great way to learn about new and exciting papers in your field. Those who aren’t on Twitter might be surprised to hear that it can be a source for academic papers rather than celebrity gossip. Ultimately, the information you glean from Twitter depends entirely on the people you choose to follow. Scientists often follow other scientists in their own or related fields. Thus, they’re more likely to come upon a great review on oligodendrocytes than news on Justin Bieber’s latest antics. Scientists and science writers form their own interconnected Twitter networks through which they share the type of content that interests them.

Katie Mack, an astrophysicist at the University of Melbourne, has logged some 32,000 tweets as @AstroKatie and has about 7,300 followers on Twitter to date. She recently explained on the blog Real Scientists why she joined Twitter in the first place:

“Twitter started out as an almost purely professional thing for me — I used it to keep up with what other physicists and astronomers were talking about, what people were saying at conferences, that kind of thing. It’s great for networking as well, and just kind of seeing what everyone is up to, in your own field and in other areas of science. Eventually I realized it could also be a great tool for outreach and for sharing my love of science with the world.”

Social media and the Internet more broadly have also made new avenues of scientific research possible. They’ve spurred citizen science projects and collaborative online databases like the International Nucleotide Sequence Database Collaboration. Yet social media and online content have also affected research on a smaller scale as individual scientists discover the science diamonds in the rough. For example, Amina Khan described in a recent Los Angeles Times article how a group of scientists mined online content to compare the strategies different animals use to swim. She writes:

“They culled 112 clips from sites like YouTube and Vimeo depicting 59 different species of flying and swimming animals in action, including moths, bats, birds and even humpback whales. They wanted to see where exactly the animals’ wings (or fins) bent most, and exactly how much they bent.”

Another wonderful example of the influence of YouTube on science came to my attention at the AAAS meeting when I attended a session on rhythmic entrainment in non-human animals. Rhythmic entrainment is the ability to match your movements to a regular beat, such as when you tap your foot to the rhythm of a song. Only five years ago it was widely believed that the ability to match a beat is unique to humans . . . that is, until Aniruddh Patel of Tufts University received an email from his friend.

As Dr. Patel described in the AAAS session, the friend wrote to share a link to a viral YouTube video of a cockatoo named Snowball getting down to the Backstreet Boys. What did Patel make of it? Although the bird certainly seemed to be keeping the beat, it was impossible to know what cues the animal was receiving off-screen. Instead of shrugging off the video or declaring it a fraud, Patel contacted the woman who posted it. She agreed to collaborate with Patel and let him test Snowball under carefully controlled conditions. Remarkably, Snowball was still able to dance to various beats. Patel and his colleagues published their results in 2009, upending the field of beat perception.

That finding sparked a string of new experiments with various species and an entertaining lineup of speakers and animal videos at the AAAS session. Among them, I had the pleasure of watching a sea lion nodding along to “Boogie Wonderland” and a bonobo pounding on a drum.

In essence, the Internet and social media are bringing new opportunities to the doorsteps of scientists. As Dr. Patel’s experience shows, it’s wise to open the door and invite them in. Like everything else in modern society, science does not lie beyond the reach of social media. And thank goodness for that.


Patel, Aniruddh D., Iversen, John R., Bregman, Micah R., & Schulz, Irena (2009). Experimental Evidence for Synchronization to a Musical Beat in a Nonhuman Animal. Current Biology, 19 (10), 827-830. DOI: 10.1016/j.cub.2009.03.038

The Changing Face of Science: Part One


While waiting for the L train to attend the American Association for the Advancement of Science (AAAS) meeting this week, I came upon Nicholas Kristof’s latest New York Times op-ed: “Professors, We Need You!” In his piece, Kristof portrays professors as out-of-touch intellectuals who study esoteric fields and hide their findings in impenetrable jargon. He also says that academia crushes rebels who communicate their science to the public. I admire Mr. Kristof for his efforts to bring awareness to injustices around the world and I agree that academic papers are often painful – if not impossible – to read. But my experience at the AAAS conference this week highlights how wrong he is, both in his depiction of academics and of the driving forces within academia itself.

AAAS is the organization behind Science magazine, ScienceNOW news, Science Careers, and the AAAS fellowship programs. Among the goals in its mission statement: to enhance communications among scientists, engineers, and the public; to provide a voice for science on societal issues; and to increase public engagement with science and technology. So yes, you would expect their conference to focus on science communication. Still, the social media sessions (Engaging with Social Media and Getting Started in Social Media) were full of scientists of all ages. Another well-attended session taught listeners how to use sites and services like Google Scholar, Mendeley, ORCID, and ResearchGate to improve the visibility of their work online.

Throughout the conference, scientists were live-tweeting interesting facts and commentary from the sessions they attended using the #AAASmtg hashtag. I saw a particularly wonderful example of this at a Saturday morning symposium called Building Babies. All five of the speakers at the symposium have accounts on Twitter and four of them were live-tweeting during each other’s presentations. Three of them (Kate Clancy, Julienne Rutherford, and Katie Hinde) also have popular blogs: Context and Variation, BANDIT, and Mammals Suck, respectively. After the symposium, Dr. Hinde compiled the symposium-related tweets on Storify.

I won’t claim that this panel of speakers is representative of scientists as a whole, but I do believe that they are representative of the direction in which scientists are moving. And contrary to Mr. Kristof’s claims, I would argue that their public visibility and embrace of online communication have probably helped rather than hindered their careers. Increased visibility can lead to more invitations to give talks, more coverage from the science press, and added connections outside of one’s narrow field of expertise. The first two of these can fill out a CV and attract positive public attention to a department, both pluses for a young academic who’s up for tenure. Moreover, while hiring and tenure decisions are made within departments, funding comes from organizations and institutions that typically value plain-speaking scientists who do research with societal relevance. For these reasons (and, I’m sure, others), it’s becoming obvious that scientists can benefit from clarity, accessibility, and visibility. In turn, many scientists are learning the necessary skills and making inroads to communicating with the public.

Of course, public visibility offers both promise and peril for scientists. As climate scientist and blogger Kim Cobb explained in her wonderful AAAS talk, scientists worry about appearing biased or unprofessional when they venture into the public conversation on social media. Science writer and former researcher Bethany Brookshire mentioned another potential peril: the fact that thoughtless or offensive off-the-cuff comments made on social media can come back to haunt scientists in their professional lives. It is also certainly true in academia (as it is in most spheres) that people are disdainful of peers who seem arrogant or overly self-promotional.

In short, scientists hoping to reach the public have their work cut out for them. They must learn how to talk about science in clear and comprehensible terms for non-scientists. They must be engaging yet appropriate in public forums and strike the right balance between public visibility and the hard-won research results that back up the attention they receive. They have good reason to tread carefully as they wade into the rapid waters of the Twitterverse, the blogosphere, and other wide-open forums. Yet in they wade, all the same.

There have already been some great responses to Kristof’s call for professors. Political scientist Erik Voeten argued that many academics already engage the public in a variety of ways. Political scientist Corey Robin pointed out that the engagement of academics with the public is often stymied by a lack of time and funding. Academics are rarely paid for the time they spend communicating with the public and may need to concentrate their efforts on academic publications and grant applications because of the troubling job market and funding situation.

Still, many academics are ready to take the plunge and engage with the public. What they need is more training and guidance. Graduate programs should provide better training in writing and communicating science. Universities and societies should offer mentorship and seminars for scientists who want to improve the visibility of their research via the web. We need to have many more panels and discussions like the ones that took place at the AAAS meeting this week.

Oh, and while we’re at it: fewer misinformed, stereotypical descriptions of stodgy professors in ivory towers would be nice.


Photo credit: Ian Britton, used via Creative Commons license

How People Tawk Affects How Well You Listen


People from different places speak differently – that we all know. Some dialects and accents are considered glamorous or authoritative, while others carry a definite social stigma. Speakers with a New York City dialect have even been known to enroll in speech therapy to lessen their ‘accent’ and avoid prejudice. Recent research indicates that they have good reason to be worried. It now appears that the prestige of people’s dialects can fundamentally affect how you process and remember what they say.

Meghan Sumner is a psycholinguist (not to be confused with a psycho linguist) at Stanford who studies the interaction between talker variation and speech perception. Together with Reiko Kataoka, she recently published a fascinating if troubling paper in the Journal of the Acoustical Society of America. The team conducted two separate experiments with undergraduates who speak standard American English (what you hear from anchors on the national nightly news). They had the undergraduates listen to words spoken by female speakers of 1) standard American English, 2) standard British English, or 3) the New York City dialect. Standard American English is a rhotic dialect, which means that its speakers pronounce the final –r in words like finger. Both speakers of British English and the New York City dialect drop that final –r sound, but one is a standard dialect that’s considered prestigious and the other is not. I bet you can guess which is which.

In their first experiment, Sumner and Kataoka tested how the dialect of spoken words affected semantic priming, an indication of how deeply the undergraduate listeners processed the words. The listeners first heard a word ending in –er (e.g., slender) pronounced by one of the three different female speakers. After a very brief pause, they saw a written word (say, thin) and had to make a judgment about the written word. If they had processed the spoken word deeply, it should have brought related words to mind and allowed them to respond to a question about a related written word faster. The results? The listeners showed semantic priming for words spoken in standard American English but not in the New York City dialect. That’s not too surprising. The listeners might have been thrown off by the dropped r or simply by the fact that the word was spoken in a less familiar dialect than their own. But here’s the wild part: the listeners showed as much semantic priming for standard British English as they did for standard American English. Clearly, there’s something more to this story than a missing r.

In their second experiment, a new set of undergraduates with a standard American English dialect listened to sets of related words, each read by one of the speakers of the same three dialects: standard American, British, or NYC. Each set of words (say, rest, bed, dream, etc.) excluded a key related word (in this case, sleep). The listeners were then asked to list all of the words they remembered hearing. This is a classic task that consistently generates false memories. People tend to remember hearing the related lure (sleep) even though it wasn’t in the original set. In this experiment, listeners remembered about the same number of actual words from the sets regardless of dialect, indicating that they listened and understood the words irrespective of speaker. Yet listeners falsely recalled more lures for the word sets read by the NYC speaker than by either the standard American or British speakers.
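The scoring logic of this classic false-memory task (the Deese-Roediger-McDermott paradigm) is simple to sketch. The studied words below come from the article’s example; real lists are longer, and the recall response here is invented:

```python
# Scoring one trial of the false-memory task described above.
# "rest", "bed", "dream" are from the article's example (real lists
# are longer); the recalled set is a hypothetical subject's response.

studied = {"rest", "bed", "dream"}
lure = "sleep"  # the related word deliberately left off the list

recalled = {"bed", "dream", "sleep"}

true_recalls = recalled & studied   # words the subject actually heard
false_memory = lure in recalled     # "remembered" but never presented

print(sorted(true_recalls))
print(false_memory)
```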


Figure from Sumner & Kataoka (2013) showing more false recalls from lists spoken with a NYC dialect than those spoken in standard American or British dialects.

The authors offer an explanation for the two findings. On some level, the listeners are paying less attention to the words spoken with a NYC dialect. In fact, decreased attention has been shown to both decrease semantic priming and increase the generation of false memories in similar tasks. In another paper, Sumner and her colleague Arthur Samuel showed that people with a standard American dialect as well as those with a NYC dialect showed better later memory for –er words that they originally heard in a standard American dialect compared with words heard in a NYC dialect. These results would also fit with the idea that speakers of standard American (and even speakers with a NYC dialect) do not pay as much attention to words spoken with a NYC dialect.

In fact, Sumner and colleagues recently published a review of a comprehensive theory based on a string of their findings. They suggest that we process the social features of speech sounds at the very earliest stages of speech perception and that we rapidly and automatically determine how deeply we will process the input according to its ‘social weight’ (read: the prestige of the speaker’s dialect). They present this theory in neutral, scientific terms, but it essentially means that we access our biases and prejudices toward certain dialects as soon as we listen to speech and we use this information to at least partially ‘tune out’ people who speak in a stigmatized way.

If true, this theory could apply to other dialects that are associated with low socioeconomic status or groups that face discrimination. Here in the United States, we may automatically devalue or pay less attention to people who speak with an African American Vernacular dialect, a Boston dialect, or a Southern drawl. It’s a troubling thought for a nation founded on democracy, regional diversity, and freedom of speech. Heck, it’s just a troubling thought.


Photo credit: Melvin Gaal, used via Creative Commons license

Sumner M, & Kataoka R (2013). Effects of phonetically-cued talker variation on semantic encoding. Journal of the Acoustical Society of America. DOI: 10.1121/1.4826151

Sumner M, Kim S K, King E, & McGowan K B (2014). The socially weighted encoding of spoken words: a dual-route approach to speech perception. Frontiers in Psychology. DOI: 10.3389/fpsyg.2013.01015

Talking Thoughts with Tots: Post on Mind Matters

Hey there, friends. I recently contributed a post to the online Scientific American column Mind Matters. The piece is about how children develop the ability to contemplate, predict, and communicate other people’s thoughts and beliefs. You can read it here. Come for the new research findings, stay for the somewhat eerie revelation that babies as young as 10 months are predicting your thoughts and expectations.

Can You Name That Scent?


We are great at identifying a color as blue versus yellow or a surface as scratchy, soft, or bumpy. But how do we do with a scent? Not so well, it turns out. If I presented you with several everyday scents, ranging from chocolate to sawdust to tea, you would probably name fewer than half of them correctly.

This sort of dismal performance is often chalked up to idiosyncrasies of the human brain. Compared to many mammals, we have paltry neural smelling machinery. To smell something, the smell receptors in your upper nasal cavity must detect a molecule and pass this information on to twin nubs that stick out of the brain. These nubs are called olfactory bulbs and they carry out the earliest steps of scent processing in the brain. While the size of the human brain is impressive with respect to our bodies, the human olfactory bulbs are nothing to brag about. Below, take a look at the honking olfactory bulbs (relative to overall brain size) on the dog. Compared to theirs, our bulbs look like a practical joke.


Human (upper) and dog (lower) brain photos indicating the olfactory bulb and tract. From International Journal of Morphology (Kavoi & Jameela, 2011).

It’s easy to blame our bulbs for our smelling deficiencies. Indeed, many scientists have offered brain-based explanations for our shortcomings in the smell department. But could there be more to the story? What if we are hamstrung by a lackluster odor vocabulary? After all, we use abstract, categorical words to identify colors (e.g., blue), shapes (round), and textures (rough), but we generally identify odors by their specific sources. You might say: “This smells like coffee,” or “I detect a hint of cinnamon,” or offer up a subjective judgment like, “That smells gross.” We lack a descriptive, abstract vocabulary for scents. Could this fact account for some of our smell shortcomings?

Linguists Asifa Majid and Niclas Burenhult tackled this question by studying a group of people with a smell vocabulary quite unlike our own. The Jahai are a relatively small group of hunter-gatherers who live in Malaysia and Thailand. They use their sense of smell often in everyday life and their native language (also called Jahai) includes many abstract words for odors. Check out the first two columns in the table below for several examples of abstract odor words in Jahai.


Table from Majid & Burenhult (2014) in Cognition providing Jahai odor and color words, as well as their rough translations into English.

Majid and Burenhult tested whether Jahai speakers and speakers of American English could effectively and consistently name scents in their respective native languages. They stacked the deck in favor of the Americans by using odors that are all commonplace for Americans, while many are unfamiliar to the Jahai. The scents were: cinnamon, turpentine, lemon, smoke, chocolate, rose, paint thinner, banana, pineapple, gasoline, soap, and onion. For a comparison, they also asked both groups to name a range of color swatches.

The researchers published their findings in a recent issue of Cognition. As expected, English speakers used abstract descriptions for colors but largely source-based descriptions for scents. Their responses differed substantially from one person to the next on the odor task, while they were relatively consistent on the color task. Their answers were also nearly five times longer for the odor task than for the color task. That’s because English speakers struggled and tried to describe individual scents in more than one way. For example, here’s how one English speaker struggled to describe the cinnamon scent:

“I don’t know how to say that, sweet, yeah; I have tasted that gum like Big Red or something tastes like, what do I want to say? I can’t get the word. Jesus it’s like that gum smell like something like Big Red. Can I say that? Ok. Big Red. Big Red gum.”


Figure from Majid & Burenhult (2014) comparing the “codability” (consistency) and abstract versus source-based responses from Americans and Jahai.

Now compare that with Jahai speakers, who gave slightly shorter responses to name odors than to name colors and used abstract descriptors 99% of the time for both tasks. They were equally consistent at naming both colors and scents. And, if anything, this study probably underestimated the odor-naming consistency of the Jahai because many of the scents used in the test were unfamiliar to them.
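The consistency (or “codability”) measured here is essentially naming agreement across speakers. One standard way to quantify it is Simpson’s diversity index: the probability that two randomly chosen responses match. A minimal sketch with invented responses (placeholder terms, not real Jahai words or the study’s data):

```python
from collections import Counter

def simpson_diversity(responses):
    """Probability that two randomly drawn responses match.
    Higher values mean more consistent naming across speakers."""
    counts = Counter(responses)
    n = len(responses)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Invented responses: English speakers scatter across source-based
# descriptions; Jahai speakers converge on one abstract odor term.
english_responses = ["cinnamon", "Big Red gum", "sweet", "spicy", "gum"]
jahai_responses = ["odor_term_A"] * 4 + ["odor_term_B"]

print(simpson_diversity(english_responses))  # low agreement
print(simpson_diversity(jahai_responses))    # high agreement
```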

The performance of the Jahai demonstrates that odors are not inherently inexpressible, either by virtue of their diversity or the human brain’s inability to do them justice. As the authors state in the paper’s abstract and title, odors are expressible in language, as long as you speak the right language.

Yet this discovery is not the final word either. The differences between Americans and Jahai don’t end with their vocabularies. The Jahai participants in the study use their sense of smell every day for foraging (their primary source of income). Presumably, their language contains a wealth of odor words because of the integral role this sense plays in their lives. While Americans and other westerners are surrounded by smells, few of us rely on them for our livelihood, safety, or well-being. Thanks to the adaptive nature of brain organization, there may be major differences in how Americans and Jahai represent odors in the brain. In fact, I’d wager that there are. Neuroscience studies have shown time and again that training and experience have very real effects on how the brain represents information from the senses.

As with all scientific discoveries, answers raise new questions. Is it the Jahai vocabulary that allows the Jahai to consistently identify and categorize odors? Or is it their lifelong experience and expertise that gave rise to their vocabulary and, separately, trained their brains in ways that alter their experience of odor? If someone magically endowed English speakers with the power to speak Jahai, would they have the smelling abilities to put its abstract odor words to use?

Would a rose by any other name smell as Itpit? The answer awaits the linguist, neuroscientist, or psychologist who is brave and clever enough to sniff it out.

Asifa Majid, & Niclas Burenhult (2014). Odors are expressible in language, as long as you speak the right language. Cognition, 130 (2), 266-270. DOI: 10.1016/j.cognition.2013.11.004

Photograph by Dennis Wong, used via Creative Commons license
