Did I Do That? Distinguishing Real from Imagined Actions

If you’re like most people, you spend a great deal of your time remembering past events and planning or imagining events that may happen in the future. While these activities have their uses, they also make it terribly hard to keep track of what you have and haven’t actually seen, heard, or done. Distinguishing between memories of real experiences and memories of imagined or dreamt experiences is called reality monitoring and it’s something we do (or struggle to do) all of the time.

Why is reality monitoring a challenge? To illustrate, let’s say you’re at the Louvre standing before the Mona Lisa. As you look at the painting, visual areas of your brain are busy representing the image with specific patterns of activity. So far, so good. But problems emerge if we rewind to a time before you saw the Mona Lisa at the Louvre. Let’s say you were about to head over to the museum and you imagined the special moment when you would gaze upon Da Vinci’s masterwork. When you imagined seeing the picture, you activated the same visual areas of the brain, in a similar pattern, as you later would when looking at the masterpiece itself.*

When you finally return home from Paris and try to remember that magical moment at the Louvre, how will you be able to distinguish your memories of seeing the Mona Lisa from imagining her? Reality monitoring studies have asked this very question (minus the Mona Lisa). Their findings suggest that you’ll probably use additional details associated with the memory to separate the mnemonic wheat from the chaff. You might use memory of perceptual details, like how the lights reflected off the brushstrokes, or you might use details of what you thought or felt, like your surprise at the painting’s actual size. Studies find that people activate both visual areas (like the fusiform gyrus) and self-monitoring regions of the brain (like the medial prefrontal cortex) when they are deciding whether they saw or just imagined seeing a picture.

It’s important to know what you did and didn’t see, but another crucial and arguably more important facet of reality monitoring involves determining what you did and didn’t do. How do you distinguish memories of things you’ve actually done from those you’ve planned to do or imagined doing? You have to do this every day and it isn’t a trivial task. Perhaps you’ve left the house and headed to work, only to wonder en route if you’d locked the door. Even if you thought you did, it can be hard to tell whether you remember actually doing it or just thinking about doing it. The distinction has consequences. Going home and checking could make you late for work, but leaving your door unlocked all day could mean losing your possessions. So how do we tell the possibilities apart?

Valerie Brandt, Jon Simons, and colleagues at the University of Cambridge looked into this question and published their findings last month in the journal Cognitive, Affective, and Behavioral Neuroscience. For the first part of the experiment (the study phase), they sat healthy adult participants down in front of two giant boxes – one red and one blue – that each contained 80 ordinary objects. The experimenter would draw each object out of one of the two boxes, place it in front of the participant, and tell him or her either to perform or to imagine performing a logical action with the object. For example, when the object was a book, participants were told to either open or imagine opening it.

After the study phase, the experiment moved to a scanner for fMRI. During these scans, participants were shown photographs of all 160 of the studied objects and, for each item, were asked to indicate either 1) whether they had performed or merely imagined performing an action on that object, or 2) which box the object had been drawn from.** When the scans were over, the participants saw the pictures of the objects again and were asked to rate how much specific detail they’d recalled about encountering each object and how hard it had been to bring that particular memory to mind.

The scientists compared fMRI measures of brain activation during the reality-monitoring task (Did I use or imagine using that object?) with activation during the location task (Which box did this object come from?). One of the areas they found to be more active during reality monitoring was the supplementary motor area, a region involved in planning and executing movements of the body. Just as visual areas are activated for reality monitoring of visual memories, motor areas are activated when people evaluate their action memories. In other words, when you ask yourself whether you locked the door or just imagined it, you may be using details of motor aspects of the memory (e.g., pronating your wrist to turn the key in the lock) to make your decision.

The study’s authors also found greater activation in the anterior medial prefrontal cortex when they compared reality monitoring for actions participants performed with those they only imagined performing. The medial prefrontal cortex encompasses a respectable swath of the brain with a variety of functions that appear to include making self-referential judgments, or evaluating how you feel or think about experiences, sensations, and the like. Other experiments have implicated this and nearby areas in reality monitoring of visual memories. The study by Brandt and Simons also found that activation of this medial prefrontal region during reality-monitoring trials correlated with the number of internal details the participants said they’d recalled in those trials. In other words, the more details participants remembered about their thoughts and feelings during the past actions, the busier this area appeared to be. So when faced with uncertainty about a past action, the medial prefrontal cortex may be piping up about the internal details of the memory. I must have locked the door, because I remember simultaneously wondering when my package would arrive from Amazon, or because I was feeling sad about leaving my dog alone at home.

As I read these results, I found myself thinking about the topic of my prior post on OCD. Pathological checking is a common and often disruptive symptom of the illness. Although it may seem like a failure of reality monitoring, several behavioral studies have shown that people with OCD have normal reality monitoring for past actions. The difference is that people with checking symptoms of OCD have much lower confidence in the quality of their memories than others. It seems to be this distrust of their own memories, along with relentless anxiety, that drives them to double-check over and over again.

So the next time you find yourself wondering whether you actually locked the door, cut yourself some slack. Reality monitoring ain’t easy. All you can do is trust your brain not to lead you astray. Make a call and stick with it. You’re better off being wrong than being anxious about it – that is, unless you have really nice stuff.

_____

Photo credit: Liz (documentarist on Flickr), used via Creative Commons license

* Of course, the mental image you conjure of the painting is actually based on the memory of having seen it in ads, books, or posters before. In fact, a growing area of neuroscience research focuses on how imagining the future relies on the same brain areas involved in remembering the past. Imagination seems to be, in large part, a collage of old memories cut and pasted together to make something new.

**The study also had a baseline condition, used additional contrasts, and found additional activations that I didn’t mention for the sake of brevity. Check out the original article for full details.

Brandt, V., Bergström, Z., Buda, M., Henson, R., & Simons, J. (2014). Did I turn off the gas? Reality monitoring of everyday actions. Cognitive, Affective, & Behavioral Neuroscience, 14(1), 209-219. DOI: 10.3758/s13415-013-0189-z

The Slippery Question of Control in OCD

It’s nice to believe that you have control over your environment and your fate – that is, until something bad happens that you’d rather not be responsible for. In today’s complex and interconnected world, it can be hard to figure out who or what causes various events to happen and to what degree you had a hand in shaping their outcomes. Yet in order to function, everyone has to create mental representations of causation and control. What happens when I press this button? Did my glib comment upset my friends? If I belch on the first date, will it scare her off?

People often believe they have more control over outcomes (particularly positive outcomes) than they actually do. Psychologists discovered this illusion of control in controlled experiments, but you can witness the same principle in many a living room now that March Madness is upon us. Of course, wearing your lucky underwear or sitting in your go-to La-Z-Boy isn’t going to help your team win the game, and the very idea that it might shows how easily one’s sense of personal control can become inflated. Decades ago, researchers discovered that the illusion of control is not universal. People suffering from depression tend not to fall for this illusion. That fact, along with related findings, gave rise to the term depressive realism. Two recent studies now suggest that patients with obsessive-compulsive disorder (OCD) may also represent contingency and estimate personal control differently from the norm.

OCD is something of a paradox when it comes to the concept of control. The illness has two characteristic features: obsessions based on fears or regrets that occupy a sufferer’s thoughts and make him or her anxious, and compulsions, or repetitive and unnecessary actions that may or may not relieve the anxiety. For decades, psychiatrists and psychologists have theorized that control lies at the heart of this cycle. Here’s how the NIMH website on OCD describes it (emphasis is mine):

The frequent upsetting thoughts are called obsessions. To try to control them, a person will feel an overwhelming urge to repeat certain rituals or behaviors called compulsions. People with OCD can’t control these obsessions and compulsions. Most of the time, the rituals end up controlling them.

In short, their obsessions cause them distress and they perform compulsions in an effort to regain some sense of control over their thoughts, fears, and anxieties. Yet in some cases, compulsions (like sports fans’ superstitions) seem to indicate an inflated sense of personal control. Based on this conventional model of OCD, you might predict that people with the illness will either underestimate or overestimate their personal control over events. So which did the studies find? In a word: both.

The latest study, which appeared this month in Frontiers in Psychology, used a classic experimental design to study the illusion of control. The authors tested 26 people with OCD and 26 comparison subjects. The subjects were shown an image of an unlit light bulb and told that their goal was to illuminate the light bulb as often as possible. On each trial, they could choose to either press or not press the space bar. After they made their decision, the light bulb either did or did not light up. Their job was to estimate, based on their trial-by-trial experimentation, how much control they had over the light bulb. Here’s the catch: the subjects had absolutely no control over the light bulb, which lit up or remained dark according to a fixed sequence.*
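
To make the "no actual control" setup concrete, here is a minimal sketch (my own illustration, not the authors' code) that simulates one 40-trial block with a predetermined outcome sequence and computes the objective contingency, delta P = P(light | press) - P(light | no press). The 50% reinforcement rate is an assumption made for simplicity; whatever the participant chooses to do, the true contingency is zero.

import random

def delta_p(presses, outcomes):
    # Objective contingency: P(light | press) - P(light | no press)
    lit_when_pressed = [o for p, o in zip(presses, outcomes) if p]
    lit_when_not = [o for p, o in zip(presses, outcomes) if not p]
    p_press = sum(lit_when_pressed) / len(lit_when_pressed) if lit_when_pressed else 0.0
    p_no_press = sum(lit_when_not) / len(lit_when_not) if lit_when_not else 0.0
    return p_press - p_no_press

random.seed(1)
n_trials = 40
outcomes = [random.random() < 0.5 for _ in range(n_trials)]  # fixed in advance; presses cannot change it
presses = [random.random() < 0.5 for _ in range(n_trials)]   # whatever the participant decides to do

print(f"Objective control (delta P): {delta_p(presses, outcomes):+.2f}")
# Any nonzero value here is sampling noise; by design the true control is zero,
# so any control rating above zero reflects the illusion of control.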

After 40 trials, subjects were asked to rate the degree of control they thought they had over the illumination of the light bulb, ranging from 0 (no control) to 100 (complete control). Estimates of control were consistently higher for the comparison subjects than for the subjects with OCD. In other words, the people with OCD believed they had less control – and since they actually had no control, that means that they were also more accurate than the comparison subjects. As the paper points out, this is a limitation of the study: it can’t tell us whether patients are generally prone to underestimating their control over events or if they’re simply more accurate than comparison subjects. To distinguish between those possibilities, the study would need to have included situations in which subjects actually did have some degree of control over the outcomes.

Why wasn’t the light bulb study designed to distinguish between these alternatives? Because the authors were expecting the opposite result. They had designed their experiment to follow up on a 2008 study that found a heightened illusion of control among people with OCD. The earlier study used a different test. They showed subjects either neutral pictures of household items or disturbing pictures of distorted faces. The experimenters encouraged the subjects to try to control the presentation of images by pressing buttons on a keyboard and asked them to estimate their control over the images three times during the session. However, just like in the light bulb study, the presentation of the images was fixed in advance and could not be affected by the subjects’ button presses.

How can two studies of estimated control in OCD have opposite results? It seems that the devil is in the details. Prior studies with tasks like these have shown that healthy subjects’ control estimates depend on details like the frequency of the preferred outcome and whether the experimenter is physically in the room during testing. Mental illness throws additional uncertainty into the mix. For example, the disturbing face images in the 2008 study might have made the subjects with OCD anxious, which could have triggered a different cognitive pattern. Still, both findings suggest that control estimation is abnormal for people with OCD, possibly in complex and situation-dependent ways.

These and other studies indicate that decision-making and representations of causality in OCD are altered in interesting and important ways. A better understanding of these differences could help us understand the illness and, in the process, might even shed light on the minor rituals and superstitions that are common to us all. Sadly, like a lucky pair of underwear, it probably won’t help your team get to the Final Four.

_____

Photo by Olga Reznik on Flickr, used via Creative Commons license

*The experiment also manipulated reinforcement (how often the light bulb lit up) and valence (whether the lit bulb earned them money or the unlit bulb cost them money) across different testing sections, but I don’t go into that here because the manipulations didn’t affect the results.

Gillan CM, Morein-Zamir S, Durieux AM, Fineberg NA, Sahakian BJ, & Robbins TW (2014). Obsessive-compulsive disorder patients have a reduced sense of control on the illusion of control task. Frontiers in Psychology, 5. PMID: 24659974

In the Blink of an Eye

It takes around 150 milliseconds (or about one sixth of a second) to blink your eyes. In other words, not long. That’s why you say something happened “in the blink of an eye” when an event passed so quickly that you were barely aware of it. Yet a new study shows that humans can process pictures at speeds that make an eye blink seem like a screening of Titanic. What’s more, these results challenge a popular theory about how the brain creates your conscious experience of what you see.

To start, imagine your eyes and brain as a flight of stairs. I know, I know, but hear me out. Each step represents a stage in visual processing. At the bottom of the stairs you have the parts of the visual system that deal with the spots of darkness and light that make up whatever you’re looking at (let’s say an old family photograph). As you stare at the photograph, information about light and dark starts out at the bottom of the stairs in what neuroscientists call “low-level” visual areas like the retinas in your eyes and a swath of tissue tucked away at the very back of your brain called primary visual cortex, or V1.

Now imagine that the information about the photograph begins to climb our metaphorical neural staircase. Each time the information reaches a new step (a.k.a. visual brain area) it is transformed in ways that discard the details of light and dark and replace them with meaningful information about the picture. At one step, say, an area of your brain detects a face in the photograph. Higher up the flight, other areas might identify the face as your great-aunt Betsy’s, discern that her expression is sad, or note that she is gazing off to her right. By the time we reach the top of the stairs, the image is, in essence, a concept with personal significance. After the image first strikes your eyes, visual information takes only 100-150 milliseconds to climb to the top of the stairs, yet in that time your brain has translated a pattern of light and dark into meaning.

For many years, neuroscientists and psychologists believed that vision was essentially a sprint up this flight of stairs. You see something, you process it as the information moves to higher areas, and somewhere near the top of the stairs you become consciously aware of what you’re seeing. Yet intriguing results from patients with blindsight, along with other studies, seemed to suggest that visual awareness happens somewhere on the bottom of the stairs rather than at the top.

New, compelling demonstrations came from studies using transcranial magnetic stimulation, a method that can temporarily disrupt brain activity at a specific point in time. In one experiment, scientists used this technique to disrupt activity in V1 about 100 milliseconds after subjects looked at an image. At this point (100 milliseconds in), information about the image should already be near the top of the stairs, yet zapping lowly V1 at the bottom of the stairs interfered with the subjects’ ability to consciously perceive the image. From this and other studies, a new theory was born. In order to consciously see an image, visual information from the image that reaches the top of the stairs must return to the bottom and combine with ongoing activity in V1. This magical mixture of nitty-gritty visual details and extracted meaning somehow creates what we experience as visual awareness.

In order for this model of visual processing to work, you would have to look at the photo of Aunt Betsy for at least 100 milliseconds in order to be consciously aware of it (since that’s how long it takes for the information to sprint up and down the metaphorical flight of stairs). But what would happen if you saw Aunt Betsy’s photo for less than 100 milliseconds and then immediately saw a picture of your old dog, Sparky? Once Aunt Betsy made it to the top of the stairs, she wouldn’t be able to return to the bottom of the stairs because Sparky had taken her place. Unable to return to V1, Aunt Betsy would never make it to your conscious awareness. In theory, you wouldn’t know that you’d seen her at all.

Mary Potter and colleagues at MIT tested this prediction and recently published their results in the journal Attention, Perception, & Psychophysics. They showed subjects brief pictures of complex scenes including people and objects in a style called rapid serial visual presentation (RSVP). You can find an example of an RSVP image stream here, although the images in the demo are more racy and are shown for longer than the pictures in the Potter study.

The RSVP image streams in the Potter study were strings of six photographs shown in quick succession. In some image streams, pictures were each shown for 80 milliseconds (or about half the time it takes to blink). Pictures in other streams were shown for 53, 27, or 13 milliseconds each. To give you a sense of scale, 13 milliseconds is about one tenth of an eye blink, or one hundredth of a second. It is also far less time than Aunt Betsy would need to sprint to the top of the stairs, much less to return to the bottom.

At such short timescales, people can’t remember and report all of the pictures they see in an image stream. But are they aware of them at all? To test this, the scientists gave their subjects a written description of a target picture from the image stream (say, flowers) either just before the stream began or just after it ended. In either case, once the stream was over, the subject had to indicate whether an image fitting that description appeared in the stream. If it did appear, subjects had to pick which of two pictures fitting the description actually appeared in the stream.

Considering how quickly these pictures are shown, the task should be hard for people to do even when they know what they’re looking for. Why? Because “flowers” could describe an infinite number of photographs with different arrangements, shapes, and colors. Even when the subject is tipped off with the description in advance, he or she must process each photo in the stream well enough to recognize the meaning of the picture and compare it to the description. On top of that, this experiment effectively jams the metaphorical visual staircase full of images, leaving no room for visual info to return to V1 and create a conscious experience.

The situation is even more dire when people get the description of the target only after they’ve viewed the entire image stream. To answer correctly, subjects have to process and remember as many of the pictures from the stream as possible. None of this would be impressive under ordinary circumstances but, again, we’re talking 13 milliseconds here.

Sensitivity (computed from subject performance) on the RSVP image streams with 6 images. From Potter et al., 2013.
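
For readers curious what “sensitivity” means here: it is a signal-detection measure of how well subjects told target-present from target-absent streams, combining correct detections (hits) and mistaken detections (false alarms). One common way to compute such a measure, d-prime, is sketched below; this is an illustration with invented counts, not necessarily the exact formula the authors used.

from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Sensitivity: z(hit rate) - z(false-alarm rate), with a small correction
    # that keeps the rates away from exactly 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one presentation duration (illustration only)
print(round(d_prime(hits=30, misses=18, false_alarms=12, correct_rejections=36), 2))
# Zero would mean chance performance; anything reliably above zero means the
# described target was detected, however briefly it was flashed.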

How did the subjects do? Surprisingly well. In all cases, they performed better than if they were randomly guessing – even when tested on the pictures shown for 13 milliseconds. In general, they scored higher when the pictures were shown longer. And like any test-taker could tell you, people do better when they know the test questions in advance. This pattern held up even when the scientists repeated the experiment with 12-image streams. As you might imagine, that makes for a very crowded visual staircase.

These results challenge the idea that visual awareness happens when information from the top of the stairs returns to V1. Still, they are by no means the theory’s death knell. It’s possible that the stairs are wider than we thought and that V1 is able (at least to some degree) to represent more than one image at a time. Another possibility is that the subjects in the study answered the questions using a vague sense of familiarity – one that might arise even if they were never overtly conscious of seeing the images. This is a particularly compelling explanation because there’s evidence that people process visual information like color and line orientation without awareness when late activity in V1 is disrupted. The subjects in the Potter study may have used this type of information to guide their responses.

However things ultimately shake out with the theory of visual awareness, I love that these intriguing results didn’t come from a fancy brain scanner or from the coils of a transcranial magnetic stimulation device. With a handful of pictures, a computer screen, and some good-old-fashioned thinking, the authors addressed a potentially high-tech question in a low-tech way. It’s a reminder that fancy, expensive techniques aren’t the only way – or even necessarily the best way – to tackle questions about the brain. It also shows that findings don’t need colorful brain pictures or glow-in-the-dark mice in order to be cool. You can see in less than one-tenth of a blink of an eye. How frickin’ cool is that?

Photo credit: Ivan Clow on Flickr, used via Creative Commons license

Potter MC, Wyble B, Hagmann CE, & McCourt ES (2013). Detecting meaning in RSVP at 13 ms per picture. Attention, Perception, & Psychophysics. PMID: 24374558

How People Tawk Affects How Well You Listen

People from different places speak differently – that we all know. Some dialects and accents are considered glamorous or authoritative, while others carry a definite social stigma. Speakers with a New York City dialect have even been known to enroll in speech therapy to lessen their ‘accent’ and avoid prejudice. Recent research indicates that they have good reason to be worried. It now appears that the prestige of people’s dialects can fundamentally affect how you process and remember what they say.

Meghan Sumner is a psycholinguist (not to be confused with a psycho linguist) at Stanford who studies the interaction between talker variation and speech perception. Together with Reiko Kataoka, she recently published a fascinating if troubling paper in the Journal of the Acoustical Society of America. The team conducted two separate experiments with undergraduates who speak standard American English (what you hear from anchors on the national nightly news). They had the undergraduates listen to words spoken by female speakers of 1) standard American English, 2) standard British English, or 3) the New York City dialect. Standard American English is a rhotic dialect, which means that its speakers pronounce the final –r in words like finger. Both speakers of British English and the New York City dialect drop that final –r sound, but one is a standard dialect that’s considered prestigious and the other is not. I bet you can guess which is which.

In their first experiment, Sumner and Kataoka tested how the dialect of spoken words affected semantic priming, an indication of how deeply the undergraduate listeners processed the words. The listeners first heard a word ending in –er (e.g., slender) pronounced by one of the three different female speakers. After a very brief pause, they saw a written word (say, thin) and had to make a judgment about the written word. If they had processed the spoken word deeply, it should have brought related words to mind and allowed them to respond to a question about a related written word faster. The results? The listeners showed semantic priming for words spoken in standard American English but not in the New York City dialect. That’s not too surprising. The listeners might have been thrown off by the dropped r or simply the fact that the word was spoken in a less familiar dialect than their own. But here’s the wild part: the listeners showed as much semantic priming for standard British English as they did for standard American English. Clearly, there’s something more to this story than a missing r.
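
Semantic priming in tasks like this is usually quantified as a reaction-time difference: responses to a related written target (slender, then thin) should be faster than responses to an unrelated one. Here is a minimal sketch of that comparison using invented numbers, not the paper's data, and a generic "judgment" response time standing in for whatever judgment the listeners actually made.

from statistics import mean

# Hypothetical response times (ms) to the written target word (illustration only)
rts = {
    ("standard American", "related"): [612, 598, 605, 590],
    ("standard American", "unrelated"): [655, 641, 660, 649],
    ("NYC", "related"): [648, 652, 640, 659],
    ("NYC", "unrelated"): [651, 646, 658, 650],
}

for dialect in ("standard American", "NYC"):
    effect = mean(rts[(dialect, "unrelated")]) - mean(rts[(dialect, "related")])
    print(f"{dialect} prime: priming effect = {effect:.0f} ms")
# A sizable positive difference means the spoken prime was processed deeply enough
# to speed recognition of related words; a near-zero difference means it was not.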

In their second experiment, a new set of undergraduates with a standard American English dialect listened to sets of related words, each read by one of the speakers of the same three dialects: standard American, British, or NYC. Each set of words (say, rest, bed, dream, etc.) excluded a key related word (in this case, sleep). The listeners were then asked to list all of the words they remembered hearing. This is a classic task that consistently generates false memories. People tend to remember hearing the related lure (sleep) even though it wasn’t in the original set. In this experiment, listeners remembered about the same number of actual words from the sets regardless of dialect, indicating that they listened and understood the words irrespective of speaker. Yet listeners falsely recalled more lures for the word sets read by the NYC speaker than by either the standard American or British speakers.

Figure from Sumner & Kataoka (2013) showing more false recalls from lists spoken with a NYC dialect than those spoken in standard American or British dialects.

The authors offer an explanation for the two findings. On some level, the listeners are paying less attention to the words spoken with a NYC dialect. In fact, decreased attention has been shown to both decrease semantic priming and increase the generation of false memories in similar tasks. In another paper, Sumner and her colleague Arthur Samuel showed that people with a standard American dialect, as well as those with a NYC dialect, had better later memory for –er words that they originally heard in a standard American dialect than for words heard in a NYC dialect. These results would also fit with the idea that speakers of standard American (and even speakers with a NYC dialect) do not pay as much attention to words spoken with a NYC dialect.

In fact, Sumner and colleagues recently published a review of a comprehensive theory based on a string of their findings. They suggest that we process the social features of speech sounds at the very earliest stages of speech perception and that we rapidly and automatically determine how deeply we will process the input according to its ‘social weight’ (read: the prestige of the speaker’s dialect). They present this theory in neutral, scientific terms, but it essentially means that we access our biases and prejudices toward certain dialects as soon as we listen to speech and we use this information to at least partially ‘tune out’ people who speak in a stigmatized way.

If true, this theory could apply to other dialects that are associated with low socioeconomic status or groups that face discrimination. Here in the United States, we may automatically devalue or pay less attention to people who speak with an African American Vernacular dialect, a Boston dialect, or a Southern drawl. It’s a troubling thought for a nation founded on democracy, regional diversity, and freedom of speech. Heck, it’s just a troubling thought.

_____

Photo credit: Melvin Gaal, used via Creative Commons license

Sumner M, & Kataoka R (2013). Effects of phonetically-cued talker variation on semantic encoding. Journal of the Acoustical Society of America. DOI: 10.1121/1.4826151

Sumner M, Kim SK, King E, & McGowan KB (2014). The socially weighted encoding of spoken words: a dual-route approach to speech perception. Frontiers in Psychology. DOI: 10.3389/fpsyg.2013.01015

Can You Name That Scent?

We are great at identifying a color as blue versus yellow or a surface as scratchy, soft, or bumpy. But how do we do with a scent? Not so well, it turns out. If I presented you with several everyday scents, ranging from chocolate to sawdust to tea, you would probably name fewer than half of them correctly.

This sort of dismal performance is often chalked up to idiosyncrasies of the human brain. Compared to many mammals, we have paltry neural smelling machinery. To smell something, the smell receptors in your upper nasal cavity must detect a molecule and pass this information on to twin nubs that stick out of the brain. These nubs are called olfactory bulbs and they carry out the earliest steps of scent processing in the brain. While the size of the human brain is impressive with respect to our bodies, the human olfactory bulbs are nothing to brag about. Below, take a look at the honking olfactory bulbs (relative to overall brain size) on the dog. Compared to theirs, our bulbs look like a practical joke.

Human (upper) and dog (lower) brain photos indicating the olfactory bulb and tract. From International Journal of Morphology (Kavoi & Jameela, 2011).

It’s easy to blame our bulbs for our smelling deficiencies. Indeed, many scientists have offered brain-based explanations for our shortcomings in the smell department. But could there be more to the story? What if we are hamstrung by a lackluster odor vocabulary? After all, we use abstract, categorical words to identify colors (e.g., blue), shapes (round), and textures (rough), but we generally identify odors by their specific sources. You might say: “This smells like coffee,” or “I detect a hint of cinnamon,” or offer up a subjective judgment like, “That smells gross.” We lack a descriptive, abstract vocabulary for scents. Could this fact account for some of our smell shortcomings?

Linguists Asifa Majid and Niclas Burenhult tackled this question by studying a group of people with a smell vocabulary quite unlike our own. The Jahai are a relatively small group of hunter-gatherers who live in Malaysia and Thailand. They use their sense of smell often in everyday life and their native language (also called Jahai) includes many abstract words for odors. Check out the first two columns in the table below for several examples of abstract odor words in Jahai.

Table from Majid & Burenhult (2014) in Cognition providing Jahai odor and color words, as well as their rough translations into English.

Majid and Burenhult tested whether Jahai speakers and speakers of American English could effectively and consistently name scents in their respective native languages. They stacked the deck in favor of the Americans by using odors that are all commonplace for Americans, while many are unfamiliar to the Jahai. The scents were: cinnamon, turpentine, lemon, smoke, chocolate, rose, paint thinner, banana, pineapple, gasoline, soap, and onion. For a comparison, they also asked both groups to name a range of color swatches.

The researchers published their findings in a recent issue of Cognition. As expected, English speakers used abstract descriptions for colors but largely source-based descriptions for scents. Their responses differed substantially from one person to the next on the odor task, while they were relatively consistent on the color task. Their answers were also nearly five times longer for the odor task than for the color task. That’s because English speakers struggled and tried to describe individual scents in more than one way. For example, here’s how one English speaker struggled to describe the cinnamon scent:

“I don’t know how to say that, sweet, yeah; I have tasted that gum like Big Red or something tastes like, what do I want to say? I can’t get the word. Jesus it’s like that gum smell like something like Big Red. Can I say that? Ok. Big Red. Big Red gum.”

Figure from Majid & Burenhult (2014) comparing the “codability” (consistency) and abstract versus source-based responses from Americans and Jahai.

Now compare that with Jahai speakers, who gave slightly shorter responses to name odors than to name colors and used abstract descriptors 99% of the time for both tasks. They were equally consistent at naming both colors and scents. And, if anything, this study probably underestimated the odor-naming consistency of the Jahai because many of the scents used in the test were unfamiliar to them.
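
One simple way to put a number on that naming consistency is the share of speakers who give the modal (most common) description for each stimulus. This is only a rough proxy for the codability measure in the figure above, and the responses below are stand-ins of my own (the word "itpit" is borrowed from later in this post; the rest are placeholders), not data from the study.

from collections import Counter

def naming_agreement(responses):
    # Fraction of speakers who gave the single most common description
    counts = Counter(responses)
    return counts.most_common(1)[0][1] / len(responses)

# Stand-in responses for one odor (illustration only)
english_answers = ["Big Red gum", "sweet", "spicy", "cinnamon", "potpourri"]
jahai_answers = ["itpit", "itpit", "itpit", "itpit", "another odor word"]

print(f"English agreement: {naming_agreement(english_answers):.2f}")  # low: idiosyncratic, source-based
print(f"Jahai agreement:   {naming_agreement(jahai_answers):.2f}")    # high: shared abstract odor terms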

The performance of the Jahai proves that odors are not inherently inexpressible, either by virtue of their diversity or the human brain’s inability to do them justice. As the authors state in the paper’s abstract and title, odors are expressible in language, as long as you speak the right language.

Yet this discovery is not the final word either. The differences between Americans and Jahai don’t end with their vocabularies. The Jahai participants in the study use their sense of smell every day for foraging (their primary source of income). Presumably, their language contains a wealth of odor words because of the integral role this sense plays in their lives. While Americans and other westerners are surrounded by smells, few of us rely on them for our livelihood, safety, or well-being. Thanks to the adaptive nature of brain organization, there may be major differences in how Americans and Jahai represent odors in the brain. In fact, I’d wager that there are. Neuroscience studies have shown time and again that training and experience have very real effects on how the brain represents information from the senses.

As with all scientific discoveries, answers raise new questions. Is it the Jahai vocabulary that allows the Jahai to consistently identify and categorize odors? Or is it their lifelong experience and expertise that gave rise to their vocabulary and, separately, trained their brains in ways that alter their experience of odor? If someone magically endowed English speakers with the power to speak Jahai, would they have the smelling abilities to put its abstract odor words to use?

Would a rose by any other name smell as Itpit? The answer awaits the linguist, neuroscientist, or psychologist who is brave and clever enough to sniff it out.

Majid A, & Burenhult N (2014). Odors are expressible in language, as long as you speak the right language. Cognition, 130(2), 266-270. DOI: 10.1016/j.cognition.2013.11.004

Photograph by Dennis Wong, used via Creative Commons license

Outsourcing Memory

Do you rely on your spouse to remember special events and travel plans? Your coworker to remember how to submit some frustrating form? Your cell phone to store every phone number you’ll ever need? Yeah, me too. You might call this time saving or delegating, but if you were a fancy psychologist you’d call it transactive memory.

Transactive memory is a wonderful concept. There’s too much information in this world to know and remember. Why not store some of it in “the cloud” that is your partner or coworker’s brain or in “the cloud” itself, whatever and wherever that is? The idea of transactive memory came from the innovative psychologist Daniel Wegner, most recently of Harvard, who passed away in July of this year. Wegner proposed the idea in the mid-80s and framed it in terms of the “intimate dyad” – spouses or other close couples who know each other very well over a long period of time.

Transactive memory between partners can be a straightforward case of cognitive outsourcing. I remember monthly expenses and you remember family birthdays. But it can also be a subtler and more interactive process. For example, one spouse remembers why you chose to honeymoon at Waikiki and the other remembers which hotel you stayed in. If the partners try to recall their honeymoon together, they can produce a far richer description of the experience than if they were to try separately.

Here’s an example from a recent conversation with my husband. It began when my husband mentioned that a Red Sox player once asked me out.

“Never happened,” I told him. And it hadn’t. But he insisted.

“You know, years ago. You went out on a date or something?”

“Nope.” But clearly he was thinking of something specific.

I thought really hard until a shred of a recollection came to me. “I’ve never met a Red Sox player, but I once met a guy who was called up from the farm team.”

My husband nodded. “That guy.”

But what interaction did we have? I met the guy nine years ago, not long before I met my husband. What were the circumstances? Finally, I began to remember. It wasn’t actually a date. We’d gone bowling with mutual friends and formed teams. The guy – a pitcher – was intensely competitive and I was the worst bowler there. He was annoyed that I was ruining our team score and I was annoyed that he was taking it all so seriously. I’d even come away from the experience with a lesson: never play games with competitive athletes.

Apparently, I’d told the anecdote to my husband after we met and he remembered a nugget of the story. Even though all of the key details from that night were buried somewhere in my brain, I’m quite sure that I would never have remembered them again if not for my husband’s prompts. This is a facet of transactive memory, one that Wegner called interactive cueing.

In a sense, transactive memory is a major benefit of having long-term relationships. Sharing memory, whether with a partner, parent, or friend, allows you to index or back up some of that memory. This fact also underscores just how much you lose when a loved one passes away. When you lose a spouse, a parent, a sibling, you are also losing part of yourself and the shared memory you have with that person. After I lost my father, I noticed this strange additional loss. I caught myself wondering when I’d stopped writing stories on his old typewriter. I realized I’d forgotten parts of the fanciful stories he used to tell me on long drives. I wished I could ask him to fill in the blanks, but of course it was too late.

Memories can be shared with people, but they can also be shared with things. If you write in a diary, you are storing details about current experiences that you can access later in life. No spouse required. You also upload memories and information to your technological gadgets. If you store phone numbers in your cell phone and use bookmarks and autocomplete tools in your browser, you are engaging in transactive memory. You are able to do more while remembering less. It’s efficient, convenient, and downright necessary in today’s world of proliferating numbers, websites, and passwords.

In 2011, a Science paper described how people create transactive memory with online search engines. The study, authored by Betsy Sparrow, Jenny Liu, and Wegner, received plenty of attention at the time, including here and here.

In one experiment, they asked participants either hard or easy questions and then had them do a modified Stroop task that involved reporting the physical color of a written word rather than naming the word. This was a measure of priming, essentially whether a participant had been thinking about that word or similar concepts recently. Sometimes the participants were tested with the names of online search engines (Google, Yahoo) and at other times they were tested with other name brands (Nike, Target). After hard questions, the participants took much longer to do the Stroop task with Google and Yahoo than with the other brand names, suggesting that hard questions made them automatically think about searching the Internet for the answer.
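
The logic of that modified Stroop measure is simple: if hard questions make people think about searching the Internet, color naming should be slower when the written word is a search-engine brand, because the primed concept interferes with the response. Here is a toy version of the comparison, with invented reaction times rather than the study's data.

from statistics import mean

# Hypothetical color-naming times (ms) following hard trivia questions (illustration only)
rt_after_hard_questions = {
    "search brands (Google, Yahoo)": [712, 698, 725, 705],
    "other brands (Nike, Target)": [641, 655, 648, 660],
}

for condition, times in rt_after_hard_questions.items():
    print(f"{condition}: mean color-naming RT = {mean(times):.0f} ms")
# Slower color naming for the search-engine names is the priming signature:
# participants were already thinking about looking the answer up online.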

Figure from Sparrow, Liu, & Wegner (2011).

The other experiments described in the paper showed that people are less likely to remember trivia if they believe they will be able to look it up later. When participants thought that items of trivia were saved somewhere on a computer, they were also more likely to remember where the items were saved than they were to remember the actual trivia items themselves. Together, the study’s findings suggest that people actively outsource memory to their computers and to the Internet. This will come as no surprise to those of us who can’t remember a single phone number offhand, don’t know how to get around without the GPS, and hop on our smartphones to answer the simplest of questions.

Search engines, computer atlases, and online databases are remarkable things. In a sense, we’d be crazy not to make use of them. But here’s the rub: the Internet is jam-packed with misinformation or near-miss information. Anti-vaxxers, creationists, global warming deniers: you can find them all on the web. And when people want the definitive answer, they almost always find themselves at Wikipedia. While Wikipedia has valuable information, it is not written and curated by experts. It is not always the God’s-honest-truth and it is not a safe replacement for learning and knowing information ourselves. Of course, the memories of our loved ones aren’t foolproof either, but at least they don’t carry the aura of authority that comes with a list of citations.

Speaking of which. There is now a Wikipedia page for “The Google Effect” that is based on the 2011 Science article. A banner across the top shows an open book featuring a large question mark and the following warning: “This article relies largely or entirely upon a single source. . . . Please help improve this article by introducing citations to additional sources.” The citation for the first section is a dead link. The last section has two placeholders for citations, but in lieu of numbers they say, According to whom?

Folks, if that ain’t a reminder to be wary of outsourcing your brain to Google and Wikipedia, I don’t know what is.

_________

Photo credits:

1. Photo by Mike Baird on Flickr, used via Creative Commons license

2. Figure from “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips” by Betsy Sparrow, Jenny Liu, and Daniel M. Wegner.

Sparrow B, Liu J, & Wegner DM (2011). Google effects on memory: cognitive consequences of having information at our fingertips. Science, 333(6043), 776-778. PMID: 21764755

Neural Conspiracy Theories

Last month, a paper quietly appeared in The Journal of Neuroscience to little fanfare and scant media attention (with these exceptions). The study revolved around a clever and almost diabolical premise: that using perceptual trickery and outright deception, its authors could plant a delusion-like belief in the heads of healthy subjects. Before you call the ethics police, I should mention that the belief wasn’t a delusion in the formal sense of the word. It didn’t cause the subjects any distress and was limited to the unique materials used in the study. Still, it provided a model delusion that scientists Katharina Schmack, Philipp Sterzer, and colleagues could study to investigate the interplay of perception and belief in healthy subjects. The experiment is quite involved, so I’ll stick to the coolest and most relevant details.

As I mentioned in my last post, delusions are not exclusive to people suffering from psychosis. Many people who are free of any diagnosable mental illness still have a tendency to develop them, although the frequency and severity of these delusions differ across individuals. There are some good reasons to conduct studies like this one on healthy people rather than psychiatric patients. Healthy subjects are a heck of a lot easier to recruit, easier to work with, and less affected by confounding factors like medication and stress.

Schmack, Sterzer, and colleagues designed their experiment to test the idea that delusions arise from two distinct but related processes. First, a person experiences perceptual disturbances. According to the group’s model, these disturbances actually reflect poor expectation signals as the brain processes information from the senses. In theory, these poor signals would make irrelevant or commonplace sights, sounds, and sensations seem surprising and important. Without an explanation for this unexpected weirdness, the individual comes up with a delusion to make sense of it all. Once the delusion is in place, so-called higher areas of the brain (those that do more complex things like ponder, theorize, and believe) generate new expectation signals based on the delusion. These signals feed back on so-called lower sensory areas and actually bias the person’s perception of the outside world based on the delusion. According to the authors, this would explain why people become so convinced of their delusions: they are constantly perceiving confirmatory evidence. Strangely enough, this model sounds like a paranoid delusion in its own right. Various regions of your brain may be colluding to fool your senses into making you believe a lie!

To test the idea, the experimenters first had to toy with their subjects’ senses. They did so by capitalizing on a quirk of the visual system: that when people are shown two conflicting images separately to their two eyes, they don’t perceive both images at once. Instead, perception alternates between the two. In the first part of this experiment, the two images were actually movies of moving dots that appeared to form a 3-D sphere spinning either to the left (for one eye) or to the right (for the other). For this ambiguous visual condition, subjects were equally likely to see a sphere spinning to the right or to the left at any given moment in time, with it switching direction periodically.

Now the experimenters went about planting the fake belief. They gave the subjects a pair of transparent glasses and told them that the lenses contained polarizing filters that would make the sphere appear to spin more in one of the two directions. In fact, the lenses were made of simple plastic and could do no such thing. Once the subjects had the glasses on, the experimenters began showing the same movie to both eyes. While this change allowed the scientists to control exactly what the subjects saw, the subjects had no idea that the visual setup had changed. In this unambiguous condition, all subjects saw a sphere that alternated direction (just as the ambiguous sphere had done), except that this sphere spun far more in one of the two directions. This visual trick, paired with the story about polarized lenses, was meant to make subjects believe that the glasses caused the change in perception.

After that clever setup, the scientists were ready to see how the model delusion would affect each subject’s actual perception. While the subjects continued to wear the glasses, they were shown the two original, conflicting movies, one to each eye. In the first part of the experiment, this ambiguous condition caused subjects to see a rotating sphere that alternated equally between spinning to the left and right. But if their new belief about the glasses biased their perception of the spinning sphere, they would now report seeing the sphere spin more often in the belief-consistent direction.

What happened? Subjects did see the sphere spin more in the belief-consistent direction. While the effect was small, it was still impressive that the belief could bias perception at all, considering the simplicity of the images. They also found that each subject’s delusional conviction score (how convinced they were by their delusional thoughts in everyday life) correlated with this effect. The more the subject believed her real-life delusional thoughts, the more her belief about the glasses affected her perception of the ambiguous spinning sphere.
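
A back-of-the-envelope way to express that relationship: for each subject, take the fraction of ambiguous-condition viewing time spent seeing the sphere spin in the "glasses" direction (0.5 means no bias), then correlate that bias with the delusional conviction score. The numbers below are invented for illustration, and the correlation function requires Python 3.10 or later.

from statistics import correlation  # Pearson's r; available in Python 3.10+

# Invented per-subject values (illustration only)
belief_consistent_fraction = [0.52, 0.55, 0.50, 0.58, 0.61, 0.53]  # time seeing the "glasses" direction
delusional_conviction = [12, 25, 8, 40, 55, 20]                    # everyday delusional conviction score

bias = [f - 0.5 for f in belief_consistent_fraction]  # 0 means perception was unbiased
print(f"Pearson r = {correlation(bias, delusional_conviction):.2f}")
# A positive r mirrors the reported pattern: the stronger a person's everyday
# delusional conviction, the more the planted belief skewed what they saw.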

But there’s a hitch. What if subjects were reporting the motion bias because they thought that was what they were supposed to see and not because they actually saw it? To answer this question, they recruited a new batch of participants and ran the experiment again in a scanner using fMRI.

Since the subjects’ task hinged on motion perception, Sterzer and colleagues first looked at the activity in a brain area called MT that processes visual motion. By analyzing the patterns of fMRI activity in this area, the scientists confirmed that subjects were accurately reporting the motion they perceived. That may sound far-fetched, but this kind of ‘mind reading’ with fMRI has been done quite successfully for basic visual properties like motion.
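
For the curious, that kind of "mind reading" usually means multivariate pattern analysis: train a classifier on MT voxel patterns from trials where the perceived direction is known, then test whether it predicts perceived direction on held-out trials. Below is a minimal, hypothetical sketch with simulated data and scikit-learn, not the study's actual pipeline.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Simulated data: 80 trials x 50 MT voxels with a weak direction-dependent signal
n_trials, n_voxels = 80, 50
percept = np.repeat([0, 1], n_trials // 2)  # 0 = leftward spin, 1 = rightward spin
direction_pattern = rng.normal(size=n_voxels)
voxels = np.outer(percept - 0.5, direction_pattern) + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Cross-validated decoding accuracy; ~0.50 would mean no readable direction signal in MT
accuracy = cross_val_score(LinearSVC(dual=False), voxels, percept, cv=5).mean()
print(f"Decoding accuracy: {accuracy:.2f}")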

The group also studied activity throughout the brain while their glasses-wearing subjects learned the false belief (unambiguous condition) and allowed the false belief to more or less affect their perception (ambiguous condition). They found that belief-based perceptual bias correlated with activity in the left orbitofrontal cortex, a region just behind the eyes that is involved in decision-making and expectation. In essence, subjects with more activity in this region during both conditions tended to also report lopsided spin directions that confirmed their expectations during the ambiguous condition. And here’s the cherry on top: subjects with higher delusional conviction scores appeared to have greater communication between left orbitofrontal cortex and motion-processing area MT during the ambiguous visual condition. Although fMRI can’t directly measure communication between areas and can’t tell us the direction of communication, this pattern suggests that the left orbitofrontal cortex may be directly responsible for biasing motion perception in delusion-prone subjects.

All told, the results of the experiment seem to tell a neat story that fits the authors’ model about delusions. Yet there are a couple of caveats worth mentioning. First, the key finding of their study – that a person’s delusional conviction score correlates with his or her belief-based motion perception bias – is built upon a quirky and unnatural aspect of human vision that may or may not reflect more typical sensory processes. Second, it’s hard to say how clinically relevant the results are. No one knows for certain if delusions arise by the same neural mechanisms in the general population as they do in patients with illnesses like schizophrenia. It has been argued that they probably do because the same risk factors pop up for patients as for non-psychotic people with delusions: unemployment, social difficulties, urban surroundings, mood disturbances and drug or alcohol abuse. Then again, this group is probably also at the highest risk for getting hit by a bus, dying from a curable disease, or suffering any number of misfortunes that disproportionately affect people in vulnerable circumstances. So the jury is still out on the clinical applicability of these results.

Despite the study’s limitations, it was brilliantly designed and tells a compelling tale about how the brain conspires to manipulate perception based on beliefs. It also implicates a culprit in this neural conspiracy. Dare I say ringleader? Mastermind? Somebody cue the close up of orbitofrontal cortex cackling and stroking a cat.

_____

Photo credit: Daniel Horacio Agostini (dhammza) on Flickr, used through Creative Commons license

Schmack K, Gòmez-Carrillo de Castro A, Rothkirch M, Sekutowicz M, Rössler H, Haynes JD, Heinz A, Petrovic P, & Sterzer P (2013). Delusions and the role of beliefs in perceptual inference. The Journal of Neuroscience, 33(34), 13701-13712. PMID: 23966692
