Can You Name That Scent?


We are great at identifying a color as blue versus yellow or a surface as scratchy, soft, or bumpy. But how well do we do with scents? Not so well, it turns out. If I presented you with several everyday scents, ranging from chocolate to sawdust to tea, you would probably name fewer than half of them correctly.

This sort of dismal performance is often chalked up to idiosyncrasies of the human brain. Compared to many mammals, we have paltry neural smelling machinery. To smell something, the smell receptors in your upper nasal cavity must detect a molecule and pass this information on to twin nubs that stick out of the brain. These nubs, called olfactory bulbs, carry out the earliest steps of scent processing in the brain. While the size of the human brain is impressive relative to our bodies, the human olfactory bulbs are nothing to brag about. Below, take a look at the honking olfactory bulbs (relative to overall brain size) on the dog. Compared to theirs, our bulbs look like a practical joke.


Human (upper) and dog (lower) brain photos indicating the olfactory bulb and tract. From International Journal of Morphology (Kavoi & Jameela, 2011).

It’s easy to blame our bulbs for our smelling deficiencies. Indeed, many scientists have offered brain-based explanations for our shortcomings in the smell department. But could there be more to the story? What if we are hamstrung by a lackluster odor vocabulary? After all, we use abstract, categorical words to identify colors (e.g., blue), shapes (round), and textures (rough), but we generally identify odors by their specific sources. You might say: “This smells like coffee,” or “I detect a hint of cinnamon,” or offer up a subjective judgment like, “That smells gross.” We lack a descriptive, abstract vocabulary for scents. Could this fact account for some of our smell shortcomings?

Linguists Asifa Majid and Niclas Burenhult tackled this question by studying a group of people with a smell vocabulary quite unlike our own. The Jahai are a relatively small group of hunter-gatherers who live in Malaysia and Thailand. They use their sense of smell often in everyday life and their native language (also called Jahai) includes many abstract words for odors. Check out the first two columns in the table below for several examples of abstract odor words in Jahai.


Table from Majid & Burenhult (2014) in Cognition providing Jahai odor and color words, as well as their rough translations into English.

Majid and Burenhult tested whether Jahai speakers and speakers of American English could effectively and consistently name scents in their respective native languages. They stacked the deck in favor of the Americans by using odors that are all commonplace for Americans, while many are unfamiliar to the Jahai. The scents were: cinnamon, turpentine, lemon, smoke, chocolate, rose, paint thinner, banana, pineapple, gasoline, soap, and onion. For a comparison, they also asked both groups to name a range of color swatches.

The researchers published their findings in a recent issue of Cognition. As expected, English speakers used abstract descriptions for colors but largely source-based descriptions for scents. Their responses differed substantially from one person to the next on the odor task, while they were relatively consistent on the color task. Their answers were also nearly five times longer for the odor task than for the color task, because English speakers floundered and tried to describe individual scents in more than one way. For example, here’s how one English speaker struggled to describe the cinnamon scent:

“I don’t know how to say that, sweet, yeah; I have tasted that gum like Big Red or something tastes like, what do I want to say? I can’t get the word. Jesus it’s like that gum smell like something like Big Red. Can I say that? Ok. Big Red. Big Red gum.”


Figure from Majid & Burenhult (2014) comparing the “codability” (consistency) and abstract versus source-based responses from Americans and Jahai.

Now compare that with Jahai speakers, who gave slightly shorter responses to name odors than to name colors and used abstract descriptors 99% of the time for both tasks. They were equally consistent at naming both colors and scents. And, if anything, this study probably underestimated the odor-naming consistency of the Jahai because many of the scents used in the test were unfamiliar to them.

The performance of the Jahai proves that odors are not inherently inexpressible, either by virtue of their diversity or the human brain’s inability to do them justice. As the authors state in the paper’s abstract and title, odors are expressible in language, as long as you speak the right language.

Yet this discovery is not the final word either. The differences between Americans and Jahai don’t end with their vocabularies. The Jahai participants in the study use their sense of smell every day for foraging (their primary source of income). Presumably, their language contains a wealth of odor words because of the integral role this sense plays in their lives. While Americans and other westerners are surrounded by smells, few of us rely on them for our livelihood, safety, or well-being. Thanks to the adaptive nature of brain organization, there may be major differences in how Americans and Jahai represent odors in the brain. In fact, I’d wager that there are. Neuroscience studies have shown time and again that training and experience have very real effects on how the brain represents information from the senses.

As with all scientific discoveries, answers raise new questions. Is it the Jahai vocabulary that allows the Jahai to consistently identify and categorize odors? Or is it their lifelong experience and expertise that gave rise to their vocabulary and, separately, trained their brains in ways that alter their experience of odor? If someone magically endowed English speakers with the power to speak Jahai, would they have the smelling abilities to put its abstract odor words to use?

Would a rose by any other name smell as Itpit? The answer awaits the linguist, neuroscientist, or psychologist who is brave and clever enough to sniff it out.

Asifa Majid, Niclas Burenhult (2014). Odors are expressible in language, as long as you speak the right language. Cognition, 130(2), 266-270. DOI: 10.1016/j.cognition.2013.11.004

Photograph by Dennis Wong, used via Creative Commons license

Perfect Pitch Redux


I can just hear the advertisement now.

Do you have perfect pitch? Would you like to? Then Depakote might be right for you . . .

Perfect pitch is the ability to name or produce a musical note without a reference note. While most children presumably have the capacity to learn perfect pitch, only about one in ten thousand adults can actually do it. That’s because developing it requires extensive musical training in early childhood; most adults with perfect pitch began studying music at six years of age or younger. By the time children turn nine, their window to learn perfect pitch has already closed. They may yet blossom into wonderful musicians, but they will never be able to count perfect pitch among their talents.

Or might they after all?

Well no, probably not. But a new study, published in Frontiers in Systems Neuroscience, has opened the door to such questions. Its authors tested how young men learned to name notes when they were on or off of a drug called valproate (brand name: Depakote). Valproate is widely used to treat epilepsy and bipolar disorder. It’s part of a class of drugs called histone-deacetylase, or HDAC, inhibitors that fiddle with how DNA is stored and alter how genes are read out and translated into proteins.

The intricacies of how HDAC inhibitors affect gene expression and how those changes reduce seizures and mania are still up in the air. But while some scientists have been working those details out, others have been noticing that HDAC inhibitors help old mice learn new tricks. These drugs allow adult mice to adapt to visual and auditory changes in ways that are only otherwise possible for juvenile mice. In other words, HDAC inhibitors allowed mice to learn things beyond the typical window, or critical period, in which the brain is capable of that specific type of learning.

Judit Gervain, Allan Young, and the other authors of the current study set out to test whether HDAC inhibitors can reopen a learning window in humans as well. They randomly assigned their young male subjects to take valproate for either the first or the second half of the study. (Although I usually get my hackles up about the exclusion of female participants from biomedical studies, I understand their reason for doing so in this case. Valproate can cause severe birth defects. By testing men, the authors could be one hundred percent certain that their participants weren’t pregnant.) The subjects took valproate for one half of the study and a placebo for the other half . . . and of course they weren’t told which was which.

During the first half of the study, they trained twenty-four participants to learn six pitch classes. Instead of teaching them the formal names of these pitches in the twelve-tone musical system, they assigned proper names to each one (e.g., Eric, Rachel, or Francine), indicating that each is the name of a person who only plays one pitch class. The participants received this training online for up to ten minutes daily for seven days. During the second half of the study, eighteen of the same subjects underwent the same training with six new pitch classes and names. At the end of each seven-day training session, they heard the six pitch classes one at a time and, for each, answered the question: “Who played that note?”


Study results showing better performance at naming tones for participants on valproate in the first half of the experiment. From: Gervain et al, 2013

The results? There was a whopping effect of treatment on performance in the first half of the study: the young men on valproate did significantly better than the men on placebo. That’s impressive, and all the more surprising because the participants received very little training. The online sessions summed to a mere seventy minutes, and some of the participants didn’t even complete all seven of the ten-minute sessions.

As cool as the main finding is, there are some odd aspects to the study. As you can see from the figure, the second half of the experiment (after the treatments were switched) doesn’t show the same result as the first. Here, participants on valproate perform no differently from those on placebo. The authors suggest that the training in the first half of the experiment interfered with learning in the second half – a plausible explanation (and one they might have predicted in advance). Still, at this point we can’t tell if we are looking at a case of proactive interference or a failure to replicate results. Only time and future experiments will tell.

There were two other odd aspects of the study that caught my eye. The authors used synthesized piano tones instead of pure tones because the former carry additional cues, like timbre, that help people without perfect pitch complete the task. They also taught the participants to associate each note with the name of the person who supposedly plays it rather than with the name of the actual note or some abstract stand-in identifier. Both choices make it easier for the participants to perform well on the task, but they call into question how similar the participants’ learning is to the specific phenomenon of perfect pitch. Perhaps the subjects on valproate in the first half of the experiment were relying on different cues (e.g., timbre instead of frequency). Likewise, associating proper names of people with notes may help subjects learn precisely because it recruits social processes and networks that people with perfect pitch don’t use for the task. If these social processes don’t have a critical period like perfect pitch judgment does, well then valproate might be boosting a very different kind of learning.

As the authors themselves point out, this small study is merely a “proof-of-concept,” albeit a dramatic one. It is not meant to be the final word on the subject. Still, I am curious to see where this leads. Might valproate’s success with seizures and mania have something to do with its ability to trigger new learning? And if HDAC inhibitors do alter the brain’s ability to learn skills that are typically crystallized by adulthood, how has that affected the millions of adults who have been taking these drugs for years? Yet again, only time and science will tell.

I, for one, will be waiting to hear what they have to say.


Photo credit: Brandon Giesbrecht on Flickr, used via Creative Commons license

Gervain J, Vines BW, Chen LM, Seo RJ, Hensch TK, Werker JF, & Young AH (2013). Valproate reopens critical-period learning of absolute pitch. Frontiers in Systems Neuroscience, 7 PMID: 24348349

Known Unknowns

Why no one can say exactly how much is safe to drink while pregnant


I was waiting in the dining car of an Amtrak train recently when I looked up and saw that old familiar sign:

“According to the Surgeon General, women should not drink alcoholic beverages during pregnancy because of the risk of birth defects.”

One finds this warning everywhere: printed on bottles and menus or posted on placards at restaurants and even train cars barreling through Midwestern farmland in the middle of the night. The warnings are, of course, intended to reduce the number of cases of fetal alcohol syndrome in the United States. To that end, the Centers for Disease Control and Prevention (CDC) and the American Congress of Obstetricians and Gynecologists (ACOG) recommend that women avoid drinking any alcohol throughout their pregnancies.

Here’s how the CDC puts it:

“There is no known safe amount of alcohol to drink while pregnant.”

And here’s ACOG’s statement in 2008:

“. . . ACOG reiterates its long-standing position that no amount of alcohol consumption can be considered safe during pregnancy.”

Did you notice what they did there? These statements don’t actually say that no amount of alcohol is safe during pregnancy. They say that no safe amount is known and that no amount can be considered safe, respectively. Ultimately, these are statements of uncertainty. We don’t know how much is safe to drink, so it’s best if you don’t drink any at all.

Lest you think this is merely a reflection of America’s puritanical roots, check out the recommendations of the U.K.’s National Health Service. While they make allowances for the fact that some women choose to drink, they still advise pregnant women to avoid alcohol altogether. As they say:

“If women want to avoid all possible alcohol-related risks, they should not drink alcohol during pregnancy because the evidence on this is limited.”

Yet it seems odd that the evidence is so limited. The damaging effects of binge drinking on fetal development were known in the 18th century and the first modern description of fetal alcohol syndrome was published in a French medical journal nearly 50 years ago. Six years later, in 1973, a group of researchers at the University of Washington documented the syndrome in The Lancet. Even then, people knew the cause of fetal alcohol syndrome: alcohol. And in the forty years since, fetal alcohol syndrome has become a well-known and well-studied illness. NIH alone devotes more than $30 million annually to research in the field. So how come no one has answered the most pressing question (at least for pregnant women): How much is safe to drink?

One reason is that fetal alcohol syndrome isn’t like HIV. You can’t diagnose it with a blood test. Doctors rely on a characteristic pattern of facial abnormalities, growth delays and neural or mental problems – often in addition to evidence of prenatal alcohol exposure – to diagnose a child. Yet children exposed to and affected by alcohol during fetal development don’t always show all of these symptoms. Doctors and agencies now define fetal alcohol syndrome as the extreme end of a spectrum of disorders caused by prenatal alcohol exposure. The full spectrum, called fetal alcohol spectrum disorders (FASD), includes milder forms of the illness that involve subtler cognitive or behavioral problems and lack the classic facial features of the full-blown syndrome.

As you might imagine, milder cases of FASD are hard to identify. Pediatricians can miss the signs altogether. And there’s a fundamental difficulty in diagnosing the mildest cases of FASD. To put it crudely, if your child is slow, who’s to say whether the culprit is a little wine during pregnancy, genetics, too much television, too few vegetables, or god-knows-what-else? Unfortunately, identifying and understanding the mildest cases is crucial. These are the cases that worry pregnant women who drink lightly. They lie at the heart of the uncertainty voiced by the CDC, ACOG, and others. Most pregnant women would like to enjoy the occasional merlot or Sam Adams, but not if they thought it would rob their children of IQ points or otherwise limit their abilities – even just a little – down the line.

While it’s hard to pin down the subtlest cases in the clinic, scientists can still detect them by looking for differences between groups of children with different exposures. The most obvious way of testing this would be to randomly assign pregnant women to drink alcohol at different doses, but of course that experiment would be unethical and should never be done. Instead, researchers capitalize on the variability in how much women choose to drink during pregnancy (or at least how much they report that they drank, which may not always be the same thing). In addition to interviewing moms about their drinking habits, the scientists test their children at different ages and look for correlations between prenatal alcohol exposure and test performance.

While essential, these studies can be messy and hard to interpret. When researchers do find correlations between moderate prenatal alcohol exposure and poor test performance, they can’t definitively claim that the former caused the latter (although it’s suggestive). A mysterious third variable (say, maternal cocaine use) might be responsible for them both. On the flip side, interpreting studies that don’t find correlations is even trickier. It’s hard to show that one thing doesn’t affect another, particularly when you are interested in very small effects. To establish this with any confidence, scientists must show that it holds with large numbers of people and that they are using the right outcome measure (e.g., IQ score). FASD impairments can span language, movement, math skills, goal-directed behaviors, and social interactions. Any number of measures from wildly different tests might be relevant. If a given study doesn’t find a correlation between prenatal alcohol exposure and an outcome measure, it might be because the study didn’t test enough children or didn’t choose the right test to pick up the subtle differences between groups.
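To get a feel for why small effects demand large samples, here’s a rough back-of-the-envelope calculation. It’s my illustration, not from any of the studies discussed, and it uses the standard normal-approximation formula for comparing two group means:

```python
from math import ceil

# Approximate number of children needed PER GROUP to detect a
# standardized effect size d (difference in means divided by the
# standard deviation) with a two-sided test at alpha = 0.05 and
# 80% power, via n = 2 * ((z_alpha + z_power) / d) ** 2.
Z_ALPHA = 1.96    # two-sided alpha = 0.05
Z_POWER = 0.8416  # 80% power

def n_per_group(d):
    return ceil(2 * ((Z_ALPHA + Z_POWER) / d) ** 2)

print(n_per_group(0.8))  # a large, obvious deficit: 25 per group
print(n_per_group(0.2))  # a subtle deficit: 393 per group
```

A subtle deficit demands more than an order of magnitude more children than a dramatic one, and that’s for a single outcome measure, which is part of why a null result from a modest study says so little.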

When studies in humans get tricky, scientists often turn to animal models. FASD research has been no exception. These animal studies have helped us understand the physiological and biochemical mechanisms behind fetal alcohol syndrome, but they can’t tell us how much alcohol a pregnant woman can safely drink. Alcohol metabolism rates vary quite a bit between species. The sensitivity of developing neurons to alcohol may differ too. One study used computational modeling to predict that the blood alcohol level of a pregnant rat must be 10 times that of a pregnant human to wreak the same neural havoc on the fetus. Yet computational models are far from foolproof. Scientists simply don’t know precisely how a dose in a rat, monkey, or other animal would translate to a human mother and fetus.

And here’s the clincher: alcohol’s prenatal effects also differ between humans. Thanks to genetic differences, people metabolize alcohol at very different rates. The faster a pregnant woman clears alcohol from her system, the lower the exposure to her fetus. Other factors make a difference, too. Prenatal alcohol exposure seems to take a heavier toll on the fetuses of older mothers. The same goes for poor mothers, probably because of confounding factors like nutrition and stress. Taken together, these differences mean that if two pregnant women drink the same amount of alcohol at the same time, their fetuses might experience very different alcohol exposures and have very different outcomes. In short, there is no single limit to how much a pregnant woman can safely drink because every woman and every pregnancy is different.

As organizations like the CDC point out, the surest way to prevent FASD is to avoid alcohol entirely while pregnant. Ultimately, every expecting mother has to make her own decision about drinking based on her own understanding of the risk. She may hear strong opinions from friends, family, the blogosphere and conventional media. Lots of people will seem sure of many things and those are precisely the people that she should ignore.

When making any important decision, it’s best to know as much as you can – even when that means knowing how much remains unknown.


Photo Credit: Uncalno Tekno on Flickr, used via Creative Commons license

Hurley TD, & Edenberg HJ (2012). Genes encoding enzymes involved in ethanol metabolism. Alcohol research : current reviews, 34 (3), 339-44 PMID: 23134050

Stoler JM, & Holmes LB (1999). Under-recognition of prenatal alcohol effects in infants of known alcohol abusing women. The Journal of Pediatrics, 135 (4), 430-6 PMID: 10518076

Pet Coke: Breakfast of Champions?


Pet coke piles in Chicago as of 10/19/13 by Josh Mogerman, used via Creative Commons license

This morning NPR informed me that petroleum coke and the Koch brothers have struck again – this time in my hometown of Chicago.

Petroleum coke, adorably nicknamed pet coke, made headlines this past summer when it was improperly stored by Koch Carbon and billowed into homes and neighborhoods in Detroit, where I currently live.

But wait. That’s a lie. I live just outside of Detroit in a wealthier suburb, just as I grew up outside of Chicago in a tree-lined college town. That makes a big difference. No one would dream of dumping three-story-high piles of industrial soot in my current backyard or the one I played in as a child. Those neighborhoods are simply too wealthy, too powerful, too ready and willing to sue.

Communities near the pet coke storage sites in both Detroit and Chicago are hurting financially. We all know about the struggles of bankrupt Detroit, where it takes about an hour for emergency workers to respond to the direst 911 calls. Southeast Chicago, once an industrial hub, has faced many of the same challenges as Detroit. This year, both areas became dumping grounds for increasing quantities of pet coke (in some cases, without a permit).

That increase in pet coke is due to ramped-up tar sands drilling in Canada. Pet coke is a byproduct of the tar sands refining process. Although it is too sooty to be burned for energy here in the United States, countries like Mexico and China will buy it to use as fuel. That means neighborhoods like South Deering in Chicago wind up serving as holding stations for pet coke while companies sell it internationally and arrange for its transport. In the meantime, the pet coke sits and waits in open-air piles, where strong gusts of wind kick up black plumes of dust that drift into neighborhoods and homes.

View of a pet coke plume from the Detroit piles on 7/27/13, via 3860remerson on YouTube

The residents of these neighborhoods have found black dust coating their floors, countertops, and even food. They describe it getting into their eyes, mouths, and lungs. I find these exposures alarming. But Laurie C. McCausland, who represents the Koch brothers’ interests as the deputy general counsel for Koch Companies Public Sector, thinks that’s silly. According to WBEZ, Chicago’s NPR affiliate, McCausland says that overall pet coke is safe. WBEZ quotes her as follows:

“It’s unfair for people to be overly scared about this product. I think people just don’t have a lot of information.”

In a letter to the editor of the Chicago Tribune, Jim Watson, the Executive Director of the Illinois Petroleum Council, expressed a similar sentiment. He wrote:

“Extensive testing has revealed that petcoke has no observed carcinogenic, reproductive, or developmental effects in humans and a low potential to cause adverse effects on aquatic or terrestrial environments.”

I was curious if these statements were true. Has pet coke been extensively studied? And is the health concern surrounding pet coke just an instance of misinformed scaremongering like the anti-vaccination movement? I headed over to PubMed, the U.S. government’s comprehensive catalog of scholarly papers in science, health, and medicine. I searched for “petroleum coke” and got 56 results. Most of these papers had to do with 1) nifty chemical reactions you can do with pet coke, 2) how pet coke affects aquatic life, and 3) the health of people who make pet coke react with other potentially hazardous compounds for a living. I came across only three titles that appeared to be specific and relevant: one that assessed correlations between pet coke exposure and lung cancer in petroleum workers and two that tested the effects of pet coke exposure in mammalian animal models.

The most recent was published in the International Journal of Toxicology this year. The authors include representatives from ExxonMobil (first author), the American Petroleum Institute, and Shell (last author). From what I can tell, the remaining authors work for contract research laboratories (as in, paid by the oil companies). Another paper was published in Occupational & Environmental Medicine in 2012 by authors at ExxonMobil (first author) and Imperial Oil, although this study at least included collaborators from actual universities including McGill (last author). A third paper, published in 1987 in the American Journal of Industrial Medicine, was penned by representatives of Standard Oil, the American Petroleum Institute (last author), and two contract laboratory companies (first author). Not surprisingly, none of these papers conclude that pet coke is especially hazardous.

Even if I missed one (or even ten!) relevant articles in my search, I think it’s safe to say that the research is anything but “extensive.” I haven’t yet combed through the three papers, nor am I the best person to evaluate their methods. Still, I do think it’s proper to question their impartiality and recommend that they be scrutinized by unbiased experts. We should also wonder if we are getting an accurate representation of such industry-funded research. When corporations and labs-for-hire come up with results they don’t like, they don’t have to (and often don’t) publish them. Yet when corporations do get a result that they like (for whatever reason, including a lack of statistical power), they are happy to publish it and thrust it into the hands of publicists and legal representatives like Ms. McCausland, who tell us not to be silly; pet coke’s perfectly safe. That bias alone throws off a fair evaluation of the issue.

Residents of Southeastern Chicago on the pet coke, via NRDCflix on YouTube

Modern (and ancient) history plays like a broken record of chemicals, compounds, and practices that were harmless until suddenly they weren’t. Shoe stores once had x-ray machines so you could see how well your shoes fit – or just stare at your wiggling toe bones. We’ve seen the rise and fall of lead paint, leaded gasoline, asbestos, and thalidomide, and now we’re learning about the dangers of plastics in our baby bottles and flame retardants in our cushions. There’s plenty of reason to suspect that pet coke exposure is no day at the health spa. Inhaled particulates irritate the airways and can, at the very least, exacerbate asthma and other respiratory illnesses. Analysis of the Detroit pet coke dust showed that it also contained the toxic elements vanadium and selenium, although it’s not clear whether residents were exposed at high enough levels to cause ill effects. (While we actually require trace amounts of selenium, exposure beyond those trace amounts is toxic.)

It seems to me that we need more information. We need impartial toxicologists, epidemiologists, and other specialists to pore over the papers published on the topic and start conducting unbiased experiments of their own. And while we wait, we need to protect the residents who live in the shadow of pet coke. Pet coke piles should be enclosed so that the dust can’t escape into communities, schools, and homes.

I find myself wondering how much faith people like Laurie McCausland, Jim Watson, and Charles Koch truly put in those industry-funded studies on pet coke. Would they be willing to move their families into a community coated with pet coke? Or is it only safe enough for those families who can’t afford to live elsewhere?

Between permit oversights and unlawful air pollution, the Koch brothers’ companies may already have broken the law. But if they are putting vulnerable people’s health and well-being at risk to make a buck? Well, that truly is criminal.

Schnatter AR, Nicolich MJ, Lewis RJ, Thompson FL, Dineen HK, Drummond I, Dahlman D, Katz AM, & Thériault G (2012). Lung cancer incidence in Canadian petroleum workers. Occupational and environmental medicine, 69 (12), 877-82 PMID: 23077208

McKee RH, Herron D, Beatty P, Podhasky P, Hoffman GM, Swigert J, Lee C, & Wong D (2013). Toxicological Assessment of Green Petroleum Coke. International journal of toxicology PMID: 24179031

fMR-Why? Bad Science Meets Chocolate and Body Envy


Imagine this: You have bulimia nervosa, a psychiatric condition that traps you in an unhealthy cycle of binge eating and purging. You’ve been recruited to participate in a functional MRI experiment on this devastating illness. As you lie in the scanner, you are shown pictures of pizza, chocolate and other high-calorie foods and you’re told to imagine eating them. You do this for 72 pictures of delicious, fatty foods. At other points in the experiment, you see pictures of bodies (sans heads) of models clipped from a women’s magazine. You are told to compare your body to each of the bodies in the pictures. You do this 72 times, once for each skinny (and probably retouched) model’s body. The experience would have been unsettling enough for normal women trying to eat healthier or feel happier with their not-so-super-model bodies. But for women with bulimia, it must have truly been a hoot and a half.

Luckily, the misery was worth it. When the researchers publish their findings, they claim to have shown that patients with bulimia process body images differently. In their conclusions, they say that their results can inform how psychotherapists should treat patients with the illness. They even suggest that it might someday lead to direct interventions, such as a targeted zap to the head using transcranial magnetic stimulation.

My recommendation? Cover your therapist’s ears and stay away from the head zapper. This study shows nothing of the sort.

Functional MRI is a widely used and quite powerful method of probing the brain, but it is only useful for experiments that are thoughtfully conceived and carefully interpreted. Unfortunately, many fMRI papers that make it to publication are neither.

One of the most common problems in fMRI is making bad comparisons. All fMRI studies rely on comparisons, because every brain is different and every scanner is different. If you are going to say that Region X becomes active when you see a picture of chocolate, you first have to answer that crucial question: compared to what? If you’re interested in how the brain reacts to unhealthy food in particular, you might compare looking at pictures of chocolate with looking at pictures of raisins or eggplant. And if you’re comparing these comparisons across subject groups (such as patients versus non-patients), both groups had better have the same control condition. Otherwise, you’re not even comparing apples to oranges. You’re comparing apples to gym socks.

Sadly, that is just what these experimenters did. They compared brain blood flow while the subjects looked either at junk food or at skinny women with blood flow during 36-second stretches of time when subjects just stared at a small, white ‘+’ on the screen. The authors say that using a more similar control condition (say, imagining using non-food objects like a lamp or a door) would be bad because patients with bulimia might respond to these objects differently than healthy subjects. This argument is nonsensical. There’s no reason to believe that people with bulimia feel any differently about doors or lamps than anyone else, but there’s plenty of reason to believe that they would spend those 36-second stretches of downtime, before or after comparing their bodies to the models’, either obsessing or trying not to obsess about how their bodies ‘measure up.’

In fact, I found myself wondering whether the authors originally intended to use this ‘+’ as the control condition at all. They actually had less crappy control conditions built into the experiment. As a control for imagining eating pizza and chocolate, the participants were also shown non-food objects like tools and told to imagine using them. They also saw interior décor photos and had to compare the furniture to that in their own homes – a control for comparing each model’s body to one’s own.

When the authors did their analyses using these (better) control conditions, they found very few differences between patients and non-patients. None, in fact, for the imagine-eating-junk-food portion of the study. For the comparing-oneself-to-models portion, they only found that patients showed less activation than controls in two regions of visual cortex. These regions may correspond to areas that specifically process body images. But would less activation in these regions mean that patients with bulimia process body images differently than other people? Not at all. If the patients were not looking at the pictures as much as non-patients or were more distracted/less attentive to them, you would see the same pattern of results. In short, the authors had no story to tell when they used the better controls. They had a ‘null result’ that would not get published.


Based on the design of their experiment, I find myself wondering whether this was how they originally intended to analyze their data.* It’s really the only sensible way to analyze these data. Experiments like these include the ‘+’ condition to establish a baseline (essentially, what you’re going to call ‘zero’). These ‘+’ blocks also help correct for an unfortunate phenomenon called scanner drift – a slow change in the measured signal over the course of a scan – that adds noise to the data.
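To make the baseline’s proper job concrete, here is a minimal sketch (invented numbers, not the authors’ pipeline) of how periodic ‘+’ blocks can anchor ‘zero’ and remove a slow linear drift from a time series:

```python
# Toy sketch of drift correction: fit a slow linear trend to the
# baseline ('+') samples and subtract it from the whole time series,
# so the baseline defines 'zero'. All data here are fabricated.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Fake scan: the true signal is 0 everywhere, but the scanner adds
# a linear drift of 0.1 units per time point.
times = list(range(10))
measured = [0.1 * t for t in times]

# Baseline '+' blocks bookend the run.
baseline_times = [0, 1, 8, 9]
slope, intercept = fit_line(baseline_times,
                            [measured[t] for t in baseline_times])

# Subtract the estimated drift from every time point.
corrected = [y - (slope * t + intercept) for t, y in zip(times, measured)]
# After correction, the drifting 'signal' sits at ~0 throughout the run.
```

Used this way, the ‘+’ blocks quietly serve the analysis; used as the comparison condition itself, they smuggle in whatever the subjects happened to be thinking during the downtime.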

I have to wonder if the authors decided to use the ‘+’ for their comparisons because they didn’t get any exciting results with the actual control conditions. If so, the maneuver unfortunately worked. Using the baseline condition, they found two differences between patient and non-patient activations in the food task and even more differences between the groups in the body task. Ultimately, the authors got their significant results and they got them published. But those differences have nothing to do with the causes of bulimia and everything to do with what flits through people’s minds while they stare at a plus sign.

Unfortunately, this is just one example from a growing sea of bad fMRI studies out there. And while many people do wonderful work with the technique and advance the field, others do it a disservice and set us all back. From researchers to reviewers, publishers, science writers and reporters, we all need to proceed with caution and evaluate papers with a critical eye. The participants in our experiments deserve it. The public deserves it. Most of all, patients deserve the best information we can give them. Science done well and served to them straight.

Update: I’ve made a few small changes to this post to clarify my intent. I don’t personally know the study’s authors and have no insight into their actions, intentions, or motivations. In writing the piece, I hoped to bring attention to a widespread problem in fMRI research. Of the study’s authors I can only say that they did some seriously flawed research. Why, when, or how is as much your guess as mine.

Since posting this piece, I’ve contacted the editor of BMC Psychiatry regarding my concerns with the paper. Not only have I received no reply from her, but this paper is still listed as one of the ‘Editor’s Picks’ on their website as of 1/5/14.


*For curious fMRI folk: each run contained 6 food/body blocks, 6 non-food/décor blocks, and only 3 baseline ‘+’ blocks. That means they collected twice as much data for the control conditions they supposedly didn’t intend to use as for the one they did.

Photo #1 credit: MRI scanner, photo by Matthias Weinberger (cszar on Flickr), used via Creative Commons license

Photo #2 credit: Structural MRI of kiwi fruit by Dom McIntyre (McBadger on Flickr), used via Creative Commons license

Van den Eynde F, Giampietro V, Simmons A, Uher R, Andrew CM, Harvey PO, Campbell IC, & Schmidt U (2013). Brain responses to body image stimuli but not food are altered in women with bulimia nervosa. BMC Psychiatry, 13 (1) PMID: 24238299


The Trouble with (and without) Fish


This week I’m posting a piece from my archives (August, 2011) that I’ve updated a little. Two things brought this post to mind: 1) the recent EPA report that women have become better informed about mercury and are making better choices at the fish counter and 2) remarkable updates from my scientist friend who is blogging her way through the world’s oceans as she collects water samples to catalog mercury levels around the globe. Both demonstrate that we are making some progress in studying and alerting people to the mercury in our waters and our fish. NB: when I say “now that I’m pregnant,” it’s 2011 me talking.


Once upon a time in a vast ocean, life evolved. And then, over many millions of years, neurons and spinal cords and eyes developed, nourished all the while in a gentle bath of nutrients and algae.

Our brains and eyes are distant descendants of those early nervous systems formed in the sea. And even though our ancestors eventually sprouted legs and waddled out of the ocean, the neural circuitry of modern humans is still dependent on certain nutrients that their water-logged predecessors had in abundance.

This obscure fact about a distant evolution has recently turned into a major annoyance for me now that I’m pregnant. In fact, whether they know it or not, all pregnant women are trapped in a no-win dilemma over what they put into their stomachs. Take, for instance, a popular guidebook for pregnant women. On one page, it advocates eating lots of seafood while pregnant, explaining that fish contain key nutrients that the developing eyes and brain of the fetus will need. A few pages later, however, the author warns that seafood contains methylmercury, a neurotoxic pollutant, and that fish intake should be strictly curtailed. What is a well-meaning pregnant lady to do?

On a visceral level, nothing sounds worse than poisoning your child, so many women reduce their seafood intake while pregnant. I have spoken with women who cut all seafood out of their diet while pregnant, for fear that a little exposure could prove to be too much. They had good reason to be worried. Extreme methylmercury poisoning episodes in Japan and Iraq in past decades have shown that excessive methylmercury intake during pregnancy can cause developmental delays, deafness, blindness, and seizures in the babies exposed.

But what happens if pregnant women eliminate seafood from their diet altogether? Without careful supplementation of vital nutrients found in marine ecosystems, children face neural setbacks or developmental delays on a massive scale. Consider deficiencies in iodine, a key nutrient readily found in seafood. Its scarcity in the modern land-based diet was causing mental retardation in children – and sparked the creation of iodized salt (salt supplemented with iodine) to ensure that the nutritional need was met.


Perhaps the hardest nutrient to get without seafood is an omega-3 fatty acid known as DHA. In recent years, scientists have learned that this particular fatty acid is essential for proper brain development and functioning, yet it is almost impossible to get from non-aquatic dietary sources. At the grocery store, you’ll find vegetarian products that claim to fill those needs by supplying the biochemical precursor to DHA (found in flaxseed, walnuts, and soybean oils), but it’s not clear that the precursor will do the trick. Our bodies take a while to synthesize DHA from its precursor. In fact, we may burn much of the precursor for energy before we manage to convert it to DHA.

The best way for pregnant women to meet the needs of their growing babies is to eat food from marine sources. Yet thanks to global practices of burning coal and disposing of industrial and medical waste, any seafood women eat will expose their offspring to some amount of methylmercury. There’s no simple solution to this problem, although studies suggest that child outcomes are best when women consume ample seafood while avoiding species with higher levels of methylmercury (such as shark, tilefish, walleye, pike, and some types of tuna). It also matters where the fish was caught. Mercury levels will be higher in fish from mercury-polluted waters – one of the reasons that it’s important to catalog mercury levels around the globe.

Unless we start cleaning up our oceans, pregnant women will continue to face this awful decision each time they sit down at the dinner table. Far worse, we may face future generations with lower IQs and developmental delays regardless of which choice their mothers make. Thanks to shoddy environmental oversight, we may be saddling our children with brains that don’t work as well as our own. And that is something I truly can’t swallow.


Photo credits:

Photo 1: by Gideon (malias) on Flickr, used via Creative Commons license

Photo 2: by @Doug88888 on Flickr, used via Creative Commons license

Outsourcing Memory


Do you rely on your spouse to remember special events and travel plans? Your coworker to remember how to submit some frustrating form? Your cell phone to store every phone number you’ll ever need? Yeah, me too. You might call this time saving or delegating, but if you were a fancy psychologist you’d call it transactive memory.

Transactive memory is a wonderful concept. There’s too much information in this world to know and remember. Why not store some of it in “the cloud” that is your partner or coworker’s brain or in “the cloud” itself, whatever and wherever that is? The idea of transactive memory came from the innovative psychologist Daniel Wegner, most recently of Harvard, who passed away in July of this year. Wegner proposed the idea in the mid-80s and framed it in terms of the “intimate dyad” – spouses or other close couples who know each other very well over a long period of time.

Transactive memory between partners can be a straightforward case of cognitive outsourcing. I remember monthly expenses and you remember family birthdays. But it can also be a subtler and more interactive process. For example, one spouse remembers why you chose to honeymoon at Waikiki and the other remembers which hotel you stayed in. If the partners try to recall their honeymoon together, they can produce a far richer description of the experience than if they were to try separately.

Here’s an example from a recent conversation with my husband. It began when my husband mentioned that a Red Sox player once asked me out.

“Never happened,” I told him. And it hadn’t. But he insisted.

“You know, years ago. You went out on a date or something?”

“Nope.” But clearly he was thinking of something specific.

I thought really hard until a shred of a recollection came to me. “I’ve never met a Red Sox player, but I once met a guy who was called up from the farm team.”

My husband nodded. “That guy.”

But what interaction did we have? I met the guy nine years ago, not long before I met my husband. What were the circumstances? Finally, I began to remember. It wasn’t actually a date. We’d gone bowling with mutual friends and formed teams. The guy – a pitcher – was intensely competitive and I was the worst bowler there. He was annoyed that I was ruining our team score and I was annoyed that he was taking it all so seriously. I’d even come away from the experience with a lesson: never play games with competitive athletes.

Apparently, I’d told the anecdote to my husband after we met and he remembered a nugget of the story. Even though all of the key details from that night were buried somewhere in my brain, I’m quite sure that I would never have remembered them again if not for my husband’s prompts. This is a facet of transactive memory, one that Wegner called interactive cueing.

In a sense, transactive memory is a major benefit of having long-term relationships. Sharing memory, whether with a partner, parent, or friend, allows you to index or back up some of that memory. This fact also underscores just how much you lose when a loved one passes away. When you lose a spouse, a parent, a sibling, you are also losing part of yourself and the shared memory you have with that person. After I lost my father, I noticed this strange additional loss. I caught myself wondering when I’d stopped writing stories on his old typewriter. I realized I’d forgotten parts of the fanciful stories he used to tell me on long drives. I wished I could ask him to fill in the blanks, but of course it was too late.

Memories can be shared with people, but they can also be shared with things. If you write in a diary, you are storing details about current experiences that you can access later in life. No spouse required. You also upload memories and information to your technological gadgets. If you store phone numbers in your cell phone and use bookmarks and autocomplete tools in your browser, you are engaging in transactive memory. You are able to do more while remembering less. It’s efficient, convenient, and downright necessary in today’s world of proliferating numbers, websites, and passwords.

In 2011, a Science paper described how people create transactive memory with online search engines. The study, authored by Betsy Sparrow, Jenny Liu, and Wegner, received plenty of attention at the time.

In one experiment, they asked participants either hard or easy questions and then had them do a modified Stroop task that involved reporting the physical color of a written word rather than naming the word. This was a measure of priming, essentially whether a participant has been thinking about that word or similar concepts recently. Sometimes the participants were tested with the names of online search engines (Google, Yahoo) and at others they were tested with other name brands (Nike, Target). After hard questions, the participants took much longer to do the Stroop task with Google and Yahoo than with the other brand names, suggesting that hard questions made them automatically think about searching the Internet for the answer.
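The logic of that priming measure boils down to a simple comparison of reaction times. Here is a toy sketch with fabricated numbers (not the study’s data) of how such an effect would be quantified:

```python
# Toy sketch of the Stroop priming logic (made-up reaction times,
# not the study's data): if hard questions prime 'search engine'
# concepts, color-naming should be slower for search-engine brand
# words than for other brand words.

from statistics import mean

# Hypothetical color-naming reaction times (ms) after hard questions.
rt_after_hard = {
    "Google": [720, 735, 741],   # primed words interfere with color-naming
    "Nike":   [655, 662, 650],   # unprimed brand words
}

priming_effect = mean(rt_after_hard["Google"]) - mean(rt_after_hard["Nike"])
# A positive difference is taken as evidence that the hard questions
# activated search-related concepts.
```

In the actual study, of course, the effect was established statistically across many participants and trials; this sketch only shows the shape of the comparison.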


The other experiments described in the paper showed that people are less likely to remember trivia if they believe they will be able to look it up later. When participants thought that items of trivia were saved somewhere on a computer, they were also more likely to remember where the items were saved than they were to remember the actual trivia items themselves. Together, the study’s findings suggest that people actively outsource memory to their computers and to the Internet. This will come as no surprise to those of us who can’t remember a single phone number offhand, don’t know how to get around without the GPS, and hop on our smartphones to answer the simplest of questions.

Search engines, computer atlases, and online databases are remarkable things. In a sense, we’d be crazy not to make use of them. But here’s the rub: the Internet is jam-packed with misinformation or near-miss information. Anti-vaxxers, creationists, global warming deniers: you can find them all on the web. And when people want the definitive answer, they almost always find themselves at Wikipedia. While Wikipedia has valuable information, it is not written and curated by experts. It is not always the God’s honest truth and it is not a safe replacement for learning and knowing information ourselves. Of course, the memories of our loved ones aren’t foolproof either, but at least they don’t carry the aura of authority that comes with a list of citations.

Speaking of which. There is now a Wikipedia page for “The Google Effect” that is based on the 2011 Science article. A banner across the top shows an open book featuring a large question mark and the following warning: “This article relies largely or entirely upon a single source. . . . Please help improve this article by introducing citations to additional sources.” The citation for the first section is a dead link. The last section has two placeholders for citations, but in lieu of numbers they say, “According to whom?”

Folks, if that ain’t a reminder to be wary of outsourcing your brain to Google and Wikipedia, I don’t know what is.


Photo credits:

1. Photo by Mike Baird on Flickr, used via Creative Commons license

2. Figure from “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips” by Betsy Sparrow, Jenny Liu, and Daniel M. Wegner.

Sparrow B, Liu J, & Wegner DM (2011). Google effects on memory: cognitive consequences of having information at our fingertips. Science (New York, N.Y.), 333 (6043), 776-8 PMID: 21764755
