How People Tawk Affects How Well You Listen


People from different places speak differently – that we all know. Some dialects and accents are considered glamorous or authoritative, while others carry a definite social stigma. Speakers with a New York City dialect have even been known to enroll in speech therapy to lessen their ‘accent’ and avoid prejudice. Recent research indicates that they have good reason to be worried. It now appears that the prestige of people’s dialects can fundamentally affect how you process and remember what they say.

Meghan Sumner is a psycholinguist (not to be confused with a psycho linguist) at Stanford who studies the interaction between talker variation and speech perception. Together with Reiko Kataoka, she recently published a fascinating if troubling paper in the Journal of the Acoustical Society of America. The team conducted two separate experiments with undergraduates who speak standard American English (what you hear from anchors on the national nightly news). They had the undergraduates listen to words spoken by female speakers of 1) standard American English, 2) standard British English, or 3) the New York City dialect. Standard American English is a rhotic dialect, which means that its speakers pronounce the final –r in words like finger. Speakers of both British English and the New York City dialect drop that final –r sound, but one is a standard dialect that’s considered prestigious and the other is not. I bet you can guess which is which.

In their first experiment, Sumner and Kataoka tested how the dialect of spoken words affected semantic priming, an indication of how deeply the undergraduate listeners processed the words. The listeners first heard a word ending in –er (e.g., slender) pronounced by one of the three different female speakers. After a very brief pause, they saw a written word (say, thin) and had to make a judgment about the written word. If they had processed the spoken word deeply, it should have brought related words to mind and allowed them to respond to a question about a related written word faster. The results? The listeners showed semantic priming for words spoken in standard American English but not in the New York City dialect. That’s not too surprising. The listeners might have been thrown off by the dropped r or simply the fact that the word was spoken in a less familiar dialect than their own. But here’s the wild part: the listeners showed as much semantic priming for standard British English as they did for standard American English. Clearly, there’s something more to this story than a missing r.
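To make the priming measure concrete, here’s a toy sketch of how a priming effect is computed from reaction times. The numbers are invented for illustration; they are not the study’s data.

```python
# Toy illustration of a semantic priming effect. A spoken prime that was
# processed deeply should speed up responses to a related written word.
# All reaction times (ms) below are invented for illustration.

related_rts = [520, 545, 510, 530, 525]    # e.g., heard "slender", then saw THIN
unrelated_rts = [585, 600, 570, 590, 595]  # e.g., heard "slender", then saw DOOR

def mean(xs):
    return sum(xs) / len(xs)

# A positive difference means the related prime sped listeners up.
priming_effect = mean(unrelated_rts) - mean(related_rts)
print(f"Priming effect: {priming_effect:.0f} ms")
```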

In their second experiment, a new set of undergraduates with a standard American English dialect listened to sets of related words, each read by one of the speakers of the same three dialects: standard American, British, or NYC. Each set of words (say, rest, bed, dream, etc.) excluded a key related word (in this case, sleep). The listeners were then asked to list all of the words they remembered hearing. This is a classic task that consistently generates false memories. People tend to remember hearing the related lure (sleep) even though it wasn’t in the original set. In this experiment, listeners remembered about the same number of actual words from the sets regardless of dialect, indicating that they listened and understood the words irrespective of speaker. Yet listeners falsely recalled more lures for the word sets read by the NYC speaker than by either the standard American or British speakers.
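Here’s a toy version of the scoring in this kind of false-memory task. The rest/bed/dream/sleep example comes from the paragraph above; the extra list words are my own filler, not the study’s stimuli.

```python
# Toy scoring for a false-memory (lure) task: count how many presented
# words a listener recalls and whether the never-presented lure sneaks in.

STUDY_LIST = {"rest", "bed", "dream", "pillow", "night"}  # pillow/night are filler
LURE = "sleep"  # related word that was never actually played

def score_recall(recalled):
    recalled = {w.lower() for w in recalled}
    true_recalls = len(recalled & STUDY_LIST)
    false_memory = LURE in recalled
    return true_recalls, false_memory

# A listener who recalls three real words plus the lure:
print(score_recall(["bed", "dream", "night", "sleep"]))  # (3, True)
```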


Figure from Sumner & Kataoka (2013) showing more false recalls for lists spoken in a NYC dialect than for those spoken in standard American or British dialects.

The authors offer an explanation for the two findings. On some level, the listeners are paying less attention to the words spoken with a NYC dialect. In fact, decreased attention has been shown to both decrease semantic priming and increase the generation of false memories in similar tasks. In another paper, Sumner and her colleague Arthur Samuel showed that both people with a standard American dialect and those with a NYC dialect later remembered –er words better when they had originally heard them in a standard American dialect than in a NYC dialect. These results would also fit with the idea that speakers of standard American (and even speakers with a NYC dialect) do not pay as much attention to words spoken with a NYC dialect.

In fact, Sumner and colleagues recently published a review of a comprehensive theory based on a string of their findings. They suggest that we process the social features of speech sounds at the very earliest stages of speech perception and that we rapidly and automatically determine how deeply we will process the input according to its ‘social weight’ (read: the prestige of the speaker’s dialect). They present this theory in neutral, scientific terms, but it essentially means that we access our biases and prejudices toward certain dialects as soon as we listen to speech and we use this information to at least partially ‘tune out’ people who speak in a stigmatized way.

If true, this theory could apply to other dialects that are associated with low socioeconomic status or groups that face discrimination. Here in the United States, we may automatically devalue or pay less attention to people who speak with an African American Vernacular dialect, a Boston dialect, or a Southern drawl. It’s a troubling thought for a nation founded on democracy, regional diversity, and freedom of speech. Heck, it’s just a troubling thought.

_____

Photo credit: Melvin Gaal, used via Creative Commons license

Sumner M, & Kataoka R (2013). Effects of phonetically-cued talker variation on semantic encoding. Journal of the Acoustical Society of America. DOI: 10.1121/1.4826151

Sumner M, Kim SK, King E, & McGowan KB (2014). The socially weighted encoding of spoken words: a dual-route approach to speech perception. Frontiers in Psychology. DOI: 10.3389/fpsyg.2013.01015

Talking Thoughts with Tots: Post on Mind Matters

Hey there, friends. I recently contributed a post to the online Scientific American column Mind Matters. The piece is about how children develop the ability to contemplate, predict, and communicate other people’s thoughts and beliefs. You can read it here. Come for the new research findings, stay for the somewhat eerie revelation that babies as young as 10 months are predicting your thoughts and expectations.

Can You Name That Scent?


We are great at identifying a color as blue versus yellow or a surface as scratchy, soft, or bumpy. But how well do we do with scents? Not so well, it turns out. If I presented you with several everyday scents, ranging from chocolate to sawdust to tea, you would probably name fewer than half of them correctly.

This sort of dismal performance is often chalked up to idiosyncrasies of the human brain. Compared to many mammals, we have paltry neural smelling machinery. To smell something, the smell receptors in your upper nasal cavity must detect a molecule and pass this information on to twin nubs that stick out of the brain. These nubs are called olfactory bulbs and they carry out the earliest steps of scent processing in the brain. While the size of the human brain is impressive with respect to our bodies, the human olfactory bulbs are nothing to brag about. Below, take a look at the honking olfactory bulbs (relative to overall brain size) on the dog. Compared to theirs, our bulbs look like a practical joke.


Human (upper) and dog (lower) brain photos indicating the olfactory bulb and tract. From International Journal of Morphology (Kavoi & Jameela, 2011).

It’s easy to blame our bulbs for our smelling deficiencies. Indeed, many scientists have offered brain-based explanations for our shortcomings in the smell department. But could there be more to the story? What if we are hamstrung by a lackluster odor vocabulary? After all, we use abstract, categorical words to identify colors (e.g., blue), shapes (round), and textures (rough), but we generally identify odors by their specific sources. You might say: “This smells like coffee,” or “I detect a hint of cinnamon,” or offer up a subjective judgment like, “That smells gross.” We lack a descriptive, abstract vocabulary for scents. Could this fact account for some of our smell shortcomings?

Linguists Asifa Majid and Niclas Burenhult tackled this question by studying a group of people with a smell vocabulary quite unlike our own. The Jahai are a relatively small group of hunter-gatherers who live in Malaysia and Thailand. They use their sense of smell often in everyday life and their native language (also called Jahai) includes many abstract words for odors. Check out the first two columns in the table below for several examples of abstract odor words in Jahai.


Table from Majid & Burenhult (2014) in Cognition providing Jahai odor and color words, as well as their rough translations into English.

Majid and Burenhult tested whether Jahai speakers and speakers of American English could effectively and consistently name scents in their respective native languages. They stacked the deck in favor of the Americans by using odors that are all commonplace for Americans, while many are unfamiliar to the Jahai. The scents were: cinnamon, turpentine, lemon, smoke, chocolate, rose, paint thinner, banana, pineapple, gasoline, soap, and onion. For a comparison, they also asked both groups to name a range of color swatches.

The researchers published their findings in a recent issue of Cognition. As expected, English speakers used abstract descriptions for colors but largely source-based descriptions for scents. Their responses differed substantially from one person to the next on the odor task, while they were relatively consistent on the color task. Their answers were also nearly five times longer for the odor task than for the color task. That’s because English speakers struggled and tried to describe individual scents in more than one way. For example, here’s how one English speaker struggled to describe the cinnamon scent:

“I don’t know how to say that, sweet, yeah; I have tasted that gum like Big Red or something tastes like, what do I want to say? I can’t get the word. Jesus it’s like that gum smell like something like Big Red. Can I say that? Ok. Big Red. Big Red gum.”


Figure from Majid & Burenhult (2014) comparing the “codability” (consistency) and abstract versus source-based responses from Americans and Jahai.

Now compare that with Jahai speakers, who gave slightly shorter responses to name odors than to name colors and used abstract descriptors 99% of the time for both tasks. They were equally consistent at naming both colors and scents. And, if anything, this study probably underestimated the odor-naming consistency of the Jahai because many of the scents used in the test were unfamiliar to them.
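How do you put a number on that consistency? One common agreement statistic for this kind of naming data is Simpson’s diversity index: the probability that two randomly chosen responses to the same stimulus match. Here’s a minimal sketch with made-up responses; I’m not claiming these were the paper’s exact labels or values.

```python
from collections import Counter

def simpson_diversity(responses):
    """Probability that two randomly drawn responses match.
    Higher values mean more consistent naming."""
    n = len(responses)
    counts = Counter(responses)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

# Hypothetical names for one scent from ten speakers of each language.
english = ["sweet", "gum", "Big Red", "cinnamon", "spicy",
           "candy", "cinnamon", "potpourri", "sweet", "red hots"]
jahai = ["odor_term_A"] * 8 + ["odor_term_B"] * 2  # mostly one abstract term

print(round(simpson_diversity(english), 2))  # low agreement (~0.04)
print(round(simpson_diversity(jahai), 2))    # high agreement (~0.64)
```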

The performance of the Jahai proves that odors are not inherently inexpressible, whether by virtue of their diversity or the human brain’s inability to do them justice. As the authors state in the paper’s abstract and title, odors are expressible in language, as long as you speak the right language.

Yet this discovery is not the final word either. The differences between Americans and Jahai don’t end with their vocabularies. The Jahai participants in the study use their sense of smell every day for foraging (their primary source of income). Presumably, their language contains a wealth of odor words because of the integral role this sense plays in their lives. While Americans and other westerners are surrounded by smells, few of us rely on them for our livelihood, safety, or well-being. Thanks to the adaptive nature of brain organization, there may be major differences in how Americans and Jahai represent odors in the brain. In fact, I’d wager that there are. Neuroscience studies have shown time and again that training and experience have very real effects on how the brain represents information from the senses.

As with all scientific discoveries, answers raise new questions. Is it the Jahai vocabulary that allows the Jahai to consistently identify and categorize odors? Or is it their lifelong experience and expertise that gave rise to their vocabulary and, separately, trained their brains in ways that alter their experience of odor? If someone magically endowed English speakers with the power to speak Jahai, would they have the smelling abilities to put its abstract odor words to use?

Would a rose by any other name smell as Itpit? The answer awaits the linguist, neuroscientist, or psychologist who is brave and clever enough to sniff it out.

Majid A, & Burenhult N (2014). Odors are expressible in language, as long as you speak the right language. Cognition, 130 (2), 266-270. DOI: 10.1016/j.cognition.2013.11.004

Photograph by Dennis Wong, used via Creative Commons license

Perfect Pitch Redux


I can just hear the advertisement now.

Do you have perfect pitch? Would you like to? Then Depakote might be right for you . . .

Perfect pitch is the ability to name or produce a musical note without a reference note. While most children presumably have the capacity to learn perfect pitch, only one in about ten thousand adults can actually do it. That’s because children must receive extensive musical training as youngsters to develop it. Most adults with perfect pitch began studying music at six years of age or younger. By the time children turn nine, their window to learn perfect pitch has already closed. They may yet blossom into wonderful musicians but they will never be able to count perfect pitch among their talents.

Or might they after all?

Well no, probably not. But a new study, published in Frontiers in Systems Neuroscience, has opened the door to such questions. Its authors tested how young men learned to name notes when they were on or off of a drug called valproate (brand name: Depakote). Valproate is widely used to treat epilepsy and bipolar disorder. It’s part of a class of drugs called histone-deacetylase, or HDAC, inhibitors that fiddle with how DNA is stored and alter how genes are read out and translated into proteins.

The intricacies of how HDAC inhibitors affect gene expression and how those changes reduce seizures and mania are still up in the air. But while some scientists have been working those details out, others have been noticing that HDAC inhibitors help old mice learn new tricks. These drugs allow adult mice to adapt to visual and auditory changes in ways that are only otherwise possible for juvenile mice. In other words, HDAC inhibitors allowed mice to learn things beyond the typical window, or critical period, in which the brain is capable of that specific type of learning.

Judit Gervain, Allan Young, and the other authors of the current study set out to test whether HDAC inhibitors can reopen a learning window in humans as well. They randomly assigned their young male subjects to take valproate for either the first or the second half of the study. (Although I usually get my hackles up about the exclusion of female participants from biomedical studies, I understand their reason for doing so in this case. Valproate can cause severe birth defects. By testing men, the authors could be one hundred percent certain that their participants weren’t pregnant.) The subjects took valproate for one half of the study and a placebo for the other half . . . and of course they weren’t told which was which.

During the first half of the study, they trained twenty-four participants to learn six pitch classes. Instead of teaching them the formal names of these pitches in the twelve-tone musical system, they assigned proper names to each one (e.g., Eric, Rachel, or Francine), indicating that each is the name of a person who only plays one pitch class. The participants received this training online for up to ten minutes daily for seven days. During the second half of the study, eighteen of the same subjects underwent the same training with six new pitch classes and names. At the end of each seven-day training session, they heard the six pitch classes one at a time and, for each, answered the question: “Who played that note?”
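To picture the task, here’s a toy mock-up of one training trial. Eric, Rachel, and Francine are examples given in the text above; the remaining names and the specific pitch classes are placeholders I made up.

```python
import random

# Toy mock-up of a "Who played that note?" trial. In the real study a
# synthesized piano tone would be played; here we just print the pitch.
PITCH_TO_NAME = {"C": "Eric", "D": "Rachel", "E": "Francine",
                 "F#": "Tom", "G#": "Alice", "A#": "Ben"}  # placeholder mapping

def run_trial():
    pitch, name = random.choice(list(PITCH_TO_NAME.items()))
    answer = input(f"(a {pitch} tone plays) Who played that note? ")
    correct = answer.strip().lower() == name.lower()
    print("Correct!" if correct else f"Nope, that was {name}.")
    return correct

if __name__ == "__main__":
    score = sum(run_trial() for _ in range(6))
    print(f"{score}/6 correct")
```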


Study results showing better performance at naming tones for participants on valproate in the first half of the experiment. From: Gervain et al, 2013

The results? There was a whopping effect of treatment on performance in the first half of the study. The young men on valproate did significantly better than the men on placebo. That’s pretty cool and amazing. It is particularly impressive and surprising because the participants received very little training. The online training summed to a mere seventy minutes and some of the participants didn’t even complete all seven of the ten-minute sessions.

As cool as the main finding is, there are some odd aspects to the study. As you can see from the figure, the second half of the experiment (after the treatments were switched) doesn’t show the same result as the first. Here, participants on valproate perform no differently from those on placebo. The authors suggest that the training in the first half of the experiment interfered with learning in the second half – a plausible explanation (and one they might have predicted in advance). Still, at this point we can’t tell if we are looking at a case of proactive interference or a failure to replicate results. Only time and future experiments will tell.

There were two other odd aspects of the study that caught my eye. The authors used synthesized piano tones instead of pure tones because the former carry additional cues like timbre that help people without perfect pitch complete the task. They also taught the participants to associate each note with the name of the person who supposedly plays it rather than the name of the actual note or some abstract stand-in identifier. Both choices make it easier for the participants to perform well on the task but call into question how similar the participants’ learning is to the specific phenomenon of perfect pitch. Perhaps the subjects on valproate in the first half of the experiment were relying on different cues (e.g., timbre instead of frequency). Likewise, associating proper names of people with notes may help subjects learn precisely because it recruits social processes and networks that people with perfect pitch don’t use for the task. If these social processes don’t have a critical period like perfect pitch judgment does, well then valproate might be boosting a very different kind of learning.

As the authors themselves point out, this small study is merely a “proof-of-concept,” albeit a dramatic one. It is not meant to be the final word on the subject. Still, I am curious to see where this leads. Might valproate’s success with seizures and mania have something to do with its ability to trigger new learning? And if HDAC inhibitors do alter the brain’s ability to learn skills that are typically crystallized by adulthood, how has that affected the millions of adults who have been taking these drugs for years? Yet again, only time and science will tell.

I, for one, will be waiting to hear what they have to say.

_______

Photo credit: Brandon Giesbrecht on Flickr, used via Creative Commons license

Gervain J, Vines BW, Chen LM, Seo RJ, Hensch TK, Werker JF, & Young AH (2013). Valproate reopens critical-period learning of absolute pitch. Frontiers in Systems Neuroscience, 7. PMID: 24348349

Known Unknowns

Why no one can say exactly how much is safe to drink while pregnant


I was waiting in the dining car of an Amtrak train recently when I looked up and saw that old familiar sign:

“According to the Surgeon General, women should not drink alcoholic beverages during pregnancy because of the risk of birth defects.”

One finds this warning everywhere: printed on bottles and menus or posted on placards at restaurants and even train cars barreling through Midwestern farmland in the middle of the night. The warnings are, of course, intended to reduce the number of cases of fetal alcohol syndrome in the United States. To that end, the Centers for Disease Control and Prevention (CDC) and the American Congress of Obstetricians and Gynecologists (ACOG) recommend that women avoid drinking any alcohol throughout their pregnancies.

Here’s how the CDC puts it:

“There is no known safe amount of alcohol to drink while pregnant.”

And here’s ACOG’s statement in 2008:

“. . . ACOG reiterates its long-standing position that no amount of alcohol consumption can be considered safe during pregnancy.”

Did you notice what they did there? These statements don’t actually say that no amount of alcohol is safe during pregnancy. They say that no safe amount is known and that no amount can be considered safe, respectively. Ultimately, these are statements of uncertainty. We don’t know how much is safe to drink, so it’s best if you don’t drink any at all.

Lest you think this is merely a reflection of America’s puritanical roots, check out the recommendations of the U.K.’s National Health Service. While they make allowances for the fact that some women choose to drink, they still advise pregnant women to avoid alcohol altogether. As they say:

“If women want to avoid all possible alcohol-related risks, they should not drink alcohol during pregnancy because the evidence on this is limited.”

Yet it seems odd that the evidence is so limited. The damaging effects of binge drinking on fetal development were known in the 18th century and the first modern description of fetal alcohol syndrome was published in a French medical journal nearly 50 years ago. Six years later, in 1973, a group of researchers at the University of Washington documented the syndrome in The Lancet. Even then, people knew the cause of fetal alcohol syndrome: alcohol. And in the forty years since, fetal alcohol syndrome has become a well-known and well-studied illness. NIH alone devotes more than $30 million annually to research in the field. So how come no one has answered the most pressing question (at least for pregnant women): How much is safe to drink?

One reason is that fetal alcohol syndrome isn’t like HIV. You can’t diagnose it with a blood test. Doctors rely on a characteristic pattern of facial abnormalities, growth delays and neural or mental problems – often in addition to evidence of prenatal alcohol exposure – to diagnose a child. Yet children exposed to and affected by alcohol during fetal development don’t always show all of these symptoms. Doctors and agencies now define fetal alcohol syndrome as the extreme end of a spectrum of disorders caused by prenatal alcohol exposure. The full spectrum, called fetal alcohol spectrum disorders (FASD), includes milder forms of the illness that involve subtler cognitive or behavioral problems and lack the classic facial features of the full-blown syndrome.

As you might imagine, milder cases of FASD are hard to identify. Pediatricians can miss the signs altogether. And there’s a fundamental difficulty in diagnosing the mildest cases of FASD. To put it crudely, if your child is slow, who’s to say whether the culprit is a little wine during pregnancy, genetics, too much television, too few vegetables, or god-knows-what-else? Unfortunately, identifying and understanding the mildest cases is crucial. These are the cases that worry pregnant women who drink lightly. They lie at the heart of the uncertainty voiced by the CDC, ACOG, and others. Most pregnant women would like to enjoy the occasional merlot or Sam Adams, but not if they thought it would rob their children of IQ points or otherwise limit their abilities – even just a little – down the line.

While it’s hard to pin down the subtlest cases in the clinic, scientists can still detect them by looking for differences between groups of children with different exposures. The most obvious way of testing this would be to randomly assign pregnant women to drink alcohol at different doses, but of course that experiment would be unethical and should never be done. Instead, researchers capitalize on the variability in how much women choose to drink during pregnancy (or at least how much they report that they drank, which may not always be the same thing). In addition to interviewing moms about their drinking habits, the scientists test their children at different ages and look for correlations between prenatal alcohol exposure and test performance.

While essential, these studies can be messy and hard to interpret. When researchers do find correlations between moderate prenatal alcohol exposure and poor test performance, they can’t definitively claim that the former caused the latter (although it’s suggestive). A mysterious third variable (say, maternal cocaine use) might be responsible for them both. On the flip side, interpreting studies that don’t find correlations is even trickier. It’s hard to show that one thing doesn’t affect another, particularly when you are interested in very small effects. To establish this with any confidence, scientists must show that it holds with large numbers of people and that they are using the right outcome measure (e.g., IQ score). FASD impairments can span language, movement, math skills, goal-directed behaviors, and social interactions. Any number of measures from wildly different tests might be relevant. If a given study doesn’t find a correlation between prenatal alcohol exposure and an outcome measure, it might be because the study didn’t test enough children or didn’t choose the right test to pick up the subtle differences between groups.
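To get a feel for the numbers, here’s a quick power calculation showing why tiny effects demand enormous samples. The effect sizes are illustrative, not estimates from the FASD literature.

```python
# How many children per group would a study need to reliably detect an
# effect of a given size? (80% power, two-sided alpha of 0.05)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.5, 0.2, 0.1):  # medium, small, and very small effects (Cohen's d)
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"Cohen's d = {d}: about {round(n)} children per group")
```

Roughly 64 children per group suffice for a medium effect, but a very small effect (d = 0.1) pushes the requirement past 1,500 per group, which is exactly the regime where subtle FASD impairments would live.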

When studies in humans get tricky, scientists often turn to animal models. FASD research has been no exception. These animal studies have helped us understand the physiological and biochemical mechanisms behind fetal alcohol syndrome, but they can’t tell us how much alcohol a pregnant woman can safely drink. Alcohol metabolism rates vary quite a bit between species. The sensitivity of developing neurons to alcohol may differ too. One study used computational modeling to predict that the blood alcohol level of a pregnant rat must be 10 times that of a pregnant human to wreak the same neural havoc on the fetus. Yet computational models are far from foolproof. Scientists simply don’t know precisely how a dose in a rat, monkey, or other animal would translate to a human mother and fetus.

And here’s the clincher: alcohol’s prenatal effects also differ between humans. Thanks to genetic differences, people metabolize alcohol at very different rates. The faster a pregnant woman clears alcohol from her system, the lower the exposure to her fetus. Other factors make a difference, too. Prenatal alcohol exposure seems to take a heavier toll on the fetuses of older mothers. The same goes for poor mothers, probably because of confounding factors like nutrition and stress. Taken together, these differences mean that if two pregnant women drink the same amount of alcohol at the same time, their fetuses might experience very different alcohol exposures and have very different outcomes. In short, there is no single limit to how much a pregnant woman can safely drink because every woman and every pregnancy is different.

As organizations like the CDC point out, the surest way to prevent FASD is to avoid alcohol entirely while pregnant. Ultimately, every expecting mother has to make her own decision about drinking based on her own understanding of the risk. She may hear strong opinions from friends, family, the blogosphere and conventional media. Lots of people will seem sure of many things and those are precisely the people that she should ignore.

When making any important decision, it’s best to know as much as you can – even when that means knowing how much remains unknown.

_____

Photo Credit: Uncalno Tekno on Flickr, used via Creative Commons license

Hurley TD, & Edenberg HJ (2012). Genes encoding enzymes involved in ethanol metabolism. Alcohol Research: Current Reviews, 34 (3), 339-44. PMID: 23134050

Stoler JM, & Holmes LB (1999). Under-recognition of prenatal alcohol effects in infants of known alcohol abusing women. The Journal of Pediatrics, 135 (4), 430-6. PMID: 10518076

Pet Coke: Breakfast of Champions?


Pet coke piles in Chicago as of 10/19/13 by Josh Mogerman, used via Creative Commons license

This morning NPR informed me that petroleum coke and the Koch brothers have struck again – this time in my hometown of Chicago.

Petroleum coke, adorably nicknamed pet coke, made headlines this past summer when it was improperly stored by Koch Carbon and billowed into homes and neighborhoods in Detroit, where I currently live.

But wait. That’s a lie. I live just outside of Detroit in a wealthier suburb, just as I grew up outside of Chicago in a tree-lined college town. That makes a big difference. No one would dream of dumping three-story-high piles of industrial soot in my current backyard or the one I played in as a child. Those neighborhoods are simply too wealthy, too powerful, too ready and willing to sue.

Communities near the pet coke storage sites in both Detroit and Chicago are hurting financially. We all know about the struggles of bankrupt Detroit, where it takes about an hour for emergency workers to respond to the direst 911 calls. Southeast Chicago, once an industrial hub, has faced many of the same challenges as Detroit. This year, both areas became dumping grounds for increasing quantities of pet coke (in some cases, without a permit).

That increase in pet coke is due to ramped-up tar sands drilling in Canada. Pet coke is a product of the tar sands refining process. Although it is too sooty to be used for energy here in the United States, countries like Mexico and China will buy it to use as fuel. That means neighborhoods like South Deering in Chicago wind up serving as holding stations for pet coke while companies sell it internationally and arrange for its transport. But this pet coke sits and waits in open-air piles. Strong gusts of wind cause black plumes of dust that travel into neighborhoods and homes.

View of a pet coke plume from the Detroit piles on 7/27/13, via 3860remerson on YouTube

The residents of these neighborhoods have found black dust coating their floors, countertops, and even food. They describe it getting into their eyes, mouths, and lungs. I find these exposures alarming. But Laurie C. McCausland, who represents the Koch brothers’ interests as the deputy general counsel for Koch Companies Public Sector, thinks that’s silly. According to WBEZ, Chicago’s NPR affiliate, McCausland says that overall pet coke is safe. WBEZ quotes her as follows:

“It’s unfair for people to be overly scared about this product. I think people just don’t have a lot of information.”

In a letter to the editor of the Chicago Tribune, Jim Watson, the Executive Director of the Illinois Petroleum Council, expressed a similar sentiment. He wrote:

“Extensive testing has revealed that petcoke has no observed carcinogenic, reproductive, or developmental effects in humans and a low potential to cause adverse effects on aquatic or terrestrial environments.”

I was curious if these statements were true. Has pet coke been extensively studied? And is the health concern surrounding pet coke just an instance of misinformed scaremongering like the anti-vaccination movement? I headed over to PubMed, the U.S. government’s comprehensive catalog of scholarly papers in science, health, and medicine. I searched for “petroleum coke” and got 56 results. Most of these papers had to do with 1) nifty chemical reactions you can do with pet coke, 2) how pet coke affects aquatic life, and 3) the health of people who make pet coke react with other potentially hazardous compounds for a living. I came across only three titles that appeared to be specific and relevant: one that assessed correlations between pet coke exposure and lung cancer in petroleum workers and two that tested the effects of pet coke exposure in mammalian animal models.
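If you’d like to poke around yourself, a search like mine can be scripted against PubMed’s E-utilities, for instance with Biopython. This is a sketch, assuming you have Biopython installed; the hit count will have drifted since I ran my search.

```python
# Query PubMed for "petroleum coke" via NCBI's E-utilities (Biopython).
from Bio import Entrez

Entrez.email = "you@example.com"  # NCBI asks for a contact address

handle = Entrez.esearch(db="pubmed", term='"petroleum coke"', retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} PubMed records match the query")
print(record["IdList"][:10])  # the first ten PubMed IDs
```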

The most recent was published in the International Journal of Toxicology this year. The authors include representatives from ExxonMobil (first author), the American Petroleum Institute, and Shell (last author). From what I can tell, the remaining authors work for contract research laboratories (as in, paid by the oil companies). Another paper was published in Occupational & Environmental Medicine in 2012 by authors at ExxonMobil (first author) and Imperial Oil, although this study at least included collaborators from actual universities including McGill (last author). A third paper, published in 1987 in the American Journal of Industrial Medicine, was penned by representatives of Standard Oil, the American Petroleum Institute (last author), and two contract laboratory companies (first author). Not surprisingly, none of these papers conclude that pet coke is especially hazardous.

Even if I missed one (or even ten!) relevant articles in my search, I think it’s safe to say that the research is anything but “extensive.” I haven’t yet combed through the three papers, nor am I the best person to evaluate their methods. Still, I do think it’s proper to question their impartiality and recommend that they be scrutinized by unbiased experts. We should also wonder if we are getting an accurate representation of such industry-funded research. When corporations and labs-for-hire come up with results they don’t like, they don’t have to (and often don’t) publish them. Yet when corporations do get a result that they like (for whatever reason, including a lack of statistical power), they are happy to publish it and thrust it into the hands of publicists and legal representatives like Ms. McCausland who tell us not to be silly; pet coke’s perfectly safe. That bias alone throws off a fair evaluation of the issue.

Residents of Southeastern Chicago on the pet coke, via NRDCflix on YouTube

Modern (and ancient) history plays like a broken record of chemicals, compounds, and practices that were harmless until suddenly they weren’t. Shoe stores once had x-ray machines so you could see how well your shoes fit – or just stare at your wiggling toe bones. We’ve seen the rise and fall of leaded paint and gasoline, asbestos, and thalidomide and now we’re learning about the dangers of plastics in our baby bottles and flame retardants in our cushions. There’s plenty of reason to suspect that pet coke exposure is no day at the health spa. Inhaled particulates irritate the airways and can, at the very least, exacerbate asthma and other respiratory illnesses. Analysis of the Detroit pet coke dust showed that it also contained the toxic elements vanadium and selenium, although it’s not clear whether residents were exposed at high enough levels to cause ill effects. (While we actually require trace amounts of selenium, further exposure is toxic.)

It seems to me that we need more information. We need impartial toxicologists, epidemiologists, and other specialists to pore over the papers published on the topic and start conducting unbiased experiments of their own. And while we wait, we need to protect the residents who live in the shadow of pet coke. Pet coke piles should be enclosed so that the dust can’t escape into communities, schools, and homes.

I find myself wondering how much faith people like Laurie McCausland, Jim Watson, and Charles Koch truly put in those industry-funded studies on pet coke. Would they be willing to move their families into a community coated with pet coke? Or is it only safe enough for those families who can’t afford to live elsewhere?

Between permit oversights and unlawful air pollution, the Koch brothers’ companies may already have broken the law. But if they are putting vulnerable people’s health and well-being at risk to make a buck? Well, that truly is criminal.
______

Schnatter AR, Nicolich MJ, Lewis RJ, Thompson FL, Dineen HK, Drummond I, Dahlman D, Katz AM, & Thériault G (2012). Lung cancer incidence in Canadian petroleum workers. Occupational and Environmental Medicine, 69 (12), 877-82. PMID: 23077208

McKee RH, Herron D, Beatty P, Podhasky P, Hoffman GM, Swigert J, Lee C, & Wong D (2013). Toxicological Assessment of Green Petroleum Coke. International Journal of Toxicology. PMID: 24179031

fMR-Why? Bad Science Meets Chocolate and Body Envy


Imagine this: You have bulimia nervosa, a psychiatric condition that traps you in an unhealthy cycle of binge eating and purging. You’ve been recruited to participate in a functional MRI experiment on this devastating illness. As you lie in the scanner, you are shown pictures of pizza, chocolate and other high-calorie foods and you’re told to imagine eating them. You do this for 72 pictures of delicious, fatty foods. At other points in the experiment, you see pictures of bodies (sans heads) of models clipped from a women’s magazine. You are told to compare your body to each of the bodies in the pictures. You do this 72 times, once for each skinny (and probably retouched) model’s body. The experience would have been unsettling enough for normal women trying to eat healthier or feel happier with their not-so-super-model bodies. But for women with bulimia, it must have truly been a hoot and a half.

Luckily, the misery was worth it. In the published paper, the researchers claim to have shown that patients with bulimia process body images differently. In their conclusions, they say that their results can inform how psychotherapists should treat patients with the illness. They even suggest that the work might someday lead to direct interventions, such as a targeted zap to the head using transcranial magnetic stimulation.

My recommendation? Cover your therapist’s ears and stay away from the head zapper. This study shows nothing of the sort.

Functional MRI is a widely used and quite powerful method of probing the brain, but it is only useful for experiments that are thoughtfully conceived and carefully interpreted. Unfortunately, many fMRI papers that make it to publication are neither.

One of the most common problems in fMRI is making bad comparisons. All fMRI studies rely on comparisons because brains are all different and scanners are all different. If you are going to say that Region X becomes active when you see a picture of chocolate, you first have to answer that crucial question: compared to what? If you’re interested in how the brain reacts to unhealthy food in particular, you might compare looking at pictures of chocolate with looking at pictures of raisins or eggplant. And if you’re comparing these comparisons across subject groups (such as patients versus non-patients), both groups had better have the same control condition. Otherwise, you’re not even comparing apples to oranges. You’re comparing apples to gym socks.

Sadly, that is just what these experimenters did. They compared brain blood flow when the subjects looked either at junk food or skinny women with blood flow during 36-second stretches of time when subjects just stared at a small, white ‘+’ on the screen. The authors say that using a more similar control condition (say, imagining using non-food objects like a lamp or a door) would be bad because patients with bulimia might respond to these objects differently than healthy subjects. This argument is nonsensical. There’s no reason to believe that people with bulimia feel any differently about doors or lamps than anyone else, but there’s plenty of reason to believe that they would spend 36-second moments of downtime before or after comparing their bodies to those of models either obsessing or trying not to obsess about how their bodies ‘measure up.’
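Here’s a toy simulation of how a contaminated baseline can manufacture a group difference all by itself. Every number below is an invented percent-signal-change value for a single brain region, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20  # subjects per group

def group_contrast(task, control):
    """Mean task-minus-control activation across a group."""
    return float(np.mean(task - control))

# Suppose both groups respond identically to the body-image task...
patients_task = rng.normal(1.0, 0.2, n)
controls_task = rng.normal(1.0, 0.2, n)

# ...and identically to a matched control task (comparing furniture):
patients_matched = rng.normal(0.8, 0.2, n)
controls_matched = rng.normal(0.8, 0.2, n)

# But patients ruminate during the '+' baseline, elevating its signal:
patients_rest = rng.normal(0.6, 0.2, n)
controls_rest = rng.normal(0.2, 0.2, n)

print(group_contrast(patients_task, patients_matched),
      group_contrast(controls_task, controls_matched))  # roughly equal
print(group_contrast(patients_task, patients_rest),
      group_contrast(controls_task, controls_rest))     # spurious gap
```

The task responses are identical by construction; the only thing that differs is what happens during “rest,” yet the baseline comparison alone produces a group “difference.”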

In fact, I can’t help but wonder whether the authors originally intended to use this ‘+’ as the control condition at all. They actually had less crappy control conditions built into the experiment. As a control for imagining eating pizza and chocolate, the participants were also shown non-food objects like tools and told to imagine using them. They also saw interior décor photos and had to compare the furniture to that in their own homes – a control for comparing each model’s body to one’s own.

When the authors did their analyses using these (better) control conditions, they found very few differences between patients and non-patients. None, in fact, for the imagine-eating-junk-food portion of the study. For the comparing-oneself-to-models portion, they only found that patients showed less activation than controls in two regions of visual cortex. These regions may correspond to areas that specifically process body images. But would less activation in these regions mean that patients with bulimia process body images differently than other people? Not at all. If the patients were not looking at the pictures as much as non-patients or were more distracted/less attentive to them, you would see the same pattern of results. In short, the authors had no story to tell when they used the better controls. They had a ‘null result’ that would not get published.


Based on the design of their experiment, I find myself wondering if this was how they originally intended to analyze their data.* And it’s really the only sensible way to analyze these data. Experiments like these include the ‘+’ condition to establish a baseline (essentially, what you’re going to call ‘zero’). These ‘+’ blocks also correct for an unfortunate phenomenon called scanner drift that adds noise to the data.

I have to wonder if the authors decided to use the ‘+’ for their comparisons because they didn’t get any exciting results with the actual control conditions. If so, it unfortunately worked. Using the baseline condition, they found two differences between patient and non-patient activations in the food task and even more differences between the groups in the body task. Ultimately, the authors got their significant results and they got them published. But those differences have nothing to do with the causes of bulimia and everything to do with what flits through people’s minds while they stare at a plus sign.

Unfortunately, this is just one example from a growing sea of bad fMRI studies out there. And while many people do wonderful work with the technique and advance the field, others do it a disservice and set us all back. From researchers to reviewers, publishers, science writers and reporters, we all need to proceed with caution and evaluate papers with a critical eye. The participants in our experiments deserve it. The public deserves it. Most of all, patients deserve the best information we can give them. Science done well and served to them straight.
____

Update: I’ve made a few small changes to this post to clarify my intent. I don’t personally know the study’s authors and have no insight into their actions, intentions, or motivations. In writing the piece, I hoped to bring attention to a widespread problem in fMRI research. Of the study’s authors I can only say that they did some seriously flawed research. Why, when, or how is as much your guess as mine.

Since posting this piece, I’ve contacted the editor of BMC Psychiatry regarding my concerns with the paper. Not only have I received no reply from her, but this paper is still listed as one of the ‘Editor’s Picks’ on their website as of 1/5/14.

____

*For curious fMRI folk: each run contained 6 food/body blocks, 6 non-food/décor blocks, and only 3 baseline ‘+’ blocks. That means they collected twice as much data for the control conditions they supposedly didn’t intend to use as for the ones they did.

Photo #1 credit: MRI scanner, photo by Matthias Weinberger (cszar on Flickr), used via Creative Commons license

Photo #2 credit: Structural MRI of kiwi fruit by Dom McIntyre (McBadger on Flickr), used via Creative Commons license

Van den Eynde F, Giampietro V, Simmons A, Uher R, Andrew CM, Harvey PO, Campbell IC, & Schmidt U (2013). Brain responses to body image stimuli but not food are altered in women with bulimia nervosa. BMC Psychiatry, 13 (1). PMID: 24238299

