Zapping Brains, Seeing Scenes


More than fifteen years ago, neuroimagers found a region of the brain that seemed to be all about place. The region lies on the bottom surface of the temporal lobe near a fold called the parahippocampal gyrus, so it was called the parahippocampal place area, or PPA. You have two PPAs: one on the left side of your brain and one on the right. If you look at a picture of a house, an outdoor or indoor scene, or even an empty room, your PPAs will take notice. Since its discovery, hundreds of experiments have probed the place predilections of the PPA. Each time, the region demonstrated its dogged devotion to place. Less clear was exactly what type of scene information the PPA was representing and what it was doing with that information. A recent scientific paper now gives us a rare, direct glimpse at the inner workings of the PPA through the experience of a young man whose right PPA was stimulated with electrodes.

The young man in question wasn’t an overzealous grad student. He was a patient with severe epilepsy who was at the hospital to undergo brain surgery. When medications can’t bring a person’s seizures under control, surgery is one of the few remaining options. The surgery involves removing the portion of the brain in which that patient’s seizures begin. Of course, removing brain tissue is not something one does lightly. Before a surgery, doctors use various techniques to determine where each patient’s seizures originate and where crucial regions involved in language and movement are located. They do this so they will know which part of the brain to remove and which parts they must be sure not to remove. One way of mapping these areas before surgery is to open the patient’s skull, implant electrodes in his or her brain, and monitor brain activity at the various electrode sites. This technique, called electrocorticography, allows doctors both to record brain activity and to electrically stimulate the brain to map key areas. It is also the most powerful and direct look scientists can get into the human brain.

A group of researchers in New York headed by Ashesh Mehta and Pierre Mégevand documented the responses of the young man as they stimulated electrodes that were planted in and around his right PPA. During one stimulation, he described seeing a train station from the neighborhood where he lives. During another, he reported seeing a staircase and a closet stuffed with something blue. When they repeated the stimulation, he saw the same arbitrary indoor scene again. So stimulating the PPA can cause hallucinations of scenes both indoor and outdoor, familiar and unfamiliar. This suggests that specific scene representations in the brain may be both highly localized and complex. It is also just incredibly cool.

The doctor also stimulated an area involved in face processing and found that this made the patient see distortions in a face. Another study published in 2012 showed a similar effect in a different patient. While the patient looked at his doctor, the doctor stimulated the face area. As the patient reported, “You just turned into somebody else. Your face metamorphosed.” Here’s a link to a great video of that patient’s entire reaction and description.

The authors of the new study also stimulated a nearby region that had shown a complex response to both faces and scenes in previous testing. When they zapped this area, the patient saw something that made him chuckle. “I’m sorry. . . You all looked Italian. . . Like you were working in a pizza shop. That’s what I saw, aprons and whatnot. Yeah, almost like you were working in a pizzeria.”

Now wouldn’t we all love to know what that area does?

______

Photo credit: thisisbossi on Flickr, used via Creative Commons license

*In case you’re wondering, the patient underwent surgery and no longer suffers from seizures (although he still experiences auras).

Mégevand P, Groppe DM, Goldfinger MS, Hwang ST, Kingsley PB, Davidesco I, & Mehta AD (2014). Seeing scenes: topographic visual hallucinations evoked by direct electrical stimulation of the parahippocampal place area. The Journal of neuroscience : the official journal of the Society for Neuroscience, 34 (16), 5399-405 PMID: 24741031

The Slippery Question of Control in OCD


It’s nice to believe that you have control over your environment and your fate – that is until something bad happens that you’d rather not be responsible for. In today’s complex and interconnected world, it can be hard to figure out who or what causes various events to happen and to what degree you had a hand in shaping their outcomes. Yet in order to function, everyone has to create mental representations of causation and control. What happens when I press this button? Did my glib comment upset my friends? If I belch on the first date, will it scare her off?

People often believe they have more control over outcomes (particularly positive outcomes) than they actually do. Psychologists discovered this illusion of control in controlled experiments, but you can witness the same principle in many a living room now that March Madness is upon us. Of course, wearing your lucky underwear or sitting in your go-to La-Z-Boy isn’t going to help your team win the game, and the very idea that it might shows how easily one’s sense of personal control can become inflated. Decades ago, researchers discovered that the illusion of control is not universal. People suffering from depression tend not to fall for this illusion. That fact, along with similar findings, gave rise to the term depressive realism. Two recent studies now suggest that patients with obsessive-compulsive disorder (OCD) may also represent contingency and estimate personal control differently from the norm.

OCD is something of a paradox when it comes to the concept of control. The illness has two characteristic features: obsessions based on fears or regrets that occupy a sufferer’s thoughts and make him or her anxious, and compulsions, or repetitive and unnecessary actions that may or may not relieve the anxiety. For decades, psychiatrists and psychologists have theorized that control lies at the heart of this cycle. Here’s how the NIMH website on OCD describes it (emphasis is mine):

The frequent upsetting thoughts are called obsessions. To try to control them, a person will feel an overwhelming urge to repeat certain rituals or behaviors called compulsions. People with OCD can’t control these obsessions and compulsions. Most of the time, the rituals end up controlling them.

In short, their obsessions cause them distress and they perform compulsions in an effort to regain some sense of control over their thoughts, fears, and anxieties. Yet in some cases, compulsions (like sports fans’ superstitions) seem to indicate an inflated sense of personal control. Based on this conventional model of OCD, you might predict that people with the illness will either underestimate or overestimate their personal control over events. So which did the studies find? In a word: both.

The latest study, which appeared this month in Frontiers in Psychology, used a classic experimental design to study the illusion of control. The authors tested 26 people with OCD and 26 comparison subjects. The subjects were shown an image of an unlit light bulb and told that their goal was to illuminate the light bulb as often as possible. On each trial, they could choose to either press or not press the space bar. After they made their decision, the light bulb either did or did not light up. Their job was to estimate, based on their trial-by-trial experimentation, how much control they had over the light bulb. Here’s the catch: the subjects had absolutely no control over the light bulb, which lit up or remained dark according to a fixed sequence.*
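To make that setup concrete, here’s a minimal simulation of the task’s logic. This is a sketch, not the authors’ code; the trial count, probabilities, and function names are all illustrative. The point it demonstrates: whatever the subject does, the objective contingency between pressing and illumination, ΔP = P(light | press) − P(light | no press), hovers around zero.

```python
import random

def simulate_task(n_trials=40, p_light=0.5, seed=1):
    """Light-bulb task sketch: the bulb follows a predetermined sequence,
    so the subject's key presses have no effect on the outcome."""
    rng = random.Random(seed)
    sequence = [rng.random() < p_light for _ in range(n_trials)]  # fixed in advance
    presses = [rng.random() < 0.5 for _ in range(n_trials)]       # subject's choices

    lit_when_pressed = [lit for lit, pressed in zip(sequence, presses) if pressed]
    lit_when_not = [lit for lit, pressed in zip(sequence, presses) if not pressed]

    # Objective contingency (Delta-P); near zero because control is illusory
    return (sum(lit_when_pressed) / len(lit_when_pressed)
            - sum(lit_when_not) / len(lit_when_not))

print(simulate_task(n_trials=100_000))  # close to 0
```

ΔP is the standard objective yardstick in contingency-judgment experiments, which is what makes a control rating above zero an “illusion” in the first place.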

After 40 trials, subjects were asked to rate the degree of control they thought they had over the illumination of the light bulb, from 0 (no control) to 100 (complete control). Estimates of control were consistently higher for the comparison subjects than for the subjects with OCD. In other words, the people with OCD believed they had less control – and since no one actually had any control, they were also more accurate than the comparison subjects. As the paper points out, this is a limitation of the study: it can’t tell us whether patients are generally prone to underestimating their control over events or are simply more accurate than comparison subjects. To distinguish the two, the study would have needed to include situations in which subjects actually did have some degree of control over the outcomes.

Why wasn’t the light bulb study designed to distinguish between these alternatives? Because the authors were expecting the opposite result. They had designed their experiment to follow up on a 2008 study that found a heightened illusion of control among people with OCD. The earlier study used a different test: its authors showed subjects either neutral pictures of household items or disturbing pictures of distorted faces. The experimenters encouraged the subjects to try to control the presentation of images by pressing buttons on a keyboard and asked them to estimate their control over the images three times during the session. However, just as in the light bulb study, the presentation of the images was fixed in advance and could not be affected by the subjects’ button presses.

How can two studies of estimated control in OCD have opposite results? It seems that the devil is in the details. Prior studies with tasks like these have shown that healthy subjects’ control estimates depend on details like the frequency of the preferred outcome and whether the experimenter is physically in the room during testing. Mental illness throws additional uncertainty into the mix. For example, the disturbing face images in the 2008 study might have made the subjects with OCD anxious, which could have triggered a different cognitive pattern. Still, both findings suggest that control estimation is abnormal for people with OCD, possibly in complex and situation-dependent ways.

These and other studies indicate that decision-making and representations of causality in OCD are altered in interesting and important ways. A better understanding of these differences could help us understand the illness and, in the process, might even shed light on the minor rituals and superstitions that are common to us all. Sadly, like a lucky pair of underwear, it probably won’t help your team get to the Final Four.

_____

Photo by Olga Reznik on Flickr, used via Creative Commons license

*The experiment also manipulated reinforcement (how often the light bulb lit up) and valence (whether the lit bulb earned them money or the unlit bulb cost them money) across different testing sections, but I don’t go into that here because the manipulations didn’t affect the results.

Gillan CM, Morein-Zamir S, Durieux AM, Fineberg NA, Sahakian BJ, & Robbins TW (2014). Obsessive-compulsive disorder patients have a reduced sense of control on the illusion of control task. Frontiers in Psychology, 5 PMID: 24659974

In the Blink of an Eye


It takes around 150 milliseconds (or about one sixth of a second) to blink your eyes. In other words, not long. That’s why you say something happened “in the blink of an eye” when an event passed so quickly that you were barely aware of it. Yet a new study shows that humans can process pictures at speeds that make an eye blink seem like a screening of Titanic. What’s more, these results challenge a popular theory about how the brain creates your conscious experience of what you see.

To start, imagine your eyes and brain as a flight of stairs. I know, I know, but hear me out. Each step represents a stage in visual processing. At the bottom of the stairs you have the parts of the visual system that deal with the spots of darkness and light that make up whatever you’re looking at (let’s say an old family photograph). As you stare at the photograph, information about light and dark starts out at the bottom of the stairs in what neuroscientists call “low-level” visual areas like the retinas in your eyes and a swath of tissue tucked away at the very back of your brain called primary visual cortex, or V1.

Now imagine that the information about the photograph begins to climb our metaphorical neural staircase. Each time the information reaches a new step (a.k.a. visual brain area) it is transformed in ways that discard the details of light and dark and replace them with meaningful information about the picture. At one step, say, an area of your brain detects a face in the photograph. Higher up the flight, other areas might identify the face as your great-aunt Betsy’s, discern that her expression is sad, or note that she is gazing off to her right. By the time we reach the top of the stairs, the image is, in essence, a concept with personal significance. After it first strikes your eyes, it only takes visual information 100-150 milliseconds to climb to the top of the stairs, yet in that time your brain has translated a pattern of light and dark into meaning.

For many years, neuroscientists and psychologists believed that vision was essentially a sprint up this flight of stairs. You see something, you process it as the information moves to higher areas, and somewhere near the top of the stairs you become consciously aware of what you’re seeing. Yet intriguing results from patients with blindsight, along with other studies, seemed to suggest that visual awareness happens somewhere on the bottom of the stairs rather than at the top.

New, compelling demonstrations came from studies using transcranial magnetic stimulation, a method that can temporarily disrupt brain activity at a specific point in time. In one experiment, scientists used this technique to disrupt activity in V1 about 100 milliseconds after subjects looked at an image. At this point (100 milliseconds in), information about the image should already be near the top of the stairs, yet zapping lowly V1 at the bottom of the stairs interfered with the subjects’ ability to consciously perceive the image. From this and other studies, a new theory was born. In order to consciously see an image, visual information from the image that reaches the top of the stairs must return to the bottom and combine with ongoing activity in V1. This magical mixture of nitty-gritty visual details and extracted meaning somehow creates what we experience as visual awareness.

In order for this model of visual processing to work, you would have to look at the photo of Aunt Betsy for at least 100 milliseconds in order to be consciously aware of it (since that’s how long it takes for the information to sprint up and down the metaphorical flight of stairs). But what would happen if you saw Aunt Betsy’s photo for less than 100 milliseconds and then immediately saw a picture of your old dog, Sparky? Once Aunt Betsy made it to the top of the stairs, she wouldn’t be able to return to the bottom of the stairs because Sparky has taken her place. Unable to return to V1, Aunt Betsy would never make it to your conscious awareness. In theory, you wouldn’t know that you’d seen her at all.

Mary Potter and colleagues at MIT tested this prediction and recently published their results in the journal Attention, Perception, & Psychophysics. They showed subjects brief pictures of complex scenes including people and objects in a style called rapid serial visual presentation (RSVP). You can find an example of an RSVP image stream here, although the images in the demo are more racy and are shown for longer than the pictures in the Potter study.

The RSVP image streams in the Potter study were strings of six photographs shown in quick succession. In some image streams, pictures were each shown for 80 milliseconds (or about half the time it takes to blink). Pictures in other streams were shown for 53, 27, or 13 milliseconds each. To give you a sense of scale, 13 milliseconds is about one tenth of an eye blink, or one hundredth of a second. It is also far less time than Aunt Betsy would need to sprint to the top of the stairs, much less to return to the bottom.
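Those oddly specific durations likely come from the refresh rate of the display. On a 75 Hz monitor (an assumption about their hardware on my part, though it’s a common constraint in experiments like this), each refresh lasts 1000/75 ≈ 13.3 milliseconds, and a picture can only stay on screen for a whole number of refreshes:

```python
# On a 75 Hz display, each refresh lasts 1000 / 75 ≈ 13.33 ms, so stimulus
# durations come in whole-frame steps. The reported durations line up
# neatly with 1, 2, 4, and 6 frames.
FRAME_MS = 1000 / 75

for reported in (13, 27, 53, 80):
    frames = round(reported / FRAME_MS)
    print(f"{reported:2d} ms ≈ {frames} frame(s) = {frames * FRAME_MS:.1f} ms")
```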

At such short timescales, people can’t remember and report all of the pictures they see in an image stream. But are they aware of them at all? To test this, the scientists gave their subjects a written description of a target picture from the image stream (say, flowers) either just before the stream began or just after it ended. In either case, once the stream was over, the subject had to indicate whether an image fitting that description appeared in the stream. If it did appear, subjects had to pick which of two pictures fitting the description actually appeared in the stream.

Considering how quickly these pictures are shown, the task should be hard for people to do even when they know what they’re looking for. Why? Because “flowers” could describe an infinite number of photographs with different arrangements, shapes, and colors. Even when the subject is tipped off with the description in advance, he or she must process each photo in the stream well enough to recognize the meaning of the picture and compare it to the description. On top of that, this experiment effectively jams the metaphorical visual staircase full of images, leaving no room for visual info to return to V1 and create a conscious experience.

The situation is even more dire when people get the description of the target only after they’ve viewed the entire image stream. To answer correctly, subjects have to process and remember as many of the pictures from the stream as possible. None of this would be impressive under ordinary circumstances but, again, we’re talking 13 milliseconds here.

Sensitivity (computed from subject performance) on the RSVP image streams with 6 images. From Potter et al., 2013.


How did the subjects do? Surprisingly well. In all cases, they performed better than if they were randomly guessing – even when tested on the pictures shown for 13 milliseconds. In general, they scored higher when the pictures were shown longer. And as any test-taker could tell you, people do better when they know the test questions in advance. This pattern held up even when the scientists repeated the experiment with 12-image streams. As you might imagine, that makes for a very crowded visual staircase.
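A note on the figure: it reports sensitivity rather than raw accuracy. The paper’s exact computation isn’t spelled out here, but the standard signal-detection measure d′ – the z-transformed hit rate minus the z-transformed false-alarm rate – is computed like this (the rates below are made-up illustrations, not values from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms).
    Rates are clipped away from 0 and 1 to keep the z-transform finite."""
    z = NormalDist().inv_cdf
    clip = lambda r: min(max(r, 0.01), 0.99)
    return z(clip(hit_rate)) - z(clip(false_alarm_rate))

print(d_prime(0.5, 0.5))    # chance performance -> 0.0
print(d_prime(0.75, 0.25))  # well above chance -> about 1.35
```

The appeal of d′ over raw accuracy is that it separates genuine perceptual sensitivity from a subject’s bias toward answering “yes.”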

These results challenge the idea that visual awareness happens when information from the top of the stairs returns to V1. Still, they are by no means the theory’s death knell. It’s possible that the stairs are wider than we thought and that V1 is able (at least to some degree) to represent more than one image at a time. Another possibility is that the subjects in the study answered the questions using a vague sense of familiarity – one that might arise even if they were never overtly conscious of seeing the images. This is a particularly compelling explanation because there’s evidence that people process visual information like color and line orientation without awareness when late activity in V1 is disrupted. The subjects in the Potter study may have used this type of information to guide their responses.

However things ultimately shake out with the theory of visual awareness, I love that these intriguing results didn’t come from a fancy brain scanner or from the coils of a transcranial magnetic stimulation device. With a handful of pictures, a computer screen, and some good old-fashioned thinking, the authors addressed a potentially high-tech question in a low-tech way. It’s a reminder that fancy, expensive techniques aren’t the only way – or even necessarily the best way – to tackle questions about the brain. It also shows that findings don’t need colorful brain pictures or glow-in-the-dark mice in order to be cool. You can see in less than one-tenth of a blink of an eye. How frickin’ cool is that?

Photo credit: Ivan Clow on Flickr, used via Creative Commons license

Potter MC, Wyble B, Hagmann CE, & McCourt ES (2013). Detecting meaning in RSVP at 13 ms per picture. Attention, perception & psychophysics PMID: 24374558

The Changing Face of Science: Part One


While waiting for the L train to attend the American Association for the Advancement of Science (AAAS) meeting this week, I came upon Nicholas Kristof’s latest New York Times op-ed: “Professors, We Need You!” In his piece, Kristof portrays professors as out-of-touch intellectuals who study esoteric fields and hide their findings in impenetrable jargon. He also says that academia crushes rebels who communicate their science to the public. I admire Mr. Kristof for his efforts to bring awareness to injustices around the world and I agree that academic papers are often painful – if not impossible – to read. But my experience at the AAAS conference this week highlights how wrong he is, both in his depiction of academics and of the driving forces within academia itself.

AAAS is the organization behind Science magazine, ScienceNOW news, Science Careers, and the AAAS fellowship programs. Among the goals in its mission statement: to enhance communications among scientists, engineers, and the public; to provide a voice for science on societal issues; and to increase public engagement with science and technology. So yes, you would expect their conference to focus on science communication. Still, the social media sessions (Engaging with Social Media and Getting Started in Social Media) were full of scientists of all ages. Another well-attended session taught listeners how to use sites and services like Google Scholar, Mendeley, ORCID, and ResearchGate to improve the visibility of their work online.

Throughout the conference, scientists were live-tweeting interesting facts and commentary from the sessions they attended using the #AAASmtg hashtag. I saw a particularly wonderful example of this at a Saturday morning symposium called Building Babies. All five of the speakers at the symposium have accounts on Twitter and four of them were live-tweeting during each other’s presentations. Three of them (Kate Clancy, Julienne Rutherford, and Katie Hinde) also have popular blogs: Context and Variation, BANDIT, and Mammals Suck, respectively. After the symposium, Dr. Hinde compiled the symposium-related tweets on Storify.

I won’t claim that this panel of speakers is representative of scientists as a whole, but I do believe that they are representative of the direction in which scientists are moving. And contrary to Mr. Kristof’s claims, I would argue that their public visibility and embrace of online communication have probably helped rather than hindered their careers. Increased visibility can lead to more invitations to give talks, more coverage from the science press, and added connections outside of one’s narrow field of expertise. The first two of these can fill out a CV and attract positive public attention to a department, both pluses for a young academic who’s up for tenure. Moreover, while hiring and tenure decisions are made within departments, funding comes from organizations and institutions that typically value plain-speaking scientists who do research with societal relevance. For these reasons (and, I’m sure, others), it’s becoming obvious that scientists can benefit from clarity, accessibility, and visibility. In turn, many scientists are learning the necessary skills and making inroads to communicating with the public.

Of course, public visibility offers both promise and peril for scientists. As climate scientist and blogger Kim Cobb explained in her wonderful AAAS talk, scientists worry about appearing biased or unprofessional when they venture into the public conversation on social media. Science writer and former researcher Bethany Brookshire mentioned another potential peril: the fact that thoughtless or offensive off-the-cuff comments made on social media can come back to haunt scientists in their professional lives. It is also certainly true in academia (as it is in most spheres) that people are disdainful of peers who seem arrogant or overly self-promotional.

In short, scientists hoping to reach the public have their work cut out for them. They must learn how to talk about science in clear and comprehensible terms for non-scientists. They must be engaging yet appropriate in public forums and strike the right balance between public visibility and the hard-won research results to back up the attention they receive. They have good reason to tread carefully as they wade into the rapid waters of the Twitterverse, the blogosphere, and other wide-open forums. Yet wade in they do, all the same.

There have already been some great responses to Kristof’s call for professors. Political scientist Erik Voeten argued that many academics already engage the public in a variety of ways. Political scientist Corey Robin pointed out that the engagement of academics with the public is often stymied by a lack of time and funding. Academics are rarely paid for the time they spend communicating with the public and may need to concentrate their efforts on academic publications and grant applications because of the troubling job market and funding situation.

Still, many academics are ready to take the plunge and engage with the public. What they need is more training and guidance. Graduate programs should provide better training in writing and communicating science. Universities and societies should offer mentorship and seminars for scientists who want to improve the visibility of their research via the web. We need to have many more panels and discussions like the ones that took place at the AAAS meeting this week.

Oh, and while we’re at it: fewer misinformed, stereotypical descriptions of stodgy professors in ivory towers would be nice.

____

Photo credit: Ian Britton, used via Creative Commons license

Perfect Pitch Redux


I can just hear the advertisement now.

Do you have perfect pitch? Would you like to? Then Depakote might be right for you . . .

Perfect pitch is the ability to name or produce a musical note without a reference note. While most children presumably have the capacity to learn perfect pitch, only one in about ten thousand adults can actually do it. That’s because children must receive extensive musical training as youngsters to develop it. Most adults with perfect pitch began studying music at six years of age or younger. By the time children turn nine, their window to learn perfect pitch has already closed. They may yet blossom into wonderful musicians but they will never be able to count perfect pitch among their talents.

Or might they after all?

Well no, probably not. But a new study, published in Frontiers in Systems Neuroscience, has opened the door to such questions. Its authors tested how young men learned to name notes when they were on or off of a drug called valproate (brand name: Depakote). Valproate is widely used to treat epilepsy and bipolar disorder. It’s part of a class of drugs called histone-deacetylase, or HDAC, inhibitors that fiddle with how DNA is stored and alter how genes are read out and translated into proteins.

The intricacies of how HDAC inhibitors affect gene expression and how those changes reduce seizures and mania are still up in the air. But while some scientists have been working those details out, others have been noticing that HDAC inhibitors help old mice learn new tricks. These drugs allow adult mice to adapt to visual and auditory changes in ways that are only otherwise possible for juvenile mice. In other words, HDAC inhibitors allowed mice to learn things beyond the typical window, or critical period, in which the brain is capable of that specific type of learning.

Judit Gervain, Allan Young, and the other authors of the current study set out to test whether HDAC inhibitors can reopen a learning window in humans as well. They randomly assigned their young male subjects to take valproate for either the first or the second half of the study. (Although I usually get my hackles up about the exclusion of female participants from biomedical studies, I understand their reason for doing so in this case. Valproate can cause severe birth defects. By testing men, the authors could be one hundred percent certain that their participants weren’t pregnant.) The subjects took valproate for one half of the study and a placebo for the other half . . . and of course they weren’t told which was which.

During the first half of the study, they trained twenty-four participants to learn six pitch classes. Instead of teaching them the formal names of these pitches in the twelve-tone musical system, they assigned proper names to each one (e.g., Eric, Rachel, or Francine), indicating that each is the name of a person who only plays one pitch class. The participants received this training online for up to ten minutes daily for seven days. During the second half of the study, eighteen of the same subjects underwent the same training with six new pitch classes and names. At the end of each seven-day training session, they heard the six pitch classes one at a time and, for each, answered the question: “Who played that note?”


Study results showing better performance at naming tones for participants on valproate in the first half of the experiment. From: Gervain et al, 2013

The results? There was a whopping effect of treatment on performance in the first half of the study. The young men on valproate did significantly better than the men on placebo. That’s pretty cool and amazing. It is particularly impressive and surprising because the participants received very little training. The online training summed to a mere seventy minutes and some of the participants didn’t even complete all seven of the ten-minute sessions.

As cool as the main finding is, there are some odd aspects to the study. As you can see from the figure, the second half of the experiment (after the treatments were switched) doesn’t show the same result as the first. Here, participants on valproate perform no differently from those on placebo. The authors suggest that the training in the first half of the experiment interfered with learning in the second half – a plausible explanation (and one they might have predicted in advance). Still, at this point we can’t tell if we are looking at a case of proactive interference or a failure to replicate results. Only time and future experiments will tell.

There were two other odd aspects of the study that caught my eye. The authors used synthesized piano tones instead of pure tones because the former have additional cues like timbre that help people without perfect pitch complete the task. They also taught the participants to associate each note with the name of the person who supposedly plays it rather than the name of the actual note or some abstract stand-in identifier. Both choices make it easier for the participants to perform well on the task but call into question how similar the participants’ learning is to the specific phenomenon of perfect pitch. Perhaps the subjects on valproate in the first half of the experiment were relying on different cues (e.g., timbre instead of frequency). Likewise, associating proper names of people with notes may help subjects learn precisely because it recruits social processes and networks that people with perfect pitch don’t use for the task. If these social processes don’t have a critical period like perfect pitch judgment does, well then valproate might be boosting a very different kind of learning.

As the authors themselves point out, this small study is merely a “proof-of-concept,” albeit a dramatic one. It is not meant to be the final word on the subject. Still, I am curious to see where this leads. Might valproate’s success with seizures and mania have something to do with its ability to trigger new learning? And if HDAC inhibitors do alter the brain’s ability to learn skills that are typically crystallized by adulthood, how has that affected the millions of adults who have been taking these drugs for years? Yet again, only time and science will tell.

I, for one, will be waiting to hear what they have to say.

_______

Photo credit: Brandon Giesbrecht on Flickr, used via Creative Commons license

Gervain J, Vines BW, Chen LM, Seo RJ, Hensch TK, Werker JF, & Young AH (2013). Valproate reopens critical-period learning of absolute pitch. Frontiers in Systems Neuroscience, 7. PMID: 24348349

Pet Coke: Breakfast of Champions?


Pet coke piles in Chicago as of 10/19/13 by Josh Mogerman, used via Creative Commons license

This morning NPR informed me that petroleum coke and the Koch brothers have struck again – this time in my hometown of Chicago.

Petroleum coke, adorably nicknamed pet coke, made headlines this past summer when it was improperly stored by Koch Carbon and billowed into homes and neighborhoods in Detroit, where I currently live.

But wait. That’s a lie. I live just outside of Detroit in a wealthier suburb, just as I grew up outside of Chicago in a tree-lined college town. That makes a big difference. No one would dream of dumping three-story-high piles of industrial soot in my current backyard or the one I played in as a child. Those neighborhoods are simply too wealthy, too powerful, too ready and willing to sue.

Communities near the pet coke storage sites in both Detroit and Chicago are hurting financially. We all know about the struggles of bankrupt Detroit, where it takes about an hour for emergency workers to respond to the direst 911 calls. Southeast Chicago, once an industrial hub, has faced many of the same challenges as Detroit. This year, both areas became dumping grounds for increasing quantities of pet coke (in some cases, without a permit).

That increase in pet coke is due to ramped-up tar sands drilling in Canada. Pet coke is a product of the tar sands refining process. Although it is too sooty to be used for energy here in the United States, countries like Mexico and China will buy it to use as fuel. That means neighborhoods like South Deering in Chicago wind up serving as holding stations for pet coke while companies sell it internationally and arrange for its transport. But this pet coke sits and waits in open-air piles. Strong gusts of wind cause black plumes of dust that travel into neighborhoods and homes.

View of a pet coke plume from the Detroit piles on 7/27/13, via 3860remerson on YouTube

The residents of these neighborhoods have found black dust coating their floors, countertops, and even food. They describe it getting into their eyes, mouths, and lungs. I find these exposures alarming. But Laurie C. McCausland, who represents the Koch brothers’ interests as the deputy general counsel for Koch Companies Public Sector, thinks that’s silly. According to WBEZ, Chicago’s NPR affiliate, McCausland says that overall pet coke is safe. WBEZ quotes her as follows:

“It’s unfair for people to be overly scared about this product. I think people just don’t have a lot of information.”

In a letter to the editor of the Chicago Tribune, Jim Watson, the Executive Director of the Illinois Petroleum Council, expressed a similar sentiment. He wrote:

“Extensive testing has revealed that petcoke has no observed carcinogenic, reproductive, or developmental effects in humans and a low potential to cause adverse effects on aquatic or terrestrial environments.”

I was curious if these statements were true. Has pet coke been extensively studied? And is the health concern surrounding pet coke just an instance of misinformed scaremongering like the anti-vaccination movement? I headed over to PubMed, the U.S. government’s comprehensive catalog of scholarly papers in science, health, and medicine. I searched for “petroleum coke” and got 56 results. Most of these papers had to do with 1) nifty chemical reactions you can do with pet coke, 2) how pet coke affects aquatic life, and 3) the health of people who make pet coke react with other potentially hazardous compounds for a living. I came across only three titles that appeared to be specific and relevant: one that assessed correlations between pet coke exposure and lung cancer in petroleum workers and two that tested the effects of pet coke exposure in mammalian animal models.
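For anyone who wants to repeat (or re-run) this search, PubMed is backed by NCBI’s public E-utilities API. Here’s a minimal sketch in Python using only the standard library; the endpoint and parameters are NCBI’s documented esearch interface, but note that the result count reflects the search at the time of writing and will grow as new papers are indexed:

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import json

# NCBI E-utilities esearch endpoint (the API behind PubMed searches)
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_query_url(phrase):
    """Build an esearch URL for a quoted-phrase search of the pubmed database."""
    params = {"db": "pubmed", "term": f'"{phrase}"', "retmode": "json"}
    return ESEARCH + "?" + urlencode(params)

def pubmed_count(phrase):
    """Return the number of PubMed records matching the quoted phrase."""
    with urlopen(build_query_url(phrase)) as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

# pubmed_count("petroleum coke")  # 56 at the time of writing; will change over time
```

The quoted phrase matters: without the quotation marks, esearch would also return papers that merely mention “petroleum” and “coke” separately.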

The most recent was published in the International Journal of Toxicology this year. The authors include representatives from ExxonMobil (first author), the American Petroleum Institute, and Shell (last author). From what I can tell, the remaining authors work for contract research laboratories (as in, paid by the oil companies). Another paper was published in Occupational & Environmental Medicine in 2012 by authors at ExxonMobil (first author) and Imperial Oil, although this study at least included collaborators from actual universities including McGill (last author). A third paper, published in 1987 in the American Journal of Industrial Medicine, was penned by representatives of Standard Oil, the American Petroleum Institute (last author), and two contract laboratory companies (first author). Not surprisingly, none of these papers conclude that pet coke is especially hazardous.

Even if I missed one (or even ten!) relevant articles in my search, I think it’s safe to say that the research is anything but “extensive.” I haven’t yet combed through the three papers, nor am I the best person to evaluate their methods. Still, I do think it’s proper to question their impartiality and recommend that they be scrutinized by unbiased experts. We should also wonder whether we are getting an accurate picture of such industry-funded research. When corporations and labs-for-hire come up with results they don’t like, they don’t have to (and often don’t) publish them. Yet when corporations do get a result that they like (for whatever reason, including a lack of statistical power), they are happy to publish it and thrust it into the hands of publicists and legal representatives like Ms. McCausland, who tell us not to be silly; pet coke’s perfectly safe. That publication bias alone skews any fair evaluation of the issue.

Residents of Southeast Chicago on the pet coke, via NRDCflix on YouTube

Modern (and ancient) history plays like a broken record of chemicals, compounds, and practices that were harmless until suddenly they weren’t. Shoe stores once had x-ray machines so you could see how well your shoes fit – or just stare at your wiggling toe bones. We’ve seen the rise and fall of lead paint, leaded gasoline, asbestos, and thalidomide, and now we’re learning about the dangers of plastics in our baby bottles and flame retardants in our cushions. There’s plenty of reason to suspect that pet coke exposure is no day at the health spa. Inhaled particulates irritate the airways and can, at the very least, exacerbate asthma and other respiratory illnesses. Analysis of the Detroit pet coke dust showed that it also contained the toxic elements vanadium and selenium, although it’s not clear whether residents were exposed at high enough levels to cause ill effects. (While we actually require trace amounts of selenium, further exposure is toxic.)

It seems to me that we need more information. We need impartial toxicologists, epidemiologists, and other specialists to pore over the papers published on the topic and start conducting unbiased experiments of their own. And while we wait, we need to protect the residents who live in the shadow of pet coke. Pet coke piles should be enclosed so that the dust can’t escape into communities, schools, and homes.

I find myself wondering how much faith people like Laurie McCausland, Jim Watson, and Charles Koch truly put in those industry-funded studies on pet coke. Would they be willing to move their families into a community coated with pet coke? Or is it only safe enough for those families who can’t afford to live elsewhere?

Between permit oversights and unlawful air pollution, the Koch brothers’ companies may already have broken the law. But if they are putting vulnerable people’s health and well-being at risk to make a buck? Well, that truly is criminal.
______

Schnatter AR, Nicolich MJ, Lewis RJ, Thompson FL, Dineen HK, Drummond I, Dahlman D, Katz AM, & Thériault G (2012). Lung cancer incidence in Canadian petroleum workers. Occupational and Environmental Medicine, 69(12), 877-882. PMID: 23077208

McKee RH, Herron D, Beatty P, Podhasky P, Hoffman GM, Swigert J, Lee C, & Wong D (2013). Toxicological Assessment of Green Petroleum Coke. International Journal of Toxicology. PMID: 24179031

Seeds of Science

OCTOBER, 1889. Scientists flocked to Berlin for the annual meeting of the German Anatomical Society. The roster read like a who’s who of famous scientists of the day.

Into the fray marched a little-known Spaniard who’d spent years in Valencia and, later, Barcelona improving upon a method that made neurons visible under a microscope. Thanks to his patient tinkering, the Spaniard could see neurons in all their delicate, branching intricacy. He wanted to share his discoveries with other scientists. As he’d later say, he “gathered together for the purpose all my scanty savings and set out, full of hope, for the capital of the German Empire.”

In those days, scientific meetings were different from the parade of slideshows and poster sessions that they are today. The scientists at the 1889 meeting first read aloud from their papers and then took to their microscopes for demonstrations. The Spaniard unpacked his specimens and put them under several microscopes for the circulating scientists to view. Few came to see, in part because they expected little from a Spaniard. Spain was no scientific powerhouse. It lacked the scientific infrastructure and resources of countries like Germany, England, and France. What could one of its humble scientists possibly contribute to the meeting?

For the few curious gents who did stop by his demonstration, the Spaniard described his technique in broken French. Then he stepped aside and let them peer into the microscopes. Those who did became converts. The specimens spoke for themselves. Clear and complete, they revealed the intricate microarchitecture of neural structures like the retina and cerebellum.

Prominent German anatomists immediately adopted his technique and the Spaniard’s name quickly became known throughout the scientific community.

That name was Santiago Ramón y Cajal.

Ask any neuroscientist for his or her hero in the field and you are likely to hear this very name. Many consider him the founder of neurobiology as we know it today. The observations he made with his improved technique for seeing neurons allowed him to resolve a major controversy of the time and show that neurons are separate cells (as opposed to one huge, connected net). For his work, he won the Nobel Prize in Physiology or Medicine in 1906.

In short, he was an amazing guy who did amazing things – even though he wasn’t born in a wealthy nation known for science. Luckily, Cajal was able to get the tools and resources he needed to do his work. But what if he’d lived elsewhere, somewhere without the funds or equipment he needed? How far would that have set neuroscience back?

When I recently read an account of Cajal’s visit to Berlin, I found myself asking these questions. They reminded me of a Boston-based organization that is trying to equip the Cajals of today. The organization, a non-profit called Seeding Labs, partners with scientists, universities, and biomedical companies to equip stellar labs around the globe. (Full disclosure: The founder of Seeding Labs is the daughter of a family friend, which is how I first learned about the organization.)

The group’s core idea makes a lot of sense. Well-funded labs in the U.S. and other wealthy nations tend to update to newer models of their equipment often. These labs often discard perfectly functional older models that would be invaluable to scientists in developing nations. I’ve witnessed this kind of waste at major American universities. In the rush of doing science, people don’t have the time or energy to find new homes for their old autoclaves. They don’t even realize there’s a reason to try. While Seeding Labs now runs several programs to advance science in developing nations, its original aim was simply to turn one lab’s trash into another lab’s treasure.

I’m sure some struggling postdoc or assistant professor will read this post and scoff. Why devote energy to helping scientists in developing nations when we have a glut of scientists and a dearth of grants right here at home? It’s certainly true that research funding in America has tanked in recent years – a fact that needs to change. But in some countries the need is so great that a secondhand centrifuge could mean the difference between disappointment and discovery. That’s a pretty decent return on investment.

Here’s another benefit: labs in developing nations may be studying different problems than we are. They might focus on addressing local health or environmental concerns that we aren’t even aware of. So while scientists in wealthy nations find themselves racing to publish about well-trodden topics before competing labs, people in other countries may be researching crucial problems that wouldn’t otherwise be addressed.

And who knows? Perhaps these scientists are a good investment, in part, because of their relative isolation. Maybe a little distance from the scientific fray promotes ingenuity, creativity, and some good-old-fashioned tinkering. It certainly worked for Cajal.

____

Source: Stevens, Leonard A. Explorers of the Brain. Alfred A. Knopf, New York, 1971.

First photo credit: baigné par le soleil on Flickr, used via Creative Commons license

Second photo credit: Anonymous [Public domain], via Wikimedia Commons
