Zapping Brains, Seeing Scenes

More than fifteen years ago, neuroimagers found a region of the brain that seemed to be all about place. The region lies on the bottom surface of the temporal lobe near a fold called the parahippocampal gyrus, so it was called the parahippocampal place area, or PPA. You have two PPAs: one on the left side of your brain and one on the right. If you look at a picture of a house, an outdoor or indoor scene, or even an empty room, your PPAs will take notice. Since its discovery, hundreds of experiments have probed the place predilections of the PPA. Each time, the region demonstrated its dogged devotion to place. Less clear was exactly what type of scene information the PPA was representing and what it was doing with that information. A recent scientific paper now gives us a rare, direct glimpse at the inner workings of the PPA through the experience of a young man whose right PPA was stimulated with electrodes.

The young man in question wasn’t an overzealous grad student. He was a patient with severe epilepsy who was at the hospital to undergo brain surgery. When medications can’t bring a person’s seizures under control, surgery is one of the few remaining options. The surgery involves removing the portion of the brain in which that patient’s seizures begin. Of course, removing brain tissue is not something one does lightly. Before a surgery, doctors use various techniques to determine in each patient where the seizures originate and also where crucial regions involved in language and movement are located. They do this so they will know which part of the brain to remove and which parts they must be sure not to remove. One of the ways of mapping these areas before surgery is to open the patient’s skull, plant electrodes into his or her brain, and monitor brain activity at the various electrode sites. This technique, called electrocorticography, allows doctors to both record brain activity and electrically stimulate the brain to map key areas. It is also the most powerful and direct look scientists can get into the human brain.

A group of researchers in New York headed by Ashesh Mehta and Pierre Mégevand documented the responses of the young man as they stimulated electrodes that were planted in and around his right PPA. During one stimulation, he described seeing a train station from the neighborhood where he lives. During another, he reported seeing a staircase and a closet stuffed with something blue. When they repeated the stimulation, he saw the same indoor scene again. So stimulating the PPA can cause hallucinations of scenes both indoor and outdoor, familiar and unfamiliar. This suggests that specific scene representations in the brain may be both highly localized and complex. It is also just incredibly cool.

The doctor also stimulated an area involved in face processing and found that this made the patient see distortions in a face. Another study published in 2012 showed a similar effect in a different patient. While the patient looked at his doctor, the doctor stimulated the face area. As the patient reported, “You just turned into somebody else. Your face metamorphosed.” Here’s a link to a great video of that patient’s entire reaction and description.

The authors of the new study also stimulated a nearby region that had shown a complex response to both faces and scenes in previous testing. When they zapped this area, the patient saw something that made him chuckle. “I’m sorry. . . You all looked Italian. . . Like you were working in a pizza shop. That’s what I saw, aprons and whatnot. Yeah, almost like you were working in a pizzeria.”

Now wouldn’t we all love to know what that area does?

______

Photo credit: thisisbossi on Flickr, used via Creative Commons license

*In case you’re wondering, the patient underwent surgery and no longer suffers from seizures (although he still experiences auras).

Mégevand P, Groppe DM, Goldfinger MS, Hwang ST, Kingsley PB, Davidesco I, & Mehta AD (2014). Seeing scenes: topographic visual hallucinations evoked by direct electrical stimulation of the parahippocampal place area. Journal of Neuroscience, 34(16), 5399-5405. PMID: 24741031

Did I Do That? Distinguishing Real from Imagined Actions

If you’re like most people, you spend a great deal of your time remembering past events and planning or imagining events that may happen in the future. While these activities have their uses, they also make it terribly hard to keep track of what you have and haven’t actually seen, heard, or done. Distinguishing between memories of real experiences and memories of imagined or dreamt experiences is called reality monitoring and it’s something we do (or struggle to do) all of the time.

Why is reality monitoring a challenge? To illustrate, let’s say you’re at the Louvre standing before the Mona Lisa. As you look at the painting, visual areas of your brain are busy representing the image with specific patterns of activity. So far, so good. But problems emerge if we rewind to a time before you saw the Mona Lisa at the Louvre. Let’s say you were about to head over to the museum and you imagined the special moment when you would gaze upon Da Vinci’s masterwork. When you imagined seeing the picture, you activated the same visual areas of your brain, in much the same pattern as when you eventually looked at the masterpiece itself.*

When you finally return home from Paris and try to remember that magical moment at the Louvre, how will you be able to distinguish your memories of seeing the Mona Lisa from imagining her? Reality monitoring studies have asked this very question (minus the Mona Lisa). Their findings suggest that you’ll probably use additional details associated with the memory to ferret out the mnemonic wheat from the chaff. You might use memory of perceptual details, like how the lights reflected off the brushstrokes, or you might use details of what you thought or felt, like your surprise at the painting’s actual size. Studies find that people activate both visual areas (like the fusiform gyrus) and self-monitoring regions of the brain (like the medial prefrontal cortex) when they are deciding whether they saw or just imagined seeing a picture.

It’s important to know what you did and didn’t see, but another crucial and arguably more important facet of reality monitoring involves determining what you did and didn’t do. How do you distinguish memories of things you’ve actually done from those you’ve planned to do or imagined doing? You have to do this every day and it isn’t a trivial task. Perhaps you’ve left the house and headed to work, only to wonder en route if you’d locked the door. Even if you thought you did, it can be hard to tell whether you remember actually doing it or just thinking about doing it. The distinction has consequences. Going home and checking could make you late for work, but leaving your door unlocked all day could mean losing your possessions. So how do we tell the possibilities apart?

Valerie Brandt, Jon Simons, and colleagues at the University of Cambridge looked into this question and published their findings last month in the journal Cognitive, Affective, and Behavioral Neuroscience. For the first part of the experiment (the study phase), they sat healthy adult participants down in front of two giant boxes – one red and one blue – that each contained 80 ordinary objects. The experimenter would draw each object out of one of the two boxes, place it in front of the participant, and tell him or her to either perform or to imagine performing a logical action with the object. For example, when the object was a book, participants were told to either open or imagine opening it.

After the study phase, the experiment moved to a scanner for fMRI. During these scans, participants were shown photographs of all 160 of the studied objects and, for each item, were asked to indicate either 1) whether they had performed or merely imagined performing an action on that object, or 2) which box the object had been drawn from.** When the scans were over, the participants saw the pictures of the objects again and were asked to rate how much specific detail they’d recalled about encountering each object and how hard it had been to bring that particular memory to mind.

The scientists compared fMRI measures of brain activation during the reality-monitoring task (Did I use or imagine using that object?) with activation during the location task (Which box did this object come from?). One of the areas they found to be more active during reality monitoring was the supplementary motor area, a region involved in planning and executing movements of the body. Just as visual areas are activated for reality monitoring of visual memories, motor areas are activated when people evaluate their action memories. In other words, when you ask yourself whether you locked the door or just imagined it, you may be using details of motor aspects of the memory (e.g., pronating your wrist to turn the key in the lock) to make your decision.

The study’s authors also found greater activation in the anterior medial prefrontal cortex when they compared reality monitoring for actions participants performed with those they only imagined performing. The medial prefrontal cortex encompasses a respectable swath of the brain with a variety of functions that appear to include making self-referential judgments, or evaluating how you feel or think about experiences, sensations, and the like. Other experiments have implicated a role for this or nearby areas in reality monitoring of visual memories. The study by Brandt and Simons also found that activation of this medial prefrontal region during reality-monitoring trials correlated with the number of internal details the participants said they’d recalled in those trials. In other words, the more details participants remembered about their thoughts and feelings during the past actions, the busier this area appeared to be. So when faced with uncertainty about a past action, the medial prefrontal cortex may be piping up about the internal details of the memory. I must have locked the door because I remember simultaneously wondering when my package would arrive from Amazon, or because I was also feeling sad about leaving my dog alone at home.

As I read these results, I found myself thinking about the topic of my prior post on OCD. Pathological checking is a common and often disruptive symptom of the illness. Although it may seem like a failure of reality monitoring, several behavioral studies have shown that people with OCD have normal reality monitoring for past actions. The difference is that people with checking symptoms of OCD have much lower confidence in the quality of their memories than others. It seems to be this distrust of their own memories, along with relentless anxiety, that drives them to double-check over and over again.

So the next time you find yourself wondering whether you actually locked the door, cut yourself some slack. Reality monitoring ain’t easy. All you can do is trust your brain not to lead you astray. Make a call and stick with it. You’re better off being wrong than being anxious about it – that is, unless you have really nice stuff.

_____

Photo credit: Liz (documentarist on Flickr), used via Creative Commons license

* Of course, the mental image you conjure of the painting is actually based on the memory of having seen it in ads, books, or posters before. In fact, a growing area of neuroscience research focuses on how imagining the future relies on the same brain areas involved in remembering the past. Imagination seems to be, in large part, a collage of old memories cut and pasted together to make something new.

**The study also had a baseline condition, used additional contrasts, and found additional activations that I didn’t mention for the sake of brevity. Check out the original article for full details.

Brandt, V., Bergström, Z., Buda, M., Henson, R., & Simons, J. (2014). Did I turn off the gas? Reality monitoring of everyday actions. Cognitive, Affective, & Behavioral Neuroscience, 14(1), 209-219. DOI: 10.3758/s13415-013-0189-z

The Slippery Question of Control in OCD

It’s nice to believe that you have control over your environment and your fate – that is until something bad happens that you’d rather not be responsible for. In today’s complex and interconnected world, it can be hard to figure out who or what causes various events to happen and to what degree you had a hand in shaping their outcomes. Yet in order to function, everyone has to create mental representations of causation and control. What happens when I press this button? Did my glib comment upset my friends? If I belch on the first date, will it scare her off?

People often believe they have more control over outcomes (particularly positive outcomes) than they actually do. Psychologists discovered this illusion of control in controlled experiments, but you can witness the same principle in many a living room now that March Madness is upon us. Of course, wearing your lucky underwear or sitting in your go-to La-Z-Boy isn’t going to help your team win the game, and the very idea that it might shows how easily one’s sense of personal control can become inflated. Decades ago, researchers discovered that the illusion of control is not universal. People suffering from depression tend not to fall for this illusion. That fact, along with related findings, gave rise to the term depressive realism. Two recent studies now suggest that patients with obsessive-compulsive disorder (OCD) may also represent contingency and estimate personal control differently from the norm.

OCD is something of a paradox when it comes to the concept of control. The illness has two characteristic features: obsessions based on fears or regrets that occupy a sufferer’s thoughts and make him or her anxious, and compulsions, or repetitive and unnecessary actions that may or may not relieve the anxiety. For decades, psychiatrists and psychologists have theorized that control lies at the heart of this cycle. Here’s how the NIMH website on OCD describes it (emphasis is mine):

The frequent upsetting thoughts are called obsessions. To try to control them, a person will feel an overwhelming urge to repeat certain rituals or behaviors called compulsions. People with OCD can’t control these obsessions and compulsions. Most of the time, the rituals end up controlling them.

In short, their obsessions cause them distress and they perform compulsions in an effort to regain some sense of control over their thoughts, fears, and anxieties. Yet in some cases, compulsions (like sports fans’ superstitions) seem to indicate an inflated sense of personal control. Based on this conventional model of OCD, you might predict that people with the illness will either underestimate or overestimate their personal control over events. So which did the studies find? In a word: both.

The latest study, which appeared this month in Frontiers in Psychology, used a classic experimental design to study the illusion of control. The authors tested 26 people with OCD and 26 comparison subjects. The subjects were shown an image of an unlit light bulb and told that their goal was to illuminate the light bulb as often as possible. On each trial, they could choose to either press or not press the space bar. After they made their decision, the light bulb either did or did not light up. Their job was to estimate, based on their trial-by-trial experimentation, how much control they had over the light bulb. Here’s the catch: the subjects had absolutely no control over the light bulb, which lit up or remained dark according to a fixed sequence.*

After 40 trials, subjects were asked to rate the degree of control they thought they had over the illumination of the light bulb, ranging from 0 (no control) to 100 (complete control). Estimates of control were consistently higher for the comparison subjects than for the subjects with OCD. In other words, the people with OCD believed they had less control – and since they actually had no control, that means that they were also more accurate than the comparison subjects. As the paper points out, this is a limitation of the study: it can’t tell us whether patients are generally prone to underestimating their control over events or if they’re simply more accurate than comparison subjects. To do that, it would need to have included situations in which subjects actually did have some degree of control over the outcomes.
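To make the "no actual control" point concrete, here's a minimal sketch of the contingency in a task like this one. All the numbers are hypothetical (the post doesn't give the actual sequences), but the logic holds: when the bulb follows a fixed sequence, the statistical contingency between pressing and lighting is zero, so any control a subject reports is illusory.

```python
def delta_p(trials):
    """Actual contingency between action and outcome:
    P(light | press) - P(light | no press)."""
    pressed = [lit for press, lit in trials if press]
    skipped = [lit for press, lit in trials if not press]
    return sum(pressed) / len(pressed) - sum(skipped) / len(skipped)

# Hypothetical 40-trial session: the bulb lights on a fixed half of the
# trials, no matter what the subject does.
fixed_lights = [True, False] * 20
presses = [True, True, False, False] * 10  # the subject's arbitrary choices
actual_control = delta_p(list(zip(presses, fixed_lights)))
print(actual_control)  # 0.0 -- pressing changes nothing
```

Any control rating above zero on a session like this reflects the illusion, which is what makes the OCD group's lower ratings also the more accurate ones.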

Why wasn’t the light bulb study designed to distinguish between these alternatives? Because the authors were expecting the opposite result. They had designed their experiment to follow up on a 2008 study that found a heightened illusion of control among people with OCD. The earlier study used a different test: experimenters showed subjects either neutral pictures of household items or disturbing pictures of distorted faces. The experimenters encouraged the subjects to try to control the presentation of images by pressing buttons on a keyboard and asked them to estimate their control over the images three times during the session. However, just like in the light bulb study, the presentation of the images was fixed in advance and could not be affected by the subjects’ button presses.

How can two studies of estimated control in OCD have opposite results? It seems that the devil is in the details. Prior studies with tasks like these have shown that healthy subjects’ control estimates depend on details like the frequency of the preferred outcome and whether the experimenter is physically in the room during testing. Mental illness throws additional uncertainty into the mix. For example, the disturbing face images in the 2008 study might have made the subjects with OCD anxious, which could have triggered a different cognitive pattern. Still, both findings suggest that control estimation is abnormal for people with OCD, possibly in complex and situation-dependent ways.

These and other studies indicate that decision-making and representations of causality in OCD are altered in interesting and important ways. A better understanding of these differences could help us understand the illness and, in the process, might even shed light on the minor rituals and superstitions that are common to us all. Sadly, like a lucky pair of underwear, it probably won’t help your team get to the Final Four.

_____

Photo by Olga Reznik on Flickr, used via Creative Commons license

*The experiment also manipulated reinforcement (how often the light bulb lit up) and valence (whether the lit bulb earned them money or the unlit bulb cost them money) across different testing sections, but I don’t go into that here because the manipulations didn’t affect the results.

Gillan CM, Morein-Zamir S, Durieux AM, Fineberg NA, Sahakian BJ, & Robbins TW (2014). Obsessive-compulsive disorder patients have a reduced sense of control on the illusion of control task. Frontiers in Psychology, 5. PMID: 24659974

In the Blink of an Eye

It takes around 150 milliseconds (or about one sixth of a second) to blink your eyes. In other words, not long. That’s why you say something happened “in the blink of an eye” when an event passed so quickly that you were barely aware of it. Yet a new study shows that humans can process pictures at speeds that make an eye blink seem like a screening of Titanic. What’s more, these results challenge a popular theory about how the brain creates your conscious experience of what you see.

To start, imagine your eyes and brain as a flight of stairs. I know, I know, but hear me out. Each step represents a stage in visual processing. At the bottom of the stairs you have the parts of the visual system that deal with the spots of darkness and light that make up whatever you’re looking at (let’s say an old family photograph). As you stare at the photograph, information about light and dark starts out at the bottom of the stairs in what neuroscientists call “low-level” visual areas like the retinas in your eyes and a swath of tissue tucked away at the very back of your brain called primary visual cortex, or V1.

Now imagine that the information about the photograph begins to climb our metaphorical neural staircase. Each time the information reaches a new step (a.k.a. visual brain area) it is transformed in ways that discard the details of light and dark and replace them with meaningful information about the picture. At one step, say, an area of your brain detects a face in the photograph. Higher up the flight, other areas might identify the face as your great-aunt Betsy’s, discern that her expression is sad, or note that she is gazing off to her right. By the time we reach the top of the stairs, the image is, in essence, a concept with personal significance. After it first strikes your eyes, it only takes visual information 100-150 milliseconds to climb to the top of the stairs, yet in that time your brain has translated a pattern of light and dark into meaning.

For many years, neuroscientists and psychologists believed that vision was essentially a sprint up this flight of stairs. You see something, you process it as the information moves to higher areas, and somewhere near the top of the stairs you become consciously aware of what you’re seeing. Yet intriguing results from patients with blindsight, along with other studies, seemed to suggest that visual awareness happens somewhere on the bottom of the stairs rather than at the top.

New, compelling demonstrations came from studies using transcranial magnetic stimulation, a method that can temporarily disrupt brain activity at a specific point in time. In one experiment, scientists used this technique to disrupt activity in V1 about 100 milliseconds after subjects looked at an image. At this point (100 milliseconds in), information about the image should already be near the top of the stairs, yet zapping lowly V1 at the bottom of the stairs interfered with the subjects’ ability to consciously perceive the image. From this and other studies, a new theory was born. In order to consciously see an image, visual information from the image that reaches the top of the stairs must return to the bottom and combine with ongoing activity in V1. This magical mixture of nitty-gritty visual details and extracted meaning somehow creates what we experience as visual awareness.

In order for this model of visual processing to work, you would have to look at the photo of Aunt Betsy for at least 100 milliseconds in order to be consciously aware of it (since that’s how long it takes for the information to sprint up and down the metaphorical flight of stairs). But what would happen if you saw Aunt Betsy’s photo for less than 100 milliseconds and then immediately saw a picture of your old dog, Sparky? Once Aunt Betsy made it to the top of the stairs, she wouldn’t be able to return to the bottom of the stairs because Sparky would have taken her place. Unable to return to V1, Aunt Betsy would never make it to your conscious awareness. In theory, you wouldn’t know that you’d seen her at all.

Mary Potter and colleagues at MIT tested this prediction and recently published their results in the journal Attention, Perception, & Psychophysics. They showed subjects brief pictures of complex scenes including people and objects in a style called rapid serial visual presentation (RSVP). You can find an example of an RSVP image stream here, although the images in the demo are more racy and are shown for longer than the pictures in the Potter study.

The RSVP image streams in the Potter study were strings of six photographs shown in quick succession. In some image streams, pictures were each shown for 80 milliseconds (or about half the time it takes to blink). Pictures in other streams were shown for 53, 27, or 13 milliseconds each. To give you a sense of scale, 13 milliseconds is about one tenth of an eye blink, or one hundredth of a second. It is also far less time than Aunt Betsy would need to sprint to the top of the stairs, much less to return to the bottom.
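Those oddly specific durations (13, 27, 53, 80 ms) make sense if you assume the pictures were displayed for whole numbers of frames on a 75 Hz monitor. That refresh rate is my assumption, not something stated in this post, but the arithmetic lines up neatly:

```python
# Hypothesis: each duration is a whole number of frames at 75 Hz.
refresh_hz = 75
frame_ms = 1000 / refresh_hz  # ~13.3 ms per refresh frame

for frames in (1, 2, 4, 6):
    print(frames, "frame(s) =", round(frames * frame_ms), "ms")
# 1, 2, 4, and 6 frames round to 13, 27, 53, and 80 ms
```

So the shortest presentations were likely a single screen refresh, the hardware floor for how briefly an image can be shown at all.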

At such short timescales, people can’t remember and report all of the pictures they see in an image stream. But are they aware of them at all? To test this, the scientists gave their subjects a written description of a target picture from the image stream (say, flowers) either just before the stream began or just after it ended. In either case, once the stream was over, the subject had to indicate whether an image fitting that description appeared in the stream. If it did appear, subjects had to pick which of two pictures fitting the description actually appeared in the stream.

Considering how quickly these pictures are shown, the task should be hard for people to do even when they know what they’re looking for. Why? Because “flowers” could describe an infinite number of photographs with different arrangements, shapes, and colors. Even when the subject is tipped off with the description in advance, he or she must process each photo in the stream well enough to recognize the meaning of the picture and compare it to the description. On top of that, this experiment effectively jams the metaphorical visual staircase full of images, leaving no room for visual info to return to V1 and create a conscious experience.

The situation is even more dire when people get the description of the target only after they’ve viewed the entire image stream. To answer correctly, subjects have to process and remember as many of the pictures from the stream as possible. None of this would be impressive under ordinary circumstances but, again, we’re talking 13 milliseconds here.

Sensitivity (computed from subject performance) on the RSVP image streams with 6 images. From Potter et al., 2013.

How did the subjects do? Surprisingly well. In all cases, they performed better than if they were randomly guessing – even when tested on the pictures shown for 13 milliseconds. In general, they scored higher when the pictures were shown longer. And like any test-taker could tell you, people do better when they know the test questions in advance. This pattern held up even when the scientists repeated the experiment with 12-image streams. As you might imagine, that makes for a very crowded visual staircase.
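The figure caption above says "sensitivity" rather than raw accuracy. A standard signal-detection way to compute sensitivity (the exact formula in the paper may differ; this is an illustrative sketch with made-up rates) is d′, the difference between the z-transformed hit rate and false-alarm rate:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

print(round(d_prime(0.5, 0.5), 2))  # 0.0 -- pure guessing, no sensitivity
print(round(d_prime(0.7, 0.3), 2))  # 1.05 -- reliably better than chance
```

A d′ above zero, even a modest one, means subjects detected the target images more often than random guessing could explain, which is exactly the pattern reported even at 13 ms.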

These results challenge the idea that visual awareness happens when information from the top of the stairs returns to V1. Still, they are by no means the theory’s death knell. It’s possible that the stairs are wider than we thought and that V1 is able (at least to some degree) to represent more than one image at a time. Another possibility is that the subjects in the study answered the questions using a vague sense of familiarity – one that might arise even if they were never overtly conscious of seeing the images. This is a particularly compelling explanation because there’s evidence that people process visual information like color and line orientation without awareness when late activity in V1 is disrupted. The subjects in the Potter study may have used this type of information to guide their responses.

However things ultimately shake out with the theory of visual awareness, I love that these intriguing results didn’t come from a fancy brain scanner or from the coils of a transcranial magnetic stimulation device. With a handful of pictures, a computer screen, and some good old-fashioned thinking, the authors addressed a potentially high-tech question in a low-tech way. It’s a reminder that fancy, expensive techniques aren’t the only way – or even necessarily the best way – to tackle questions about the brain. It also shows that findings don’t need colorful brain pictures or glow-in-the-dark mice in order to be cool. You can see in less than one-tenth of a blink of an eye. How frickin’ cool is that?

Photo credit: Ivan Clow on Flickr, used via Creative Commons license

Potter MC, Wyble B, Hagmann CE, & McCourt ES (2013). Detecting meaning in RSVP at 13 ms per picture. Attention, Perception, & Psychophysics. PMID: 24374558

Perfect Pitch Redux

I can just hear the advertisement now.

Do you have perfect pitch? Would you like to? Then Depakote might be right for you . . .

Perfect pitch is the ability to name or produce a musical note without a reference note. While most children presumably have the capacity to learn perfect pitch, only one in about ten thousand adults can actually do it. That’s because children must receive extensive musical training as youngsters to develop it. Most adults with perfect pitch began studying music at six years of age or younger. By the time children turn nine, their window to learn perfect pitch has already closed. They may yet blossom into wonderful musicians but they will never be able to count perfect pitch among their talents.

Or might they after all?

Well no, probably not. But a new study, published in Frontiers in Systems Neuroscience, has opened the door to such questions. Its authors tested how young men learned to name notes when they were on or off of a drug called valproate (brand name: Depakote). Valproate is widely used to treat epilepsy and bipolar disorder. It’s part of a class of drugs called histone-deacetylase, or HDAC, inhibitors that fiddle with how DNA is stored and alter how genes are read out and translated into proteins.

The intricacies of how HDAC inhibitors affect gene expression and how those changes reduce seizures and mania are still up in the air. But while some scientists have been working those details out, others have been noticing that HDAC inhibitors help old mice learn new tricks. These drugs allow adult mice to adapt to visual and auditory changes in ways that are only otherwise possible for juvenile mice. In other words, HDAC inhibitors allowed mice to learn things beyond the typical window, or critical period, in which the brain is capable of that specific type of learning.

Judit Gervain, Allan Young, and the other authors of the current study set out to test whether HDAC inhibitors can reopen a learning window in humans as well. They randomly assigned their young male subjects to take valproate for either the first or the second half of the study. (Although I usually get my hackles up about the exclusion of female participants from biomedical studies, I understand their reason for doing so in this case. Valproate can cause severe birth defects. By testing men, the authors could be one hundred percent certain that their participants weren’t pregnant.) The subjects took valproate for one half of the study and a placebo for the other half . . . and of course they weren’t told which was which.

During the first half of the study, they trained twenty-four participants to learn six pitch classes. Instead of teaching them the formal names of these pitches in the twelve-tone musical system, they assigned proper names to each one (e.g., Eric, Rachel, or Francine), indicating that each is the name of a person who only plays one pitch class. The participants received this training online for up to ten minutes daily for seven days. During the second half of the study, eighteen of the same subjects underwent the same training with six new pitch classes and names. At the end of each seven-day training session, they heard the six pitch classes one at a time and, for each, answered the question: “Who played that note?”

Study results showing better performance at naming tones for participants on valproate in the first half of the experiment. From: Gervain et al, 2013

The results? There was a whopping effect of treatment on performance in the first half of the study. The young men on valproate did significantly better than the men on placebo. That’s pretty cool and amazing. It is particularly impressive and surprising because the participants received very little training. The online training summed to a mere seventy minutes and some of the participants didn’t even complete all seven of the ten-minute sessions.

As cool as the main finding is, there are some odd aspects to the study. As you can see from the figure, the second half of the experiment (after the treatments were switched) doesn’t show the same result as the first. Here, participants on valproate perform no differently from those on placebo. The authors suggest that the training in the first half of the experiment interfered with learning in the second half – a plausible explanation (and one they might have predicted in advance). Still, at this point we can’t tell if we are looking at a case of proactive interference or a failure to replicate results. Only time and future experiments will tell.

There were two other odd aspects of the study that caught my eye. The authors used synthesized piano tones instead of pure tones because the former have additional cues, like timbre, that help people without perfect pitch complete the task. They also taught the participants to associate each note with the name of the person who supposedly plays it rather than the name of the actual note or some abstract stand-in identifier. Both choices make it easier for the participants to perform well on the task, but they call into question how similar the participants’ learning is to the specific phenomenon of perfect pitch. Perhaps the subjects on valproate in the first half of the experiment were relying on different cues (e.g., timbre instead of frequency). Likewise, associating proper names of people with notes may help subjects learn precisely because it recruits social processes and networks that people with perfect pitch don’t use for the task. If these social processes don’t have a critical period like perfect pitch judgment does, then valproate might be boosting a very different kind of learning.

As the authors themselves point out, this small study is merely a “proof-of-concept,” albeit a dramatic one. It is not meant to be the final word on the subject. Still, I am curious to see where this leads. Might valproate’s success with seizures and mania have something to do with its ability to trigger new learning? And if HDAC inhibitors do alter the brain’s ability to learn skills that are typically crystallized by adulthood, how has that affected the millions of adults who have been taking these drugs for years? Yet again, only time and science will tell.

I, for one, will be waiting to hear what they have to say.

_______

Photo credit: Brandon Giesbrecht on Flickr, used via Creative Commons license

Gervain J, Vines BW, Chen LM, Seo RJ, Hensch TK, Werker JF, & Young AH (2013). Valproate reopens critical-period learning of absolute pitch. Frontiers in Systems Neuroscience, 7 PMID: 24348349

Known Unknowns

Why no one can say exactly how much is safe to drink while pregnant


I was waiting in the dining car of an Amtrak train recently when I looked up and saw that old familiar sign:

“According to the Surgeon General, women should not drink alcoholic beverages during pregnancy because of the risk of birth defects.”

One finds this warning everywhere: printed on bottles and menus or posted on placards at restaurants and even train cars barreling through Midwestern farmland in the middle of the night. The warnings are, of course, intended to reduce the number of cases of fetal alcohol syndrome in the United States. To that end, the Centers for Disease Control and Prevention (CDC) and the American Congress of Obstetricians and Gynecologists (ACOG) recommend that women avoid drinking any alcohol throughout their pregnancies.

Here’s how the CDC puts it:

“There is no known safe amount of alcohol to drink while pregnant.”

And here’s ACOG’s statement in 2008:

“. . . ACOG reiterates its long-standing position that no amount of alcohol consumption can be considered safe during pregnancy.”

Did you notice what they did there? These statements don’t actually say that no amount of alcohol is safe during pregnancy. They say that no safe amount is known and that no amount can be considered safe, respectively. Ultimately, these are statements of uncertainty. We don’t know how much is safe to drink, so it’s best if you don’t drink any at all.

Lest you think this is merely a reflection of America’s puritanical roots, check out the recommendations of the U.K.’s National Health Service. While they make allowances for the fact that some women choose to drink, they still advise pregnant women to avoid alcohol altogether. As they say:

“If women want to avoid all possible alcohol-related risks, they should not drink alcohol during pregnancy because the evidence on this is limited.”

Yet it seems odd that the evidence is so limited. The damaging effects of binge drinking on fetal development were known in the 18th century, and the first modern description of fetal alcohol syndrome was published in a French medical journal nearly 50 years ago. Six years later, in 1973, a group of researchers at the University of Washington documented the syndrome in The Lancet. Even then, people knew the cause of fetal alcohol syndrome: alcohol. And in the forty years since, fetal alcohol syndrome has become a well-known and well-studied illness. NIH alone devotes more than $30 million annually to research in the field. So how come no one has answered the most pressing question (at least for pregnant women): How much is safe to drink?

One reason is that fetal alcohol syndrome isn’t like HIV. You can’t diagnose it with a blood test. Doctors rely on a characteristic pattern of facial abnormalities, growth delays and neural or mental problems – often in addition to evidence of prenatal alcohol exposure – to diagnose a child. Yet children exposed to and affected by alcohol during fetal development don’t always show all of these symptoms. Doctors and agencies now define fetal alcohol syndrome as the extreme end of a spectrum of disorders caused by prenatal alcohol exposure. The full spectrum, called fetal alcohol spectrum disorders (FASD), includes milder forms of the illness that involve subtler cognitive or behavioral problems and lack the classic facial features of the full-blown syndrome.

As you might imagine, milder cases of FASD are hard to identify. Pediatricians can miss the signs altogether. And there’s a fundamental difficulty in diagnosing the mildest cases of FASD. To put it crudely, if your child is slow, who’s to say whether the culprit is a little wine during pregnancy, genetics, too much television, too few vegetables, or god-knows-what-else? Unfortunately, identifying and understanding the mildest cases is crucial. These are the cases that worry pregnant women who drink lightly. They lie at the heart of the uncertainty voiced by the CDC, ACOG, and others. Most pregnant women would like to enjoy the occasional merlot or Sam Adams, but not if they thought it would rob their children of IQ points or otherwise limit their abilities – even just a little – down the line.

While it’s hard to pin down the subtlest cases in the clinic, scientists can still detect them by looking for differences between groups of children with different exposures. The most obvious way of testing this would be to randomly assign pregnant women to drink alcohol at different doses, but of course that experiment would be unethical and should never be done. Instead, researchers capitalize on the variability in how much women choose to drink during pregnancy (or at least how much they report that they drank, which may not always be the same thing). In addition to interviewing moms about their drinking habits, the scientists test their children at different ages and look for correlations between prenatal alcohol exposure and test performance.

While essential, these studies can be messy and hard to interpret. When researchers do find correlations between moderate prenatal alcohol exposure and poor test performance, they can’t definitively claim that the former caused the latter (although it’s suggestive). A mysterious third variable (say, maternal cocaine use) might be responsible for them both. On the flip side, interpreting studies that don’t find correlations is even trickier. It’s hard to show that one thing doesn’t affect another, particularly when you are interested in very small effects. To establish this with any confidence, scientists must show that it holds with large numbers of people and that they are using the right outcome measure (e.g., IQ score). FASD impairments can span language, movement, math skills, goal-directed behaviors, and social interactions. Any number of measures from wildly different tests might be relevant. If a given study doesn’t find a correlation between prenatal alcohol exposure and an outcome measure, it might be because the study didn’t test enough children or didn’t choose the right test to pick up the subtle differences between groups.
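
To get a feel for the numbers involved, here’s a quick back-of-the-envelope calculation (my own illustration, not from any of the studies discussed): the standard Fisher z-transformation approximation lets you estimate how many children a study would need to reliably detect a correlation of a given size.

```python
from math import atanh, ceil
from statistics import NormalDist

def required_n(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a correlation of
    magnitude r with a two-sided test, via Fisher's z-transformation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # quantile for desired power
    return ceil(((z_a + z_b) / atanh(r)) ** 2 + 3)

# A subtle effect (r = 0.1) demands roughly nine times as many
# children as a moderate one (r = 0.3):
print(required_n(0.3))  # 85
print(required_n(0.1))  # 783
```

A study of a hundred children that finds “no correlation” has, by this arithmetic, said almost nothing about the small effects that matter most here.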

When studies in humans get tricky, scientists often turn to animal models. FASD research has been no exception. These animal studies have helped us understand the physiological and biochemical mechanisms behind fetal alcohol syndrome, but they can’t tell us how much alcohol a pregnant woman can safely drink. Alcohol metabolism rates vary quite a bit between species. The sensitivity of developing neurons to alcohol may differ too. One study used computational modeling to predict that the blood alcohol level of a pregnant rat must be 10 times that of a pregnant human to wreak the same neural havoc on the fetus. Yet computational models are far from foolproof. Scientists simply don’t know precisely how a dose in a rat, monkey, or other animal would translate to a human mother and fetus.

And here’s the clincher: alcohol’s prenatal effects also differ between humans. Thanks to genetic differences, people metabolize alcohol at very different rates. The faster a pregnant woman clears alcohol from her system, the lower the exposure to her fetus. Other factors make a difference, too. Prenatal alcohol exposure seems to take a heavier toll on the fetuses of older mothers. The same goes for poor mothers, probably because of confounding factors like nutrition and stress. Taken together, these differences mean that if two pregnant women drink the same amount of alcohol at the same time, their fetuses might experience very different alcohol exposures and have very different outcomes. In short, there is no single limit to how much a pregnant woman can safely drink because every woman and every pregnancy is different.

As organizations like the CDC point out, the surest way to prevent FASD is to avoid alcohol entirely while pregnant. Ultimately, every expecting mother has to make her own decision about drinking based on her own understanding of the risk. She may hear strong opinions from friends, family, the blogosphere and conventional media. Lots of people will seem sure of many things and those are precisely the people that she should ignore.

When making any important decision, it’s best to know as much as you can – even when that means knowing how much remains unknown.

_____

Photo Credit: Uncalno Tekno on Flickr, used via Creative Commons license

Hurley TD, & Edenberg HJ (2012). Genes encoding enzymes involved in ethanol metabolism. Alcohol research : current reviews, 34 (3), 339-44 PMID: 23134050

Stoler JM, & Holmes LB (1999). Under-recognition of prenatal alcohol effects in infants of known alcohol abusing women. The Journal of Pediatrics, 135 (4), 430-6 PMID: 10518076

fMR-Why? Bad Science Meets Chocolate and Body Envy


Imagine this: You have bulimia nervosa, a psychiatric condition that traps you in an unhealthy cycle of binge eating and purging. You’ve been recruited to participate in a functional MRI experiment on this devastating illness. As you lie in the scanner, you are shown pictures of pizza, chocolate and other high-calorie foods and you’re told to imagine eating them. You do this for 72 pictures of delicious, fatty foods. At other points in the experiment, you see pictures of bodies (sans heads) of models clipped from a women’s magazine. You are told to compare your body to each of the bodies in the pictures. You do this 72 times, once for each skinny (and probably retouched) model’s body. The experience would have been unsettling enough for normal women trying to eat healthier or feel happier with their not-so-super-model bodies. But for women with bulimia, it must have truly been a hoot and a half.

Luckily, the misery was worth it. When the researchers published their findings, they claimed to have shown that patients with bulimia process body images differently. In their conclusions, they said that their results could inform how psychotherapists should treat patients with the illness. They even suggested that it might someday lead to direct interventions, such as a targeted zap to the head using transcranial magnetic stimulation.

My recommendation? Cover your therapist’s ears and stay away from the head zapper. This study shows nothing of the sort.

Functional MRI is a widely used and quite powerful method of probing the brain, but it is only useful for experiments that are thoughtfully conceived and carefully interpreted. Unfortunately, many fMRI papers that make it to publication are neither.

One of the most common problems in fMRI is making bad comparisons. All fMRI studies rely on comparisons because brains are all different and scanners are all different. If you are going to say that Region X becomes active when you see a picture of chocolate, you first have to answer that crucial question: compared to what? If you’re interested in how the brain reacts to unhealthy food in particular, you might compare looking at pictures of chocolate with looking at pictures of raisins or eggplant. And if you’re comparing these comparisons across subject groups (such as patients versus non-patients), both groups had better have the same control condition. Otherwise, you’re not even comparing apples to oranges. You’re comparing apples to gym socks.

Sadly, that is just what these experimenters did. They compared brain blood flow when the subjects looked at either junk food or skinny women with blood flow during 36-second stretches of time when subjects just stared at a small, white ‘+’ on the screen. The authors say that using a more similar control condition (say, imagining using non-food objects like a lamp or a door) would be bad because patients with bulimia might respond to these objects differently than healthy subjects. This argument is nonsensical. There’s no reason to believe that people with bulimia feel any differently about doors or lamps than anyone else, but there’s plenty of reason to believe that they would spend those 36-second stretches of downtime, before or after comparing their bodies to those of models, either obsessing or trying not to obsess over how their bodies ‘measure up.’
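
To see how this skews the group comparison, consider a toy calculation with invented activation numbers (purely illustrative, not from the paper): if the groups differ during the baseline rather than the task, the task-minus-baseline contrast manufactures a “group difference” out of thin air.

```python
# Hypothetical activation levels (arbitrary units, invented for illustration).
# Suppose both groups respond IDENTICALLY to the body-image task, but
# patients ruminate during the '+' baseline while non-patients idle.
patient_task, patient_baseline = 1.0, 0.6   # elevated baseline: rumination
control_task, control_baseline = 1.0, 0.2   # quiet baseline

patient_contrast = patient_task - patient_baseline   # 0.4
control_contrast = control_task - control_baseline   # 0.8

# A 0.4-unit "group difference" appears -- driven entirely by the baseline,
# even though the task responses were identical by construction.
group_difference = control_contrast - patient_contrast
```

The arithmetic is trivial, but it is exactly the logic by which a bad baseline can masquerade as a disease effect.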

In fact, I could not help but wonder whether the authors originally intended to use this ‘+’ as the control condition. They actually had less crappy control conditions built into the experiment. As a control for imagining eating pizza and chocolate, the participants were also shown non-food objects like tools and told to imagine using them. They also saw interior décor photos and had to compare the furniture to that in their own homes – a control for comparing each model’s body to one’s own.

When the authors did their analyses using these (better) control conditions, they found very few differences between patients and non-patients. None, in fact, for the imagine-eating-junk-food portion of the study. For the comparing-oneself-to-models portion, they only found that patients showed less activation than controls in two regions of visual cortex. These regions may correspond to areas that specifically process body images. But would less activation in these regions mean that patients with bulimia process body images differently than other people? Not at all. If the patients were not looking at the pictures as much as non-patients or were more distracted/less attentive to them, you would see the same pattern of results. In short, the authors had no story to tell when they used the better controls. They had a ‘null result’ that would not get published.


Based on the design of their experiment, I find myself wondering if this was how they originally intended to analyze their data.* And it’s really the only sensible way to analyze these data. Experiments like these include the ‘+’ condition to establish a baseline (essentially, what you’re going to call ‘zero’). These ‘+’ blocks also correct for an unfortunate phenomenon called scanner drift that adds noise to the data.
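
For readers who want a concrete picture of what drift looks like, here is a minimal sketch (my own toy simulation, not the paper’s actual pipeline): a single voxel’s time course with a boxcar task signal plus a slow linear drift, analyzed with a tiny general linear model whose drift regressor keeps the drift out of the task estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy voxel time course: alternating 30-scan task/rest blocks, a slow
# linear scanner drift, and noise (all numbers invented for illustration).
n = 300
t = np.arange(n, dtype=float)
task = (np.arange(n) // 30) % 2         # boxcar: 1 during task blocks
drift = 0.01 * t                        # slow linear scanner drift
signal = 2.0 * task + drift + rng.normal(0.0, 0.2, n)

# A minimal GLM: intercept, linear drift term, and task regressor.
# Modeling the drift explicitly prevents it from contaminating the
# task-amplitude estimate (true value here is 2.0).
X = np.column_stack([np.ones(n), t, task])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
task_amplitude = beta[2]   # recovers a value close to 2.0
```

Real analysis packages use fancier drift models (polynomials, cosine sets), but the principle is the same: baseline periods help anchor the estimate of what “zero” and “drift” look like.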

I have to wonder whether the authors decided to use the ‘+’ for their comparisons because they didn’t get any exciting results with the actual control conditions. If so, it unfortunately worked. Using the baseline condition, they found two differences between patient and non-patient activations in the food task and even more differences between the groups in the body task. Ultimately, the authors got their significant results and they got them published. But those differences have nothing to do with the causes of bulimia and everything to do with what flits through people’s minds while they stare at a plus sign.

Unfortunately, this is just one example from a growing sea of bad fMRI studies out there. And while many people do wonderful work with the technique and advance the field, others do it a disservice and set us all back. From researchers to reviewers, publishers, science writers and reporters, we all need to proceed with caution and evaluate papers with a critical eye. The participants in our experiments deserve it. The public deserves it. Most of all, patients deserve the best information we can give them. Science done well and served to them straight.
____

Update: I’ve made a few small changes to this post to clarify my intent. I don’t personally know the study’s authors and have no insight into their actions, intentions, or motivations. In writing the piece, I hoped to bring attention to a widespread problem in fMRI research. Of the study’s authors I can only say that they did some seriously flawed research. Why, when, or how is as much your guess as mine.

Since posting this piece, I’ve contacted the editor of BMC Psychiatry regarding my concerns with the paper. Not only have I received no reply from her, but this paper is still listed as one of the ‘Editor’s Picks’ on their website as of 1/5/14.

____

*For curious fMRI folk: each run contained 6 food/body blocks, 6 non-food/décor blocks, and only 3 baseline ‘+’ blocks. That means they collected twice as much data for the control conditions they supposedly didn’t intend to use as for the baseline blocks they did.

Photo #1 credit: MRI scanner, photo by Matthias Weinberger (cszar on Flickr), used via Creative Commons license

Photo #2 credit: Structural MRI of kiwi fruit by Dom McIntyre (McBadger on Flickr), used via Creative Commons license

Van den Eynde F, Giampietro V, Simmons A, Uher R, Andrew CM, Harvey PO, Campbell IC, & Schmidt U (2013). Brain responses to body image stimuli but not food are altered in women with bulimia nervosa. BMC Psychiatry, 13 (1) PMID: 24238299
