Zapping Brains, Seeing Scenes


More than fifteen years ago, neuroimagers found a region of the brain that seemed to be all about place. The region lies on the bottom surface of the temporal lobe near a fold called the parahippocampal gyrus, so it was called the parahippocampal place area, or PPA. You have two PPAs: one on the left side of your brain and one on the right. If you look at a picture of a house, an outdoor or indoor scene, or even an empty room, your PPAs will take notice. Since its discovery, hundreds of experiments have probed the place predilections of the PPA. Each time, the region demonstrated its dogged devotion to place. Less clear was exactly what type of scene information the PPA was representing and what it was doing with that information. A recent scientific paper now gives us a rare, direct glimpse at the inner workings of the PPA through the experience of a young man whose right PPA was stimulated with electrodes.

The young man in question wasn’t an overzealous grad student. He was a patient with severe epilepsy who was at the hospital to undergo brain surgery. When medications can’t bring a person’s seizures under control, surgery is one of the few remaining options. The surgery involves removing the portion of the brain in which that patient’s seizures begin. Of course, removing brain tissue is not something one does lightly. Before a surgery, doctors use various techniques to determine where each patient’s seizures originate and where crucial regions involved in language and movement are located. They do this so they will know which part of the brain to remove and which parts they must be sure not to remove. One way of mapping these areas before surgery is to open the patient’s skull, plant electrodes in his or her brain, and monitor brain activity at the various electrode sites. This technique, called electrocorticography, allows doctors both to record brain activity and to electrically stimulate the brain to map key areas. It is also the most powerful and direct look scientists can get into the human brain.

A group of researchers in New York headed by Ashesh Mehta and Pierre Mégevand documented the responses of the young man as they stimulated electrodes planted in and around his right PPA. During one stimulation, he described seeing a train station from the neighborhood where he lives. During another, he reported seeing a staircase and a closet stuffed with something blue. When they repeated the stimulation, he saw the same random indoor scene again. So stimulating the PPA can cause hallucinations of scenes both indoor and outdoor, familiar and unfamiliar. This suggests that specific scene representations in the brain may be both highly localized and complex. It is also just incredibly cool.

The doctor also stimulated an area involved in face processing and found that this made the patient see distortions in a face. Another study published in 2012 showed a similar effect in a different patient. While the patient looked at his doctor, the doctor stimulated the face area. As the patient reported, “You just turned into somebody else. Your face metamorphosed.” Here’s a link to a great video of that patient’s entire reaction and description.

The authors of the new study also stimulated a nearby region that had shown a complex response to both faces and scenes in previous testing. When they zapped this area, the patient saw something that made him chuckle. “I’m sorry. . . You all looked Italian. . . Like you were working in a pizza shop. That’s what I saw, aprons and whatnot. Yeah, almost like you were working in a pizzeria.”

Now wouldn’t we all love to know what that area does?

______

Photo credit: thisisbossi on Flickr, used via Creative Commons license

*In case you’re wondering, the patient underwent surgery and no longer suffers from seizures (although he still experiences auras).

Mégevand P, Groppe DM, Goldfinger MS, Hwang ST, Kingsley PB, Davidesco I, & Mehta AD (2014). Seeing scenes: topographic visual hallucinations evoked by direct electrical stimulation of the parahippocampal place area. The Journal of Neuroscience, 34 (16), 5399-5405. PMID: 24741031

Did I Do That? Distinguishing Real from Imagined Actions


If you’re like most people, you spend a great deal of your time remembering past events and planning or imagining events that may happen in the future. While these activities have their uses, they also make it terribly hard to keep track of what you have and haven’t actually seen, heard, or done. Distinguishing between memories of real experiences and memories of imagined or dreamt experiences is called reality monitoring and it’s something we do (or struggle to do) all of the time.

Why is reality monitoring a challenge? To illustrate, let’s say you’re at the Louvre standing before the Mona Lisa. As you look at the painting, visual areas of your brain are busy representing the image with specific patterns of activity. So far, so good. But problems emerge if we rewind to a time before you saw the Mona Lisa at the Louvre. Let’s say you were about to head over to the museum and you imagined the special moment when you would gaze upon Da Vinci’s masterwork. When you imagined seeing the picture, you were activating the same visual areas of the brain in a similar pattern to when you would look at the masterpiece itself.*

When you finally return home from Paris and try to remember that magical moment at the Louvre, how will you be able to distinguish your memories of seeing the Mona Lisa from imagining her? Reality monitoring studies have asked this very question (minus the Mona Lisa). Their findings suggest that you’ll probably use additional details associated with the memory to separate the mnemonic wheat from the chaff. You might use memory of perceptual details, like how the lights reflected off the brushstrokes, or you might use details of what you thought or felt, like your surprise at the painting’s actual size. Studies find that people activate both visual areas (like the fusiform gyrus) and self-monitoring regions of the brain (like the medial prefrontal cortex) when they are deciding whether they saw or just imagined seeing a picture.

It’s important to know what you did and didn’t see, but another crucial and arguably more important facet of reality monitoring involves determining what you did and didn’t do. How do you distinguish memories of things you’ve actually done from those you’ve planned to do or imagined doing? You have to do this every day and it isn’t a trivial task. Perhaps you’ve left the house and headed to work, only to wonder en route if you’d locked the door. Even if you thought you did, it can be hard to tell whether you remember actually doing it or just thinking about doing it. The distinction has consequences. Going home and checking could make you late for work, but leaving your door unlocked all day could mean losing your possessions. So how do we tell the possibilities apart?

Valerie Brandt, Jon Simons, and colleagues at the University of Cambridge looked into this question and published their findings last month in the journal Cognitive, Affective, and Behavioral Neuroscience. For the first part of the experiment (the study phase), they sat healthy adult participants down in front of two giant boxes – one red and one blue – that each contained 80 ordinary objects. The experimenter would draw each object out of one of the two boxes, place it in front of the participant, and tell him or her to either perform or to imagine performing a logical action with the object. For example, when the object was a book, participants were told to either open or imagine opening it.

After the study phase, the experiment moved to a scanner for fMRI. During these scans, participants were shown photographs of all 160 of the studied objects and, for each item, were asked to indicate either 1) whether they had performed or merely imagined performing an action on that object, or 2) which box the object had been drawn from.** When the scans were over, the participants saw the pictures of the objects again and were asked to rate how much specific detail they’d recalled about encountering each object and how hard it had been to bring that particular memory to mind.

The scientists compared fMRI measures of brain activation during the reality-monitoring task (Did I use or imagine using that object?) with activation during the location task (Which box did this object come from?). One of the areas they found to be more active during reality monitoring was the supplementary motor area, a region involved in planning and executing movements of the body. Just as visual areas are activated for reality monitoring of visual memories, motor areas are activated when people evaluate their action memories. In other words, when you ask yourself whether you locked the door or just imagined it, you may be using details of motor aspects of the memory (e.g., pronating your wrist to turn the key in the lock) to make your decision.

The study’s authors also found greater activation in the anterior medial prefrontal cortex when they compared reality monitoring for actions participants performed with those they only imagined performing. The medial prefrontal cortex encompasses a respectable swath of the brain with a variety of functions that appear to include making self-referential judgments, or evaluating how you feel or think about experiences, sensations, and the like. Other experiments have implicated this or nearby areas in reality monitoring of visual memories. The study by Brandt and Simons also found that activation of this medial prefrontal region during reality-monitoring trials correlated with the number of internal details the participants said they’d recalled in those trials. In other words, the more details participants remembered about their thoughts and feelings during the past actions, the busier this area appeared to be. So when faced with uncertainty about a past action, the medial prefrontal cortex may be piping up about the internal details of the memory. I must have locked the door because I remember simultaneously wondering when my package would arrive from Amazon, or because I was also feeling sad about leaving my dog alone at home.

As I read these results, I found myself thinking about the topic of my prior post on OCD. Pathological checking is a common and often disruptive symptom of the illness. Although it may seem like a failure of reality monitoring, several behavioral studies have shown that people with OCD have normal reality monitoring for past actions. The difference is that people with checking symptoms of OCD have much lower confidence in the quality of their memories than others. It seems to be this distrust of their own memories, along with relentless anxiety, that drives them to double-check over and over again.

So the next time you find yourself wondering whether you actually locked the door, cut yourself some slack. Reality monitoring ain’t easy. All you can do is trust your brain not to lead you astray. Make a call and stick with it. You’re better off being wrong than being anxious about it – that is, unless you have really nice stuff.

_____

Photo credit: Liz (documentarist on Flickr), used via Creative Commons license

* Of course, the mental image you conjure of the painting is actually based on the memory of having seen it in ads, books, or posters before. In fact, a growing area of neuroscience research focuses on how imagining the future relies on the same brain areas involved in remembering the past. Imagination seems to be, in large part, a collage of old memories cut and pasted together to make something new.

**The study also had a baseline condition, used additional contrasts, and found additional activations that I didn’t mention for the sake of brevity. Check out the original article for full details.

Brandt, V., Bergström, Z., Buda, M., Henson, R., & Simons, J. (2014). Did I turn off the gas? Reality monitoring of everyday actions. Cognitive, Affective, & Behavioral Neuroscience, 14 (1), 209-219. DOI: 10.3758/s13415-013-0189-z

In the Blink of an Eye


It takes around 150 milliseconds (or about one sixth of a second) to blink your eyes. In other words, not long. That’s why you say something happened “in the blink of an eye” when an event passed so quickly that you were barely aware of it. Yet a new study shows that humans can process pictures at speeds that make an eye blink seem like a screening of Titanic. What’s more, these results challenge a popular theory about how the brain creates your conscious experience of what you see.

To start, imagine your eyes and brain as a flight of stairs. I know, I know, but hear me out. Each step represents a stage in visual processing. At the bottom of the stairs you have the parts of the visual system that deal with the spots of darkness and light that make up whatever you’re looking at (let’s say an old family photograph). As you stare at the photograph, information about light and dark starts out at the bottom of the stairs in what neuroscientists call “low-level” visual areas, like the retinas in your eyes and a swath of tissue tucked away at the very back of your brain called primary visual cortex, or V1.

Now imagine that the information about the photograph begins to climb our metaphorical neural staircase. Each time the information reaches a new step (a.k.a. visual brain area) it is transformed in ways that discard the details of light and dark and replace them with meaningful information about the picture. At one step, say, an area of your brain detects a face in the photograph. Higher up the flight, other areas might identify the face as your great-aunt Betsy’s, discern that her expression is sad, or note that she is gazing off to her right. By the time we reach the top of the stairs, the image is, in essence, a concept with personal significance. After it first strikes your eyes, it only takes visual information 100-150 milliseconds to climb to the top of the stairs, yet in that time your brain has translated a pattern of light and dark into meaning.

For many years, neuroscientists and psychologists believed that vision was essentially a sprint up this flight of stairs. You see something, you process it as the information moves to higher areas, and somewhere near the top of the stairs you become consciously aware of what you’re seeing. Yet intriguing results from patients with blindsight, along with other studies, seemed to suggest that visual awareness happens somewhere on the bottom of the stairs rather than at the top.

New, compelling demonstrations came from studies using transcranial magnetic stimulation, a method that can temporarily disrupt brain activity at a specific point in time. In one experiment, scientists used this technique to disrupt activity in V1 about 100 milliseconds after subjects looked at an image. At this point (100 milliseconds in), information about the image should already be near the top of the stairs, yet zapping lowly V1 at the bottom of the stairs interfered with the subjects’ ability to consciously perceive the image. From this and other studies, a new theory was born. In order to consciously see an image, visual information from the image that reaches the top of the stairs must return to the bottom and combine with ongoing activity in V1. This magical mixture of nitty-gritty visual details and extracted meaning somehow creates what we experience as visual awareness.

In order for this model of visual processing to work, you would have to look at the photo of Aunt Betsy for at least 100 milliseconds in order to be consciously aware of it (since that’s how long it takes for the information to sprint up and down the metaphorical flight of stairs). But what would happen if you saw Aunt Betsy’s photo for less than 100 milliseconds and then immediately saw a picture of your old dog, Sparky? Once Aunt Betsy made it to the top of the stairs, she wouldn’t be able to return to the bottom because Sparky has taken her place. Unable to return to V1, Aunt Betsy would never make it to your conscious awareness. In theory, you wouldn’t know that you’d seen her at all.

Mary Potter and colleagues at MIT tested this prediction and recently published their results in the journal Attention, Perception, & Psychophysics. They showed subjects brief pictures of complex scenes including people and objects in a style called rapid serial visual presentation (RSVP). You can find an example of an RSVP image stream here, although the images in the demo are more racy and are shown for longer than the pictures in the Potter study.

The RSVP image streams in the Potter study were strings of six photographs shown in quick succession. In some image streams, pictures were each shown for 80 milliseconds (or about half the time it takes to blink). Pictures in other streams were shown for 53, 27, or 13 milliseconds each. To give you a sense of scale, 13 milliseconds is about one tenth of an eye blink, or roughly one hundredth of a second. It is also far less time than Aunt Betsy would need to sprint to the top of the stairs, much less to return to the bottom.

At such short timescales, people can’t remember and report all of the pictures they see in an image stream. But are they aware of them at all? To test this, the scientists gave their subjects a written description of a target picture from the image stream (say, flowers) either just before the stream began or just after it ended. In either case, once the stream was over, the subject had to indicate whether an image fitting that description appeared in the stream. If it did appear, subjects had to pick which of two pictures fitting the description actually appeared in the stream.

Considering how quickly these pictures are shown, the task should be hard for people to do even when they know what they’re looking for. Why? Because “flowers” could describe an infinite number of photographs with different arrangements, shapes, and colors. Even when the subject is tipped off with the description in advance, he or she must process each photo in the stream well enough to recognize the meaning of the picture and compare it to the description. On top of that, this experiment effectively jams the metaphorical visual staircase full of images, leaving no room for visual info to return to V1 and create a conscious experience.

The situation is even more dire when people get the description of the target only after they’ve viewed the entire image stream. To answer correctly, subjects have to process and remember as many of the pictures from the stream as possible. None of this would be impressive under ordinary circumstances but, again, we’re talking 13 milliseconds here.

Sensitivity (computed from subject performance) on the RSVP image streams with 6 images. From Potter et al., 2013.

How did the subjects do? Surprisingly well. In all cases, they performed better than if they were randomly guessing – even when tested on the pictures shown for 13 milliseconds. In general, they scored higher when the pictures were shown longer. And as any test-taker could tell you, people do better when they know the test questions in advance. This pattern held up even when the scientists repeated the experiment with 12-image streams. As you might imagine, that makes for a very crowded visual staircase.
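
(A quick aside on that “sensitivity” measure: it comes from signal detection theory and asks how well subjects separate streams that contained the target from streams that didn’t, while correcting for any bias toward answering “yes.” Here’s a minimal sketch of a standard d′ calculation; the counts are invented for illustration, and the exact correction the paper used may differ.)

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') from detection counts, with a common
    correction so perfect scores don't yield infinite z-values."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented counts: a subject detects 20 of 32 target-present streams and
# false-alarms on 6 of 32 target-absent streams.
print(round(d_prime(20, 12, 6, 26), 2))  # well above 0, i.e., above chance
```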

These results challenge the idea that visual awareness happens when information from the top of the stairs returns to V1. Still, they are by no means the theory’s death knell. It’s possible that the stairs are wider than we thought and that V1 is able (at least to some degree) to represent more than one image at a time. Another possibility is that the subjects in the study answered the questions using a vague sense of familiarity – one that might arise even if they were never overtly conscious of seeing the images. This is a particularly compelling explanation because there’s evidence that people process visual information like color and line orientation without awareness when late activity in V1 is disrupted. The subjects in the Potter study may have used this type of information to guide their responses.

However things ultimately shake out with the theory of visual awareness, I love that these intriguing results didn’t come from a fancy brain scanner or from the coils of a transcranial magnetic stimulation device. With a handful of pictures, a computer screen, and some good old-fashioned thinking, the authors addressed a potentially high-tech question in a low-tech way. It’s a reminder that fancy, expensive techniques aren’t the only way – or even necessarily the best way – to tackle questions about the brain. It also shows that findings don’t need colorful brain pictures or glow-in-the-dark mice in order to be cool. You can see in less than one-tenth of a blink of an eye. How frickin’ cool is that?

Photo credit: Ivan Clow on Flickr, used via Creative Commons license

Potter MC, Wyble B, Hagmann CE, & McCourt ES (2013). Detecting meaning in RSVP at 13 ms per picture. Attention, Perception, & Psychophysics. PMID: 24374558

The Changing Face of Science: Part Two

In my last post, I wrote about how scientists are beginning to engage with the public, particularly via social media and blogs. Here, I will use my recent experiences at the AAAS conference to illustrate how social media are changing the business of science itself.

The AAAS conference was the first science meeting I’ve attended as an active tweeter. The experience opened my eyes. Throughout the event, scientists and science writers were tweeting interesting talks or points made in various sessions. Essentially, this gave me ears and eyes throughout the conference. For instance, during a slow moment in the session I was attending, I checked out the #AAAS hashtag on Twitter and saw several intriguing tweets from people in another session:

[Screenshots of tweets from another session]

These tweets drew my attention to a talk that I would otherwise have missed completely. I could then decide if I wanted to switch to the other session or learn more about the speaker and her work later on. Even if I did neither, I’d learned a few interesting facts with minimal effort.

Twitter can be a very useful tool for scientists. Aside from its usefulness at conferences, it’s a great way to learn about new and exciting papers in your field. Those who aren’t on Twitter might be surprised to hear that it can be a source for academic papers rather than celebrity gossip. Ultimately, the information you glean from Twitter depends entirely on the people you choose to follow. Scientists often follow other scientists in their own or related fields. Thus, they’re more likely to come upon a great review on oligodendrocytes than news on Justin Bieber’s latest antics. Scientists and science writers form their own interconnected Twitter networks through which they share the type of content that interests them.

Katie Mack, an astrophysicist at the University of Melbourne, has logged some 32,000 tweets as @AstroKatie and has about 7,300 followers on Twitter to date. She recently explained on the blog Real Scientists why she joined Twitter in the first place:

“Twitter started out as an almost purely professional thing for me — I used it to keep up with what other physicists and astronomers were talking about, what people were saying at conferences, that kind of thing. It’s great for networking as well, and just kind of seeing what everyone is up to, in your own field and in other areas of science. Eventually I realized it could also be a great tool for outreach and for sharing my love of science with the world.”

Social media and the Internet more broadly have also made new avenues of scientific research possible. They’ve spurred citizen science projects and collaborative online databases like the International Nucleotide Sequence Database Collaboration. Yet social media and online content have also affected research on a smaller scale as individual scientists discover the science diamonds in the rough. For example, Amina Khan described in a recent Los Angeles Times article how a group of scientists mined online content to compare the strategies different animals use to swim. She writes:

“They culled 112 clips from sites like YouTube and Vimeo depicting 59 different species of flying and swimming animals in action, including moths, bats, birds and even humpback whales. They wanted to see where exactly the animals’ wings (or fins) bent most, and exactly how much they bent.”

Another wonderful example of the influence of YouTube on science came to my attention at the AAAS meeting when I attended a session on rhythmic entrainment in non-human animals. Rhythmic entrainment is the ability to match your movements to a regular beat, such as when you tap your foot to the rhythm of a song. Only five years ago it was widely believed that the ability to match a beat was unique to humans . . . that is, until Aniruddh Patel of Tufts University received an email from his friend.

As Dr. Patel described in the AAAS session, the friend wrote to share a link to a viral YouTube video of a cockatoo named Snowball getting down to the Backstreet Boys. What did Patel make of it? Although the bird certainly seemed to be keeping the beat, it was impossible to know what cues the animal was receiving off-screen. Instead of shrugging off the video or declaring it a fraud, Patel contacted the woman who posted it. She agreed to collaborate with Patel and let him test Snowball under carefully controlled conditions. Remarkably, Snowball was still able to dance to various beats. Patel and his colleagues published their results in 2009, upending the field of beat perception.

That finding sparked a string of new experiments with various species and an entertaining lineup of speakers and animal videos at the AAAS session. Among them, I had the pleasure of watching a sea lion nodding along to “Boogie Wonderland” and a bonobo pounding on a drum.

In essence, the Internet and social media are bringing new opportunities to the doorsteps of scientists. As Dr. Patel’s experience shows, it’s wise to open the door and invite them in. Like everything else in modern society, science does not lie beyond the reach of social media. And thank goodness for that.

_____

Patel, Aniruddh D., Iversen, John R., Bregman, Micah R., & Schulz, Irena (2009). Experimental Evidence for Synchronization to a Musical Beat in a Nonhuman Animal. Current Biology, 19 (10), 827-830. DOI: 10.1016/j.cub.2009.03.038

fMR-Why? Bad Science Meets Chocolate and Body Envy


Imagine this: You have bulimia nervosa, a psychiatric condition that traps you in an unhealthy cycle of binge eating and purging. You’ve been recruited to participate in a functional MRI experiment on this devastating illness. As you lie in the scanner, you are shown pictures of pizza, chocolate and other high-calorie foods and you’re told to imagine eating them. You do this for 72 pictures of delicious, fatty foods. At other points in the experiment, you see pictures of bodies (sans heads) of models clipped from a women’s magazine. You are told to compare your body to each of the bodies in the pictures. You do this 72 times, once for each skinny (and probably retouched) model’s body. The experience would have been unsettling enough for normal women trying to eat healthier or feel happier with their not-so-super-model bodies. But for women with bulimia, it must have truly been a hoot and a half.

Luckily, the misery was worth it. When the researchers published their findings, they claimed to have shown that patients with bulimia process body images differently. In their conclusions, they said that their results can inform how psychotherapists should treat patients with the illness. They even suggested that the work might someday lead to direct interventions, such as a targeted zap to the head using transcranial magnetic stimulation.

My recommendation? Cover your therapist’s ears and stay away from the head zapper. This study shows nothing of the sort.

Functional MRI is a widely used and quite powerful method of probing the brain, but it is only useful for experiments that are thoughtfully conceived and carefully interpreted. Unfortunately, many fMRI papers that make it to publication are neither.

One of the most common problems in fMRI is making bad comparisons. All fMRI studies rely on comparisons because brains are all different and scanners are all different. If you are going to say that Region X becomes active when you see a picture of chocolate, you first have to answer that crucial question: compared to what? If you’re interested in how the brain reacts to unhealthy food in particular, you might compare looking at pictures of chocolate with looking at pictures of raisins or eggplant. And if you’re comparing these comparisons across subject groups (such as patients versus non-patients), both groups had better have the same control condition. Otherwise, you’re not even comparing apples to oranges. You’re comparing apples to gym socks.
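
To make that concrete, here’s a toy numerical sketch (all numbers invented, not data from the study): the very same region can look barely different from a matched control yet hugely “active” against a do-nothing baseline.

```python
import numpy as np

# Invented per-condition response amplitudes (think GLM betas) for one
# brain region in one subject. The "effect" you report depends entirely
# on which of these conditions you choose to subtract.
chocolate = np.array([2.1, 2.4, 1.9, 2.2])  # imagine eating chocolate
eggplant  = np.array([1.9, 2.2, 1.8, 2.1])  # matched food control
fixation  = np.array([0.2, 0.1, 0.3, 0.0])  # staring at a '+'

print(chocolate.mean() - eggplant.mean())  # ~0.15: little food-specific effect
print(chocolate.mean() - fixation.mean())  # ~2.00: huge "activation" vs. doing nothing
```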

Sadly, that is just what these experimenters did. They compared brain blood flow when the subjects looked either at junk food or skinny women with blood flow during 36-second stretches of time when subjects just stared at a small, white ‘+’ on the screen. The authors say that using a more similar control condition (say, imagining using non-food objects like a lamp or a door) would be bad because patients with bulimia might respond to these objects differently than healthy subjects. This argument is nonsensical. There’s no reason to believe that people with bulimia feel any differently about doors or lamps than anyone else, but there’s plenty of reason to believe that they would spend 36-second moments of downtime before or after comparing their bodies to those of models either obsessing or trying not to obsess about how their bodies ‘measure up.’

In fact, I couldn’t help but wonder whether the authors really planned from the start to use this ‘+’ as the control condition. They actually had less crappy control conditions built into the experiment. As a control for imagining eating pizza and chocolate, the participants were also shown non-food objects like tools and told to imagine using them. They also saw interior décor photos and had to compare the furniture to those in their own homes – a control for comparing each model’s body to one’s own.

When the authors did their analyses using these (better) control conditions, they found very few differences between patients and non-patients. None, in fact, for the imagine-eating-junk-food portion of the study. For the comparing-oneself-to-models portion, they only found that patients showed less activation than controls in two regions of visual cortex. These regions may correspond to areas that specifically process body images. But would less activation in these regions mean that patients with bulimia process body images differently than other people? Not at all. If the patients were not looking at the pictures as much as non-patients or were more distracted/less attentive to them, you would see the same pattern of results. In short, the authors had no story to tell when they used the better controls. They had a ‘null result’ that would not get published.


Based on the design of their experiment, I find myself wondering if this was how they originally intended to analyze their data.* And it’s really the only sensible way to analyze these data. Experiments like these include the ‘+’ condition to establish a baseline (essentially, what you’re going to call ‘zero’). These ‘+’ blocks also correct for an unfortunate phenomenon called scanner drift that adds noise to the data.

I have to wonder if the authors decided to use the ‘+’ for their comparisons because they didn’t get any exciting results with the actual control conditions. If so, it unfortunately worked. Using the baseline condition, they found two differences between patient and non-patient activations in the food task and even more differences between the groups in the body task. Ultimately, the authors got their significant results and they got them published. But those differences have nothing to do with the causes of bulimia and everything to do with what flits through people’s minds while they stare at a plus sign.

Unfortunately, this is just one example from a growing sea of bad fMRI studies out there. And while many people do wonderful work with the technique and advance the field, others do it a disservice and set us all back. From researchers to reviewers, publishers, science writers and reporters, we all need to proceed with caution and evaluate papers with a critical eye. The participants in our experiments deserve it. The public deserves it. Most of all, patients deserve the best information we can give them. Science done well and served to them straight.
____

Update: I’ve made a few small changes to this post to clarify my intent. I don’t personally know the study’s authors and have no insight into their actions, intentions, or motivations. In writing the piece, I hoped to bring attention to a widespread problem in fMRI research. Of the study’s authors I can only say that they did some seriously flawed research. Why, when, or how is as much your guess as mine.

Since posting this piece, I’ve contacted the editor of BMC Psychiatry regarding my concerns with the paper. Not only have I received no reply from her, but this paper is still listed as one of the ‘Editor’s Picks’ on their website as of 1/5/14.

____

*For curious fMRI folk: each run contained 6 food/body blocks, 6 non-food/décor blocks, and only 3 baseline ‘+’ blocks. That means they collected twice as much data for the control conditions they supposedly didn’t intend to use as for the ones they did.

Photo #1 credit: MRI scanner, photo by Matthias Weinberger (cszar on Flickr), used via Creative Commons license

Photo #2 credit: Structural MRI of kiwi fruit by Dom McIntyre (McBadger on Flickr), used via Creative Commons license

Van den Eynde F, Giampietro V, Simmons A, Uher R, Andrew CM, Harvey PO, Campbell IC, & Schmidt U (2013). Brain responses to body image stimuli but not food are altered in women with bulimia nervosa. BMC Psychiatry, 13 (1). PMID: 24238299


The Trouble with (and without) Fish


This week I’m posting a piece from my archives (August, 2011) that I’ve updated a little. Two things brought this post to mind: 1) the recent EPA report that women have become better informed about mercury and are making better choices at the fish counter and 2) remarkable updates from my scientist friend who is blogging her way through the world’s oceans as she collects water samples to catalog mercury levels around the globe. Both demonstrate that we are making some progress in studying and alerting people to the mercury in our waters and our fish. NB: when I say “now that I’m pregnant,” it’s 2011 me talking.

_________

Once upon a time in a vast ocean, life evolved. And then, over many millions of years, neurons and spinal cords and eyes developed, nourished all the while in a gentle bath of nutrients and algae.

Our brains and eyes are distant descendants of those early nervous systems formed in the sea. And even though our ancestors eventually sprouted legs and waddled out of the ocean, the neural circuitry of modern humans is still dependent on certain nutrients that their water-logged predecessors had in abundance.

This obscure fact about our distant evolutionary past has recently turned into a major annoyance for me now that I’m pregnant. In fact, whether they know it or not, all pregnant women are trapped in a no-win dilemma over what they put into their stomachs. Take, for instance, a popular guidebook for pregnant women. On one page, it advocates eating lots of seafood while pregnant, explaining that fish contain key nutrients that the developing eyes and brain of the fetus will need. A few pages later, however, the author warns that seafood contains methylmercury, a neurotoxic pollutant, and that fish intake should be strictly curtailed. What is a well-meaning pregnant lady to do?

On a visceral level, nothing sounds worse than poisoning your child, so many women reduce their seafood intake while pregnant. I have spoken with women who cut all seafood out of their diet while pregnant, for fear that a little exposure could prove to be too much. They had good reason to be worried. Extreme methylmercury poisoning episodes in Japan and Iraq in past decades have shown that excessive methylmercury intake during pregnancy can cause developmental delays, deafness, blindness, and seizures in the babies exposed.

But what happens if pregnant women eliminate seafood from their diet altogether? Without careful supplementation of vital nutrients found in marine ecosystems, children face neural setbacks or developmental delays on a massive scale. Consider deficiencies in iodine, a key nutrient readily found in seafood. Its scarcity in the modern land-based diet was causing mental retardation in children – and sparked the creation of iodized salt (salt supplemented with iodine) to ensure that the nutritional need was met.


Perhaps the hardest nutrient to get without seafood is an omega-3 fatty acid known as DHA. In recent years, scientists have learned that this particular fatty acid is essential for proper brain development and functioning, yet it is almost impossible to get from non-aquatic dietary sources. At the grocery store, you’ll find vegetarian products that claim to fill those needs by supplying the biochemical precursor to DHA (found in flaxseed, walnuts, and soybean oils), but it’s not clear that the precursor will do the trick. Our bodies take a while to synthesize DHA from its precursor. In fact, we may burn much of the precursor for energy before we manage to convert it to DHA.

The best way for pregnant women to meet the needs of their growing babies is to eat food from marine sources. Yet thanks to global practices of burning coal and disposing of industrial and medical waste, any seafood women eat will expose their offspring to some amount of methylmercury. There’s no simple solution to this problem, although studies suggest that child outcomes are best when women consume ample seafood while avoiding species with higher levels of methylmercury (such as shark, tilefish, walleye, pike, and some types of tuna). It also matters where the fish was caught. Mercury levels will be higher in fish from mercury-polluted waters – one of the reasons that it’s important to catalog mercury levels around the globe.

Unless we start cleaning up our oceans, pregnant women will continue to face this awful decision each time they sit down at the dinner table. Far worse, we may face future generations with lower IQs and developmental delays regardless of which choice their mothers make. Thanks to shoddy environmental oversight, we may be saddling our children with brains that don’t work as well as our own. And that is something I truly can’t swallow.

____

Photo credits:

Photo 1: by Gideon (malias) on Flickr, used via Creative Commons license

Photo 2: by @Doug88888 on Flickr, used via Creative Commons license

Another Time, Another Place


Whenever I visit my childhood home outside of Chicago I try to make it to the local pancake house. The buttery pancakes would be reason enough, but they’re not the only reason I stop by. A stroll through that pancake house is truly a stroll down memory lane. Each table I pass triggers a memory of a meal shared with different people in different decades of my life. One moment I’m eating German pancakes with my college boyfriend. The next, I am passing menus to my new husband’s family.  The next, I am celebrating my eighth grade graduation with my parents and older brother.

Memories return you to a specific time and place. Consider so-called flashbulb memories, or vivid memories of dramatic moments that caught you off-guard. I remember exactly where I was when I heard that a plane had struck one of the Twin Towers and, later, when I learned that my father had died. I remember that I was sitting on the living room rug in my Somerville apartment when I watched Columbia transform from a space shuttle into a streak of fire across the sky. Is it helpful to remember where I was sitting? Not in the slightest. But in the murky, mysterious realm of memory, when and what are inextricably linked with where.

Mention the word “memory” to neuroscientists and you’re sure to get them thinking of the hippocampus, a sliver of tissue nestled deep inside each hemisphere of the brain. The hippocampus has been synonymous with memory since the late 1950s, when William Scoville and Brenda Milner described a patient who was incapable of forming new memories after both of his hippocampi were removed. Since then, throngs of neuroscientists have devoted their careers to studying the hippocampus. Among other revelations, they’ve discovered a class of neurons called place cells that represent (you guessed it) information about place.

How do cells represent place? To illustrate, let’s say you’re in your favorite coffee shop. Some of the place cells in your hippocampus will fire like crazy when you walk through the entrance. Others save their enthusiasm until you are waiting in line to order your latte, stopping at the counter for milk and sugar, or settling in at your favorite table. When you physically occupy their place-of-interest, they go nuts – like a neural alarm signaling your location. At this moment, you are here!

The same principle applies to my experience at the pancake house. Different place cells fire at different tables. In essence, these sets of cells provide a unique neural code for each space I can occupy in the restaurant. And this code has been with me for a while. When I sat in the corner booth after my graduation from middle school, I formed a memory of that celebration that included the code for that particular spot. Decades later, sitting in that booth or even walking past it can trigger a similar code in my brain, one that elicits the rest of that dusty old memory.
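For the computationally inclined, place cells are often modeled as Gaussian tuning curves over position. The sketch below is a toy illustration (the restaurant geometry, field widths, and firing rates are all made up): with just three such cells, every location evokes its own pattern of firing rates across the population – a distinct code for each spot.

```python
import numpy as np

def place_cell_rate(position, field_center, field_width=0.5, peak_rate=20.0):
    """Toy Gaussian place field: the cell fires fastest when the animal
    occupies its preferred location and falls off with distance from it."""
    return peak_rate * np.exp(-(position - field_center) ** 2
                              / (2.0 * field_width ** 2))

# Three hypothetical cells with fields at the door, the counter, and the
# corner booth of a one-dimensional restaurant (positions in meters).
field_centers = [0.0, 3.0, 8.0]
for position in [0.0, 3.0, 8.0]:
    rates = [place_cell_rate(position, c) for c in field_centers]
    print(position, "m ->", np.round(rates, 2), "spikes/s")
```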

While eternally cool, place cells have become old news in hippocampal research. The new hippocampal hotness is studying “time cells”. These recently discovered neurons prefer to fire at different intervals after an event (say, ten seconds versus one minute after you step into the coffee shop). This research fad is a bit amusing, as it turns out that place cells and “time cells” are one and the same. This fact hasn’t stopped scientists from referring to “time cells,” but it has forced them to typically use the term in quotation marks.

As scientists studied the time code in the hippocampal cells of rats, a flaw in their experiments became clear. Their studies recorded the neural activity of moving rats, which means that the firing patterns the scientists observed could reflect changes in time, in the rat’s location, or in its motion.

Two recent papers addressed this issue and clarified the nature of “time cells” in the hippocampus. The first of these appeared in the journal Neuron in June of this year. The paper, by Benjamin Kraus, Michael Hasselmo, and collaborators at Boston University, describes an experiment that has as much to do with your time spent sweating it out at the gym as it does with your memory of past events. The scientists recorded the activity of hippocampal cells in rats as they ran on a treadmill or moved around in a simple maze. Since the rat remained in the same location as it ran on the treadmill, the researchers could decouple the rat’s location from the passage of time and the distance the rat ran. Since the authors could vary the speed of the treadmill, they could also piece apart the related variables of time and distance.

The scientists found that “time cells” still produced a time code when location was kept constant (on the treadmill). Using some fancy modeling, they also showed that the activity of most “time cells” reflected a combination of elapsed time and distance run, but a smaller number of “time cells” seemed to care only about time or distance. They also found that these same cells behaved like normal place cells when the rat walked around a simple maze. In short, place cells (a.k.a. “time cells”) can convey information about place, time, and distance travelled to varying degrees that also change under different conditions.
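If you’re curious what that “fancy modeling” buys you, here’s a toy regression sketch (my own illustration, not the authors’ actual model). Because the treadmill’s speed varies, elapsed time and distance run are no longer interchangeable, so a least-squares fit can assign each variable its own weight:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# If treadmill speed never changed, distance would be a fixed multiple of
# time and no analysis could tell them apart. Varying speed across runs
# breaks that perfect correlation.
time = rng.uniform(0.0, 10.0, n)            # seconds on the treadmill
speed = rng.choice([10.0, 20.0, 40.0], n)   # cm/s, varied across runs
distance = speed * time                     # cm run so far

# A made-up cell driven mostly by elapsed time, only weakly by distance
rate = 3.0 * time + 0.02 * distance + rng.normal(0.0, 1.0, n)

# Ordinary least squares recovers separate time and distance weights
X = np.column_stack([time, distance, np.ones(n)])
(time_w, dist_w, intercept), *_ = np.linalg.lstsq(X, rate, rcond=None)
print(f"time weight ~{time_w:.2f}, distance weight ~{dist_w:.3f}")
```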

A second paper on the subject came out in a September issue of The Journal of Neuroscience. The authors, Christopher MacDonald, Howard Eichenbaum*, and colleagues (also from Boston University) eliminated the variable of location by physically restraining the rats with a special headpiece that locked into the testing apparatus, preventing the rats from moving their heads during testing. Unlike the fitness-buff rats in the prior study, these rats were given a memory task. They got a whiff of an odor and then another whiff of an odor a few seconds later. If the second odor was the same as the first, the rat licked its waterspout and got a reward (a drop of water). If the two odors were different, the rat was not supposed to lick.

Even though the rats were completely immobile, their “time cells” showed a strong time code. Different cells fired at different times during the delay. These cells also seemed to represent “what” information (in this case, the odors presented for the task). The scientists found that the overall pattern of “time cell” firing was more similar when the rats remembered the same odor than when they remembered different odors across trials.

In short, place/time cells can represent what, when, and where in a variety of ways, depending on a variety of factors. This representation is flexible – just as memory must be in order for you to remember the date of your anniversary, the feel of your first kiss, and the items on your next shopping list. The remarkable thing about memory is that it is both flexible and robust, meaning that it is resistant to degradation or being swamped out by noise. It can return us to times, places, and experiences that are far away and decades past. For that, we can thank the hippocampus, neural codes, and a set of remarkable cells with an identity crisis.

_____

Photo credit: Stu Rapley on Flickr, used via Creative Commons License

*Howard Eichenbaum was also a middle author on the Neuron paper. Much of the recent work on “time cells” has come from his lab and affiliated labs at Boston University.

Kraus BJ, Robinson RJ 2nd, White JA, Eichenbaum H, & Hasselmo ME (2013). Hippocampal “time cells”: time versus path integration. Neuron, 78 (6), 1090-1101. PMID: 23707613

MacDonald CJ, Carrow S, Place R, & Eichenbaum H (2013). Distinct hippocampal time cell sequences represent odor memories in immobilized rats. The Journal of Neuroscience, 33 (36), 14607-14616. PMID: 24005311
