Zapping Brains, Seeing Scenes


More than fifteen years ago, neuroimagers found a region of the brain that seemed to be all about place. The region lies on the bottom surface of the temporal lobe near a fold called the parahippocampal gyrus, so it was called the parahippocampal place area, or PPA. You have two PPAs: one on the left side of your brain and one on the right. If you look at a picture of a house, an outdoor or indoor scene, or even an empty room, your PPAs will take notice. Since its discovery, hundreds of experiments have probed the place predilections of the PPA. Each time, the region demonstrated its dogged devotion to place. Less clear was exactly what type of scene information the PPA was representing and what it was doing with that information. A recent scientific paper now gives us a rare, direct glimpse at the inner workings of the PPA through the experience of a young man whose right PPA was stimulated with electrodes.

The young man in question wasn’t an overzealous grad student. He was a patient with severe epilepsy who was at the hospital to undergo brain surgery. When medications can’t bring a person’s seizures under control, surgery is one of the few remaining options. The surgery involves removing the portion of the brain in which that patient’s seizures begin. Of course, removing brain tissue is not something one does lightly. Before a surgery, doctors use various techniques to determine where each patient’s seizures originate and where crucial regions involved in language and movement are located. They do this so they will know which part of the brain to remove and which parts they must be sure not to remove. One way of mapping these areas before surgery is to open the patient’s skull, plant electrodes in his or her brain, and monitor brain activity at the various electrode sites. This technique, called electrocorticography, allows doctors both to record brain activity and to electrically stimulate the brain to map key areas. It is also the most powerful and direct look scientists can get into the human brain.

A group of researchers in New York headed by Ashesh Mehta and Pierre Mégevand documented the responses of the young man as they stimulated electrodes that were planted in and around his right PPA. During one stimulation, he described seeing a train station from the neighborhood where he lives. During another, he reported seeing a staircase and a closet stuffed with something blue. When they repeated the stimulation, he saw the same random indoor scene again. So stimulating the PPA can cause hallucinations of scenes that are both indoor and outdoor, familiar and unfamiliar. This suggests that specific scene representations in the brain may be both highly localized and complex. It is also just incredibly cool.

The doctor also stimulated an area involved in face processing and found that this made the patient see distortions in a face. Another study published in 2012 showed a similar effect in a different patient. While the patient looked at his doctor, the doctor stimulated the face area. As the patient reported, “You just turned into somebody else. Your face metamorphosed.” Here’s a link to a great video of that patient’s entire reaction and description.

The authors of the new study also stimulated a nearby region that had shown a complex response to both faces and scenes in previous testing. When they zapped this area, the patient saw something that made him chuckle. “I’m sorry. . . You all looked Italian. . . Like you were working in a pizza shop. That’s what I saw, aprons and whatnot. Yeah, almost like you were working in a pizzeria.”

Now wouldn’t we all love to know what that area does?

______

Photo credit: thisisbossi on Flickr, used via Creative Commons license

*In case you’re wondering, the patient underwent surgery and no longer suffers from seizures (although he still experiences auras).

Mégevand P, Groppe DM, Goldfinger MS, Hwang ST, Kingsley PB, Davidesco I, & Mehta AD (2014). Seeing scenes: topographic visual hallucinations evoked by direct electrical stimulation of the parahippocampal place area. The Journal of Neuroscience, 34 (16), 5399-405. PMID: 24741031

Did I Do That? Distinguishing Real from Imagined Actions


If you’re like most people, you spend a great deal of your time remembering past events and planning or imagining events that may happen in the future. While these activities have their uses, they also make it terribly hard to keep track of what you have and haven’t actually seen, heard, or done. Distinguishing between memories of real experiences and memories of imagined or dreamt experiences is called reality monitoring and it’s something we do (or struggle to do) all of the time.

Why is reality monitoring a challenge? To illustrate, let’s say you’re at the Louvre standing before the Mona Lisa. As you look at the painting, visual areas of your brain are busy representing the image with specific patterns of activity. So far, so good. But problems emerge if we rewind to a time before you saw the Mona Lisa at the Louvre. Let’s say you were about to head over to the museum and you imagined the special moment when you would gaze upon Da Vinci’s masterwork. When you imagined seeing the picture, you activated the same visual areas of the brain, in a pattern similar to the one evoked when you actually looked at the masterpiece itself.*

When you finally return home from Paris and try to remember that magical moment at the Louvre, how will you be able to distinguish your memories of seeing the Mona Lisa from imagining her? Reality monitoring studies have asked this very question (minus the Mona Lisa). Their findings suggest that you’ll probably use additional details associated with the memory to ferret out the mnemonic wheat from the chaff. You might use memory of perceptual details, like how the lights reflected off the brushstrokes, or you might use details of what you thought or felt, like your surprise at the painting’s actual size. Studies find that people activate both visual areas (like the fusiform gyrus) and self-monitoring regions of the brain (like the medial prefrontal cortex) when they are deciding whether they saw or just imagined seeing a picture.

It’s important to know what you did and didn’t see, but another crucial and arguably more important facet of reality monitoring involves determining what you did and didn’t do. How do you distinguish memories of things you’ve actually done from those you’ve planned to do or imagined doing? You have to do this every day and it isn’t a trivial task. Perhaps you’ve left the house and headed to work, only to wonder en route if you’d locked the door. Even if you thought you did, it can be hard to tell whether you remember actually doing it or just thinking about doing it. The distinction has consequences. Going home and checking could make you late for work, but leaving your door unlocked all day could mean losing your possessions. So how do we tell the possibilities apart?

Valerie Brandt, Jon Simons, and colleagues at the University of Cambridge looked into this question and published their findings last month in the journal Cognitive, Affective, & Behavioral Neuroscience. For the first part of the experiment (the study phase), they sat healthy adult participants down in front of two giant boxes – one red and one blue – that each contained 80 ordinary objects. The experimenter would draw each object out of one of the two boxes, place it in front of the participant, and tell him or her to either perform or imagine performing a logical action with the object. For example, when the object was a book, participants were told to either open it or imagine opening it.

After the study phase, the experiment moved to a scanner for fMRI. During these scans, participants were shown photographs of all 160 of the studied objects and, for each item, were asked to indicate either 1) whether they had performed or merely imagined performing an action on that object, or 2) which box the object had been drawn from.** When the scans were over, the participants saw the pictures of the objects again and were asked to rate how much specific detail they’d recalled about encountering each object and how hard it had been to bring that particular memory to mind.

The scientists compared fMRI measures of brain activation during the reality-monitoring task (Did I use or imagine using that object?) with activation during the location task (Which box did this object come from?). One of the areas they found to be more active during reality monitoring was the supplementary motor area, a region involved in planning and executing movements of the body. Just as visual areas are activated for reality monitoring of visual memories, motor areas are activated when people evaluate their action memories. In other words, when you ask yourself whether you locked the door or just imagined it, you may be using details of motor aspects of the memory (e.g., pronating your wrist to turn the key in the lock) to make your decision.

The study’s authors also found greater activation in the anterior medial prefrontal cortex when they compared reality monitoring for actions participants performed with those they only imagined performing. The medial prefrontal cortex encompasses a respectable swath of the brain with a variety of functions that appear to include making self-referential judgments, or evaluating how you feel or think about experiences, sensations, and the like. Other experiments have implicated this or nearby areas in reality monitoring of visual memories. The study by Brandt and Simons also found that activation of this medial prefrontal region during reality-monitoring trials correlated with the number of internal details the participants said they’d recalled in those trials. In other words, the more details participants remembered about their thoughts and feelings during the past actions, the busier this area appeared to be. So when faced with uncertainty about a past action, the medial prefrontal cortex may be piping up about the internal details of the memory: I must have locked the door because I remember simultaneously wondering when my package from Amazon would arrive, or because I was feeling sad about leaving my dog alone at home.

As I read these results, I found myself thinking about the topic of my prior post on OCD. Pathological checking is a common and often disruptive symptom of the illness. Although it may seem like a failure of reality monitoring, several behavioral studies have shown that people with OCD have normal reality monitoring for past actions. The difference is that people with checking symptoms of OCD have much lower confidence in the quality of their memories than others. It seems to be this distrust of their own memories, along with relentless anxiety, that drives them to double-check over and over again.

So the next time you find yourself wondering whether you actually locked the door, cut yourself some slack. Reality monitoring ain’t easy. All you can do is trust your brain not to lead you astray. Make a call and stick with it. You’re better off being wrong than being anxious about it – that is, unless you have really nice stuff.

_____

Photo credit: Liz (documentarist on Flickr), used via Creative Commons license

* Of course, the mental image you conjure of the painting is actually based on the memory of having seen it in ads, books, or posters before. In fact, a growing area of neuroscience research focuses on how imagining the future relies on the same brain areas involved in remembering the past. Imagination seems to be, in large part, a collage of old memories cut and pasted together to make something new.

**The study also had a baseline condition, used additional contrasts, and found additional activations that I didn’t mention for the sake of brevity. Check out the original article for full details.

Brandt, V., Bergström, Z., Buda, M., Henson, R., & Simons, J. (2014). Did I turn off the gas? Reality monitoring of everyday actions. Cognitive, Affective, & Behavioral Neuroscience, 14 (1), 209-219. DOI: 10.3758/s13415-013-0189-z

In the Blink of an Eye


It takes around 150 milliseconds (or about one sixth of a second) to blink your eyes. In other words, not long. That’s why you say something happened “in the blink of an eye” when an event passed so quickly that you were barely aware of it. Yet a new study shows that humans can process pictures at speeds that make an eye blink seem like a screening of Titanic. What’s more, these results challenge a popular theory about how the brain creates your conscious experience of what you see.

To start, imagine your eyes and brain as a flight of stairs. I know, I know, but hear me out. Each step represents a stage in visual processing. At the bottom of the stairs you have the parts of the visual system that deal with the spots of darkness and light that make up whatever you’re looking at (let’s say an old family photograph). As you stare at the photograph, information about light and dark starts out at the bottom of the stairs in what neuroscientists call “low-level” visual areas, like the retinas in your eyes and a swath of tissue tucked away at the very back of your brain called primary visual cortex, or V1.

Now imagine that the information about the photograph begins to climb our metaphorical neural staircase. Each time the information reaches a new step (a.k.a. visual brain area) it is transformed in ways that discard the details of light and dark and replace them with meaningful information about the picture. At one step, say, an area of your brain detects a face in the photograph. Higher up the flight, other areas might identify the face as your great-aunt Betsy’s, discern that her expression is sad, or note that she is gazing off to her right. By the time we reach the top of the stairs, the image is, in essence, a concept with personal significance. After it first strikes your eyes, it only takes visual information 100-150 milliseconds to climb to the top of the stairs, yet in that time your brain has translated a pattern of light and dark into meaning.

For many years, neuroscientists and psychologists believed that vision was essentially a sprint up this flight of stairs. You see something, you process it as the information moves to higher areas, and somewhere near the top of the stairs you become consciously aware of what you’re seeing. Yet intriguing results from patients with blindsight, along with other studies, seemed to suggest that visual awareness happens somewhere on the bottom of the stairs rather than at the top.

New, compelling demonstrations came from studies using transcranial magnetic stimulation, a method that can temporarily disrupt brain activity at a specific point in time. In one experiment, scientists used this technique to disrupt activity in V1 about 100 milliseconds after subjects looked at an image. At this point (100 milliseconds in), information about the image should already be near the top of the stairs, yet zapping lowly V1 at the bottom of the stairs interfered with the subjects’ ability to consciously perceive the image. From this and other studies, a new theory was born: in order to consciously see an image, visual information that reaches the top of the stairs must return to the bottom and combine with ongoing activity in V1. This magical mixture of nitty-gritty visual details and extracted meaning somehow creates what we experience as visual awareness.

In order for this model of visual processing to work, you would have to look at the photo of Aunt Betsy for at least 100 milliseconds in order to be consciously aware of it (since that’s how long it takes for the information to sprint up and down the metaphorical flight of stairs). But what would happen if you saw Aunt Betsy’s photo for less than 100 milliseconds and then immediately saw a picture of your old dog, Sparky? Once Aunt Betsy made it to the top of the stairs, she wouldn’t be able to return to the bottom of the stairs because Sparky would have taken her place. Unable to return to V1, Aunt Betsy would never make it to your conscious awareness. In theory, you wouldn’t know that you’d seen her at all.

Mary Potter and colleagues at MIT tested this prediction and recently published their results in the journal Attention, Perception, & Psychophysics. They showed subjects brief pictures of complex scenes including people and objects in a style called rapid serial visual presentation (RSVP). You can find an example of an RSVP image stream here, although the images in the demo are more racy and are shown for longer than the pictures in the Potter study.

The RSVP image streams in the Potter study were strings of six photographs shown in quick succession. In some image streams, pictures were each shown for 80 milliseconds (or about half the time it takes to blink). Pictures in other streams were shown for 53, 27, or 13 milliseconds each. To give you a sense of scale, 13 milliseconds is about one tenth of an eye blink, or one hundredth of a second. It is also far less time than Aunt Betsy would need to sprint to the top of the stairs, much less return to the bottom.
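Those oddly specific durations aren’t arbitrary: they look like whole numbers of refreshes on a 75 Hz monitor, where each refresh lasts about 13.3 milliseconds. (That refresh rate is my inference from the numbers; the paper describes the actual display hardware.) A quick sanity check:

```python
# Each refresh of a 75 Hz monitor lasts 1000/75 ≈ 13.3 ms.
# Showing a picture for 1, 2, 4, or 6 refreshes yields the four
# durations used in the study, rounded to whole milliseconds.
frame_ms = 1000 / 75

durations = [round(n * frame_ms) for n in (1, 2, 4, 6)]
print(durations)  # → [13, 27, 53, 80]
```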

At such short timescales, people can’t remember and report all of the pictures they see in an image stream. But are they aware of them at all? To test this, the scientists gave their subjects a written description of a target picture from the image stream (say, flowers) either just before the stream began or just after it ended. In either case, once the stream was over, the subject had to indicate whether an image fitting that description appeared in the stream. If it did appear, subjects had to pick which of two pictures fitting the description actually appeared in the stream.

Considering how quickly these pictures are shown, the task should be hard for people to do even when they know what they’re looking for. Why? Because “flowers” could describe an infinite number of photographs with different arrangements, shapes, and colors. Even when the subject is tipped off with the description in advance, he or she must process each photo in the stream well enough to recognize the meaning of the picture and compare it to the description. On top of that, this experiment effectively jams the metaphorical visual staircase full of images, leaving no room for visual info to return to V1 and create a conscious experience.

The situation is even more dire when people get the description of the target only after they’ve viewed the entire image stream. To answer correctly, subjects have to process and remember as many of the pictures from the stream as possible. None of this would be impressive under ordinary circumstances but, again, we’re talking 13 milliseconds here.

Sensitivity (computed from subject performance) on the RSVP image streams with 6 images. From Potter et al., 2013.

How did the subjects do? Surprisingly well. In all cases, they performed better than if they were randomly guessing – even when tested on the pictures shown for 13 milliseconds. In general, they scored higher when the pictures were shown longer. And as any test-taker could tell you, people do better when they know the test questions in advance. This pattern held up even when the scientists repeated the experiment with 12-image streams. As you might imagine, that makes for a very crowded visual staircase.
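“Better than random guessing” is typically quantified with a signal-detection sensitivity measure such as d′, which compares how often subjects correctly detect a target (hits) with how often they falsely report one (false alarms); pure guessing gives d′ = 0. Here’s a minimal sketch of the standard computation – my illustration with made-up rates, not the paper’s analysis code:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity: z-transform of hits minus z-transform of false alarms."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Made-up rates for illustration only
print(d_prime(0.5, 0.5))            # → 0.0 (pure guessing)
print(round(d_prime(0.7, 0.3), 2))  # → 1.05 (above chance)
```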

These results challenge the idea that visual awareness happens when information from the top of the stairs returns to V1. Still, they are by no means the theory’s death knell. It’s possible that the stairs are wider than we thought and that V1 is able (at least to some degree) to represent more than one image at a time. Another possibility is that the subjects in the study answered the questions using a vague sense of familiarity – one that might arise even if they were never overtly conscious of seeing the images. This is a particularly compelling explanation because there’s evidence that people process visual information like color and line orientation without awareness when late activity in V1 is disrupted. The subjects in the Potter study may have used this type of information to guide their responses.

However things ultimately shake out with the theory of visual awareness, I love that these intriguing results didn’t come from a fancy brain scanner or from the coils of a transcranial magnetic stimulation device. With a handful of pictures, a computer screen, and some good old-fashioned thinking, the authors addressed a potentially high-tech question in a low-tech way. It’s a reminder that fancy, expensive techniques aren’t the only way – or even necessarily the best way – to tackle questions about the brain. It also shows that findings don’t need colorful brain pictures or glow-in-the-dark mice in order to be cool. You can see in less than one-tenth of a blink of an eye. How frickin’ cool is that?

Photo credit: Ivan Clow on Flickr, used via Creative Commons license

Potter MC, Wyble B, Hagmann CE, & McCourt ES (2013). Detecting meaning in RSVP at 13 ms per picture. Attention, Perception, & Psychophysics. PMID: 24374558

The Changing Face of Science: Part Two

In my last post, I wrote about how scientists are beginning to engage with the public, particularly via social media and blogs. Here, I will use my recent experiences at the AAAS conference to illustrate how social media are changing the business of science itself.

The AAAS conference was the first science meeting I’ve attended as an active tweeter. The experience opened my eyes. Throughout the event, scientists and science writers were tweeting about interesting talks and points made in various sessions. Essentially, this gave me ears and eyes throughout the conference. For instance, during a slow moment in the session I was attending, I checked out the #AAAS hashtag on Twitter and saw several intriguing tweets from people in another session:

[Screenshots of tweets from the parallel session]

These tweets drew my attention to a talk that I would otherwise have missed completely. I could then decide if I wanted to switch to the other session or learn more about the speaker and her work later on. Even if I did neither, I’d learned a few interesting facts with minimal effort.

Twitter can be a very useful tool for scientists. Aside from its usefulness at conferences, it’s a great way to learn about new and exciting papers in your field. Those who aren’t on Twitter might be surprised to hear that it can be a source for academic papers rather than celebrity gossip. Ultimately, the information you glean from Twitter depends entirely on the people you choose to follow. Scientists often follow other scientists in their own or related fields. Thus, they’re more likely to come upon a great review on oligodendrocytes than news on Justin Bieber’s latest antics. Scientists and science writers form their own interconnected Twitter networks through which they share the type of content that interests them.

Katie Mack, an astrophysicist at the University of Melbourne, has logged some 32,000 tweets as @AstroKatie and has about 7,300 followers on Twitter to date. She recently explained on the blog Real Scientists why she joined Twitter in the first place:

“Twitter started out as an almost purely professional thing for me — I used it to keep up with what other physicists and astronomers were talking about, what people were saying at conferences, that kind of thing. It’s great for networking as well, and just kind of seeing what everyone is up to, in your own field and in other areas of science. Eventually I realized it could also be a great tool for outreach and for sharing my love of science with the world.”

Social media and the Internet more broadly have also made new avenues of scientific research possible. They’ve spurred citizen science projects and collaborative online databases like the International Nucleotide Sequence Database Collaboration. Yet social media and online content have also affected research on a smaller scale as individual scientists discover the science diamonds in the rough. For example, Amina Khan described in a recent Los Angeles Times article how a group of scientists mined online content to compare the strategies different animals use to swim. She writes:

“They culled 112 clips from sites like YouTube and Vimeo depicting 59 different species of flying and swimming animals in action, including moths, bats, birds and even humpback whales. They wanted to see where exactly the animals’ wings (or fins) bent most, and exactly how much they bent.”

Another wonderful example of the influence of YouTube on science came to my attention at the AAAS meeting when I attended a session on rhythmic entrainment in non-human animals. Rhythmic entrainment is the ability to match your movements to a regular beat, such as when you tap your foot to the rhythm of a song. Only five years ago it was widely believed that the ability to match a beat was unique to humans . . . that is, until Aniruddh Patel of Tufts University received an email from his friend.

As Dr. Patel described in the AAAS session, the friend wrote to share a link to a viral YouTube video of a cockatoo named Snowball getting down to the Backstreet Boys. What did Patel make of it? Although the bird certainly seemed to be keeping the beat, it was impossible to know what cues the animal was receiving off-screen. Instead of shrugging off the video or declaring it a fraud, Patel contacted the woman who posted it. She agreed to collaborate with Patel and let him test Snowball under carefully controlled conditions. Remarkably, Snowball was still able to dance to various beats. Patel and his colleagues published their results in 2009, upending the field of beat perception.

That finding sparked a string of new experiments with various species and an entertaining lineup of speakers and animal videos at the AAAS session. Among them, I had the pleasure of watching a sea lion nodding along to “Boogie Wonderland” and a bonobo pounding on a drum.

In essence, the Internet and social media are bringing new opportunities to the doorsteps of scientists. As Dr. Patel’s experience shows, it’s wise to open the door and invite them in. Like everything else in modern society, science does not lie beyond the reach of social media. And thank goodness for that.

_____

Patel AD, Iversen JR, Bregman MR, & Schulz I (2009). Experimental evidence for synchronization to a musical beat in a nonhuman animal. Current Biology, 19 (10), 827-830. DOI: 10.1016/j.cub.2009.03.038

The Trouble with (and without) Fish


This week I’m posting a piece from my archives (August, 2011) that I’ve updated a little. Two things brought this post to mind: 1) the recent EPA report that women have become better informed about mercury and are making better choices at the fish counter and 2) remarkable updates from my scientist friend who is blogging her way through the world’s oceans as she collects water samples to catalog mercury levels around the globe. Both demonstrate that we are making some progress in studying and alerting people to the mercury in our waters and our fish. NB: when I say “now that I’m pregnant,” it’s 2011 me talking.

_________

Once upon a time in a vast ocean, life evolved. And then, over many millions of years, neurons and spinal cords and eyes developed, nourished all the while in a gentle bath of nutrients and algae.

Our brains and eyes are distant descendants of those early nervous systems formed in the sea. And even though our ancestors eventually sprouted legs and waddled out of the ocean, the neural circuitry of modern humans is still dependent on certain nutrients that their water-logged predecessors had in abundance.

This obscure fact of our evolutionary past has recently turned into a major annoyance for me now that I’m pregnant. In fact, whether they know it or not, all pregnant women are trapped in a no-win dilemma over what they put into their stomachs. Take, for instance, a popular guidebook for pregnant women. On one page, it advocates eating lots of seafood while pregnant, explaining that fish contain key nutrients that the developing eyes and brain of the fetus will need. A few pages later, however, the author warns that seafood contains methylmercury, a neurotoxic pollutant, and that fish intake should be strictly curtailed. What is a well-meaning pregnant lady to do?

On a visceral level, nothing sounds worse than poisoning your child, so many women reduce their seafood intake while pregnant. I have spoken with women who cut all seafood out of their diet while pregnant, for fear that a little exposure could prove to be too much. They had good reason to be worried. Extreme methylmercury poisoning episodes in Japan and Iraq in past decades have shown that excessive methylmercury intake during pregnancy can cause developmental delays, deafness, blindness, and seizures in the babies exposed.

But what happens if pregnant women eliminate seafood from their diets altogether? Without careful supplementation of the vital nutrients found in marine ecosystems, children can face neural setbacks and developmental delays on a massive scale. Consider iodine, a key nutrient readily found in seafood. Its scarcity in the modern land-based diet was causing intellectual disability in children – a problem that sparked the creation of iodized salt (salt supplemented with iodine) to ensure that the nutritional need was met.


Perhaps the hardest nutrient to get without seafood is an omega-3 fatty acid known as DHA. In recent years, scientists have learned that this particular fatty acid is essential for proper brain development and functioning, yet it is almost impossible to get from non-aquatic dietary sources. At the grocery store, you’ll find vegetarian products that claim to fill those needs by supplying the biochemical precursor to DHA (found in flaxseed, walnuts, and soybean oils), but it’s not clear that the precursor will do the trick. Our bodies take a while to synthesize DHA from its precursor. In fact, we may burn much of the precursor for energy before we manage to convert it to DHA.

The best way for pregnant women to meet the needs of their growing babies is to eat food from marine sources. Yet thanks to global practices of burning coal and disposing of industrial and medical waste, any seafood women eat will expose their offspring to some amount of methylmercury. There’s no simple solution to this problem, although studies suggest that child outcomes are best when women consume ample seafood while avoiding species with higher levels of methylmercury (such as shark, tilefish, walleye, pike, and some types of tuna). It also matters where the fish was caught. Mercury levels will be higher in fish from mercury-polluted waters – one of the reasons that it’s important to catalog mercury levels around the globe.

Unless we start cleaning up our oceans, pregnant women will continue to face this awful decision each time they sit down at the dinner table. Far worse, we may face future generations with lower IQs and developmental delays regardless of which choice their mothers make. Thanks to shoddy environmental oversight, we may be saddling our children with brains that don’t work as well as our own. And that is something I truly can’t swallow.

____

Photo credits:

Photo 1: by Gideon (malias) on Flickr, used via Creative Commons license

Photo 2: by @Doug88888 on Flickr, used via Creative Commons license
