Zapping Brains, Seeing Scenes


More than fifteen years ago, neuroimagers found a region of the brain that seemed to be all about place. The region lies on the bottom surface of the temporal lobe near a fold called the parahippocampal gyrus, so it was called the parahippocampal place area, or PPA. You have two PPAs: one on the left side of your brain and one on the right. If you look at a picture of a house, an outdoor or indoor scene, or even an empty room, your PPAs will take notice. Since its discovery, hundreds of experiments have probed the place predilections of the PPA. Each time, the region demonstrated its dogged devotion to place. Less clear was exactly what type of scene information the PPA was representing and what it was doing with that information. A recent scientific paper now gives us a rare, direct glimpse at the inner workings of the PPA through the experience of a young man whose right PPA was stimulated with electrodes.

The young man in question wasn’t an overzealous grad student. He was a patient with severe epilepsy who was at the hospital to undergo brain surgery. When medications can’t bring a person’s seizures under control, surgery is one of the few remaining options. The surgery involves removing the portion of the brain in which that patient’s seizures begin. Of course, removing brain tissue is not something one does lightly. Before a surgery, doctors use various techniques to determine in each patient where the seizures originate and also where crucial regions involved in language and movement are located. They do this so they will know which part of the brain to remove and which parts they must be sure not to remove. One of the ways of mapping these areas before surgery is to open the patient’s skull, plant electrodes into his or her brain, and monitor brain activity at the various electrode sites. This technique, called electrocorticography, allows doctors to both record brain activity and electrically stimulate the brain to map key areas. It is also the most powerful and direct look scientists can get into the human brain.

A group of researchers in New York headed by Ashesh Mehta and Pierre Mégevand documented the responses of the young man as they stimulated electrodes that were planted in and around his right PPA. During one stimulation, he described seeing a train station from the neighborhood where he lives. During another, he reported seeing a staircase and a closet stuffed with something blue. When they repeated the stimulation, he saw the same indoor scene again. So stimulating the PPA can cause hallucinations of scenes both indoor and outdoor, familiar and unfamiliar. This suggests that specific scene representations in the brain may be both highly localized and complex. It is also just incredibly cool.

The doctor also stimulated an area involved in face processing and found that this made the patient see distortions in a face. Another study published in 2012 showed a similar effect in a different patient. While the patient looked at his doctor, the doctor stimulated the face area. As the patient reported, “You just turned into somebody else. Your face metamorphosed.” Here’s a link to a great video of that patient’s entire reaction and description.

The authors of the new study also stimulated a nearby region that had shown a complex response to both faces and scenes in previous testing. When they zapped this area, the patient saw something that made him chuckle. “I’m sorry. . . You all looked Italian. . . Like you were working in a pizza shop. That’s what I saw, aprons and whatnot. Yeah, almost like you were working in a pizzeria.”

Now wouldn’t we all love to know what that area does?


Photo credit: thisisbossi on Flickr, used via Creative Commons license

*In case you’re wondering, the patient underwent surgery and no longer suffers from seizures (although he still experiences auras).

Mégevand P, Groppe DM, Goldfinger MS, Hwang ST, Kingsley PB, Davidesco I, & Mehta AD (2014). Seeing scenes: topographic visual hallucinations evoked by direct electrical stimulation of the parahippocampal place area. The Journal of neuroscience : the official journal of the Society for Neuroscience, 34 (16), 5399-405 PMID: 24741031

In the Blink of an Eye


It takes around 150 milliseconds (or about one sixth of a second) to blink your eyes. In other words, not long. That’s why you say something happened “in the blink of an eye” when an event passed so quickly that you were barely aware of it. Yet a new study shows that humans can process pictures at speeds that make an eye blink seem like a screening of Titanic. What’s more, these results challenge a popular theory about how the brain creates your conscious experience of what you see.

To start, imagine your eyes and brain as a flight of stairs. I know, I know, but hear me out. Each step represents a stage in visual processing. At the bottom of the stairs you have the parts of the visual system that deal with the spots of darkness and light that make up whatever you’re looking at (let’s say an old family photograph). As you stare at the photograph, information about light and dark starts out at the bottom of the stairs in what neuroscientists call “low-level” visual areas like the retinas in your eyes and a swath of tissue tucked away at the very back of your brain called primary visual cortex, or V1.

Now imagine that the information about the photograph begins to climb our metaphorical neural staircase. Each time the information reaches a new step (a.k.a. visual brain area) it is transformed in ways that discard the details of light and dark and replace them with meaningful information about the picture. At one step, say, an area of your brain detects a face in the photograph. Higher up the flight, other areas might identify the face as your great-aunt Betsy’s, discern that her expression is sad, or note that she is gazing off to her right. By the time we reach the top of the stairs, the image is, in essence, a concept with personal significance. From the moment light first strikes your eyes, visual information takes only 100-150 milliseconds to climb to the top of the stairs, yet in that time your brain has translated a pattern of light and dark into meaning.
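For readers who think in code, the staircase metaphor can be sketched as a toy pipeline. The stage names, outputs, and millisecond latencies below are illustrative inventions, not measured values; the only point is that each step trades detail for meaning while the clock runs.

```python
# Toy sketch of the visual "staircase": each stage transforms the
# representation and adds some processing latency. All names and
# timings here are made up for illustration.

STAGES = [
    ("retina",        5,  "light/dark contrast"),
    ("V1",            30, "edges and orientations"),
    ("face_detector", 40, "a face is present"),
    ("identity",      30, "it's great-aunt Betsy"),
    ("meaning",       20, "a sad, significant photograph"),
]

def climb(image="family photo"):
    """Pass an image up the stairs, accumulating latency and meaning."""
    elapsed, description = 0, image
    for name, latency_ms, output in STAGES:
        elapsed += latency_ms   # the clock runs at every step
        description = output    # detail is replaced with meaning
    return elapsed, description

total_ms, final = climb()
print(total_ms, final)  # total lands in the 100-150 ms range
```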

For many years, neuroscientists and psychologists believed that vision was essentially a sprint up this flight of stairs. You see something, you process it as the information moves to higher areas, and somewhere near the top of the stairs you become consciously aware of what you’re seeing. Yet intriguing results from patients with blindsight, along with other studies, seemed to suggest that visual awareness happens somewhere on the bottom of the stairs rather than at the top.

New, compelling demonstrations came from studies using transcranial magnetic stimulation, a method that can temporarily disrupt brain activity at a specific point in time. In one experiment, scientists used this technique to disrupt activity in V1 about 100 milliseconds after subjects looked at an image. At this point (100 milliseconds in), information about the image should already be near the top of the stairs, yet zapping lowly V1 at the bottom of the stairs interfered with the subjects’ ability to consciously perceive the image. From this and other studies, a new theory was born. In order to consciously see an image, visual information from the image that reaches the top of the stairs must return to the bottom and combine with ongoing activity in V1. This magical mixture of nitty-gritty visual details and extracted meaning somehow creates what we experience as visual awareness.

In order for this model of visual processing to work, you would have to look at the photo of Aunt Betsy for at least 100 milliseconds in order to be consciously aware of it (since that’s how long it takes for the information to sprint up and down the metaphorical flight of stairs). But what would happen if you saw Aunt Betsy’s photo for less than 100 milliseconds and then immediately saw a picture of your old dog, Sparky? Once Aunt Betsy made it to the top of the stairs, she wouldn’t be able to return to the bottom stairs because Sparky has taken her place. Unable to return to V1, Aunt Betsy would never make it to your conscious awareness. In theory, you wouldn’t know that you’d seen her at all.

Mary Potter and colleagues at MIT tested this prediction and recently published their results in the journal Attention, Perception, & Psychophysics. They showed subjects brief pictures of complex scenes including people and objects in a style called rapid serial visual presentation (RSVP). You can find an example of an RSVP image stream here, although the images in the demo are more racy and are shown for longer than the pictures in the Potter study.

The RSVP image streams in the Potter study were strings of six photographs shown in quick succession. In some image streams, pictures were each shown for 80 milliseconds (or about half the time it takes to blink). Pictures in other streams were shown for 53, 27, or 13 milliseconds each. To give you a sense of scale, 13 milliseconds is about one tenth of an eye blink, or one hundredth of a second. It is also far less time than Aunt Betsy would need to sprint to the top of the stairs, much less to return to the bottom.

At such short timescales, people can’t remember and report all of the pictures they see in an image stream. But are they aware of them at all? To test this, the scientists gave their subjects a written description of a target picture from the image stream (say, flowers) either just before the stream began or just after it ended. In either case, once the stream was over, the subject had to indicate whether an image fitting that description had appeared. If it had, subjects then had to pick which of two pictures fitting the description was the one actually shown.
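The trial structure can be sketched in code. Everything below (the picture labels, the foil, the field names) is a made-up mock-up of the design as described, not the study's actual stimulus code:

```python
import random

# Mock-up of one six-picture RSVP trial: a stream of briefly flashed
# pictures, a target description cued before or after the stream, and
# a two-alternative forced choice at the end.

def make_trial(all_pictures, duration_ms, cue_before):
    """Build one RSVP trial from a pool of picture labels."""
    stream = random.sample(all_pictures, 6)
    target = random.choice(stream)          # e.g. "flowers"
    return {
        "stream": stream,                   # shown in quick succession
        "duration_ms": duration_ms,         # 80, 53, 27, or 13 per picture
        "cue_before": cue_before,           # description before vs. after
        "target_description": target,
        # forced choice: the shown picture plus a foil fitting the
        # same description (hypothetical labeling scheme)
        "choices": [target, "foil_" + target],
    }

trial = make_trial(["flowers", "picnic", "soldiers", "harbor",
                    "children", "traffic"], duration_ms=13, cue_before=False)
assert trial["target_description"] in trial["stream"]
```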

Considering how quickly these pictures are shown, the task should be hard for people to do even when they know what they’re looking for. Why? Because “flowers” could describe an infinite number of photographs with different arrangements, shapes, and colors. Even when the subject is tipped off with the description in advance, he or she must process each photo in the stream well enough to recognize the meaning of the picture and compare it to the description. On top of that, this experiment effectively jams the metaphorical visual staircase full of images, leaving no room for visual info to return to V1 and create a conscious experience.

The situation is even more dire when people get the description of the target only after they’ve viewed the entire image stream. To answer correctly, subjects have to process and remember as many of the pictures from the stream as possible. None of this would be impressive under ordinary circumstances but, again, we’re talking 13 milliseconds here.

Sensitivity (computed from subject performance) on the RSVP image streams with 6 images. From Potter et al., 2013.

How did the subjects do? Surprisingly well. In all cases, they performed better than if they were randomly guessing – even when tested on the pictures shown for 13 milliseconds. In general, they scored higher when the pictures were shown longer. And like any test-taker could tell you, people do better when they know the test questions in advance. This pattern held up even when the scientists repeated the experiment with 12-image streams. As you might imagine, that makes for a very crowded visual staircase.
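The "sensitivity" plotted in the figure above is typically a signal-detection statistic such as d′, computed by comparing hits to false alarms; here is a standard sketch (my assumption about the exact formula the authors used):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms).
    Above-chance detection gives d' > 0; pure guessing gives d' = 0."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A subject detecting targets well above chance:
print(round(d_prime(0.75, 0.25), 2))   # 1.35
# A subject guessing randomly:
print(d_prime(0.5, 0.5))               # 0.0
```

Because d′ corrects for guessing, it lets performance at 13 ms be compared fairly against chance even when raw accuracy is low.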

These results challenge the idea that visual awareness happens when information from the top of the stairs returns to V1. Still, they are by no means the theory’s death knell. It’s possible that the stairs are wider than we thought and that V1 is able (at least to some degree) to represent more than one image at a time. Another possibility is that the subjects in the study answered the questions using a vague sense of familiarity – one that might arise even if they were never overtly conscious of seeing the images. This is a particularly compelling explanation because there’s evidence that people process visual information like color and line orientation without awareness when late activity in V1 is disrupted. The subjects in the Potter study may have used this type of information to guide their responses.

However things ultimately shake out with the theory of visual awareness, I love that these intriguing results didn’t come from a fancy brain scanner or from the coils of a transcranial magnetic stimulation device. With a handful of pictures, a computer screen, and some good-old-fashioned thinking, the authors addressed a potentially high-tech question in a low-tech way. It’s a reminder that fancy, expensive techniques aren’t the only way – or even necessarily the best way – to tackle questions about the brain. It also shows that findings don’t need colorful brain pictures or glow-in-the-dark mice in order to be cool. You can see in less than one-tenth of a blink of an eye. How frickin’ cool is that?

Photo credit: Ivan Clow on Flickr, used via Creative Commons license

Potter MC, Wyble B, Hagmann CE, & McCourt ES (2013). Detecting meaning in RSVP at 13 ms per picture. Attention, perception & psychophysics PMID: 24374558

Stuff and Brains Part 2: How Tools Come In Handy

Humans learn about objects by exploring them. I once described how my infant daughter explored objects, discovering their uses and properties through trial and error, observation, and plenty of dead ends. Her modest experiments illustrated a more universal truth: that from our earliest moments, our experience with objects in the world is fundamentally tied to our senses, to the ways we physically interact with them, and to the purposes they serve.

Last week I wrote about object-selective cortex, the part of visual cortex that lets us recognize people and stuff. I mentioned that this swath of the brain is speckled with several areas that specialize in processing certain object classes (e.g., faces, bodies, and scenes). If you consider object-selective cortex as a whole, you find that these specialized areas fit within a broader organization based on whether the to-be-recognized object is animate (a living, moving thing) or, if not, whether it’s large or small. While this may sound like a wacky way to divvy up object recognition, I mentioned some plausible reasons why your brain might map objects this way.

That’s the big-picture view. But what happens if we zoom in and explore one little bit of object-selective cortex in detail? Would we see a meaningful organization at this scale too? The answer, dear reader, is yes. In fact, this type of micro-organization can tell us volumes about how we recognize, understand, and use the objects around us.

For a beautiful example, let’s travel to the extrastriate body area (EBA).* The EBA is involved in visually recognizing bodies. Your EBA is active when you see a human body, regardless of whether the body is clothed or unclothed. It’s also active when you see parts of a body or even (to a lesser degree) when you see abstract body representations like stick figures. In 2010, scientists from Northumbria University used fMRI to ‘zoom in’ on the EBA in the left hemisphere. The team found that a chunk of the left EBA is specifically interested in pictures of hands, as opposed to other parts of the body. In essence, they found a micro-organization within the EBA, segregating hands from other body parts.

Before we talk more about hands, let’s visit another object-selective area in the same vicinity: the tool-selective area on the middle temporal gyrus. No kidding, your visual cortex has areas devoted to tools! The tool area on the middle temporal gyrus is engaged when you see a picture of a tool, be it a hammer, a stapler, or a fork. Patients with brain damage in this region tend to have trouble recalling information about the actions paired with common tools. But what counts as a tool for this region? One research group tried to answer this question by training adult subjects to use unfamiliar objects as tools. Using fMRI, the group showed that pictures of these objects activated the tool area after but not before training. In short, the brain dynamically reorganizes object recognition, or at least tool recognition, based on new experiences with objects.

But the story doesn’t end there. In 2012, the same group that discovered the hand area reported another find: that the hand area and the tool area overlap – a lot. What does this overlap mean? In essence, the same spot of cortex is active both when you see a hand and when you see a screwdriver or a pair of scissors. Notice that this goes against the broad divisions mentioned in my last post, since hands are animate and screwdrivers are not. Here, scale makes all the difference. When you zoom out, you see that object-selective cortex is broadly divvied up based on object animacy and size, but these divisions aren’t absolute and ubiquitous. Up close, you can find tiny bits of cortex that buck the trend, each with its own idiosyncratic combination of preferences.


Figure from Bracci et al, 2012, showing the overlap of hand and tool areas in the left hemispheres of all but one of their subjects. Each slice represents the overlap (shown in cyan) in a different subject.

While each local mix of preferences may be idiosyncratic, it is probably not accidental. To save space and speed up reactions, brain organization is very well optimized. Chances are good that hands and tools overlap in the brain for a reason. But what might that reason be? It might stem from the fact that hands are intimately linked with tools in your visual experience. Since hands grip tools, you tend to see them together. You also tend to see faces and bodies together (that is, unless you’re watching a horror film). And as it turns out, the face area and the body area on the bottom temporal surface of the right hemisphere appear to partially overlap as well. Could this be because faces and bodies, like hands and tools, tend to co-occur in our visual experience? It’s possible. We humans are quite sensitive to the statistical properties of our experience with objects.

But there’s another, quite different explanation for why faces overlap with bodies and tools overlap with hands in object-selective cortex. Brain organization tends to be dictated by where information needs to go next. (In essence, how the information will be used). The 2012 paper presents evidence that the overlapping hand/tool area is communicating with other areas of the brain that guide object-directed actions. The paper also cites another fMRI study that suggests the overlapping face and body areas in the right hemisphere communicate with parts of the brain involved in social interactions. In short, recognizing either a face or a body provides information that the social regions in your brain may need, while visual information about hands or tools may be invaluable when it comes time for reaching, grabbing, lifting, or stapling stuff.

Hands and tools. Faces and bodies. These are just a small sample of the many kinds of objects and creatures we see every day of our lives. Just imagine if we knew the micro-organization of every millimeter of object-selective cortex. Now that would be a map, one you started shaping from your earliest days on this earth. It would be a record of your lifetime of adventures with people and with stuff.


*Is it just me or does this post seem like an episode of The Magic School Bus?

Photo credits

Hands photo: Carmen Maria on Flickr

Brain images: Bracci et al, 2012 in The Journal of Neurophysiology

Bracci S, Cavina-Pratesi C, Ietswaart M, Caramazza A, & Peelen MV (2012). Closely overlapping responses to tools and hands in left lateral occipitotemporal cortex. Journal of neurophysiology, 107 (5), 1443-56 PMID: 22131379

Mapping a World of Stuff onto the Brain

While we have five wonderful senses, humans rely most on our sense of sight. The allocation of real estate in the brain reflects this hegemony; a far greater chunk of your cerebral cortex is dedicated to vision than to any other sense. So when you encounter people, objects, and animals in the world, you typically use visual information to tell your lover from a toothbrush from your cat. And while it would be reasonable to expect your brain to process all of these items in the same way, it does nothing of the sort. Instead, the visual cortex segregates and plays favorites.

The most dramatic examples of this segregation occur whenever you look at other people. Within the large chunk of visual cortex dedicated to object recognition, two areas in each hemisphere specifically process faces (the FFA and OFA) and two areas in each hemisphere specifically process bodies (the FBA and EBA). In each case, one of these areas is located on the side of the brain (near the back) while the other is tucked away on the bottom surface of the temporal lobe. It’s clear that these areas are important for recognizing faces and bodies. Damage to the face area FFA can profoundly impair one’s ability to recognize faces, while direct electrical stimulation of the same area can temporarily distort perception of a face. And when scientists used a magnetic pulse to momentarily disrupt activity in either the face area OFA or the body area EBA of healthy adults, their participants had difficulty discriminating between similar faces or similar bodies, respectively.

Yet the segregation of objects in your visual cortex doesn’t end there. Scientists have long known that visual information about scenes – including the landmarks and buildings that often define them – is processed separately as well. In fact we have at least two scene areas per hemisphere in classic visual cortex: one on the side of the brain (TOS) and one on the lower surface (PPA).*

But what about other types of objects? If you looked at pictures of a trampoline, a screwdriver, a lamppost, and a toad, would they follow the same path through your visual cortex? The answer is no. In a recent study, Talia Konkle and Alfonso Caramazza at Harvard showed people pictures of a wide range of animals and objects while scanning them with fMRI. They studied the activations in visual cortex for each image and used them to compute something they called preference maps. The preference maps indicated whether each bit of cortex preferred animals or objects and, separately, small or large things. When they combined these maps they found zones of visual cortex that preferred large objects, small objects, or animals of any size.** For large objects and animals (with two zones each), one zone was located on the side of the brain and the other on the lower surface. The only zone that preferred small objects over both large ones and animals lay right at the edge of the brain, smack dab between the side of the brain and its lower surface. The face and body areas fit almost entirely within the animal zones, while the scene areas lay within the large-object zones.
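A preference map of this kind can be mocked up in a few lines: for each bit of cortex (voxel), compare its average response to one category against another and keep the signed difference. The voxel values below are invented, and the real analysis in the paper is of course more involved:

```python
# Minimal sketch of a "preference map": for each voxel, subtract the
# average response to one category from the average response to the
# other. The sign tells you which category that voxel prefers, and the
# magnitude tells you how strongly. All numbers are made up.

def preference_map(resp_a, resp_b):
    """Per-voxel preference: positive favors category A, negative B."""
    return [a - b for a, b in zip(resp_a, resp_b)]

# mean responses of five hypothetical voxels (arbitrary units)
animals = [1.2, 0.8, 0.3, 0.1, 0.9]
objects = [0.4, 0.5, 0.9, 1.1, 0.2]

animacy_pref = preference_map(animals, objects)
labels = ["animal" if p > 0 else "object" for p in animacy_pref]
print(labels)  # ['animal', 'animal', 'object', 'object', 'animal']
```

Combining this animacy map with a second map built the same way from large-vs.-small responses is, in spirit, how the zones described above emerge.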


Figure from Konkle & Caramazza, 2013 showing where the face areas, body areas, scene areas, and ‘preference zones’ were in one participant. Each gray blob represents the right hemisphere of the brain, with the left side of each blob representing the back of the brain. The top two brains show a side view while the bottom two show the bottom surface of the cortex.

It may seem odd that object representation in visual cortex is organized based on such arbitrary dimensions. Why should it matter whether the thing you see is big or small, made of cotton or has a cottontail? The study’s authors argue that these divisions make sense if one considers the various ways we use different types of objects. For instance, small objects are generally useful because you can manipulate and interact with them. Recognizing an apple, axe, or comb allows you to eat, chop, or fix your ‘do, respectively – so long as the visual information about these objects gets passed along to brain areas involved in reaching and grasping movements.

Objects like buildings, trees, or couches are obviously too large to be lifted or manipulated. Since they stay put, you’re more likely to use them as landmarks to help you navigate through a neighborhood, park, or room. But you can only use these objects this way if you send the visual information about them to brain regions involved in navigation.

Finally, we have living things, which can move, bite, and behave unpredictably. While a large animal like an elephant might trample you, a small one like a venomous spider or snake could be more lethal still. And don’t even get me started on people! In short, an animal’s size doesn’t determine how you will or won’t interact with it; you need to be ready to predict any animal’s behavior and react accordingly. Should you pet that dog or run from it? Communication between the animal-preferring zones of visual cortex and the social prediction centers in your brain might help you reach the right answer before it’s too late.

What’s the upshot of all this using, manipulating, predicting and fleeing? A wonderful and miraculous map of all the stuff in your world. It’s a modest little map – no larger than a napkin and half the thickness of an iPhone 5 – that represents a vast array of creatures, things, and people based on what they mean to you. How frickin’ amazing is that?

* I’ll get back to this mysterious pattern in a future post.

** I find it interesting that people generally approach the game Twenty Questions with the same category distinctions. The first two questions are almost invariably: Is it alive? And is it bigger than a breadbox?


Photo credits

Elephant and bird: Ludovic Hirlimann on Flickr, used via Creative Commons license

Figure with brains: Talia Konkle & Alfonso Caramazza in The Journal of Neuroscience

Konkle T, & Caramazza A (2013). Tripartite organization of the ventral stream by animacy and object size. The Journal of neuroscience : the official journal of the Society for Neuroscience, 33 (25), 10235-42 PMID: 23785139


Looking Schizophrenia in the Eye

More than a century ago, scientists discovered something unusual about how people with schizophrenia move their eyes. The men, psychologist and inventor Raymond Dodge and psychiatrist Allen Diefendorf, were trying out one of Dodge’s inventions: an early incarnation of the modern eye tracker. When they used it on psychiatric patients, they found that most of their subjects with schizophrenia had a funny way of following a moving object with their eyes.

When a healthy person watches a smoothly moving object (say, an airplane crossing the sky), she tracks the plane with a smooth, continuous eye movement to match its displacement. This action is called smooth pursuit. But smooth pursuit isn’t smooth for most patients with schizophrenia. Their eyes often fall behind and they make a series of quick, tiny jerks to catch up or even dart ahead of their target. For the better part of a century, this movement pattern would remain a mystery. But in recent decades, scientific discoveries have led to a better understanding of smooth pursuit eye movements – both in health and in disease.

Scientists now know that smooth pursuit involves a lot more than simply moving your eyes. To illustrate, let’s say a sexy jogger catches your eye on the street. When you first see the runner, your eyes are stationary and his or her image is moving across your retinas at some relatively constant rate. Your visual system (in particular, your visual motion-processing area MT) must first determine this rate. Then your eyes can move to catch up with the target and match its speed. If you do this well, the jogger’s image will no longer be moving relative to your retinas. From your visual system’s perspective, the jogger is running in place and his or her surroundings are moving instead. From both visual cues and signals about your eye movements, your brain can predict where the jogger is headed and keep moving your eyes at just the right speed to keep pace.
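The jogger example can be put in numbers. Pursuit quality is commonly summarized as gain (eye velocity over target velocity), and the leftover image motion on the retina as retinal slip; this is a standard formulation in the eye-movement literature rather than anything specific to the studies discussed here:

```python
def retinal_slip(target_velocity, eye_velocity):
    """Image motion on the retina (deg/s): zero when pursuit matches
    the target, so the jogger appears to run in place."""
    return target_velocity - eye_velocity

def pursuit_gain(target_velocity, eye_velocity):
    """Eye speed relative to target speed; 1.0 is perfect pursuit."""
    return eye_velocity / target_velocity

# Suppose the jogger crosses the visual field at 10 deg/s.
print(retinal_slip(10.0, 0.0))    # 10.0 deg/s before the eyes move
print(retinal_slip(10.0, 10.0))   # 0.0 once pursuit matches the jogger
print(pursuit_gain(10.0, 8.0))    # 0.8: low gain, eyes fall behind
```

A chronically low gain is exactly the situation described below for many patients: the eyes lag, and quick catch-up jerks (saccades) fill the gap.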

Although the smooth pursuit abnormalities in schizophrenia may sound like a movement problem, they appear to reflect a problem with perception. Sensitive visual tests show that motion perception is disrupted in many patients. They can’t tell the difference between the speeds of two objects or integrate complex motion information as well as healthy controls. A functional MRI study helped explain why. The study found that people with schizophrenia activated their motion-processing area MT less than controls while doing motion-processing tasks. The next logical question – why MT doesn’t work as well for patients – remains unanswered for now.

In my last two posts I wrote about how delusions can develop in healthy people who don’t suffer from psychosis. The same is true of not-so-smooth pursuit. In particular, healthy relatives of patients with schizophrenia tend to have jerkier pursuit movements than subjects without a family history of the illness. They are also impaired at some of the same motion-processing tests that stymie patients. This pattern, along with the results of twin studies, suggests that smooth pursuit dysfunction is inherited. Following up on this idea, two studies have compared subjects’ genotypes with the inheritance patterns of smooth pursuit problems within families. While they couldn’t identify exactly which gene was involved (a limitation of the technique), they both tracked the culprit gene to the same genetic neighborhood on the sixth chromosome.

Despite this progress, the tale of smooth pursuit in schizophrenia is more complex than it appears. For one, there’s evidence that smooth pursuit problems differ for patients with different forms of the disorder. Patients with negative symptoms (like social withdrawal or no outward signs of emotion) may have problems with the first step of smooth pursuit: judging the target’s speed and moving their eyes to catch up. Meanwhile, those with more positive symptoms (like delusions or hallucinations) may have more trouble with the second step: predicting the future movement of the target and keeping pace with their eyes.

It’s also unclear exactly how common these problems are among patients; depending on the study, as many as 95% or as few as 12% of patients may have disrupted smooth pursuit. The studies that found the highest rates of smooth pursuit dysfunction in patients also found rates as high as 19% for the problems among healthy controls. These differences may boil down to the details of how the eye movements were measured in the different experiments. Still, the studies all agreed that people with schizophrenia are far more likely to have smooth pursuit problems than healthy controls. What the studies don’t agree on is how specific these problems are to schizophrenia compared with other psychiatric illnesses. Some studies have found smooth pursuit abnormalities in patients with bipolar disorder and major depression as well as in their close relatives; other studies have not.

Despite these messy issues, a group of scientists at the University of Aberdeen in Scotland recently tried to tell whether subjects had schizophrenia based on their eye movements alone. In addition to smooth pursuit, they used two other measures: the subject’s ability to fix her gaze on a stable target and how she looked at pictures of complex scenes. Most patients have trouble holding their eyes still in the presence of distractors and, when shown a meaningful picture, they tend to look at fewer objects or features in the scene.

Taking the results from all three measures into account, the group could distinguish a new set of patients with schizophrenia from new healthy controls with an accuracy of 87.8%. While this rate is high, keep in mind that the scientists removed real-world messiness by selecting controls without other psychiatric illnesses or close relatives with psychosis. This makes their demonstration a lot less impressive – and a lot less useful in the real world. I don’t think this method will ever become a viable alternative to diagnosing patients based on their clinical symptoms, but the approach may hold promise in a similar vein: identifying young people who are at risk of developing the illness. Finding these individuals and helping them sooner could truly mean the difference between life and death.
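For the curious, the logic of combining several eye-movement measures into one diagnostic call can be sketched in a few lines of code. This is purely my own toy illustration – the weights, numbers, and threshold are invented, not taken from the Aberdeen group’s actual (far more sophisticated) analysis:

```python
# Toy sketch: combine three normalized eye-movement measures into one
# score and threshold it. All values and weights are invented for
# illustration; a real classifier would learn them from data.

def eye_movement_score(pursuit_error, fixation_breaks, scene_coverage):
    """Combine three measures, each scaled from 0 (typical) to 1
    (highly abnormal or, for coverage, highly thorough scanning)."""
    # Equal weights for simplicity; low scene coverage counts against you.
    return (pursuit_error + fixation_breaks + (1 - scene_coverage)) / 3

# A hypothetical control: tracks smoothly, holds fixation, scans widely.
print(eye_movement_score(0.1, 0.1, 0.9) > 0.5)  # False (below threshold)

# A hypothetical patient: jerky pursuit, broken fixation, narrow scanning.
print(eye_movement_score(0.8, 0.7, 0.3) > 0.5)  # True (flagged)
```

The point of the sketch is only that no single measure does the work; it’s the combination that pushes accuracy as high as the authors report.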


Photo credit: Travis Nep Smith on Flickr, used via Creative Commons License

Benson PJ, Beedie SA, Shephard E, Giegling I, Rujescu D, & St Clair D (2012). Simple viewing tests can detect eye movement abnormalities that distinguish schizophrenia cases from controls with exceptional accuracy. Biological psychiatry, 72 (9), 716-24 PMID: 22621999

Eyes Wide Shut


In the middle of the 20th century, experimental psychologists began to notice a strange interaction between human vision and time. If they showed people flashes of light close together in time, subjects experienced the flashes as if they all occurred simultaneously. When they asked people to detect faint images, the speed of their subjects’ responses waxed and waned according to a mysterious but predictable rhythm. Taken together, the results pointed to one conclusion: that human vision operates within a particular time window – about 100 milliseconds, or one-tenth of a second.

This discovery sparked a controversy about the nature of vision. Pretty much anyone with a pair of eyes will tell you that vision feels smooth and unbroken. But is it truly as continuous as it feels, or might it occur in discrete chunks of time? Could the cohesive experience of vision be nothing more than an illusion?

Enthusiasm for the idea of discrete visual processing faded over the years, although it was never disproven. Science is not immune to fads; ideas often fall in and out of favor. Besides, vision-in-chunks was a hard sell. It was counterintuitive and contrary to people’s subjective experience. Vision scientists set it aside and moved on to new questions and controversies instead.

The debate resurfaced in the last twenty years, sparked by the discovery of a new twist on an old optical illusion. Scientists have long known about the wagon wheel illusion, which makes it appear as if the wheels of moving cars (or wagons) in films are either turning in the wrong direction or not turning at all. The illusion is caused by a technical glitch: the combination of the periodic rotating wheel and the frame rate of the movie. Your brain doesn’t get enough examples of the spinning wheel to know its direction and speed. But in 1996, scientists discovered that the illusion also occurred in the real world. When hubcaps, tires, and modified LPs turned at certain rates, their direction appeared to reverse. Scientists dug the idea of discrete vision out of a trunk in the attic, dusted it off, and tried it out to explain the effect. In essence, the visual system might have a frame rate of its own. Cross this frame rate with an object rotating at a certain frequency and you’re left seeing tires spin backwards. It seemed to make sense.
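The film version of the illusion is easy to work out with a little arithmetic. Here’s a minimal sketch (my own illustration, with made-up numbers) of how sampling a rotating wheel at a fixed frame rate can make it appear to turn slowly forward, stand still, or reverse:

```python
# Toy model of the wagon wheel illusion in film. A wheel with n
# identical spokes looks the same every 360/n degrees, so each frame
# the viewer sees only the rotation modulo that spacing, interpreted
# as the smallest possible move (classic aliasing).

def apparent_rotation(wheel_hz, frame_rate_hz, n_spokes=4):
    """Perceived rotation per frame, in degrees (negative = reversal)."""
    spacing = 360.0 / n_spokes
    step = (360.0 * wheel_hz / frame_rate_hz) % spacing
    # A step of more than half the spoke spacing looks like a smaller
    # step in the opposite direction.
    if step > spacing / 2:
        step -= spacing
    return step

# A 4-spoke wheel at 7 rotations/s filmed at 24 frames/s:
# 105 degrees/frame, modulo 90 = 15, so it seems to crawl forward.
print(apparent_rotation(7, 24))  # 15.0

# At 5 rotations/s: 75 degrees/frame, more than half the spacing,
# so the wheel appears to turn backward.
print(apparent_rotation(5, 24))  # -15.0
```

Swap the movie’s frame rate for a hypothetical ‘frame rate’ of the visual system and you have the explanation that was proposed for the real-world illusion.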

In a clever set of experiments, the neuroscientist and author David Eagleman (of Incognito and Sum fame) shot this explanation down. He and his colleague, Keith Kline, chalked the illusion up to tiring motion-processing cells instead. Still, the debate about the nature of vision was reignited. Several neuroscientists became intrigued with the notion of vision-in-chunks and began to think about it in relation to a particular type of brain rhythm that cycles at a rate of – you guessed it – about ten times per second.

In recent years, a slew of experiments have supported the idea that certain aspects of vision happen in discrete packets of time – and that these packets are roughly one-tenth of a second long. The brain rhythms that correspond to this timing – called alpha waves – have acted as the missing link. Brain rhythms essentially tamp down activity in a brain area at a regular interval, like a librarian who keeps shushing a crowd of noisy kids. Cells in a given part of the brain momentarily fall silent but, as kids will do, they start right up again once the shushing is done.

Work by Rufin VanRullen at the Université de Toulouse and, separately, by Kyle Mathewson at the University of Illinois shows how this periodic shushing can affect visual perception. For example, Mathewson and colleagues were able to predict whether a subject would detect a briefly flashed circle based on its timing relative to the alpha wave in that subject’s visual cortex. This and other studies like it demonstrate that alpha waves are not always helpful. If something appears at the wrong moment in your rhythm, you could be slower to see it or you might just miss it altogether. In other words, every tenth of a second you might be just a little bit blind.
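To make the idea concrete, here is a toy model (my own sketch, not Mathewson’s analysis – the baseline and modulation numbers are invented) of how detection probability might rise and fall over a 10 Hz alpha cycle:

```python
import math

# Toy model: the chance of detecting a brief flash oscillates with the
# phase of a 10 Hz alpha rhythm. Numbers are invented for illustration.
ALPHA_HZ = 10.0

def detection_prob(flash_time_s, base=0.5, depth=0.3):
    """Probability of detecting a flash presented at flash_time_s.

    Probability oscillates around `base` with modulation `depth`,
    peaking once per 100 ms alpha cycle (at t = 0, 0.1, 0.2, ...).
    """
    phase = 2 * math.pi * ALPHA_HZ * flash_time_s
    return base + depth * math.cos(phase)

# A flash at the cycle's peak, and another half a cycle (50 ms) later:
print(round(detection_prob(0.00), 2))  # 0.8  (a good moment to look)
print(round(detection_prob(0.05), 2))  # 0.2  (briefly 'a little bit blind')
```

The real data are messier, of course, but this is the shape of the claim: the same faint stimulus can be seen or missed depending on nothing more than when it lands in the cycle.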

If you’re a healthy skeptic, you may be wondering how well such experiments reflect vision in the real world. Unless your computer’s on the fritz, you probably don’t spend much time staring at circles on a screen. Does the 10-per-second frame rate apply when you’re looking at the complex objects and people that populate your everyday world?

Enter Frédéric Gosselin and colleagues from the Université de Montréal. Last month they published a simple study in the journal Cognition that tested the idea of discrete vision using pictures of human faces. They made the faces hard to see by bathing them in different amounts of visual ‘noise’ (like the static on a misbehaving television). Subjects had to identify each face as one of six that they had learned in advance. But while they were trying to identify each face, the amount of static on the face kept changing. In fact, Gosselin and colleagues were cycling the amount of static to see how its rate and phase (timing relative to the appearance of each new face) affected their subjects’ performance. They figured that if visual processing is discrete and varies with time, then subjects should perform best when their moments of best vision coincided with the moments of least static obscuring the face.

What did they find? People were best at identifying the faces when the static cycled at 10 or 15 times per second. Gosselin and colleagues suggest that the ideal rate may be somewhere between the two (a possibility that they can’t test after the fact). Their results imply that the visual alpha wave affects face recognition – a task that people do every day. But it may only affect it a little. The difference between the subjects’ best accuracy (when the static cycling was set just right) and their worst accuracy was only 7%. In the end, the alpha wave is one of many factors that determine perception. And even when these rhythms are shushing visual cortex, it’s not enough to shut down the entire area. Some troublemakers keep yapping right through it.

When it comes to alpha waves and the nature of discrete visual processing, scientists have their work cut out for them. For example, while some studies found that perception was affected by an ongoing visual alpha wave, others found that visual events (like the appearance of a new image) triggered new alpha waves in visual cortex. In fact, brain rhythms are not by any means exclusive; different rhythms can be layered one upon the other within a brain area, making it harder to pull out the role of any one of them. For now it’s at least safe to say that visual processing is nowhere near as smooth and continuous as it appears. Your vision flickers and occasionally fails. As if your brain dims the lights, you have moments when you see less and miss more – moments that may happen tens of thousands of times each hour.

This fact raises a troubling question. Why would the brain have rhythms that interfere with perception? Paradoxically enough, discrete visual processing and alpha waves may actually give your visual perception its smooth, cohesive feel. In the last post I mentioned how you move your eyes about 2 or 3 times per second. Your visual system must somehow stitch together the information from these separate glimpses that are offset from each other both in time and space. Alpha waves allow visual information to echo in the brain. They may stabilize visual representations over time, allowing them to linger long enough for the brain, that master seamstress, to do her work.


Photo credit: Tom Conger on Flickr with Creative Commons license

Blais C, Arguin M, & Gosselin F (2013). Human visual processing oscillates: Evidence from a classification image technique. Cognition, 128 (3), 353-62 PMID: 23764998

Sight Unseen


Eyelids. They come in handy for sandstorms, eye shadow, and poolside naps. You don’t see much when they’re closed, but when they’re open you have an all-access pass to the visible world around you. Right? Well, not exactly. Here at Garden of the Mind, the next two posts are dedicated to the ways that you are blind – every day – and with your eyes wide open.

One of the ways you experience everyday blindness has to do with the movements of your eyes. If you stuck a camera in your eye and recorded the images that fall on your retina, the footage would be nauseating. Think The Blair Witch Project, only worse. That’s because you move your eyes about once every half a second – more often than your heart beats. You make these eye movements constantly, without intention or even awareness. Why? Because, thanks to inequalities across the retina and the visual areas of the brain, your peripheral vision is abysmal. It’s true even if you have 20/20 vision. You don’t sense that you are legally blind in your peripheral vision because you compensate by moving your eyes from place to place. Like snapping a series of overlapping photographs to create a panoramic picture, you move your eyes to catch different parts of a scene and your brain stitches these ‘shots’ together.

As it turns out, the brain is a wonderful seamstress. All this glancing and stitching leaves us with a visual experience that feels cohesive and smooth – nothing like the Frankenstein creation it actually is. One reason this beautiful self-deception works is that we turn off much of our visual system every time we move our eyes. You can test this out by facing a mirror and moving your eyes quickly back and forth (as if you are looking at your right and left ears). Try as you might, you won’t be able to catch your eyes moving. It’s not because they’re moving too little for you to see; a friend looking over your shoulder would clearly see them darting back and forth. You can feel them moving yourself if you gently rest your fingers below your lower lashes.

It would be an overstatement to say that you are completely blind every time you move your eyes. While some aspects of visual processing (like that of motion) are switched off, others (like that of image contrast) seem to stay on. Still, this means that twice per second, or 7,200 times each hour, your brain shuts you out of your own sense of sight. In these moments you are denied access to full visual awareness. You are left, so to speak, in the dark.

Photo credit: Pete Georgiev on Flickr under Creative Commons license
