Stuff and Brains Part 2: How Tools Come In Handy

Humans learn about objects by exploring them. I once described how my infant daughter explored objects, discovering their uses and properties through trial and error, observation, and plenty of dead ends. Her modest experiments illustrated a more universal truth: that from our earliest moments, our experience with objects in the world is fundamentally tied to our senses, to the ways we physically interact with them, and to the purposes they serve.

Last week I wrote about object-selective cortex, the part of visual cortex that lets us recognize people and stuff. I mentioned that this swath of the brain is speckled with several areas that specialize in processing certain object classes (e.g., faces, bodies, and scenes). If you consider object-selective cortex as a whole, you find that these specialized areas fit within a broader organization based on whether the to-be-recognized object is animate (a living, moving thing) or, if not, whether it’s large or small. While this may sound like a wacky way to divvy up object recognition, I mentioned some plausible reasons why your brain might map objects this way.

That’s the big-picture view. But what happens if we zoom in and explore one little bit of object-selective cortex in detail? Would we see a meaningful organization at this scale too? The answer, dear reader, is yes. In fact, this type of micro-organization can tell us volumes about how we recognize, understand, and use the objects around us.

For a beautiful example, let’s travel to the extrastriate body area (EBA).* The EBA is involved in visually recognizing bodies. Your EBA is active when you see a human body, regardless of whether the body is clothed or unclothed. It’s also active when you see parts of a body or even (to a lesser degree) when you see abstract body representations like stick figures. In 2010, scientists from Northumbria University used fMRI to ‘zoom in’ on the EBA in the left hemisphere. The team found that a chunk of the left EBA is specifically interested in pictures of hands, as opposed to other parts of the body. In essence, they found a micro-organization within the EBA, segregating hands from other body parts.

Before we talk more about hands, let’s visit another object-selective area in the same vicinity: the tool-selective area on the middle temporal gyrus. No kidding, your visual cortex has areas devoted to tools! The tool area on the middle temporal gyrus is engaged when you see a picture of a tool, be it a hammer, a stapler, or a fork. Patients with brain damage in this region tend to have trouble recalling information about the actions paired with common tools. But what counts as a tool for this region? One research group tried to answer this question by training adult subjects to use unfamiliar objects as tools. Using fMRI, the group showed that pictures of these objects activated the tool area after but not before training. In short, the brain dynamically reorganizes object recognition, or at least tool recognition, based on new experiences with objects.

But the story doesn’t end there. In 2012, the same group that discovered the hand area reported another find: that the hand area and the tool area overlap – a lot. What does this overlap mean? In essence, the same spot of cortex is active both when you see a hand and when you see a screwdriver or a pair of scissors. Notice that this goes against the broad divisions mentioned in my last post, since hands are animate and screwdrivers are not. Here, scale makes all the difference. When you zoom out, you see that object-selective cortex is broadly divvied up based on object animacy and size, but these divisions aren’t absolute and ubiquitous. Up close, you can find tiny bits of cortex that buck the trend, each with its own idiosyncratic combination of preferences.

Figure from Bracci et al, 2012, showing the overlap of hand and tool areas in the left hemispheres of all but one of their subjects. Each slice represents the overlap (shown in cyan) in a different subject.

While each local mix of preferences may be idiosyncratic, it is probably not accidental. Brain organization is highly optimized to save space and speed up reactions. Chances are good that hands and tools overlap in the brain for a reason. But what might that reason be? It might stem from the fact that hands are intimately linked with tools in your visual experience. Since hands grip tools, you tend to see them together. You also tend to see faces and bodies together (that is, unless you’re watching a horror film). And as it turns out, the face area and the body area on the bottom temporal surface of the right hemisphere appear to partially overlap as well. Could this be because faces and bodies, like hands and tools, tend to co-occur in our visual experience? It’s possible. We humans are quite sensitive to the statistical properties of our experience with objects.

But there’s another, quite different explanation for why faces overlap with bodies and tools overlap with hands in object-selective cortex. Brain organization tends to be dictated by where information needs to go next (in essence, by how the information will be used). The 2012 paper presents evidence that the overlapping hand/tool area is communicating with other areas of the brain that guide object-directed actions. The paper also cites another fMRI study that suggests the overlapping face and body areas in the right hemisphere communicate with parts of the brain involved in social interactions. In short, recognizing either a face or a body provides information that the social regions in your brain may need, while visual information about hands or tools may be invaluable when it comes time for reaching, grabbing, lifting, or stapling stuff.

Hands and tools. Faces and bodies. These are just a small sample of the many kinds of objects and creatures we see every day of our lives. Just imagine if we knew the micro-organization of every millimeter of object-selective cortex. Now that would be a map, one you started shaping from your earliest days on this earth. It would be a record of your lifetime of adventures with people and with stuff.

______

*Is it just me or does this post seem like an episode of The Magic School Bus?

Photo credits

Hands photo: Carmen Maria on Flickr

Brain images: Bracci et al, 2012 in The Journal of Neurophysiology

Bracci S, Cavina-Pratesi C, Ietswaart M, Caramazza A, & Peelen MV (2012). Closely overlapping responses to tools and hands in left lateral occipitotemporal cortex. Journal of Neurophysiology, 107(5), 1443-56. PMID: 22131379

Mapping a World of Stuff onto the Brain

While we have five wonderful senses, we humans rely most on our sense of sight. The allocation of real estate in the brain reflects this hegemony; a far greater chunk of your cerebral cortex is dedicated to vision than to any other sense. So when you encounter people, objects, and animals in the world, you typically use visual information to tell your lover from a toothbrush from your cat. And while it would be reasonable to expect your brain to process all of these items in the same way, it does nothing of the sort. Instead, the visual cortex segregates and plays favorites.

The most dramatic examples of this segregation occur whenever you look at other people.  Within the large chunk of  visual cortex dedicated to object recognition, two areas in each hemisphere specifically process faces (the FFA and OFA) and two areas in each hemisphere specifically process bodies (the FBA and EBA). In each case, one of these areas is located on the side of the brain (near the back) while the other is tucked away on the bottom surface of the temporal lobe. It’s clear that these areas are important for recognizing faces and bodies. Damage to the face area FFA can profoundly impair one’s ability to recognize faces, while direct electrical stimulation of the same area can temporarily distort perception of a face. And when scientists used a magnetic pulse to momentarily disrupt activity in either the face area OFA or the body area EBA of healthy adults, their participants had difficulty discriminating between similar faces or similar bodies, respectively.

Yet the segregation of objects in your visual cortex doesn’t end there. Scientists have long known that visual information about scenes – including the landmarks and buildings that often define them – is processed separately as well. In fact we have at least two scene areas per hemisphere in classic visual cortex: one on the side of the brain (TOS) and one on the lower surface (PPA).*

But what about other types of objects? If you looked at pictures of a trampoline, a screwdriver, a lamppost, and a toad, would they follow the same path through your visual cortex? The answer is no. In a recent study, Talia Konkle and Alfonso Caramazza at Harvard showed people pictures of a wide range of animals and objects while scanning them with fMRI. They studied the activations in visual cortex for each image and used them to compute something they called preference maps. The preference maps indicated whether each bit of cortex preferred animals or objects and, separately, small or large things. When they combined these maps they found zones of visual cortex that preferred large objects, small objects, or animals of any size.** For large objects and animals (with two zones each), one zone was located on the side of the brain and the other on the lower surface. The only zone that preferred small objects over both large ones and animals lay right at the edge of the brain, smack dab between the side of the brain and its lower surface. The face and body areas fit almost entirely within the animal zones, while the scene areas lay within the large-object zones.

Figure from Konkle & Caramazza, 2013 showing where the face areas, body areas, scene areas, and ‘preference zones’ were in one participant. Each gray blob represents the right hemisphere of the brain, with the left side of each blob representing the back of the brain. The top two brains show a side view while the bottom two show the bottom surface of the cortex.
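For readers who like to see the idea in code, here is a minimal sketch of what a preference-map computation could look like, written in Python with entirely made-up numbers. It is not the authors’ actual analysis pipeline; the category names, the random responses, and the simple zone rule are all illustrative assumptions.

```python
import numpy as np

# Toy data: the average response of each voxel to four stimulus categories.
# These numbers are invented for illustration only.
n_voxels = 5000
rng = np.random.default_rng(0)
resp = {
    "big_objects":   rng.normal(1.0, 0.5, n_voxels),
    "small_objects": rng.normal(1.0, 0.5, n_voxels),
    "big_animals":   rng.normal(1.0, 0.5, n_voxels),
    "small_animals": rng.normal(1.0, 0.5, n_voxels),
}

# Preference map 1: animals vs. objects (positive values prefer animals).
animacy_pref = (resp["big_animals"] + resp["small_animals"]) / 2 \
             - (resp["big_objects"] + resp["small_objects"]) / 2

# Preference map 2: big vs. small things (positive values prefer big things).
size_pref = (resp["big_objects"] + resp["big_animals"]) / 2 \
          - (resp["small_objects"] + resp["small_animals"]) / 2

# Combine the two maps into three zones, echoing the tripartite scheme:
# animal-preferring voxels (any size), big-object voxels, small-object voxels.
zone = np.where(animacy_pref > 0, "animal",
                np.where(size_pref > 0, "big_object", "small_object"))
print({z: int((zone == z).sum()) for z in np.unique(zone)})
```

The real study of course worked with measured fMRI responses and proper statistics, but the logic of crossing an animacy preference with a size preference is the same.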

It may seem odd that object representation in visual cortex is organized based on such arbitrary dimensions. Why should it matter whether the thing you see is big or small, made of cotton or has a cottontail? The study’s authors argue that these divisions make sense if one considers the various ways we use different types of objects. For instance, small objects are generally useful because you can manipulate and interact with them. Recognizing an apple, axe, or comb allows you to eat, chop, or fix your ‘do, respectively – so long as the visual information about these objects gets passed along to brain areas involved in reaching and grasping movements.

Objects like buildings, trees, or couches are obviously too large to be lifted or manipulated. Since they stay put, you’re more likely to use them as landmarks to help you navigate through a neighborhood, park, or room. But you can only use these objects this way if you send the visual information about them to brain regions involved in navigation.

Finally, we have living things, which can move, bite, and behave unpredictably. While a large animal like an elephant might trample you, a small one like a venomous spider or snake could be more lethal still. And don’t even get me started on people! In short, an animal’s size doesn’t determine how you will or won’t interact with it; you need to be ready to predict any animal’s behavior and react accordingly. Should you pet that dog or run from it? Communication between the animal-preferring zones of visual cortex and the social prediction centers in your brain might help you reach the right answer before it’s too late.

What’s the upshot of all this using, manipulating, predicting and fleeing? A wonderful and miraculous map of all the stuff in your world. It’s a modest little map – no larger than a napkin and half the thickness of an iPhone 5 – that represents a vast array of creatures, things, and people based on what they mean to you. How frickin’ amazing is that?

* I’ll get back to this mysterious pattern in a future post.

** I find it interesting that people generally approach the game Twenty Questions with the same category distinctions. The first two questions are almost invariably: Is it alive? And is it bigger than a breadbox?

___

Photo credits

Elephant and bird: Ludovic Hirlimann on Flickr, used via Creative Commons license

Figure with brains: Talia Konkle & Alfonso Caramazza in The Journal of Neuroscience

Konkle T, & Caramazza A (2013). Tripartite organization of the ventral stream by animacy and object size. The Journal of Neuroscience, 33(25), 10235-42. DOI: 10.1523/JNEUROSCI.0983-13.2013. PMID: 23785139

Looking Schizophrenia in the Eye

More than a century ago, scientists discovered something unusual about how people with schizophrenia move their eyes. The men, psychologist and inventor Raymond Dodge and psychiatrist Allen Diefendorf, were trying out one of Dodge’s inventions: an early incarnation of the modern eye tracker. When they used it on psychiatric patients, they found that most of their subjects with schizophrenia had a funny way of following a moving object with their eyes.

When a healthy person watches a smoothly moving object (say, an airplane crossing the sky), she tracks the plane with a smooth, continuous eye movement to match its displacement. This action is called smooth pursuit. But smooth pursuit isn’t smooth for most patients with schizophrenia. Their eyes often fall behind and they make a series of quick, tiny jerks to catch up or even dart ahead of their target. For the better part of a century, this movement pattern would remain a mystery. But in recent decades, scientific discoveries have led to a better understanding of smooth pursuit eye movements – both in health and in disease.

Scientists now know that smooth pursuit involves a lot more than simply moving your eyes. To illustrate, let’s say a sexy jogger catches your eye on the street. When you first see the runner, your eyes are stationary and his or her image is moving across your retinas at some relatively constant rate. Your visual system (in particular, your visual motion-processing area MT) must first determine this rate. Then your eyes can move to catch up with the target and match its speed. If you do this well, the jogger’s image will no longer be moving relative to your retinas. From your visual system’s perspective, the jogger is running in place and his or her surroundings are moving instead. From both visual cues and signals about your eye movements, your brain can predict where the jogger is headed and keep moving your eyes at just the right speed to keep pace.
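The two-step logic of pursuit, first estimate how fast the image is slipping across the retina, then adjust eye speed to cancel that slip, can be captured in a few lines of code. This is only a sketch under simplifying assumptions (a constant-speed target, a single fixed gain, no prediction), not a model of real oculomotor circuitry.

```python
# A minimal sketch of velocity-matching pursuit, with assumed numbers.
dt = 0.01               # seconds per simulation step
target_velocity = 8.0   # target speed in degrees of visual angle per second (assumed)
gain = 0.15             # fraction of the remaining slip corrected each step (assumed)

eye_velocity = 0.0
for step in range(100):                              # simulate one second
    retinal_slip = target_velocity - eye_velocity   # how fast the image moves on the retina
    eye_velocity += gain * retinal_slip             # speed the eye up to reduce the slip

print(f"eye velocity after 1 s: {eye_velocity:.2f} deg/s "
      f"(target: {target_velocity:.2f} deg/s)")
# Once eye velocity matches target velocity, the slip is near zero: from the
# visual system's point of view, the jogger now appears to run in place.
```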

Although the smooth pursuit abnormalities in schizophrenia may sound like a movement problem, they appear to reflect a problem with perception. Sensitive visual tests show that motion perception is disrupted in many patients. They can’t tell the difference between the speeds of two objects or integrate complex motion information as well as healthy controls. A functional MRI study helped explain why. The study found that people with schizophrenia activated their motion-processing area MT less than controls while doing motion-processing tasks. The next logical question – why MT doesn’t work as well for patients – remains unanswered for now.

In my last two posts I wrote about how delusions can develop in healthy people who don’t suffer from psychosis. The same is true of not-so-smooth pursuit. In particular, healthy relatives of patients with schizophrenia tend to have jerkier pursuit movements than subjects without a family history of the illness. They are also impaired at some of the same motion-processing tests that stymie patients. This pattern, along with the results of twin studies, suggests that smooth pursuit dysfunction is inherited. Following up on this idea, two studies have compared subjects’ genotypes with the inheritance patterns of smooth pursuit problems within families. While they couldn’t identify exactly which gene was involved (a limitation of the technique), they both tracked the culprit gene to the same genetic neighborhood on the sixth chromosome.

Despite this progress, the tale of smooth pursuit in schizophrenia is more complex than it appears. For one, there’s evidence that smooth pursuit problems differ for patients with different forms of the disorder. Patients with negative symptoms (like social withdrawal or no outward signs of emotion) may have problems with the first step of smooth pursuit: judging the target’s speed and moving their eyes to catch up. Meanwhile, those with more positive symptoms (like delusions or hallucinations) may have more trouble with the second step: predicting the future movement of the target and keeping pace with their eyes.

It’s also unclear exactly how common these problems are among patients; depending on the study, as many as 95% or as few as 12% of patients may have disrupted smooth pursuit. The studies that found the highest rates of smooth pursuit dysfunction in patients also found rates as high as 19% for the problems among healthy controls. These differences may boil down to the details of how the eye movements were measured in the different experiments. Still, the studies all agreed that people with schizophrenia are far more likely to have smooth pursuit problems than healthy controls. What the studies don’t agree on is how specific these problems are to schizophrenia compared with other psychiatric illnesses. Some studies have found smooth pursuit abnormalities in patients with bipolar disorder and major depression as well as in their close relatives; other studies have not.

Despite these messy issues, a group of scientists at the University of Aberdeen in Scotland recently tried to tell whether subjects had schizophrenia based on their eye movements alone. In addition to smooth pursuit, they used two other measures: the subject’s ability to fix her gaze on a stable target and how she looked at pictures of complex scenes. Most patients have trouble holding their eyes still in the presence of distractors and, when shown a meaningful picture, they tend to look at fewer objects or features in the scene.

Taking the results from all three measures into account, the group could distinguish between a new set of patients with schizophrenia and new healthy controls with an accuracy of 87.8%. While this rate is high, keep in mind that the scientists removed real-world messiness by selecting controls without other psychiatric illnesses or close relatives with psychosis. This makes their demonstration a lot less impressive – and a lot less useful in the real world. I don’t think this method will ever become a viable alternative to diagnosing schizophrenia based on clinical symptoms, but the approach may hold promise in a similar vein: identifying young people who are at risk for developing the illness. Finding these individuals and helping them sooner could truly mean the difference between life and death.
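To give a flavor of what combining several eye-movement measures can look like in practice, here is a toy sketch that feeds three made-up features into an off-the-shelf classifier. To be clear, this is not the Aberdeen group’s actual model or data; the feature values, the group differences, and the choice of logistic regression are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 40  # hypothetical subjects per group

# Invented features: pursuit error, fixation instability, scene coverage.
controls = np.column_stack([rng.normal(0.2, 0.10, n),
                            rng.normal(0.1, 0.05, n),
                            rng.normal(0.8, 0.10, n)])
patients = np.column_stack([rng.normal(0.5, 0.15, n),
                            rng.normal(0.3, 0.10, n),
                            rng.normal(0.5, 0.15, n)])

X = np.vstack([controls, patients])
y = np.array([0] * n + [1] * n)  # 0 = control, 1 = patient

clf = LogisticRegression().fit(X, y)
print(f"accuracy on this toy training set: {clf.score(X, y):.2f}")
```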

_____

Photo credit: Travis Nep Smith on Flickr, used via Creative Commons License

Benson PJ, Beedie SA, Shephard E, Giegling I, Rujescu D, & St Clair D (2012). Simple viewing tests can detect eye movement abnormalities that distinguish schizophrenia cases from controls with exceptional accuracy. Biological Psychiatry, 72(9), 716-24. PMID: 22621999

Delusions: Making Sense of Mistaken Senses

For a common affliction that strikes people of every culture and walk of life, schizophrenia has remained something of an enigma. Scientists talk about dopamine and glutamate, nicotinic receptors and hippocampal atrophy, but they’ve made little progress in explaining psychosis as it unfolds on the level of thoughts, beliefs, and experiences. Approximately one percent of the world’s population suffers from schizophrenia. Add to that the comparable numbers of people who suffer from affective psychoses (certain types of bipolar disorder and depression) or psychosis from neurodegenerative disorders like Alzheimer’s disease. All told, upwards of 3% of the population have known psychosis first-hand. These individuals have experienced how it transformed their sensations, emotions, and beliefs. Why hasn’t science made more progress explaining this level of the illness? What have those slouches at the National Institute of Mental Health been up to?

There are several reasons why psychosis has proved a tough nut to crack. First and foremost, neuroscience is still struggling to understand the biology of complex phenomena like thoughts and memories in the healthy brain. Add to that the incredible diversity of psychosis: how one psychotic patient might be silent and unresponsive while another is excitable and talking up a storm. Finally, a host of confounding factors plague most studies of psychosis. Let’s say a scientist discovers that a particular brain area tends to be smaller in patients with schizophrenia than healthy controls. The difference might have played a role in causing the illness in these patients, it might be a direct result of the illness, or it might be the result of anti-psychotic medications, chronic stress, substance abuse, poor nutrition, or other factors that disproportionately affect patients.

So what’s a well-meaning neuroscientist to do? One intriguing approach is to study psychosis in healthy people. They don’t have the litany of confounding experiences and exposures that make patients such problematic subjects. Yet at first glance, the approach seems to have a fatal flaw. How can you study psychosis in people who don’t have it? It sounds as crazy as studying malaria in someone who’s never had the bug.

In fact, this approach is possible because schizophrenia is a very different illness from malaria or HIV. Unlike communicable diseases, it is a developmental illness triggered by both genetic and environmental factors. These factors affect us all to varying degrees and cause all of us – clinically psychotic or not – to land somewhere on a spectrum of psychotic traits. Just as people who don’t suffer from anxiety disorders can still differ in their tendency to be anxious, nonpsychotic individuals can differ in their tendency to develop delusions or have perceptual disturbances. One review estimates that 1 to 3% of nonpsychotic people harbor major delusional beliefs, while another 5 to 6% have less severe delusions. An additional 10 to 15% of the general population may experience milder delusional thoughts on a regular basis.

Delusions are a common symptom of schizophrenia and were once thought to reflect the poor reasoning abilities of a broken brain. More recently, a growing number of physicians and scientists have opted for a different explanation. According to this model, patients first experience the surprising and mysterious perceptual disturbances that result from their illness. These could be full-blown hallucinations or they could be subtler abnormalities, like the inability to ignore a persistent noise. Patients then adopt delusions in a natural (if misguided) attempt to explain their odd experiences.

An intriguing study from the early 1960s illustrates how rapidly delusions can develop in healthy subjects when expectations and perceptions inexplicably conflict. The study, run on twenty college students at the University of Copenhagen, involved a version of the trick now known as the rubber hand illusion. Each subject was instructed to trace a straight line while his or her hand was inside a box with a secret mirror. For several trials, the subject watched his or her own hand trace the line correctly. Then the experimenters surreptitiously changed the mirror position so that the subject was now watching someone else’s hand trace the straight line – until the sham hand unexpectedly veered off to the right! All of the subjects experienced the visible (sham) hand as their own and felt that an involuntary movement had sent it off course. After several trials with this misbehaving hand, the subjects offered explanations for the deviation. Some chalked it up to their own fatigue or inattention while others came up with wilder, tech-based explanations:

 . . . five subjects described that they felt something strange and queer outside themselves, which pressed their hand to the right or resisted their free mobility. They suggested that ‘magnets’, ‘unidentified forces’, ‘invisible traces under the paper’, or the like, could be the cause.

In other words, delusions may be a normal reaction to the unexpected and inexplicable. Under strange enough circumstances, anyone might develop them – but some of us are more likely to than others.

My next post will describe a clever experiment that planted a delusion-like belief in the heads of healthy subjects and used trickery and fMRI to see how it influenced some more than others. So stay tuned. In the meantime, you may want to ask yourself which members of your family and friends are prone to delusional thinking. Or ask yourself honestly: could it be you?

_______

Photo credit: MiniTar on Flickr, available through Creative Commons

Near-Death Experiment

If you own a TV, radio, or computer, you’ve probably heard about the recent neuroscience experiment that studied after-death brain activity in rats. Perhaps you’ve seen it under titles like: Near-death experiences are ‘electrical surge in dying brain’ or Near-death experiences exposed: Surge of brain activity after the heart stops may trigger paranormal visions. You may have heard some jargon about brainwaves and frequency coupling or some such. What does it mean? Is it time to chuck your rosary, or at least your copy of Proof of Heaven? (The answer to the latter, in case you’re wondering, is yes.)

The article that caused such a stir was penned by researchers at the University of Michigan and published in the scientific journal PNAS. The experiment was simple and so obvious that I immediately wondered why no one had done it before. The scientists implanted six electrodes in the surface of the rat’s brain. They recorded from the electrodes while the rat was awake and then anesthetized. Finally, they injected a solution into the rat’s heart to make it stop beating and recorded the activity in the rat’s brain while it died. None of these steps is unique. Neuroscientists often place electrodes in the brains of living rats, and lab rats are certainly anesthetized and sacrificed on a daily basis. The crucial change that these scientists made was recording after the animal’s death.

What happened once its heart stopped? A lot, probably more than anyone would have expected. In the first 30 seconds, the researchers observed rapid and coordinated neural activity in the rat’s brain. Unlike under anesthesia, when the rat’s brain was quieter than its wakeful norm, the dying brain was as active as it had been when fully awake and alive, and by some measures more active. We’re not talking about zombie rats here – this activity faded and disappeared beyond the 30-second window after cardiac arrest. Still, something dramatic and consistent happened in those dying moments. The brain activity was essentially the same across all nine rats that died from cardiac arrest and eight other rats that the scientists sacrificed using carbon dioxide inhalation. The results were no fluke.

Of course, these findings (and the headlines touting them in the news) raise the question: is this activity the neural basis for near-death experiences? The answer, of course, is we don’t know. We obviously can’t ask the rats what they experienced, if they experienced anything at all. Still, the activity during the 30-second window wasn’t drastically different from the brain’s wakeful activity, at least according to some of their measures. It’s certainly possible, maybe even probable, that the rat experienced something during this time. That fact alone is intriguing. To say more, we’ll need more grants, more studies, and more dead rats.

For the time being, I’m sure people will spin these results according to their pre-existing beliefs. Some will probably say that the brain activity at death is the physiological echo of God coaxing the soul from the body. And who am I to say it ain’t so? But there are certainly other explanations. Neural rhythms arise naturally from the wiring of the brain. Neurons form an incredible number of circuits, or wiring loops, that reverberate. Each neuron is a complex little creature in its own right: electrically charged, tiny, tentacled, and bustling with messenger molecules, neurotransmitters, and ions. When neurons are deprived of oxygen and energy, their electrical charges change drastically, which can cause them to fire errant signals at each other. Without input from the outside world, these errant signals may harmonize in ways that reflect the internal wiring of the system. It’s a little like playing a trumpet. When you blow into the trumpet, your breath is a chaotic rush of air, yet it emerges as a clear and orderly tone. An organized system can make order out of chaos. The same might be said of your brain. And if it turns out that this type of coordinated brain activity actually does cause a special experience when you die, consider it an accidental symphony that plays you one last song before you go.

______

Photo credit: Paul Stocker on Flickr, used via Creative Commons license

Borjigin J, Lee U, Liu T, Pal D, Huff S, Klarr D, Sloboda J, Hernandez J, Wang MM, & Mashour GA (2013). Surge of neurophysiological coherence and connectivity in the dying brain. Proceedings of the National Academy of Sciences of the United States of America. PMID: 23940340

Mother’s Ruin, Moralists, and the Circuitous Path of Science

Update: Since posting this piece, I’ve come across a paper that questions ancient knowledge about the effects of prenatal alcohol exposure. In particular, the author makes a compelling argument that the biblical story mentioned below has nothing to do with the safety of drinking wine while pregnant. Another paper (sorry, paywall) suggests that the “rhetoric of rediscovery” about the potential harm of alcohol during pregnancy was part of a coordinated attempt by “moral entrepreneurs” to sell a moralist concept to the American public in the late 1970s. All of which goes to show: when science involves controversial topics, its tortuous path just keeps on twisting.

If you ask someone to draw you a roadmap of science, you’re likely to get something linear and orderly: a one-way highway, perhaps, with new ideas and discoveries converging upon it like so many on-ramps. We like to think of science as something that slowly and deliberately moves in the right direction. It doesn’t seem like a proper place for off-ramps, not to mention detours, dead-ends, or roundabouts.

In reality, science is messy and more than a little fickle. As I mentioned in the last post, research is not immune to fads. Ideas fall in and out of fashion based on the political, financial, and social winds of the time. I’m not just talking about wacky ideas either. Even the idea that drinking during pregnancy can harm a developing fetus has had its share of rises and falls.

The belief that drinking while pregnant is harmful has been around since antiquity, popping up among the Ancient Greeks and even appearing in the Old Testament when an angel instructs Samson’s mother to abstain from alcohol while pregnant. Yet the belief was far from universal across different epochs and different peoples. In fact, it took a special kind of disaster for England and, in turn, America to rediscover this idea in the 18th century. The disaster was an epidemic . . . of people drunk on gin.

By the close of the 17th century, bickering between England and France caused the British to restrict the import of French brandy and encourage the local production of gin. Soon gin was cheap and freely available to even the poor and working classes. The Gin Epidemic was underway. Rampant drunkenness became a fact of life in England by 1720 and would persist for several decades after. During this time, gin was particularly popular among the ladies – a fact that earned it the nickname “Mother’s Ruin.”

Soon after the start of the Gin Epidemic, a new constellation of abnormalities became common in newborns. Physicians wondered if heavy prenatal exposure to alcohol disrupted fetal development. In 1726, England’s College of Physicians argued that gin was “a cause of weak, feeble and distempered children.” Other physicians noted the rise in miscarriages, stillbirths, and early infant mortality. And by the end of this gin-drenched era, Britain’s scientific community had little doubt that prenatal alcohol could irreversibly harm a developing fetus.

The notion eventually trickled across the Atlantic Ocean and took hold in America. By the early 19th century, American physicians like Benjamin Rush began to discourage the widespread use of alcohol-based treatments for morning sickness and other pregnancy-related ailments. By the middle of the century, research on the effects of prenatal alcohol exposure had become a talking point for the growing temperance movement. Medical temperance journals sprang up with names like Journal of Inebriety and Scientific Temperance Journal. Soon religious and moralistic figures were using the harmful effects of alcohol on fetal development to bolster their claims that all alcohol is evil and should be banned. They often couched the findings in inflammatory language, full of condemnations and reproach. In the end, their tactics worked. The 18th Amendment to the U.S. Constitution was ratified in 1919, outlawing the production, transportation, and sale of alcohol on American soil.

When the nation finally emerged from Prohibition more than thirteen years later, it had fundamentally changed. People were disillusioned with the temperance movement and wary of the moralistic rhetoric that had once seemed so persuasive. They discounted the old familiar lines from teetotal preachers – including those about the harms of drinking while pregnant. Scientists rejected studies published in medical temperance journals and began to deny that alcohol was harmful during pregnancy. In 1942, the prestigious Journal of the American Medical Association published a response to a reader’s question about drinking during pregnancy which said that even large amounts of alcohol had not been shown to be harmful to the developing human fetus. In 1948, an article in The Practitioner recommended that pregnant women drink alcohol with meals to aid digestion. Science was, in essence, back to square one yet again.

It wasn’t until 1973 that physicians rediscovered and named the constellation of features that characterize infants exposed to alcohol in the womb. The disease, fetal alcohol syndrome, is now an accepted medical phenomenon. Modern doctors and medical journals now caution women to avoid alcohol while pregnant. After a few political and religious detours, we’ve finally made it back to where we were in 1900. That’s the funny thing about science: it isn’t always fast or direct or immune to its cultural milieu. But if we all just have faith and keep driving, we’re bound to get there eventually. I’m almost sure of it.

______

Photo Credit: Gin Lane by William Hogarth 1751 (re-engraving by Samuel Davenport circa 1806). Image in public domain and obtained from Wikipedia.

Eyes Wide Shut

In the middle of the 20th century, experimental psychologists began to notice a strange interaction between human vision and time. If they showed people flashes of light close together in time, subjects experienced the flashes as if they all occurred simultaneously. When they asked people to detect faint images, the speed of their subjects’ responses waxed and waned according to a mysterious but predictable rhythm. Taken together, the results pointed to one conclusion: that human vision operates within a particular time window – about 100 milliseconds, or one-tenth of a second.

This discovery sparked a controversy about the nature of vision. Pretty much anyone with a pair of eyes will tell you that vision feels smooth and unbroken. But is it truly as continuous as it feels, or might it occur in discrete chunks of time? Could the cohesive experience of vision be nothing more than an illusion?

Enthusiasm for the idea of discrete visual processing faded over the years, although it was never disproven. Science is not immune to fads; ideas often fall in and out of favor. Besides, vision-in-chunks was a hard sell. It was counterintuitive and contrary to people’s subjective experience. Vision scientists set it aside and moved on to new questions and controversies instead.

The debate resurfaced in the last twenty years, sparked by the discovery of a new twist on an old optical illusion. Scientists have long known about the wagon wheel illusion, which makes it appear as if the wheels of moving cars (or wagons) in films are either turning in the wrong direction or not turning at all. The illusion is caused by a technical glitch: the combination of the periodic rotating wheel and the frame rate of the movie. Your brain doesn’t get enough examples of the spinning wheel to know its direction and speed. But in 1996, scientists discovered that the illusion also occurred in the real world. When hubcaps, tires, and modified LPs turned at certain rates, their direction appeared to reverse. Scientists dug the idea of discrete vision out of a trunk in the attic, dusted it off, and tried it out to explain the effect. In essence, the visual system might have a frame rate of its own. Cross this frame rate with an object rotating at a certain frequency and you’re left seeing tires spin backwards. It seemed to make sense.
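The frame-rate glitch in films is simply temporal aliasing, and a few lines of arithmetic show how it produces an apparent reversal. The numbers below (a 24-frame-per-second film and a four-spoke wheel) are assumptions chosen for illustration.

```python
# Temporal aliasing sketch for the filmed wagon wheel illusion.
frame_rate = 24.0                       # film frames per second (assumed)
n_spokes = 4
spoke_spacing = 360.0 / n_spokes        # degrees between identical spokes

# Let the wheel advance just under one spoke spacing per frame.
rotation_rate = 0.95 * frame_rate * spoke_spacing   # degrees per second
true_step = rotation_rate / frame_rate               # true rotation per frame

# Because the spokes look identical, the visual system only registers the
# smallest equivalent displacement modulo the spoke spacing.
apparent_step = (true_step + spoke_spacing / 2) % spoke_spacing - spoke_spacing / 2

print(f"true rotation per frame:     {true_step:.1f} degrees")
print(f"apparent rotation per frame: {apparent_step:.1f} degrees (negative = looks reversed)")
```

Swap the film’s frame rate for a hypothetical sampling rate inside the visual system and you have the explanation the researchers were reviving for the real-world version of the illusion.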

In a clever set of experiments, the neuroscientist and author David Eagleman (of Incognito and Sum fame) shot this explanation down. He and his colleague, Keith Kline, chalked the illusion up to tiring motion-processing cells instead. Still, the debate about the nature of vision was reignited. Several neuroscientists became intrigued with the notion of vision-in-chunks and began to think about it in relation to a particular type of brain rhythm that cycles at a rate of – you guessed it – about ten times per second.

In recent years, a slew of experiments have supported the idea that certain aspects of vision happen in discrete packets of time – and that these packets are roughly one-tenth of a second long. The brain rhythms that correspond to this timing – called alpha waves – have acted as the missing link. Brain rhythms essentially tamp down activity in a brain area at a regular interval, like a librarian who keeps shushing a crowd of noisy kids. Cells in a given part of the brain momentarily fall silent but, as kids will do, they start right up again once the shushing is done.

Work by Rufin VanRullen at the Université de Toulouse and, separately, by Kyle Mathewson at the University of Illinois shows how this periodic shushing can affect visual perception. For example, Mathewson and colleagues were able to predict whether a subject would detect a briefly flashed circle based on its timing relative to the alpha wave in that subject’s visual cortex. This and other studies like it demonstrate that alpha waves are not always helpful. If something appears at the wrong moment in your rhythm, you could be slower to see it or you might just miss it altogether. In other words, every tenth of a second you might be just a little bit blind.

If you’re a healthy skeptic, you may be wondering how well such experiments reflect vision in the real world. Unless your computer’s on the fritz, you probably don’t spend much time staring at circles on a screen. Does the 10-per-second frame rate apply when you’re looking at the complex objects and people that populate your everyday world?

Enter Frédéric Gosselin and colleagues from the Université de Montréal. Last month they published a simple study in the journal Cognition that tested the idea of discrete vision using pictures of human faces. They made the faces hard to see by bathing them in different amounts of visual ‘noise’ (like the static on a misbehaving television). Subjects had to identify each face as one of six that they had learned in advance. But while they were trying to identify each face, the amount of static on the face kept changing. In fact, Gosselin and colleagues were cycling the amount of static to see how its rate and phase (timing relative to the appearance of each new face) affected their subjects’ performance. They figured that if visual processing is discrete and varies with time, then subjects should perform best when their moments of best vision coincided with the moments of least static obscuring the face.
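Here is a rough sketch of that prediction with made-up parameters: suppose vision is ‘open’ for half of every 100-millisecond cycle, and the static level on the face also cycles 10 times per second. How much clean face falls inside the open half of the visual cycle then depends entirely on the phase of the static. None of these numbers come from the paper; they are assumptions meant only to illustrate the logic.

```python
import numpy as np

t = np.arange(0, 0.5, 0.001)           # a 500 ms face presentation (assumed)
sampling_rate = 10.0                   # assumed visual "frame rate" in Hz
vision_open = np.sin(2 * np.pi * sampling_rate * t) > 0   # the "seeing" half-cycles

noise_rate = 10.0                      # the static level also cycles at 10 Hz
for phase in np.linspace(0, 2 * np.pi, 5):
    static = 0.5 + 0.5 * np.sin(2 * np.pi * noise_rate * t + phase)
    visibility = 1.0 - static          # how much clean face is available
    usable = visibility[vision_open].mean()   # clean face during open moments
    print(f"static phase {phase:4.2f} rad -> mean usable signal {usable:.2f}")
```

When the static and the assumed visual rhythm line up favorably, much more of the face gets through during the moments the system is ‘looking’; at the opposite phase, very little does.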

What did they find? People were best at identifying the faces when the static cycled at 10 or 15 times per second. Gosselin and colleagues suggest that the ideal rate may be somewhere between the two (a possibility that they can’t test after-the-fact). Their results imply that the visual alpha wave affects face recognition – a task that people do every day. But it may only affect it a little. The difference between the subjects’ best accuracy (when the static cycling was set just right) and their worst accuracy was only 7%. In the end, the alpha wave is one of many factors that determine perception. And even when these rhythms are shushing visual cortex, it’s not enough to shut down the entire area. Some troublemakers keep yapping right through it.

When it comes to alpha waves and the nature of discrete visual processing, scientists have their work cut out for them. For example, while some studies found that perception was affected by an ongoing visual alpha wave, others found that visual events (like the appearance of a new image) triggered new alpha waves in visual cortex. In fact, brain rhythms are not by any means exclusive; different rhythms can be layered one upon the other within a brain area, making it harder to pull out the role of any one of them.  For now it’s at least safe to say that visual processing is nowhere near as smooth and continuous as it appears. Your vision flickers and occasionally fails. As if your brain dims the lights, you have moments when you see less and miss more – moments that may happen tens of thousands of times each hour.

This fact raises a troubling question. Why would the brain have rhythms that interfere with perception? Paradoxically enough, discrete visual processing and alpha waves may actually give your visual perception its smooth, cohesive feel. In the last post I mentioned how you move your eyes about 2 or 3 times per second. Your visual system must somehow stitch together the information from these separate glimpses that are offset from each other both in time and space. Alpha waves allow visual information to echo in the brain. They may stabilize visual representations over time, allowing them to linger long enough for the brain, that master seamstress, to do her work.

_____

Photo credit: Tom Conger on Flickr with Creative Commons license

Blais C, Arguin M, & Gosselin F (2013). Human visual processing oscillates: Evidence from a classification image technique. Cognition, 128(3), 353-62. PMID: 23764998
