Neural Conspiracy Theories

Last month, a paper quietly appeared in The Journal of Neuroscience to little fanfare and scant media attention. The study revolved around a clever and almost diabolical premise: that using perceptual trickery and outright deception, its authors could plant a delusion-like belief in the heads of healthy subjects. Before you call the ethics police, I should mention that the belief wasn’t a delusion in the formal sense of the word. It didn’t cause the subjects any distress and was limited to the unique materials used in the study. Still, it provided a model delusion that scientists Katharina Schmack, Philipp Sterzer, and colleagues could use to investigate the interplay of perception and belief in healthy subjects. The experiment is quite involved, so I’ll stick to the coolest and most relevant details.

As I mentioned in my last post, delusions are not exclusive to people suffering from psychosis. Many people who are free of any diagnosable mental illness still have a tendency to develop them, although the frequency and severity of these delusions differ across individuals. There are some good reasons to conduct studies like this one on healthy people rather than psychiatric patients. Healthy subjects are a heck of a lot easier to recruit, easier to work with, and less affected by confounding factors like medication and stress.

Schmack, Sterzer, and colleagues designed their experiment to test the idea that delusions arise from two distinct but related processes. First, a person experiences perceptual disturbances. According to the group’s model, these disturbances actually reflect poor expectation signals as the brain processes information from the senses. In theory, these poor signals would make irrelevant or commonplace sights, sounds, and sensations seem surprising and important. Without an explanation for this unexpected weirdness, the individual comes up with a delusion to make sense of it all. Once the delusion is in place, so-called higher areas of the brain (those that do more complex things like ponder, theorize, and believe) generate new expectation signals based on the delusion. These signals feed back on so-called lower sensory areas and actually bias the person’s perception of the outside world based on the delusion. According to the authors, this would explain why people become so convinced of their delusions: they are constantly perceiving confirmatory evidence. Strangely enough, this model sounds like a paranoid delusion in its own right. Various regions of your brain may be colluding to fool your senses into making you believe a lie!

To test the idea, the experimenters first had to toy with their subjects’ senses. They did so by capitalizing on a quirk of the visual system: when people are shown two conflicting images, one to each eye, they don’t perceive both images at once. Instead, perception alternates between the two. In the first part of this experiment, the two images were movies of moving dots that appeared to form a 3-D sphere spinning either to the left (for one eye) or to the right (for the other). In this ambiguous visual condition, subjects were equally likely to see a sphere spinning to the right or to the left at any given moment, with perception switching direction periodically.

Now the experimenters went about planting the fake belief. They gave the subjects a pair of transparent glasses and told them that the lenses contained polarizing filters that would make the sphere appear to spin more in one of the two directions. In fact, the lenses were made of simple plastic and could do no such thing. Once the subjects had the glasses on, the experimenters began showing the same movie to both eyes. While this change allowed the scientists to control exactly what the subjects saw, the subjects had no idea that the visual setup had changed. In this unambiguous condition, all subjects saw a sphere that alternated direction (just as the ambiguous sphere had done), except that this sphere spun far more in one of the two directions. This visual trick, paired with the story about polarized lenses, was meant to make subjects believe that the glasses caused the change in perception.

After that clever setup, the scientists were ready to see how the model delusion would affect each subject’s actual perception. While subjects continued to wear the glasses, they were again shown the two original, conflicting movies, one to each eye. In the first part of the experiment, this ambiguous condition had caused subjects to see a rotating sphere that alternated equally between spinning to the left and to the right. But if their new belief about the glasses biased their perception of the spinning sphere, they would now report seeing the sphere spin more often in the belief-consistent direction.

What happened? Subjects did see the sphere spin more in the belief-consistent direction. While the effect was small, it was still impressive that the belief could bias perception at all, considering the simplicity of the images. The researchers also found that each subject’s delusional conviction score (how convinced the subject was by his or her delusional thoughts in everyday life) correlated with this effect. The more a subject believed her real-life delusional thoughts, the more her belief about the glasses affected her perception of the ambiguous spinning sphere.

But there’s a hitch. What if subjects were reporting the motion bias because they thought that was what they were supposed to see and not because they actually saw it? To answer this question, the researchers recruited a new batch of participants and ran the experiment again while scanning subjects’ brains with fMRI.

Since the subjects’ task hinged on motion perception, Sterzer and colleagues first looked at activity in a brain area called MT that processes visual motion. By analyzing the patterns of fMRI activity in this area, the scientists confirmed that subjects were accurately reporting the motion they perceived. That may sound far-fetched, but this kind of ‘mind reading’ with fMRI has been done quite successfully for basic visual properties like motion.

The group also studied activity throughout the brain while their glasses-wearing subjects learned the false belief (unambiguous condition) and allowed the false belief to more or less affect their perception (ambiguous condition). They found that belief-based perceptual bias correlated with activity in the left orbitofrontal cortex, a region just behind the eyes that is involved in decision-making and expectation. In essence, subjects with more activity in this region during both conditions tended to also report lopsided spin directions that confirmed their expectations during the ambiguous condition. And here’s the cherry on top: subjects with higher delusional conviction scores appeared to have greater communication between left orbitofrontal cortex and motion-processing area MT during the ambiguous visual condition. Although fMRI can’t directly measure communication between areas and can’t tell us the direction of communication, this pattern suggests that the left orbitofrontal cortex may be directly responsible for biasing motion perception in delusion-prone subjects.

All told, the results of the experiment seem to tell a neat story that fits the authors’ model of delusions. Yet there are a couple of caveats worth mentioning. First, the key finding of the study – that a person’s delusional conviction score correlates with his or her belief-based motion perception bias – is built upon a quirky and unnatural aspect of human vision that may or may not reflect more typical sensory processes. Second, it’s hard to say how clinically relevant the results are. No one knows for certain whether delusions arise by the same neural mechanisms in the general population as they do in patients with illnesses like schizophrenia. It has been argued that they probably do because the same risk factors pop up for patients as for non-psychotic people with delusions: unemployment, social difficulties, urban surroundings, mood disturbances and drug or alcohol abuse. Then again, this group is probably also at the highest risk for getting hit by a bus, dying from a curable disease, or suffering any number of misfortunes that disproportionately affect people in vulnerable circumstances. So the jury is still out on the clinical applicability of these results.

Despite the study’s limitations, it was brilliantly designed and tells a compelling tale about how the brain conspires to manipulate perception based on beliefs. It also implicates a culprit in this neural conspiracy. Dare I say ringleader? Mastermind? Somebody cue the close up of orbitofrontal cortex cackling and stroking a cat.

_____

Photo credit: Daniel Horacio Agostini (dhammza) on Flickr, used through Creative Commons license

Schmack K, Gòmez-Carrillo de Castro A, Rothkirch M, Sekutowicz M, Rössler H, Haynes JD, Heinz A, Petrovic P, & Sterzer P (2013). Delusions and the role of beliefs in perceptual inference. The Journal of Neuroscience, 33 (34), 13701-13712 PMID: 23966692

Modernity, Madness, and the History of Neuroscience


I recently read a wonderful piece in Aeon Magazine about how technology shapes psychotic delusions. As the author, Mike Jay, explains:

Persecutory delusions, for example, can be found throughout history and across cultures; but within this category a desert nomad is more likely to believe that he is being buried alive in sand by a djinn, and an urban American that he has been implanted with a microchip and is being monitored by the CIA.

While delusional people of the past may have fretted over spirits, witches, demons and ghouls, today they often worry about wireless signals controlling their minds or hidden cameras recording their lives for a reality TV show. Indeed, reality TV is ubiquitous in our culture and experiments in remote mind-control (albeit on a limited scale) have been popping up recently in the news. As psychiatrist Joel Gold of NYU and philosopher Ian Gold of McGill University wrote in 2012: “For an illness that is often characterized as a break with reality, psychosis keeps remarkably up to date.”

Whatever the time or the place, new technologies are pervasive and salient. They are on the tips of our tongues and, eventually, at the tips of our fingers. Psychotic or not, we are all captivated by technological advances. They provide us with new analogies and new ways of explaining the all-but-unexplainable. And where else do we attempt to explain the mysteries of the world, if not through science?

As I read Jay’s piece on psychosis, it struck me that science has historically had the same habit of co-opting modern technologies for explanatory purposes. In the case of neuroscience, scientists and physicians across cultures and ages have invoked the innovations of their day to explain the mind’s mysteries. For instance, the science of antiquity was rooted in the physical properties of matter and the mechanical interactions between them. Around the 7th century BC, empires began constructing great aqueducts to bring water to their growing cities. The great engineering challenge of the day was to control and guide the flow of water across great distances. It was in this scientific milieu that the ancient Greeks devised a model for the workings of the mind. They believed that a person’s thoughts, feelings, intellect and soul were physical stuff: specifically, an invisible, weightless fluid called psychic pneuma. Around 200 AD, a physician and scientist of the Roman Empire (known for its masterful aqueducts) would revise and clarify the theory. The physician, Galen, believed that pneuma fills the brain cavities called ventricles and circulates through white matter pathways in the brain and nerves in the body just as water flows through a tube. As psychic pneuma traveled throughout the body, it carried sensation and movement to the extremities. Although the idea may sound far-fetched to us today, this model of the brain persisted for more than a millennium and influenced Renaissance thinkers including Descartes.

By the 18th century, however, the science world was abuzz with two strange new forces: electricity and magnetism. At the same time, physicians and anatomists began to think of the brain itself as the stuff that gives rise to thought and feeling, rather than a maze of vats and tunnels that move fluid around. In the 1790s, Luigi Galvani’s experiments zapping frog legs showed that nerves communicate with muscles using electricity. So in the 19th century, just as inventors were harnessing electricity to run motors and light up the darkness, scientists reconceived the brain as an organ of electricity. It was a wise innovation and one supported by experiments, but it was also driven by the technical advances of the day.

Science was revolutionized once again with the advent of modern computers in the 1940s and ‘50s. In the 1950s, the new technology sparked a surge of research and theories that used the computer as an analogy for the brain. Psychologists began to treat mental events like computer processes, which can be broken up and analyzed as a set of discrete steps. They equated brain areas to processors and neural activity in these areas to the computations carried out by computers. Just as computers rule our modern technological world, this way of thinking about the brain still profoundly influences how neuroscience and psychology research is carried out and interpreted. Today, some labs cut out the middleman (the brain) entirely. Results from computer models of the brain are regularly published in neuroscience journals, sometimes without any data from an actual physical brain.

I’m sure there are other examples from the history of neuroscience in general and certainly from the history of science as a whole. Please comment and share any other ways that technology has shaped the models, themes, and analogies of science!

Additional sources:

Crivellato E & Ribatti D (2007) Soul, mind, brain: Greek philosophy and the birth of neuroscience. Brain Research Bulletin 71:327-336.

Karenberg A (2009) Cerebral Localization in the Eighteenth Century – An Overview. Journal of the History of the Neurosciences, 18:248-253.

_________

Photo Credit: dominiqueb on Flickr, available through Creative Commons

Near-Death Experiment


If you own a TV, radio, or computer, you’ve probably heard about the recent neuroscience experiment that studied after-death brain activity in rats. Perhaps you’ve seen it under titles like: Near-death experiences are ‘electrical surge in dying brain’ or Near-death experiences exposed: Surge of brain activity after the heart stops may trigger paranormal visions. You may have heard some jargon about brainwaves and frequency coupling or some such. What does it mean? Is it time to chuck your rosary, or at least your copy of Proof of Heaven? (The answer to the latter, in case you’re wondering, is yes.)

The article that caused such a stir was penned by researchers at the University of Michigan and published in the scientific journal PNAS. The experiment was so simple and so obvious that I immediately wondered why no one had done it before. The scientists implanted six electrodes in the surface of each rat’s brain. They recorded from the electrodes while the rat was awake and then anesthetized. Finally, they injected a solution into the rat’s heart to make it stop beating and recorded activity in the rat’s brain while it died. None of these steps is unique. Neuroscientists often place electrodes in the brains of living rats, and lab rats are certainly anesthetized and sacrificed on a daily basis. The crucial change these scientists made was recording after the animal’s death.

What happened once its heart stopped? A lot – probably more than anyone would have expected. In the first 30 seconds, the researchers observed rapid and coordinated neural activity in the rat’s brain. Unlike under anesthesia, when the rat’s brain was quieter than its wakeful norm, the dying brain was as active as – and, by some measures, more active than – it was when fully awake and alive. We’re not talking about zombie rats here – this activity faded and disappeared beyond the 30-second window after cardiac arrest. Still, something dramatic and consistent happened in those dying moments. The brain activity was essentially the same across all nine rats that died from cardiac arrest and eight other rats that the scientists sacrificed using carbon dioxide inhalation. The results were no fluke.

Of course, these findings (and the headlines touting them in the news) raise the question: is this activity the neural basis for near-death experiences? The answer, of course, is that we don’t know. We obviously can’t ask the rats what they experienced, if they experienced anything at all. Still, the activity during the 30-second window wasn’t drastically different from the brain’s wakeful activity, at least according to some of the study’s measures. It’s certainly possible, maybe even probable, that the rats experienced something during this time. That fact alone is intriguing. To say more, we’ll need more grants, more studies, and more dead rats.

For the time being, I’m sure people will spin these results according to their pre-existing beliefs. Some will probably say that the brain activity at death is the physiological echo of God coaxing the soul from the body. And who am I say it ain’t so? But there are certainly other explanations. Neural rhythms arise naturally from the wiring of the brain. Neurons form an incredible number of circuits, or wiring loops, that reverberate. Each neuron is a complex little creature in its own right: electrically charged, tiny, tentacled, and bustling with messenger molecules, neurotransmitters, and ions. When neurons are deprived of oxygen and energy, their electrical charges change drastically, which can cause them to fire errant signals at each other. Without input from the outside world, these errant signals may harmonize in ways that reflect the internal wiring of the system. It’s a little like playing a trumpet. When you blow into the trumpet, your breath is a chaotic rush of air, yet it emerges as a clear and orderly tone. An organized system can make order out of chaos. The same might be said of your brain. And if it turns out that this type of coordinated brain activity actually does cause a special experience when you die, consider it an accidental symphony that plays you one last song before you go.

______

Photo credit: Paul Stocker on Flickr, used via Creative Commons license


Borjigin J, Lee U, Liu T, Pal D, Huff S, Klarr D, Sloboda J, Hernandez J, Wang MM, & Mashour GA (2013). Surge of neurophysiological coherence and connectivity in the dying brain. Proceedings of the National Academy of Sciences of the United States of America PMID: 23940340

Eyes Wide Shut


In the middle of the 20th century, experimental psychologists began to notice a strange interaction between human vision and time. If they showed people flashes of light close together in time, subjects experienced the flashes as if they all occurred simultaneously. When they asked people to detect faint images, the speed of their subjects’ responses waxed and waned according to a mysterious but predictable rhythm. Taken together, the results pointed to one conclusion: that human vision operates within a particular time window – about 100 milliseconds, or one-tenth of a second.
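To make that 100-millisecond idea concrete, here’s a toy sketch of my own (not a model from those mid-century experiments): if vision integrated the world in discrete ~100 ms windows, then flashes landing in the same window would be reported as simultaneous, while a flash in the next window would stand apart. The fixed window boundaries are a deliberate simplification.

```python
# Toy illustration (my own sketch, not the original experiments):
# flashes that fall within the same ~100 ms processing window are
# lumped together, as if they occurred simultaneously.

def perceived_groups(flash_times_ms, window_ms=100):
    """Group flash timestamps (in ms) by the discrete window they fall in."""
    groups = {}
    for t in sorted(flash_times_ms):
        # Integer division picks the window each flash lands in
        groups.setdefault(t // window_ms, []).append(t)
    return [groups[k] for k in sorted(groups)]

# Three flashes 30 ms apart merge into one perceived event;
# a flash 90 ms later falls into the next window and stays separate:
print(perceived_groups([10, 40, 70, 160]))  # -> [[10, 40, 70], [160]]
```

In reality any such window would drift with the phase of ongoing brain rhythms rather than sitting at fixed clock boundaries, but the caricature captures why closely spaced flashes can feel like one.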

This discovery sparked a controversy about the nature of vision. Pretty much anyone with a pair of eyes will tell you that vision feels smooth and unbroken. But is it truly as continuous as it feels, or might it occur in discrete chunks of time? Could the cohesive experience of vision be nothing more than an illusion?

Enthusiasm for the idea of discrete visual processing faded over the years, although it was never disproven. Science is not immune to fads; ideas often fall in and out of favor. Besides, vision-in-chunks was a hard sell. It was counterintuitive and contrary to people’s subjective experience. Vision scientists set it aside and moved on to new questions and controversies instead.

The debate resurfaced in the last twenty years, sparked by the discovery of a new twist on an old optical illusion. Scientists have long known about the wagon wheel illusion, which makes it appear as if the wheels of moving cars (or wagons) in films are either turning in the wrong direction or not turning at all. The illusion is caused by a technical glitch: the interaction between the wheel’s periodic rotation and the frame rate of the movie. The movie samples the spinning wheel too infrequently for your brain to recover its true direction and speed. But in 1996, scientists discovered that the illusion also occurs in the real world. When hubcaps, tires, and modified LPs turned at certain rates, their direction appeared to reverse. Scientists dug the idea of discrete vision out of a trunk in the attic, dusted it off, and tried it out as an explanation for the effect. In essence, the visual system might have a frame rate of its own. Cross this frame rate with an object rotating at a certain frequency and you’re left seeing tires spin backwards. It seemed to make sense.
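The arithmetic behind the film version of the illusion is simple aliasing, and a small sketch (mine, purely illustrative) shows it. Between frames, the brain can only infer the smallest rotation consistent with the samples: a wheel spinning just slower than the frame rate therefore seems to creep backward, and one matched to the frame rate seems frozen.

```python
# Illustrative sketch of temporal aliasing (not code from any study):
# how much a wheel appears to rotate between movie frames.

def apparent_rotation_per_frame(spin_hz, frame_rate_hz):
    """Apparent rotation between frames, in fractions of a turn,
    folded into (-0.5, 0.5] -- the smallest motion consistent
    with the sampled positions. Negative means 'backward'."""
    step = (spin_hz / frame_rate_hz) % 1.0  # true turns per frame, mod 1
    if step > 0.5:
        step -= 1.0  # the shorter path is backward, so that's what we see
    return step

# A wheel at 22 turns/s filmed at 24 frames/s seems to creep backward:
print(apparent_rotation_per_frame(22, 24))  # -> about -0.083 turns/frame
# At exactly the frame rate, it appears frozen:
print(apparent_rotation_per_frame(24, 24))  # -> 0.0
```

If the visual system really had its own frame rate, the same folding would apply to real spinning hubcaps, which is exactly the explanation that was tried (and, as the next paragraph describes, shot down).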

In a clever set of experiments, the neuroscientist and author David Eagleman (of Incognito and Sum fame) shot this explanation down. He and his colleague, Keith Kline, chalked the illusion up to tiring motion-processing cells instead. Still, the debate about the nature of vision was reignited. Several neuroscientists became intrigued with the notion of vision-in-chunks and began to think about it in relation to a particular type of brain rhythm that cycles at a rate of – you guessed it – about ten times per second.

In recent years, a slew of experiments have supported the idea that certain aspects of vision happen in discrete packets of time – and that these packets are roughly one-tenth of a second long. The brain rhythms that correspond to this timing – called alpha waves – have acted as the missing link. Brain rhythms essentially tamp down activity in a brain area at a regular interval, like a librarian who keeps shushing a crowd of noisy kids. Cells in a given part of the brain momentarily fall silent but, as kids will do, they start right up again once the shushing is done.

Work by Rufin VanRullen at the Université de Toulouse and, separately, by Kyle Mathewson at the University of Illinois shows how this periodic shushing can affect visual perception. For example, Mathewson and colleagues were able to predict whether a subject would detect a briefly flashed circle based on the flash’s timing relative to the alpha wave in the subject’s visual cortex. This and other studies like it demonstrate that alpha waves are not always helpful. If something appears at the wrong moment in your rhythm, you could be slower to see it or you might just miss it altogether. In other words, every tenth of a second you might be just a little bit blind.

If you’re a healthy skeptic, you may be wondering how well such experiments reflect vision in the real world. Unless your computer’s on the fritz, you probably don’t spend much time staring at circles on a screen. Does the 10-per-second frame rate apply when you’re looking at the complex objects and people that populate your everyday world?

Enter Frédéric Gosselin and colleagues from the Université de Montréal. Last month they published a simple study in the journal Cognition that tested the idea of discrete vision using pictures of human faces. They made the faces hard to see by bathing them in different amounts of visual ‘noise’ (like the static on a misbehaving television). Subjects had to identify each face as one of six that they had learned in advance. But while they were trying to identify each face, the amount of static on the face kept changing. In fact, Gosselin and colleagues were cycling the amount of static to see how its rate and phase (timing relative to the appearance of each new face) affected their subjects’ performance. They figured that if visual processing is discrete and varies with time, then subjects should perform best when their moments of best vision coincided with the moments of least static obscuring the face.

What did they find? People were best at identifying the faces when the static cycled 10 or 15 times per second. Gosselin and colleagues suggest that the ideal rate may lie somewhere between the two (a possibility they couldn’t test after the fact). Their results imply that the visual alpha wave affects face recognition – a task that people perform every day. But it may only affect it a little. The difference between the subjects’ best accuracy (when the static cycling was set just right) and their worst accuracy was only 7%. In the end, the alpha wave is one of many factors that determine perception. And even when these rhythms are shushing visual cortex, the shushing isn’t enough to shut down the entire area. Some troublemakers keep yapping right through it.

When it comes to alpha waves and the nature of discrete visual processing, scientists have their work cut out for them. For example, while some studies found that perception was affected by an ongoing visual alpha wave, others found that visual events (like the appearance of a new image) triggered new alpha waves in visual cortex. In fact, brain rhythms are not by any means exclusive; different rhythms can be layered one upon the other within a brain area, making it harder to pull out the role of any one of them. For now it’s at least safe to say that visual processing is nowhere near as smooth and continuous as it appears. Your vision flickers and occasionally fails. As if your brain dims the lights, you have moments when you see less and miss more – moments that may happen tens of thousands of times each hour.

This fact raises a troubling question. Why would the brain have rhythms that interfere with perception? Paradoxically enough, discrete visual processing and alpha waves may actually give your visual perception its smooth, cohesive feel. In the last post I mentioned how you move your eyes about 2 or 3 times per second. Your visual system must somehow stitch together the information from these separate glimpses that are offset from each other both in time and space. Alpha waves allow visual information to echo in the brain. They may stabilize visual representations over time, allowing them to linger long enough for the brain, that master seamstress, to do her work.

_____

Photo credit: Tom Conger on Flickr with Creative Commons license

Blais C, Arguin M, & Gosselin F (2013). Human visual processing oscillates: Evidence from a classification image technique. Cognition, 128 (3), 353-62 PMID: 23764998

Sight Unseen


Eyelids. They come in handy for sandstorms, eye shadow, and poolside naps. You don’t see much when they’re closed, but when they’re open you have an all-access pass to the visible world around you. Right? Well, not exactly. Here at Garden of the Mind, the next two posts are dedicated to the ways that you are blind – every day – and with your eyes wide open.

One of the ways you experience everyday blindness has to do with the movements of your eyes. If you stuck a camera in your retina and recorded the images that fall on your eye, the footage would be nauseating. Think The Blair Witch Project, only worse. That’s because you move your eyes about once every half second – more often than your heart beats. You make these eye movements constantly, without intention or even awareness. Why? Because, thanks to the uneven allotment of resources in the eye and in the visual areas of the brain, your peripheral vision is abysmal. That’s true even if you have 20/20 vision. You don’t sense that you are legally blind in your peripheral vision because you compensate by moving your eyes from place to place. Like snapping a series of overlapping photographs to create a panoramic picture, you move your eyes to catch different parts of a scene and your brain stitches these ‘shots’ together.

As it turns out, the brain is a wonderful seamstress. All this glancing and stitching leaves us with a visual experience that feels cohesive and smooth – nothing like the Frankenstein creation it actually is. One reason this beautiful self-deception works is that we turn off much of our visual system every time we move our eyes. You can test this out by facing a mirror and moving your eyes quickly back and forth (as if you are looking at your right and left ears). Try as you might, you won’t be able to catch your eyes moving. It’s not because they’re moving too little for you to see; a friend looking over your shoulder would clearly see them darting back and forth. You can feel them moving yourself if you gently rest your fingers below your lower lashes.

It would be an overstatement to say that you are completely blind every time you move your eyes. While some aspects of visual processing (like motion) are switched off, others (like image contrast) seem to stay on. Still, this means that twice per second, or 7,200 times each hour, your brain shuts you out of your own sense of sight. In these moments you are denied access to full visual awareness. You are left, so to speak, in the dark.

Photo credit: Pete Georgiev on Flickr under Creative Commons license
