Another Time, Another Place


Whenever I visit my childhood home outside of Chicago I try to make it to the local pancake house. The buttery pancakes would be reason enough, but they’re not the only reason I stop by. A stroll through that pancake house is truly a stroll down memory lane. Each table I pass triggers a memory of a meal shared with different people in different decades of my life. One moment I’m eating German pancakes with my college boyfriend. The next, I am passing menus to my new husband’s family.  The next, I am celebrating my eighth grade graduation with my parents and older brother.

Memories return you to a specific time and place. Consider so-called flashbulb memories, or vivid memories of dramatic moments that caught you off-guard. I remember exactly where I was when I heard that a plane had struck one of the Twin Towers and, later, when I learned that my father had died. I remember that I was sitting on the living room rug in my Somerville apartment when I watched Columbia transform from a space shuttle into a streak of fire across the sky. Is it helpful to remember where I was sitting? Not in the slightest. But in the murky, mysterious realm of memory, when and what are inextricably linked with where.

Mention the word “memory” to neuroscientists and you’re sure to get them thinking of the hippocampus, a sliver of tissue nestled deep inside each hemisphere of the brain. The hippocampus has been synonymous with memory since the late 1950s, when William Scoville and Brenda Milner described a patient who was incapable of forming new memories after both of his hippocampi were removed. Since then, throngs of neuroscientists have devoted their careers to studying the hippocampus. Among other revelations, they’ve discovered a class of neurons called place cells that represent (you guessed it) information about place.

How do cells represent place? To illustrate, let’s say you’re in your favorite coffee shop. Some of the place cells in your hippocampus will fire like crazy when you walk through the entrance. Others save their enthusiasm until you are waiting in line to order your latte, stopping at the counter for milk and sugar, or settling in at your favorite table. When you physically occupy their place-of-interest, they go nuts – like a neural alarm signaling your location. At this moment, you are here!
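If it helps to see the idea concretely, here is a toy sketch in Python (every cell name, location, and firing rate below is invented for illustration, not real data). Each cell stays nearly silent until you occupy its preferred spot, so the pattern of activity across the population amounts to a code for where you are:

```python
# Toy place cells: each cell "prefers" one spot in the coffee shop.
place_fields = {
    "cell_A": "entrance",
    "cell_B": "order_line",
    "cell_C": "milk_counter",
    "cell_D": "corner_table",
}

def population_response(current_location):
    """Firing rate (spikes/s) per cell: high in its field, near-silent elsewhere."""
    return {cell: (40.0 if field == current_location else 0.5)
            for cell, field in place_fields.items()}

print(population_response("order_line"))
# {'cell_A': 0.5, 'cell_B': 40.0, 'cell_C': 0.5, 'cell_D': 0.5}
```

Read the output across all four cells at once and you can tell exactly where in the shop you are; that population pattern is the “neural alarm” in the analogy above.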

The same principle applies to my experience at the pancake house. Different place cells fire at different tables. In essence, these sets of cells provide a unique neural code for each space I can occupy in the restaurant. And this code has been with me for a while. When I sat in the corner booth after my graduation from middle school, I formed a memory of that celebration that included the code for that particular spot. Decades later, sitting in that booth or even walking past it can trigger a similar code in my brain, one that elicits the rest of that dusty old memory.

While eternally cool, place cells have become old news in hippocampal research. The new hippocampal hotness is studying “time cells”. These recently discovered neurons prefer to fire at different intervals after an event (say, ten seconds versus one minute after you step into the coffee shop). This research fad is a bit amusing, as it turns out that place cells and “time cells” are one and the same. That hasn’t stopped scientists from referring to “time cells,” but it has forced them to wrap the term in quotation marks.

As scientists studied the time code in the hippocampal cells of rats, a flaw in their experiments became clear. Their studies recorded the neural activity of moving rats, which means that the firing patterns the scientists observed could reflect changes in time, in the rat’s location, or in its motion.

Two recent papers addressed this issue and clarified the nature of “time cells” in the hippocampus. The first of these appeared in the journal Neuron in June of this year. The paper, by Benjamin Kraus, Michael Hasselmo, and collaborators at Boston University, describes an experiment that has as much to do with your time spent sweating it out at the gym as it does with your memory of past events. The scientists recorded the activity of hippocampal cells in rats as they ran on a treadmill or moved around in a simple maze. Since the rat remained in the same location as it ran on the treadmill, the researchers could decouple the rat’s location from the passage of time and the distance the rat ran. And since the authors could vary the speed of the treadmill, they could also tease apart the related variables of time and distance.

The scientists found that “time cells” still produced a time code when location was kept constant (on the treadmill). Using some fancy modeling, they also showed that the activity of most “time cells” reflected a combination of elapsed time and distance run, but a smaller number of “time cells” seemed to care only about time or distance. They also found that these same cells behaved like normal place cells when the rat walked around a simple maze. In short, place cells (a.k.a. “time cells”) can convey information about place, time, and distance travelled to varying degrees that also change under different conditions.
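The paper’s actual analysis is far more sophisticated, but the logic of that fancy modeling can be sketched as a simple model comparison on simulated data (all numbers below are invented). Fit a cell’s firing rate as a function of elapsed time alone, distance alone, or both, and see which model explains the most variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated treadmill runs: varying the treadmill speed partly decouples
# elapsed time from distance run, as in the experiment.
time = rng.uniform(0, 15, 200)      # seconds since the run began
speed = rng.uniform(10, 40, 200)    # treadmill speed in cm/s
dist = time * speed                 # distance run so far, in cm

# A pretend "time cell" whose rate blends time and distance, plus noise.
rate = 2.0 * time + 0.05 * dist + rng.normal(0, 1, 200)

def r_squared(X, y):
    """Variance explained by an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print("time only:      ", r_squared(time, rate))
print("distance only:  ", r_squared(dist, rate))
print("time + distance:", r_squared(np.column_stack([time, dist]), rate))
```

For a cell like this one, the combined model wins, echoing the finding that most “time cells” mix the two variables; a cell that cared only about time would lose nothing by dropping distance from the model.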

A second paper on the subject came out in a September issue of The Journal of Neuroscience. The authors, Christopher MacDonald, Howard Eichenbaum*, and colleagues (also from Boston University), eliminated the variable of location by restraining the rats with a special headpiece that locked into the testing apparatus, so the animals couldn’t move their heads during testing. Unlike the fitness-buff rats in the prior study, these rats were given a memory task. They got a whiff of an odor and then another whiff a few seconds later. If the second odor matched the first, the rat licked its waterspout and got a reward (a drop of water). If the two odors were different, the rat was not supposed to lick.

Even though the rats were completely immobile, their “time cells” showed a strong time code. Different cells fired at different times during the delay. These cells also seemed to represent ‘what’ information (in this case, the odors presented for the task). The scientists found that the overall pattern of “time cell” firing was more similar when the rats remembered the same odor than when they remembered different odors across trials.
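Here is a minimal sketch of that pattern-similarity logic (with fabricated firing rates): treat each trial’s population activity as a vector, then correlate vectors across trials. Same-odor trials should resemble each other more than different-odor trials do:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented firing-rate templates for two odors across 20 recorded cells.
odor_A = rng.uniform(0, 10, 20)
odor_B = rng.uniform(0, 10, 20)

def trial(template, noise=2.0):
    """One trial's population vector: the odor's template plus trial-to-trial noise."""
    return template + rng.normal(0, noise, template.size)

same = np.corrcoef(trial(odor_A), trial(odor_A))[0, 1]
diff = np.corrcoef(trial(odor_A), trial(odor_B))[0, 1]
print(f"same-odor trials:      r = {same:.2f}")   # noticeably higher
print(f"different-odor trials: r = {diff:.2f}")   # near zero
```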

In short, place/time cells can represent what, when, and where in a variety of ways, depending on a variety of factors. This representation is flexible – just as memory must be in order for you to remember the date of your anniversary, the feel of your first kiss, and the items on your next shopping list. The remarkable thing about memory is that it is both flexible and robust, meaning that it resists degradation and being swamped by noise. It can return us to times, places, and experiences that are far away and decades past. For that, we can thank the hippocampus, neural codes, and a set of remarkable cells with an identity crisis.

_____

Photo credit: Stu Rapley on Flickr, used via Creative Commons License

*Howard Eichenbaum was also a middle author on the Neuron paper. Much of the recent work on “time cells” has come from his lab and affiliated labs at Boston University.

Kraus BJ, Robinson RJ 2nd, White JA, Eichenbaum H, & Hasselmo ME (2013). Hippocampal “time cells”: time versus path integration. Neuron, 78 (6), 1090-1101 PMID: 23707613

MacDonald CJ, Carrow S, Place R, & Eichenbaum H (2013). Distinct hippocampal time cell sequences represent odor memories in immobilized rats. The Journal of Neuroscience : the official journal of the Society for Neuroscience, 33 (36), 14607-14616 PMID: 24005311

Stuff and Brains Part 2: How Tools Come In Handy

Humans learn about objects by exploring them. I once described how my infant daughter explored objects, discovering their uses and properties through trial and error, observation, and plenty of dead ends. Her modest experiments illustrated a more universal truth: that from our earliest moments, our experience with objects in the world is fundamentally tied to our senses, to the ways we physically interact with them, and to the purposes they serve.

Last week I wrote about object-selective cortex, the part of visual cortex that lets us recognize people and stuff. I mentioned that this swath of the brain is speckled with several areas that specialize in processing certain object classes (e.g., faces, bodies, and scenes). If you consider object-selective cortex as a whole, you find that these specialized areas fit within a broader organization based on whether the to-be-recognized object is animate (a living, moving thing) or, if not, whether it’s large or small. While this may sound like a wacky way to divvy up object recognition, I offered some plausible reasons why your brain might map objects this way.

That’s the big-picture view. But what happens if we zoom in and explore one little bit of object-selective cortex in detail? Would we see a meaningful organization at this scale too? The answer, dear reader, is yes. In fact, this type of micro-organization can tell us volumes about how we recognize, understand, and use the objects around us.

For a beautiful example, let’s travel to the extrastriate body area (EBA).* The EBA is involved in visually recognizing bodies. Your EBA is active when you see a human body, regardless of whether the body is clothed or unclothed. It’s also active when you see parts of a body or even (to a lesser degree) when you see abstract body representations like stick figures. In 2010, scientists from Northumbria University used fMRI to ‘zoom in’ on the EBA in the left hemisphere. The team found that a chunk of the left EBA is specifically interested in pictures of hands, as opposed to other parts of the body. In essence, they found a micro-organization within the EBA, segregating hands from other body parts.

Before we talk more about hands, let’s visit another object-selective area in the same vicinity: the tool-selective area on the middle temporal gyrus. No kidding, your visual cortex has areas devoted to tools! The tool area on the middle temporal gyrus is engaged when you see a picture of a tool, be it a hammer, a stapler, or a fork. Patients with brain damage in this region tend to have trouble recalling information about the actions paired with common tools. But what counts as a tool for this region? One research group tried to answer this question by training adult subjects to use unfamiliar objects as tools. Using fMRI, the group showed that pictures of these objects activated the tool area after but not before training. In short, the brain dynamically reorganizes object recognition, or at least tool recognition, based on new experiences with objects.

But the story doesn’t end there. In 2012, the same group that discovered the hand area reported another find: that the hand area and the tool area overlap – a lot. What does this overlap mean? In essence, the same spot of cortex is active both when you see a hand and when you see a screwdriver or a pair of scissors. Notice that this goes against the broad divisions mentioned in my last post, since hands are animate and screwdrivers are not. Here, scale makes all the difference. When you zoom out, you see that object-selective cortex is broadly divvied up based on object animacy and size, but these divisions aren’t absolute and ubiquitous. Up close, you can find tiny bits of cortex that buck the trend, each with its own idiosyncratic combination of preferences.


Figure from Bracci et al., 2012, showing the overlap of hand and tool areas in the left hemispheres of all but one of their subjects. Each slice represents the overlap (shown in cyan) in a different subject.

While each local mix of preferences may be idiosyncratic, it is probably not accidental. Brain organization is highly optimized to save space and speed up processing, so chances are good that hands and tools overlap in the brain for a reason. But what might that reason be? It might stem from the fact that hands are intimately linked with tools in your visual experience. Since hands grip tools, you tend to see them together. You also tend to see faces and bodies together (unless, that is, you’re watching a horror film). And as it turns out, the face area and the body area on the bottom temporal surface of the right hemisphere appear to partially overlap as well. Could this be because faces and bodies, like hands and tools, tend to co-occur in our visual experience? It’s possible. Humans are quite sensitive to the statistical properties of our experience with objects.

But there’s another, quite different explanation for why faces overlap with bodies and tools overlap with hands in object-selective cortex. Brain organization tends to be dictated by where information needs to go next – in essence, by how the information will be used. The 2012 paper presents evidence that the overlapping hand/tool area communicates with other areas of the brain that guide object-directed actions. The paper also cites another fMRI study suggesting that the overlapping face and body areas in the right hemisphere communicate with parts of the brain involved in social interactions. In short, recognizing either a face or a body provides information that the social regions in your brain may need, while visual information about hands or tools may be invaluable when it comes time for reaching, grabbing, lifting, or stapling stuff.

Hands and tools. Faces and bodies. These are just a small sample of the many kinds of objects and creatures we see every day of our lives. Just imagine if we knew the micro-organization of every millimeter of object-selective cortex. Now that would be a map, one you started shaping from your earliest days on this earth. It would be a record of your lifetime of adventures with people and with stuff.

______

*Is it just me or does this post seem like an episode of The Magic School Bus?

Photo credits

Hands photo: Carmen Maria on Flickr

Brain images: Bracci et al, 2012 in The Journal of Neurophysiology

Bracci S, Cavina-Pratesi C, Ietswaart M, Caramazza A, & Peelen MV (2012). Closely overlapping responses to tools and hands in left lateral occipitotemporal cortex. Journal of neurophysiology, 107 (5), 1443-56 PMID: 22131379

Mapping a World of Stuff onto the Brain

While we have five wonderful senses, we humans rely most on our sense of sight. The allocation of real estate in the brain reflects this hegemony; a far greater chunk of your cerebral cortex is dedicated to vision than to any other sense. So when you encounter people, objects, and animals in the world, you typically use visual information to tell your lover from a toothbrush from your cat. And while it would be reasonable to expect your brain to process all of these items in the same way, it does nothing of the sort. Instead, the visual cortex segregates and plays favorites.

The most dramatic examples of this segregation occur whenever you look at other people. Within the large chunk of visual cortex dedicated to object recognition, two areas in each hemisphere specifically process faces (the FFA and OFA) and two areas in each hemisphere specifically process bodies (the FBA and EBA). In each case, one of these areas is located on the side of the brain (near the back) while the other is tucked away on the bottom surface of the temporal lobe. It’s clear that these areas are important for recognizing faces and bodies. Damage to the face area FFA can profoundly impair one’s ability to recognize faces, while direct electrical stimulation of the same area can temporarily distort perception of a face. And when scientists used a magnetic pulse to momentarily disrupt activity in either the face area OFA or the body area EBA of healthy adults, their participants had difficulty discriminating between similar faces or similar bodies, respectively.

Yet the segregation of objects in your visual cortex doesn’t end there. Scientists have long known that visual information about scenes – including the landmarks and buildings that often define them – is processed separately as well. In fact we have at least two scene areas per hemisphere in classic visual cortex: one on the side of the brain (TOS) and one on the lower surface (PPA).*

But what about other types of objects? If you looked at pictures of a trampoline, a screwdriver, a lamppost, and a toad, would they follow the same path through your visual cortex? The answer is no. In a recent study, Talia Konkle and Alfonso Caramazza at Harvard showed people pictures of a wide range of animals and objects while scanning them with fMRI. They studied the activations in visual cortex for each image and used them to compute something they called preference maps. The preference maps indicated whether each bit of cortex preferred animals or objects and, separately, small or large things. When they combined these maps they found zones of visual cortex that preferred large objects, small objects, or animals of any size.** For large objects and animals (with two zones each), one zone was located on the side of the brain and the other on the lower surface. The only zone that preferred small objects over both large ones and animals lay right at the edge of the brain, smack dab between the side of the brain and its lower surface. The face and body areas fit almost entirely within the animal zones, while the scene areas lay within the large-object zones.


Figure from Konkle & Caramazza, 2013 showing where the face areas, body areas, scene areas, and ‘preference zones’ were in one participant. Each gray blob represents the right hemisphere of the brain, with the left side of each blob representing the back of the brain. The top two brains show a side view while the bottom two show the bottom surface of the cortex.
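In toy form, the logic of a preference map is simple (the voxel responses below are simulated, not the study’s data): average each voxel’s response to each category, then label the voxel with whichever category drives it hardest:

```python
import numpy as np

rng = np.random.default_rng(2)

categories = ["animal", "small object", "large object"]

# Simulated mean responses: one row per voxel, one column per category.
responses = rng.normal(1.0, 0.3, size=(1000, 3))
responses[:500, 0] += 0.5     # pretend these voxels respond extra to animals
responses[500:700, 1] += 0.5  # these to small objects
responses[700:, 2] += 0.5     # and these to large objects

# A voxel's "preference" is the category that evokes its largest response.
preference = responses.argmax(axis=1)
for i, name in enumerate(categories):
    print(f"{name}: {(preference == i).sum()} voxels")
```

Mapping each voxel’s label back onto its anatomical location is what turns this bookkeeping into the zones shown in the figure.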

It may seem odd that object representation in visual cortex is organized based on such arbitrary dimensions. Why should it matter whether the thing you see is big or small, made of cotton or has a cottontail? The study’s authors argue that these divisions make sense if one considers the various ways we use different types of objects. For instance, small objects are generally useful because you can manipulate and interact with them. Recognizing an apple, axe, or comb allows you to eat, chop, or fix your ‘do, respectively – so long as the visual information about these objects gets passed along to brain areas involved in reaching and grasping movements.

Objects like buildings, trees, or couches are obviously too large to be lifted or manipulated. Since they stay put, you’re more likely to use them as landmarks to help you navigate through a neighborhood, park, or room. But you can only use these objects this way if you send the visual information about them to brain regions involved in navigation.

Finally, we have living things, which can move, bite, and behave unpredictably. While a large animal like an elephant might trample you, a small one like a venomous spider or snake could be more lethal still. And don’t even get me started on people! In short, an animal’s size doesn’t determine how you will or won’t interact with it; you need to be ready to predict any animal’s behavior and react accordingly. Should you pet that dog or run from it? Communication between the animal-preferring zones of visual cortex and the social prediction centers in your brain might help you reach the right answer before it’s too late.

What’s the upshot of all this using, manipulating, predicting and fleeing? A wonderful and miraculous map of all the stuff in your world. It’s a modest little map – no larger than a napkin and half the thickness of an iPhone 5 – that represents a vast array of creatures, things, and people based on what they mean to you. How frickin’ amazing is that?

* I’ll get back to this mysterious pattern in a future post.

** I find it interesting that people generally approach the game Twenty Questions with the same category distinctions. The first two questions are almost invariably: Is it alive? And is it bigger than a breadbox?

___

Photo credits

Elephant and bird: Ludovic Hirlimann on Flickr, used via Creative Commons license

Figure with brains: Talia Konkle & Alfonso Caramazza in The Journal of Neuroscience

Konkle T, & Caramazza A (2013). Tripartite organization of the ventral stream by animacy and object size. The Journal of Neuroscience, 33 (25), 10235-10242. PMID: 23785139. DOI: 10.1523/JNEUROSCI.0983-13.2013

Looking Schizophrenia in the Eye

More than a century ago, scientists discovered something unusual about how people with schizophrenia move their eyes. The men, psychologist and inventor Raymond Dodge and psychiatrist Allen Diefendorf, were trying out one of Dodge’s inventions: an early incarnation of the modern eye tracker. When they used it on psychiatric patients, they found that most of their subjects with schizophrenia had a funny way of following a moving object with their eyes.

When a healthy person watches a smoothly moving object (say, an airplane crossing the sky), she tracks the plane with a smooth, continuous eye movement to match its displacement. This action is called smooth pursuit. But smooth pursuit isn’t smooth for most patients with schizophrenia. Their eyes often fall behind and they make a series of quick, tiny jerks to catch up or even dart ahead of their target. For the better part of a century, this movement pattern would remain a mystery. But in recent decades, scientific discoveries have led to a better understanding of smooth pursuit eye movements – both in health and in disease.

Scientists now know that smooth pursuit involves a lot more than simply moving your eyes. To illustrate, let’s say a sexy jogger catches your eye on the street. When you first see the runner, your eyes are stationary and his or her image is moving across your retinas at some relatively constant rate. Your visual system (in particular, your visual motion-processing area MT) must first determine this rate. Then your eyes can move to catch up with the target and match its speed. If you do this well, the jogger’s image will no longer be moving relative to your retinas. From your visual system’s perspective, the jogger is running in place and his or her surroundings are moving instead. From both visual cues and signals about your eye movements, your brain can predict where the jogger is headed and keep moving your eyes at just the right speed to keep pace.
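The gist of that loop fits in a few lines of code (the gain and speeds here are invented, and real pursuit also involves prediction and neural delays): estimate the retinal slip, then adjust eye velocity to shrink it.

```python
# A toy pursuit controller, run for ten update steps.
target_speed = 8.0   # deg/s: the jogger's image drifting across the retina
eye_speed = 0.0
gain = 0.4           # fraction of the remaining slip corrected per step

for step in range(10):
    retinal_slip = target_speed - eye_speed   # image motion left on the retina
    eye_speed += gain * retinal_slip          # speed the eyes up to reduce it
    print(f"step {step}: eye = {eye_speed:.2f} deg/s, slip was {retinal_slip:.2f}")

# Eye speed converges on the target's speed and the slip shrinks toward zero,
# which is exactly when the jogger seems to "run in place" on your retina.
```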

Although the smooth pursuit abnormalities in schizophrenia may sound like a movement problem, they appear to reflect a problem with perception. Sensitive visual tests show that motion perception is disrupted in many patients. They can’t tell the difference between the speeds of two objects or integrate complex motion information as well as healthy controls. A functional MRI study helped explain why. The study found that people with schizophrenia activated their motion-processing area MT less than controls while doing motion-processing tasks. The next logical question – why MT doesn’t work as well for patients – remains unanswered for now.

In my last two posts I wrote about how delusions can develop in healthy people who don’t suffer from psychosis. The same is true of not-so-smooth pursuit. In particular, healthy relatives of patients with schizophrenia tend to have jerkier pursuit movements than subjects without a family history of the illness. They are also impaired at some of the same motion-processing tests that stymie patients. This pattern, along with the results of twin studies, suggests that smooth pursuit dysfunction is inherited. Following up on this idea, two studies have compared subjects’ genotypes with the inheritance patterns of smooth pursuit problems within families. While they couldn’t identify exactly which gene was involved (a limitation of the technique), they both tracked the culprit gene to the same genetic neighborhood on the sixth chromosome.

Despite this progress, the tale of smooth pursuit in schizophrenia is more complex than it appears. For one, there’s evidence that smooth pursuit problems differ for patients with different forms of the disorder. Patients with negative symptoms (like social withdrawal or no outward signs of emotion) may have problems with the first step of smooth pursuit: judging the target’s speed and moving their eyes to catch up. Meanwhile, those with more positive symptoms (like delusions or hallucinations) may have more trouble with the second step: predicting the future movement of the target and keeping pace with their eyes.

It’s also unclear exactly how common these problems are among patients; depending on the study, as many as 95% or as few as 12% of patients may have disrupted smooth pursuit. The studies that found the highest rates of smooth pursuit dysfunction in patients also found rates as high as 19% for the problems among healthy controls. These differences may boil down to the details of how the eye movements were measured in the different experiments. Still, the studies all agreed that people with schizophrenia are far more likely to have smooth pursuit problems than healthy controls. What the studies don’t agree on is how specific these problems are to schizophrenia compared with other psychiatric illnesses. Some studies have found smooth pursuit abnormalities in patients with bipolar disorder and major depression as well as in their close relatives; other studies have not.

Despite these messy issues, a group of scientists at the University of Aberdeen in Scotland recently tried to tell whether subjects had schizophrenia based on their eye movements alone. In addition to smooth pursuit, they used two other measures: the subject’s ability to fix her gaze on a stable target and how she looked at pictures of complex scenes. Most patients have trouble holding their eyes still in the presence of distractors and, when shown a meaningful picture, they tend to look at fewer objects or features in the scene.

Taking the results from all three measures into account, the group could distinguish between a new set of patients with schizophrenia and new healthy controls with an accuracy of 87.8%. While this rate is high, keep in mind that the scientists removed real-world messiness by selecting controls without other psychiatric illnesses or close relatives with psychosis. This makes their demonstration a lot less impressive – and a lot less useful in the real world. I don’t think this method will ever become a viable alternative to diagnosing schizophrenia based on clinical symptoms, but the approach may hold promise in a similar vein: identifying young people who are at risk for developing the illness. Finding these individuals and helping them sooner could truly mean the difference between life and death.
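To make the classification idea concrete, here is a hedged sketch with simulated subjects (the Aberdeen group’s actual features and statistical model differ): score each person on three eye-movement measures, train a simple classifier on labeled examples, then test it on held-out people, as the paper did with its new set of patients and controls.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def simulate_group(n, shift):
    """n subjects scored on 3 eye-movement measures; 'shift' worsens all three."""
    return rng.normal(0.0, 1.0, size=(n, 3)) + shift

# Training data: 40 controls (label 0) and 40 patients (label 1).
X_train = np.vstack([simulate_group(40, 0.0), simulate_group(40, 1.2)])
y_train = np.array([0] * 40 + [1] * 40)

# Fresh, held-out subjects for testing.
X_test = np.vstack([simulate_group(20, 0.0), simulate_group(20, 1.2)])
y_test = np.array([0] * 20 + [1] * 20)

clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Notice how easy it is to inflate accuracy by simulating cleanly separated groups; that is precisely the worry about hand-picked controls raised above.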

_____

Photo credit: Travis Nep Smith on Flickr, used via Creative Commons License

Benson PJ, Beedie SA, Shephard E, Giegling I, Rujescu D, & St Clair D (2012). Simple viewing tests can detect eye movement abnormalities that distinguish schizophrenia cases from controls with exceptional accuracy. Biological psychiatry, 72 (9), 716-24 PMID: 22621999

Neural Conspiracy Theories

Last month, a paper quietly appeared in The Journal of Neuroscience to little fanfare and scant media attention (with these exceptions). The study revolved around a clever and almost diabolical premise: that using perceptual trickery and outright deception, its authors could plant a delusion-like belief in the heads of healthy subjects. Before you call the ethics police, I should mention that the belief wasn’t a delusion in the formal sense of the word. It didn’t cause the subjects any distress and was limited to the unique materials used in the study. Still, it provided a model delusion that scientists Katharina Schmack, Philipp Sterzer, and colleagues could study to investigate the interplay of perception and belief in healthy subjects. The experiment is quite involved, so I’ll stick to the coolest and most relevant details.

As I mentioned in my last post, delusions are not exclusive to people suffering from psychosis. Many people who are free of any diagnosable mental illness still have a tendency to develop them, although the frequency and severity of these delusions differ across individuals. There are some good reasons to conduct studies like this one on healthy people rather than psychiatric patients. Healthy subjects are a heck of a lot easier to recruit, easier to work with, and less affected by confounding factors like medication and stress.

Schmack, Sterzer, and colleagues designed their experiment to test the idea that delusions arise from two distinct but related processes. First, a person experiences perceptual disturbances. According to the group’s model, these disturbances actually reflect poor expectation signals as the brain processes information from the senses. In theory, these poor signals would make irrelevant or commonplace sights, sounds, and sensations seem surprising and important. Without an explanation for this unexpected weirdness, the individual comes up with a delusion to make sense of it all. Once the delusion is in place, so-called higher areas of the brain (those that do more complex things like ponder, theorize, and believe) generate new expectation signals based on the delusion. These signals feed back on so-called lower sensory areas and actually bias the person’s perception of the outside world based on the delusion. According to the authors, this would explain why people become so convinced of their delusions: they are constantly perceiving confirmatory evidence. Strangely enough, this model sounds like a paranoid delusion in its own right. Various regions of your brain may be colluding to fool your senses into making you believe a lie!
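To see how a belief can mathematically bias perception, here is a toy Bayesian illustration (my own simplification, not the authors’ model): give two observers the same perfectly ambiguous evidence, and the one with a strong prior belief comes away with a confident percept.

```python
# Two hypotheses: the sphere is "spinning left" or "spinning right".
def posterior_left(prior_left, like_left, like_right):
    """P(left | evidence) from Bayes' rule over the two hypotheses."""
    numerator = prior_left * like_left
    return numerator / (numerator + (1 - prior_left) * like_right)

ambiguous_evidence = (0.5, 0.5)  # equally consistent with either direction

print("neutral observer:", posterior_left(0.5, *ambiguous_evidence))  # 0.5
print("believer:        ", posterior_left(0.8, *ambiguous_evidence))  # 0.8

# The believer "sees" leftward spin most of the time, and each such percept
# can feel like fresh evidence that the belief was right all along.
```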

To test the idea, the experimenters first had to toy with their subjects’ senses. They did so by capitalizing on a quirk of the visual system: that when people are shown two conflicting images separately to their two eyes, they don’t perceive both images at once. Instead, perception alternates between the two. In the first part of this experiment, the two images were actually movies of moving dots that appeared to form a 3-D sphere spinning either to the left (for one eye) or to the right (for the other). For this ambiguous visual condition, subjects were equally likely to see a sphere spinning to the right or to the left at any given moment in time, with it switching direction periodically.

Now the experimenters went about planting the fake belief. They gave the subjects a pair of transparent glasses and told them that the lenses contained polarizing filters that would make the sphere appear to spin more in one of the two directions. In fact, the lenses were made of simple plastic and could do no such thing. Once the subjects had the glasses on, the experimenters began showing the same movie to both eyes. While this change allowed the scientists to control exactly what the subjects saw, the subjects had no idea that the visual setup had changed. In this unambiguous condition, all subjects saw a sphere that alternated direction (just as the ambiguous sphere had done), except that this sphere spun far more in one of the two directions. This visual trick, paired with the story about polarized lenses, was meant to make subjects believe that the glasses caused the change in perception.

After that clever setup, the scientists were ready to see how the model delusion would affect each subject’s actual perception. While subjects continued to wear the glasses, they were again shown the two original, conflicting movies, one to each eye. In the first part of the experiment, this ambiguous condition caused subjects to see a rotating sphere that alternated equally between spinning to the left and right. But if their new belief about the glasses biased their perception of the spinning sphere, they would now report seeing the sphere spin more often in the belief-consistent direction.

What happened? Subjects did see the sphere spin more in the belief-consistent direction. While the effect was small, it was still impressive that they could bias perception at all, considering the simplicity of the images. They also found that each subject’s delusional conviction score (how convinced they were by their delusional thoughts in everyday life) correlated with this effect. The more the subject believed her real-life delusional thoughts, the more her belief about the glasses affected her perception of the ambiguous spinning sphere.

But there’s a hitch. What if subjects were reporting the motion bias because they thought that was what they were supposed to see and not because they actually saw it? To answer this question, the researchers recruited a new batch of participants and ran the experiment again in an fMRI scanner.

Since the subjects’ task hinged on motion perception, Sterzer and colleagues first looked at the activity in a brain area called MT that processes visual motion. By analyzing the patterns of fMRI activity in this area, the scientists confirmed that subjects were accurately reporting the motion they perceived. That may sound far-fetched, but this kind of ‘mind reading’ with fMRI has been done quite successfully for basic visual properties like motion.

The group also studied activity throughout the brain while their glasses-wearing subjects learned the false belief (unambiguous condition) and let the false belief affect their perception to a greater or lesser degree (ambiguous condition). They found that belief-based perceptual bias correlated with activity in the left orbitofrontal cortex, a region just behind the eyes that is involved in decision-making and expectation. In essence, subjects with more activity in this region during both conditions tended to also report lopsided spin directions that confirmed their expectations during the ambiguous condition. And here’s the cherry on top: subjects with higher delusional conviction scores appeared to have greater communication between left orbitofrontal cortex and motion-processing area MT during the ambiguous visual condition. Although fMRI can’t directly measure communication between areas and can’t tell us the direction of communication, this pattern suggests that the left orbitofrontal cortex may be directly responsible for biasing motion perception in delusion-prone subjects.

All told, the results of the experiment seem to tell a neat story that fits the authors’ model of delusions. Yet there are a couple of caveats worth mentioning. First, the key finding of their study – that a person’s delusional conviction score correlates with his or her belief-based motion perception bias – is built upon a quirky and unnatural aspect of human vision that may or may not reflect more typical sensory processes. Second, it’s hard to say how clinically relevant the results are. No one knows for certain if delusions arise by the same neural mechanisms in the general population as they do in patients with illnesses like schizophrenia. It has been argued that they probably do, because the same risk factors pop up for patients as for non-psychotic people with delusions: unemployment, social difficulties, urban surroundings, mood disturbances and drug or alcohol abuse. Then again, this group is probably also at the highest risk for getting hit by a bus, dying from a curable disease, or suffering any number of misfortunes that disproportionately affect people in vulnerable circumstances. So the jury is still out on the clinical applicability of these results.

Despite the study’s limitations, it was brilliantly designed and tells a compelling tale about how the brain conspires to manipulate perception based on beliefs. It also implicates a culprit in this neural conspiracy. Dare I say ringleader? Mastermind? Somebody cue the close-up of orbitofrontal cortex cackling and stroking a cat.

_____

Photo credit: Daniel Horacio Agostini (dhammza) on Flickr, used through Creative Commons license

Schmack K, Gòmez-Carrillo de Castro A, Rothkirch M, Sekutowicz M, Rössler H, Haynes JD, Heinz A, Petrovic P, & Sterzer P (2013). Delusions and the role of beliefs in perceptual inference. The Journal of Neuroscience, 33 (34), 13701-13712 PMID: 23966692

Delusions: Making Sense of Mistaken Senses


For a common affliction that strikes people of every culture and walk of life, schizophrenia has remained something of an enigma. Scientists talk about dopamine and glutamate, nicotinic receptors and hippocampal atrophy, but they’ve made little progress in explaining psychosis as it unfolds on the level of thoughts, beliefs, and experiences. Approximately one percent of the world’s population suffers from schizophrenia. Add to that the comparable numbers of people who suffer from affective psychoses (certain types of bipolar disorder and depression) or psychosis from neurodegenerative disorders like Alzheimer’s disease. All told, upwards of 3% of the population have known psychosis first-hand. These individuals have experienced how it transformed their sensations, emotions, and beliefs. Why hasn’t science made more progress explaining this level of the illness? What have those slouches at the National Institute of Mental Health been up to?

There are several reasons why psychosis has proved a tough nut to crack. First and foremost, neuroscience is still struggling to understand the biology of complex phenomena like thoughts and memories in the healthy brain. Add to that the incredible diversity of psychosis: how one psychotic patient might be silent and unresponsive while another is excitable and talking up a storm. Finally, a host of confounding factors plague most studies of psychosis. Let’s say a scientist discovers that a particular brain area tends to be smaller in patients with schizophrenia than healthy controls. The difference might have played a role in causing the illness in these patients, it might be a direct result of the illness, or it might be the result of anti-psychotic medications, chronic stress, substance abuse, poor nutrition, or other factors that disproportionately affect patients.

So what’s a well-meaning neuroscientist to do? One intriguing approach is to study psychosis in healthy people. They don’t have the litany of confounding experiences and exposures that make patients such problematic subjects. Yet at first glance, the approach seems to have a fatal flaw. How can you study psychosis in people who don’t have it? It sounds as crazy as studying malaria in someone who’s never had the bug.

In fact, this approach is possible because schizophrenia is a very different illness from malaria or HIV. Unlike communicable diseases, it is a developmental illness triggered by both genetic and environmental factors. These factors affect us all to varying degrees and cause all of us – clinically psychotic or not – to land somewhere on a spectrum of psychotic traits. Just as people who don’t suffer from anxiety disorders can still differ in their tendency to be anxious, nonpsychotic individuals can differ in their tendency to develop delusions or have perceptual disturbances. One review estimates that 1 to 3% of nonpsychotic people harbor major delusional beliefs, while another 5 to 6% have less severe delusions. An additional 10 to 15% of the general population may experience milder delusional thoughts on a regular basis.

Delusions are a common symptom of schizophrenia and were once thought to reflect the poor reasoning abilities of a broken brain. More recently, a growing number of physicians and scientists have opted for a different explanation. According to this model, patients first experience the surprising and mysterious perceptual disturbances that result from their illness. These could be full-blown hallucinations or they could be subtler abnormalities, like the inability to ignore a persistent noise. Patients then adopt delusions in a natural (if misguided) attempt to explain their odd experiences.

An intriguing study from the early 1960s illustrates how rapidly delusions can develop in healthy subjects when expectations and perceptions inexplicably conflict. The study, run on twenty college students at the University of Copenhagen, involved a version of the trick now known as the rubber hand illusion. Each subject was instructed to trace a straight line while his or her hand was inside a box with a secret mirror. For several trials, the subject watched his or her own hand trace the line correctly. Then the experimenters surreptitiously changed the mirror position so that the subject was now watching someone else’s hand trace the straight line – until the sham hand unexpectedly veered off to the right! All of the subjects experienced the visible (sham) hand as their own and felt that an involuntary movement had sent it off course. After several trials with this misbehaving hand, the subjects offered explanations for the deviation. Some chalked it up to their own fatigue or inattention while others came up with wilder, tech-based explanations:

 . . . five subjects described that they felt something strange and queer outside themselves, which pressed their hand to the right or resisted their free mobility. They suggested that ‘magnets’, ‘unidentified forces’, ‘invisible traces under the paper’, or the like, could be the cause.

In other words, delusions may be a normal reaction to the unexpected and inexplicable. Under strange enough circumstances, anyone might develop them – but some of us are more likely to than others.

My next post will describe a clever experiment that planted a delusion-like belief in the heads of healthy subjects and used trickery and fMRI to see how it influenced some more than others. So stay tuned. In the meantime, you may want to ask yourself which members of your family and friends are prone to delusional thinking. Or ask yourself honestly: could it be you?

_______

Photo credit: MiniTar on Flickr, available through Creative Commons

Modernity, Madness, and the History of Neuroscience


I recently read a wonderful piece in Aeon Magazine about how technology shapes psychotic delusions. As the author, Mike Jay, explains:

Persecutory delusions, for example, can be found throughout history and across cultures; but within this category a desert nomad is more likely to believe that he is being buried alive in sand by a djinn, and an urban American that he has been implanted with a microchip and is being monitored by the CIA.

While delusional people of the past may have fretted over spirits, witches, demons and ghouls, today they often worry about wireless signals controlling their minds or hidden cameras recording their lives for a reality TV show. Indeed, reality TV is ubiquitous in our culture and experiments in remote mind-control (albeit on a limited scale) have been popping up recently in the news. As psychiatrist Joel Gold of NYU and philosopher Ian Gold of McGill University wrote in 2012: “For an illness that is often characterized as a break with reality, psychosis keeps remarkably up to date.”

Whatever the time or the place, new technologies are pervasive and salient. They are on the tips of our tongues and, eventually, at the tips of our fingers. Psychotic or not, we are all captivated by technological advances. They provide us with new analogies and new ways of explaining the all-but-unexplainable. And where else do we attempt to explain the mysteries of the world, if not through science?

As I read Jay’s piece on psychosis, it struck me that science has historically had the same habit of co-opting modern technologies for explanatory purposes. In the case of neuroscience, scientists and physicians across cultures and ages have invoked the innovations of their day to explain the mind’s mysteries. For instance, the science of antiquity was rooted in the physical properties of matter and the mechanical interactions between them. Around the 7th century BC, empires began constructing great aqueducts to bring water to their growing cities. The great engineering challenge of the day was to control and guide the flow of water across great distances. It was in this scientific milieu that the ancient Greeks devised a model for the workings of the mind. They believed that a person’s thoughts, feelings, intellect and soul were physical stuff: specifically, an invisible, weightless fluid called psychic pneuma. Around 200 AD, a physician and scientist of the Roman Empire (known for its masterful aqueducts) would revise and clarify the theory. The physician, Galen, believed that pneuma fills the brain cavities called ventricles and circulates through white matter pathways in the brain and nerves in the body just as water flows through a tube. As psychic pneuma traveled throughout the body, it carried sensation and movement to the extremities. Although the idea may sound far-fetched to us today, this model of the brain persisted for more than a millennium and influenced Renaissance thinkers including Descartes.

By the 18th century, however, the science world was abuzz with two strange new forces: electricity and magnetism. At the same time, physicians and anatomists began to think of the brain itself as the stuff that gives rise to thought and feeling, rather than a maze of vats and tunnels that move fluid around. In the 1790s, Luigi Galvani’s experiments zapping frog legs showed that nerves communicate with muscles using electricity. So in the 19th century, just as inventors were harnessing electricity to run motors and light up the darkness, scientists reconceived the brain as an organ of electricity. It was a wise innovation and one supported by experiments, but also driven by the technical advances of the day.

Science was revolutionized once again with the advent of modern computers in the 1940s and ’50s. The new technology sparked a surge of research and theories that used the computer as an analogy for the brain. Psychologists began to treat mental events like computer processes, which can be broken up and analyzed as a set of discrete steps. They equated brain areas to processors and neural activity in these areas to the computations carried out by computers. Just as computers rule our modern technological world, this way of thinking about the brain still profoundly influences how neuroscience and psychology research is carried out and interpreted. Today, some labs cut out the middleman (the brain) entirely. Results from computer models of the brain are regularly published in neuroscience journals, sometimes without any data from an actual physical brain.

I’m sure there are other examples from the history of neuroscience in general and certainly from the history of science as a whole. Please comment and share any other ways that technology has shaped the models, themes, and analogies of science!

Additional sources:

Crivellato E & Ribatti D (2007) Soul, mind, brain: Greek philosophy and the birth of neuroscience. Brain Research Bulletin 71:327-336.

Karenberg A (2009) Cerebral Localization in the Eighteenth Century – An Overview. Journal of the History of the Neurosciences, 18:248-253.

_________

Photo Credit: dominiqueb on Flickr, available through Creative Commons
