If They Could Read Your Mind

To a large extent, they already can. As neuroscientists hone new technologies for probing our brains, predicting our behavior and perhaps even altering our thoughts, ethicists wrestle with some troubling questions.

January/February 2004

January 31, 2010. Your Local Hospital. 

An ebullient Jim Perry pushes his wife Jean’s wheelchair down the hall. She’s holding their cooing newborn, Nick. Just a quick stop at the NeuroTesting Conference Office, then home to a new family life.

Dr. Deena hurries in and pops a CD into a computer. A pie chart appears on the monitor. It’s 98 percent green, with a thin slice of red. “This is wonderful,” says Dr. Deena. “No obvious structural defects in the brain, and no raised susceptibility to Alzheimer’s or any other genetic diseases for which we have tests. It’s a clean bill of health.”

“But the red area, what’s that?” asks Jean.

Dr. Deena taps on the keyboard, and a chart titled “Future Concerns” appears with two items in red: “Nicotine Addiction” and “Violence.”

“I’m not a smoker,” says Jean. “That must be a mistake.”

“No,” says Dr. Deena. “What this genetic test suggests is that individuals with the same gene mutation that Nick has are 89 percent more likely to become addicted to cigarettes if they try them, compared with those without the mutation.”

“Well, that’s a no-brainer,” says Jim. “We’ll just make it very clear to Nick that he’s to never, ever try cigarettes. What about that violence bit, though?”

“You know that brain scan Nick had last night? Studies suggest that individuals with activity similar to his in the neocortex are 25 percent more likely to commit violent crimes.”

“What kind of prediction is that?” says Jean.

“I agree it’s weak. We don’t even like doing the test, but the FBI is moving ahead on a national database.”

Jim clenches his jaw. “I don’t want my boy in some database.”

Dr. Deena sighs. “Mr. Perry, the nation’s prisons are so overcrowded that law enforcement has convinced Congress that tracking this group is the answer to pinpointing and controlling violent personalities very, very early. But I wouldn’t worry. It’s just as likely they’ll find that Nick lacks some other gene that’s the real determiner.”

“Will Nick’s schools have to know about this? What about employers?”

“All good questions, Mr. Perry.”

This is an imaginary conversation. But the technologies are real, and the worries they raise are just a few years off. Neuroscientists are rapidly learning to read and mold the human brain, and to predict behavior and disease well into the future. Meanwhile, bioethicists at Stanford and elsewhere are trying to keep pace by anticipating the potential landmines. Those mines threaten to detonate across a broad array of sectors—from schoolroom to courtroom, hospital to voting booth, homeland security to human rights.

Genetic manipulations such as cloning, fetal stem-cell transplants and “designer genes” have triggered intense debate in recent years. But probing the brain may strike an even more sensitive nerve. “Far more than our genomes, our brains are us, marking out the special character of our personal capacities, emotions and convictions,” says neurobiologist Donald Kennedy, editor-in-chief of the journal Science and emeritus president of Stanford. “I already don’t want my employer or my insurance company to know my genome. As to my brainome, I don’t want anyone to know it for any purpose whatsoever. It is . . . my most intimate identity.”

Kennedy, the Bing Professor of Environmental Science and Policy, spoke in November at the annual meeting of the Society for Neuroscience, delivering a follow-up to a conference organized by Stanford and UCSF the year before. That pivotal gathering, in May 2002, brought together dozens of bioethicists to forge a new field called neuroethics. Today, neuroethics is a major focus of the Stanford Center for Biomedical Ethics. At stake, says Stanford neuroethicist Judy Illes, “is ultimately the protection and privacy of human thought.”

It’s important to grapple with these concerns now, before the technologies become part of daily life. University of Pennsylvania bioethicist Arthur Caplan, who took part in the Stanford/UCSF conference, wrote last September in Scientific American, “It is very likely that advances in our ability to ‘read’ the brain will be exploited . . . for such purposes as screening job applicants, diagnosing and treating disease, determining who qualifies for disability benefits and, ultimately, enhancing the brain.”

The brain is not exactly unexplored terrain. Philosophers have long pondered such notions as free will and the nature of thought pulsing through our gray matter. “I think, therefore I am,” Descartes declared. And there have been myriad schemes to unlock or redirect our thoughts and behaviors—from truth serums to polygraphs, hypnotism to lobotomies—while pharmaceutical companies have made billions of dollars selling relief from depression, anxiety, compulsions and other psychiatric disorders.

In the past decade, however, the neurosciences have entered what Kennedy calls “a period of extraordinary, perhaps unprecedented promise.” At Stanford and other leading research institutions, scientists are already scanning the brain—not just for defects, disease and injury, but for patterns of thought and emotion, meaningful precursors of behavior, and the mechanics of learning.

Classic magnetic resonance imaging (MRI) provides a high-resolution view into the body, usually to illuminate structural defects, tumors or injuries. But in recent years, refinements in MRI scanning have allowed researchers to monitor identifiable changes in the brain in response to stimuli or during directed thoughts. With this technique, called functional MRI (fMRI), “we are able to make measurements of brain function in a way we could not do before,” says Illes.

The functions being measured aren’t far removed from baby Nick’s “Future Concerns.” Illes, who has surveyed neuroscientists’ use of fMRI, wrote in the March 2003 Nature Neuroscience: “Our analysis shows a steady expansion of studies with evident social and policy implications, including studies of human cooperation and competition, brain differences in violent people and genetic influences on brain structure and function.” Complex behaviors and emotions—such as fear, lying, decision making, self-monitoring, moral dilemmas, and assessments of rewards and punishment—are all in play. So far, she suggests, society has given little thought to how these technologies and their volatile payloads will be used.

Brain scanning is not the only neurotechnology raising hackles. Drug companies are pushing ahead with psychopharmaceuticals that raise a host of ethical issues. As researchers struggle to come up with remedies for Alzheimer’s, for example, there arises the prospect of drugs that don’t just fix broken and battered memories, but could perhaps enhance normal ones. And many neurological disorders have a genetic component, prompting some of the same ethical questions raised by other genetic tests. Decisions will have to be made, for instance, on whether to offer tests for untreatable conditions, and who should have access to the results.

This is familiar turf for ethicists assessing the high-profile Human Genome Project, in part because ethical discussions were incorporated into that effort from the beginning. “That hasn’t happened in neurosciences, where plenty of things will be happening much sooner,” says Barbara Koenig, associate professor of medicine and former executive director of the Stanford Center for Biomedical Ethics. Koenig, an anthropologist, worries in particular that “the brain offers a seductive promise of prediction.” Predictions will span a range of domains, she believes, including future illness, performance in school or work, violent behavior and even addiction. “Whether or not those predictions prove to be scientifically accurate may be less important than our belief in their power,” she warns.

Koenig is especially concerned about preliminary results being touted as if they were conclusive, and the effects of early labeling on kids. “There are such negative labeling implications for children,” she says. “We have to keep premature findings from being turned into marketable products for desperate parents.”

In her own research, Koenig is asking the very question confronted by the fictitious Nick Perry’s parents: what if science could reliably deliver evidence of a gene that would predispose a person to nicotine addiction? She is assembling a wide-ranging scenario. Would such evidence lead to a ban on smoking? If not, would it be more cost-effective to intensify anti-smoking campaigns, or to develop an anti-addiction vaccine or a gene therapy? Under what circumstances would parents or a child be tested for a predisposition to addiction? Might genes help predict which addicts will respond to different therapies, such as drugs or behavioral approaches?

And that’s just the beginning.

September 12, 2028. Your Local University.

Jean Perry brushes lint off Nick’s blue blazer as they sit down before a gray-haired gentleman in tweeds. This is Nick’s freshman pharmaceutical review board hearing, and Dr. Better is checking Nick’s file. “I have your application here for an Enhancement prescription,” says Dr. Better, “but with your violent tendencies profile, we’ll have to ask you to agree to regular brain scans if we give you something like Ritalin-3 or Focusalin.”

“Yes. I’m willing to do that.”

“Doctor,” says Jean, “Nick has never shown any violent tendencies. We just want him to have access to all the same study-aid drugs the other students do.”

“Of course, Mrs. Perry. I believe that will be fine. Now, on another subject, I do have good news. We have reviewed Nick’s learning-sensitivity scans, and we have approved that he be tracked in our more symbolic curriculum.”

By the time Nick starts college, there could be a huge array of “study-aid drugs.” Even today, some students attempt to stay alert by illegally taking drugs intended to treat attention deficit disorder. Those drugs can have serious side effects, including addiction, when used outside their approved parameters. But neuroethicists are wondering how long it will be before drugs without such severe side effects are tested as general-use “enhancers”—a term that raises an ethical red flag.

There are those, including President Bush’s own Council on Bioethics, who have suggested that technological tampering such as study enhancement medication or genetic manipulations to boost intelligence is inherently disturbing, perhaps unethical. Others say such tinkering will be unfairly reserved for the rich. However, a number of ethicists, including Stanford’s David Magnus, co-director of the Center for Biomedical Ethics, point out that many medical interventions once considered enhancements—eyeglasses, for example—eventually come to be viewed as elements of baseline health. Further, Magnus notes that private schooling, travel and exposure to the arts already boost the abilities of affluent students but are not considered unethical.

How about Nick’s “learning-sensitivity scans”? Here, the ethical questions compete with exciting potential to help students with learning differences. Last February, psychology professor John Gabrieli demonstrated that the brains of dyslexic children can be “rewired” by intensive reading training. He used fMRI brain scans to “watch” dyslexic children react to various reading exercises. After the kids received special training, Gabrieli scanned them again and found that the dyslexic brains had become much more like those of normal readers. Such scans could be part of a battery of tests designed to pick up dyslexia and other learning differences early in a child’s life, and educators could tailor special programs to a given child’s needs. For instance, some kids—like Nick in our example—appear to learn better using more symbol-based approaches, while others benefit from a more aural curriculum. Scans could save years of frustration and trial and error in figuring that out.

June 16, 2032. Your Airport, International Terminal.

Fresh from his graduation summa cum laude, Nick and his parents make their way through airport security for a celebratory trip to Paris and Madrid. They toss their carry-on luggage on the belt scanner, then stand beneath the Security Brain Wave Reviewer. A red light flashes, a chime goes off and one of the technicians rushes to Nick’s side. “I’m sorry, sir, but I’ll have to ask you to step into the interrogation room.”

Jim hands the technician a laminated card identifying Nick as a member of the National Violence Propensity Database. “He’s been a genetic Level 2 since birth. If you just run the card, it’ll validate that he’s had no violent incidents.”

“I’m sorry, sir, we’re on high alert today. He’ll have to be interviewed.”

“C’est la vie,” sighs Jean.

Outlandish, you say? Illes doesn’t think so. Advances in MRI, combined with a post-9/11, security-oriented climate, could yield developments like brain scanners in airports and even schools in as little as 10 years, she predicts. At that point, it’s unlikely a comprehensive ethical framework will be in place to avert misgivings about their use.

Such concerns range from the reliability and calibration of equipment in the hands of relatively unskilled people to the invasiveness of the procedure. Suppose, for example, a woman is in the middle of an ugly divorce as she attempts to board a plane for a much-needed vacation. Should she really have to account for the angry brain waves bouncing around in her head to convince some airport security employee she’s not a terrorist? How would the brain waves of an NFL player appear as he prepared to board a plane to battle in the Super Bowl?

January 1, 2033. Your Local Courthouse.

Nick Perry and his attorney are surrounded by reporters. He has emerged victorious from the first successful use of what legal experts are calling the Truth Scanner. “The science of using brain scans in pursuit of justice has entered a triumphant new phase,” announces defense attorney Gus Healey. “It’s about time we allowed unjustly accused defendants to prove their innocence.”

A reporter calls out: “But Mr. Healey, aren’t some people worried we haven’t tested these devices enough and that anybody who’s uncomfortable with a scan will be presumed guilty?”

“Combined with other evidence, we feel the technology is now an important piece of a good defense,” says Healey. “We don’t need to go to the eighth decimal point.”

“Nick, how do you feel about being exonerated?”

“Great,” Nick says. “The idea that I was going to use my shaving cream to hijack that plane was preposterous. I’ve spent six months trying to prove my innocence just because a dumb airport scanner went off and an even dumber security guard overreacted to a joke.”

“So, what’s next for you, Nick?”

“Well, I hope to take that long-awaited trip to Europe,” Nick grins, lighting a cigarette. “I’ve always wanted to see a bullfight.”

It’s no hyperbole to suggest that technologies that can illuminate the shifting and shadowy world of veracity and memory could turn our legal system upside down. “The invention by neuroscientists of reliable truth-detecting or truth-compelling methods could have substantial effects on almost every trial and on the entire judicial system, and the constitutional questions are many and knotty,” contends Stanford law professor Henry Greely, chair of the steering committee for the Center for Biomedical Ethics.

Could most crimes be solved long before trial if everybody took a truth test? What happens when two witnesses with different stories are both shown to be speaking honestly? What about false memories? Does the right not to incriminate oneself extend to refusing to have one’s brain read? If criminal defendants are due impartial jury trials, would attorneys press to use brain scans to probe for juror bias? For really volatile cases, could a jury ever be found to be entirely fair?

We may not have too much more time to work through those issues. An Iowa company called Brain Fingerprinting Laboratories Inc. says it has technology that can identify specific kinds of brain waves people emit when they are looking at or discussing something they’ve seen before—in other words, when they’ve already formed a memory. The company aims to use the technology in the legal system to help innocent defendants prove they were not, for example, present at a crime scene. In 2001, a judge allowed the results of a “brain fingerprinting” test to be entered as evidence in the review of an appeal.

MRI machines may also become improved lie detectors. At the University of Pennsylvania, psychiatrist Daniel Langleben has found that when people lie, increased activity in several brain regions is visible in an fMRI scan. At present, traditional polygraphs—which mainly measure anxiety associated with lying—are not considered accurate enough to be introduced in court. Will these new technologies become admissible?

The courts will likely confront another bioethical minefield: findings on brain injury in violent criminals. In a 1986 study of the next 15 death-row prisoners slated to be executed, researchers discovered that each man had suffered a serious brain injury, yet none of their attorneys had raised the issue. If brain scans unveil injuries that create a propensity for violent acts, it “will significantly change the way we look at criminal justice,” says William Winslade, an attorney and professor of philosophy of medicine at the University of Texas.

November 30, 2056. Your Local Hospice.

Nick’s estranged wife, Helen, stands with their son, Troy, at Nick’s bedside. Helen and Nick have had a volatile marriage, plagued by Nick’s alcoholism and occasional violent outbursts. They’ve lived apart for the past four years, but he’s dying and she’s returned to his side. (Scans have shown that Helen’s brain is unusually developed in an area linked to loyalty.) She is relieved that Troy has not inherited his dad’s genes for addictive tendencies, especially since it was shown in 2025 that susceptibility to nicotine addiction was not a discrete gene after all, but stemmed from a host of genetic and environmental factors.

“Dad sure looks peaceful, Mom,” says Troy. “I know it was hard, but you did the right thing with the pain-erase memory implant.”

Helen sighs. “You were right. No time for ancient history now. I saw my own father die, and he was so debilitated by his regrets and guilt. This is much better.”

“It’s the humane thing.”

Nick stirs in the bed. His eyes flutter open. “Helen,” he whispers, “we’ve had a wonderful life, haven’t we?”

“Yes.”

“We were luckier than most people.”

“Absolutely.”

“I just hope our son can look back someday and feel at least as much pride and satisfaction as I do right now.”

Troy steps forward and takes his hand. “Don’t worry, Dad. I can practically guarantee that I will.”

What if we could implant new memories in a person’s head, “writing over,” in effect, their traumas in hopes of calming a fractured psyche? Researchers are investigating therapeutic drugs and implants that might, for example, erase the memory of a violent assault or a wartime experience. Will we someday truly forgive and then literally forget?

Taking it further, might an implantable chip give us instant proficiency in a foreign language or in-depth technical acumen—or instill a new political bent, or a willingness to follow any order? Could we ever be sure that our thoughts and memories were our own?

As Greely points out, it’s the bioethicist’s job to look disproportionately at troubling consequences. The benefits of the new neurotechnologies may far outweigh their threats. Yet as we probe ever deeper into the three-pound universe in our heads, surely the manipulation of what we’ve learned “the hard way” would be one of the most chilling intrusions of all.


Joan O’C. Hamilton, ’83, is a frequent contributor to Stanford.
