Does the brain show a lie?

 

 
Posted May 24, 2007 by thomasr

 

Watching the Brain Lie

Can fMRI replace the polygraph?
By Ishani Ganguli

Amanda lies flat on her back, clad in a steel blue hospital gown and an air of anticipation, as she is rolled headfirst into a beeping, 10-ton functional magnetic resonance imaging (fMRI) unit. Once inside, the 20-something blonde uses a handheld device to respond to questions about the playing cards appearing on the screen at the foot of the machine. With each click of the button, she is either lying or telling the truth about whether a card presented to her matches the one in her pocket, and the white-coated technician who watches her brain image morph into patterns on his computer screen seems to know the difference.

It’s unlikely anyone would shell out $10,000 to exonerate herself in a dispute over gin rummy. But Amanda, the model in a demo video for Tarzana, Calif.-based No Lie MRI, is helping to make a point: lie detection is going high-tech. No Lie MRI claims it can identify lies with 90% accuracy. The service is meant for “anybody who wants to demonstrate that they are telling the truth to others,” says founder and CEO Joel Huizenga. “Everyone should be allowed to use whatever method they can to defend themselves.”


No Lie MRI isn’t the only company hawking fMRI scans as lie detection tests. A competitor, Cephos, based in Pepperell, Mass., makes similar claims, though the company has yet to unveil its test. And some government and law enforcement officials are bullish on the technology, as suggested by the federal research dollars being poured into the field.

But at a symposium hosted by the American Academy of Arts and Sciences this past February, several neuroscientists and legal experts said they’re not quite ready to save a place for fMRI lie detection in the courtroom or elsewhere. “No published studies come even close to demonstrating the kind of lie detection that would be useful in a real world situation,” says Nancy Kanwisher, a professor of cognitive neuroscience at MIT, who spoke at the symposium. “Scientists are endlessly clever, so I’m not saying that it can’t be done. But I can’t see how.”

Humans aren’t particularly good at knowing when they’re being deceived. In studies, subjects correctly identify only 47% of lies on average, according to a review by Bella DePaulo at the University of California, Santa Barbara. So those who detect lies for a living have turned to science. The polygraph test, used in the United States since the 1920s to root out liars by measuring physiological responses to stress, has largely been discredited as a scientific tool (see “A History in Deception”). Researchers are now homing in on the brain itself, turning to imaging techniques including fMRI, which measures blood oxygen concentrations across the brain every few seconds, in an attempt to map neural activity in real time.
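
To make that measurement concrete, here is a minimal Python sketch of the signal fMRI analyses work with; it is an illustration, not any vendor’s pipeline, and the scan interval, trial spacing, and response-function parameters are all assumed. Neural events are modeled as brief pulses, and the slow blood-oxygen (BOLD) response is approximated by convolving those pulses with a canonical double-gamma hemodynamic response function.

import numpy as np
from scipy.stats import gamma

TR = 2.0                         # seconds between scans (typical value, assumed)
t = np.arange(0, 30, TR)         # 30 s of response-function support

# Canonical double-gamma HRF: an early positive peak minus a late undershoot.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

# One run of trials: 1 at each scan where a question is answered, 0 otherwise.
n_scans = 120
events = np.zeros(n_scans)
events[::15] = 1                 # one trial every 30 s (assumed design)

# Predicted BOLD time course for a voxel: slow, delayed, smeared-out responses.
bold = np.convolve(events, hrf)[:n_scans]
print(bold.round(3))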

There are many types of lies (omissions, white lies, exaggerations, denials) that likely involve differing neural processes scientists are just beginning to parse (see “Anatomy of Lying”). But in comparing fMRI images in such studies, it’s clear that the brain generally works harder at lying than at telling the truth. As Marcus Raichle, professor at the Washington University in St. Louis School of Medicine, puts it, “You slow down, that’s not what you’re used to doing. In your brain, a whole new set of areas come online as you try to abort this learned response [to tell the truth] and institute something new and novel.”

Steve Kosslyn, a psychologist at Harvard University, is studying how fMRI results differ for spontaneous versus rehearsed lies, for which the work of concocting the new story has already been done. Nearby, John Gabrieli’s group at the Harvard-MIT Division of Health Sciences and Technology hopes to find a characteristic brain response associated with preparing to lie or tell the truth. Since September 11, 2001, grants from US agencies including the Departments of Defense and Homeland Security have opened up the field (Gabrieli is partially funded by the Central Intelligence Agency) and pushed many of its practitioners to seek practical applications.

Daniel Langleben at the University of Pennsylvania, who has spent nearly a decade studying deception, has recently been trying to apply fMRI to lie detection on the premise that a scanner can detect the suppression of truth, or “guilty knowledge.” No Lie MRI’s technology is based on the results of this research, partially funded by the Department of Defense (DoD). In one study, published in NeuroImage in 2002, Langleben gave each of his 18 subjects a playing card (a five of clubs) and a $20 bill before they entered the fMRI machine. They looked at a string of cards on a screen and, among a series of questions, manually responded yes or no when asked about the identity of that card, their guilty knowledge. They could keep the cash if they successfully fooled the tester. Using this approach, Langleben and colleagues have found increased activity associated with lying in cortical regions associated with conflict and suppression of a truthful response. They report they can distinguish lies from truths with up to 88% accuracy.

In 2005, the researchers now behind Cephos, also partially funded by the DoD, published results of another experimental approach in Biological Psychiatry. Mark George, director of the Brain Stimulation Laboratory at the Medical University of South Carolina, and Andy Kozel, a professor of psychiatry at the University of Texas Southwestern Medical Center, had subjects steal either a ring or a watch from a room, then deny it when asked a series of questions. They imaged the brains of 30 subjects while asking questions about the mock crime to establish a model of the brain differences associated with lying, then applied this model to predict when another 31 subjects were lying or telling the truth. The researchers found greater activation in the anterior cingulate, thought to monitor intention, and in the right middle and orbital frontal lobes, thought to carry out the lie. They say they could predict accurately for 90% of the subjects in the latter group.
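
Neither company has published its full method, but the train-and-test logic both studies describe can be sketched as follows. Everything here is an illustrative stand-in: random numbers simulate per-trial activation estimates in regions such as the anterior cingulate, and an off-the-shelf logistic regression stands in for the proprietary models.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def simulate_group(n_trials, n_regions=3):
    # Fake per-trial activation: lies get a small assumed boost, plus noise.
    labels = rng.integers(0, 2, n_trials)        # 1 = lie, 0 = truth
    features = 0.8 * labels[:, None] + rng.normal(size=(n_trials, n_regions))
    return features, labels

X_train, y_train = simulate_group(300)   # the "model-building" group
X_test, y_test = simulate_group(310)     # the held-out group

clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")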

But MIT’s Kanwisher says she is skeptical of such research. For one thing, group averages of brain patterns, which are required to make sense of the patterns in the first place, are difficult to interpret (and fraught with noise) at the level of individual prediction. And in the real world, lying is verbal, carried out in defiance of instruction, and the stakes are incomparably higher: rather than missing out on a $20 study reward, being caught in a lie could mean life in prison. Lying under these circumstances comes with an emotional component that is poorly elicited by a playing card, she argues.
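
A toy simulation, with an assumed effect size, makes the statistical point concrete: a lie-truth difference that is unambiguous in a group average can still be nearly useless for classifying any single response, because trial-to-trial noise dwarfs it.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
effect, noise = 0.2, 1.0                 # assumed lie-vs-truth signal and noise
truth = rng.normal(0.0, noise, 2000)     # activation on truthful trials
lie = rng.normal(effect, noise, 2000)    # activation on lying trials

# Group level: averaged over thousands of trials, the difference is obvious.
t_stat, p_val = stats.ttest_ind(lie, truth)
print(f"group difference: t = {t_stat:.1f}, p = {p_val:.1e}")

# Individual level: classify each trial by thresholding at the midpoint.
threshold = effect / 2
accuracy = ((lie > threshold).mean() + (truth <= threshold).mean()) / 2
print(f"single-trial accuracy: {accuracy:.2f}")  # barely above chance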

“Applied fMRI studies of the kinds done so far have similar limitations to those of typical laboratory polygraph research,” according to a 2003 National Academy of Sciences report. “Real deception in real life circumstances is almost impossible to explore experimentally. You can’t randomly assign people to go do crimes. I do think that’s an inherent limit,” says Gabrieli, a professor of cognitive neuroscience. Others worry about the level of nuance that fMRI-posed questions can accommodate.

The limitations in the research haven’t stopped people from trying to take its applications to market. No Lie MRI’s Huizenga was selling fMRI scans as screens for heart disease at his last company, Ischem, when he read about Langleben’s work in 2001. He says he thought, “I can automate what you’re doing, [I] can make it into a product.” So he acquired the technology from the University of Pennsylvania.

Though the company’s product is still based on comparing brain scans to those in Langleben’s preliminary studies, No Lie MRI had its first commercial customer in December 2006: Harvey Nathan, who has been trying to get compensation from his insurance company after his Charleston, South Carolina, delicatessen burned down in 2003. He had been cleared of arson charges in a criminal case, but wanted to use No Lie MRI to convince his insurance company he hadn’t started the blaze, for a per-session fee of $1,500 (clients get a hefty discount from the $10,000 going rate for agreeing to be televised). Nathan came out squeaky clean in the test, though his insurance company has yet to pay up, Huizenga reports.

Huizenga won’t say how many people have since tried the technology, but he’s clear on the philosophy behind it: “We’re testing individuals that want to be tested in areas [in which] they want to be tested. If they want to be tested on the topic of taking money from the cash register, we won’t test them on: are you having sex with your assistant? We deliver results to them personally. They get to use the results in the manner that they wish,” he says.

Huizenga eagerly points to the high prediction rate in Langleben’s study as a huge step up from the rate associated with the “nearest competing product,” the polygraph. He counts on snagging a worldwide patent for the service, administered at “Veracenters,” and if the company’s website is any indication, he will continue to market it for such uses as “risk reduction in dating.”

Cephos CEO Steve Laken got into the business of lie detection when he met Kozel, then at the Medical University of South Carolina, at a 2003 conference on human brain mapping in New York. Laken wanted to bring Kozel’s findings to bear in post-9/11 counterterrorism efforts. Since its 2004 incorporation, Cephos has had weekly calls from individuals eager to use the technology, according to Laken, and government agencies have expressed interest as well. But he expects it will be a while before he is ready to put people through the fMRI machine; he’s hoping to increase what he claims is a 90% accuracy rate to 95%.

Laken says they are making strides toward this goal. To address concerns that the studies are poor approximations of reality, they are raising the perceived stakes in deception and imposing realistic time delays. In one study, college students executed “a pretty elaborate mock crime” that involved stabbing a dummy, and they were tested days or weeks afterwards, George explains. “You can’t really have people go out and break the law. [Institutional Review Boards] won’t allow you to do that,” George chuckles. “[Still,] they thought they were involved in something a little bit illegal. We had people’s hands shaking.”

One of fMRI-based lie detection’s hurdles, oddly enough, is bettering the oft-questioned polygraph. Though “polygraphy isn’t much of a gold standard,” it still needs to be directly compared to these new methods before they can be widely adopted, Gabrieli says. Laken and Huizenga tout fMRI as the anti-polygraph, but the new technology may not be as different as people would like to think. fMRI “involves many of the same presumptions and interpretative leaps and gamesmanship,” argues Ken Alder, a historian at Northwestern University and author of The Lie Detectors: The History of an American Obsession. Research on fMRI lie detection has progressed much more openly than polygraph research did, but Alder is concerned that fMRI, if used at this stage, may similarly turn out to operate as a placebo, catering to what legal scholar Stephen Morse calls the “lure of mechanism” in courts and elsewhere.

Still, Laken sees the machine as a clear alternative to the polygraph. Unlike the older technology, he says, on-site fMRI test administrators can send out brain scans for independent analysis. Questions are presented on a screen, eliminating the human element, and the entire process is completed in under an hour.

Elizabeth Phelps, a professor of psychology at New York University, raises concerns about potential test-beating strategies such as thinking about unrelated topics or doing mental arithmetic, though Laken denies being fooled by these in preliminary studies. But in reality, a nonconsensual test-taker need only move his or her head slightly to render the results useless.
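
The head-motion point is easy to make concrete. A standard quality check in fMRI analysis computes framewise displacement, the summed scan-to-scan change in the six rigid-body motion parameters, and flags scans that moved too much; the sketch below, with an assumed motion trace, head-radius convention, and cutoff, shows how a brief deliberate jerk shows up.

import numpy as np

rng = np.random.default_rng(1)
n_scans = 100
trans = rng.normal(scale=0.02, size=(n_scans, 3))    # translations, mm
rot = rng.normal(scale=0.0004, size=(n_scans, 3))    # rotations, radians
trans[40:45, 0] += 2.0                               # a deliberate 2 mm head jerk

# Express rotations as arc length on a 50 mm sphere, then sum absolute
# scan-to-scan differences across all six parameters.
params = np.hstack([trans, rot * 50.0])
fd = np.abs(np.diff(params, axis=0)).sum(axis=1)

flagged = np.where(fd > 0.5)[0] + 1                  # +1: FD[i] compares scan i+1 to i
print(f"scans flagged as unusable: {flagged.tolist()}")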

And there are other challenges. For one, individuals with psychopathologies or histories of drug use (overrepresented in the criminal defendant population) may have very different brain responses to lying, says Phelps. They might lack the sense of conflict or guilt used to detect lying in other individuals. Laken concedes that they’ve tested the machine on a rather limited population: 18- to 50-year-olds with no history of drug use, psychiatric disease, or serious traumatic brain injury. But he says he is content for his clientele to be restricted to “relatively normal people” like Martha Stewart and Lewis “Scooter” Libby, neither of whom has actually used the technology.

There’s another drawback: If a person actually believes an untruth, it’s not clear if a machine could ever identify it as such. Researchers including Phelps are still debating whether the brain can distinguish true from false memory in the first place. “In law, we’re concerned with acting human beings [who] can intentionally falsify or unintentionally falsify,” says Stephen Morse, professor of law and psychiatry at the University of Pennsylvania. “To the extent that we’re trying to get at the truth, we need a valid measure to understand [the difference].”

Jed Rakoff, US District Judge for the Southern District of New York, says he doubts that fMRI tests will meet the courtroom standards for scientific evidence (reliability and acceptance within the scientific community) anytime in the near future, or that the limited information they provide will have much impact on the stand. In court, most lies are omissions or exaggerations of the truth – among the trickiest to recreate in a laboratory. In his experience, and given the polygraph’s history, he says he would argue that the potential for harm outweighs the foreseeable benefits.

On the other hand, Judy Illes, director of neuroethics at the Stanford Center for Biomedical Ethics, expects to see the technique enter courtrooms in the not-too-distant future. “I believe that technology like fMRI will certainly reach the point where its reliability and accuracy is sufficient to be an indicator of whether someone is lying or being forthright (i.e., the answer to the ‘if’ question),” writes Illes in an e-mail. “A significant challenge for the legal system, however, is that this kind of technology will unlikely be able to ‘get inside someone’s head’ enough that it can reveal answers to the ‘what’ question, i.e., what is someone lying about, what is motivating them to lie, and does content and motivation interact with the concept of moral culpability or guilt.”

As for Huizenga and Laken, they are both optimistic that the fMRI test will eventually be legally viable, but in the meantime, they would be content to sell their services for out-of-court settlements. According to Rakoff, the best way to get at the truth in the courtroom is still “plain old cross-examination.” And in the national security sphere, there’s “much more to detecting spies than the perfect gadget,” Raichle agrees. “There’s some plain old-fashioned footwork that needs to be done.”

Source: The Scientist

