Will We Ever Have a Foolproof Lie Detector?
[2] Could such a machine ever be a reality? Not if our current technology is
anything to go by. The polygraph has been around for almost a century, with wired-
up offenders and twitching needles becoming a staple of criminal investigations.
But there is no solid evidence that the signs it looks for—faster heart rates,
shallower breaths and moist skin—can accurately indicate whether someone is
telling a lie. Underpinned by fluffy theory* and backed by a weak and stagnant
evidence base, this lie-detection device is unlikely to get any better.
[3] Abandoning the polygraph, some scientists have turned to brain scanners. Two
technologies have dominated the field. The first uses electronic sensors on a
person’s scalp to measure an electrical signal, or "brainwave," called the P300,
which appears when we recognize something. By looking for this signal, you could
potentially tell if someone is hiding knowledge about something they are already
familiar with, like a murder weapon. This is certainly useful, but it is a long way
from an all-purpose lie-detection method, and two of the field's key figures have
argued for many years about just how effective it is.
[5] But there is no "center for dishonesty" in the brain. The areas illuminated in
fMRI scans have many functions. They can even be more active when people tell
the truth, especially if they are trying to decide whether to be honest or not.
[6] So, how accurate are the scans? In simple lab experiments, they can detect lies
around 78 to 85% of the time. "We’re not that close to a perfect lie detector," says
Giorgio Ganis from the University of Plymouth, who uses fMRI to study
deception. "There’s also a 15–20% chance of an innocent person being wrongly
determined to be a liar."
[7] What is particularly troubling is that these limitations crop up* in simplified
and artificial conditions, like volunteers lying about a playing card they have been
given. So we know very little about how fMRI would fare at detecting lies in more
realistic settings—for example, not a single study has scanned people’s brains
when they lie during conversations.
[8] There are also different types of lie. If you have been pulled over for speeding,
you would need to come up with a tall tale spontaneously. If you were on trial for a
crime, you would have more time to rehearse your story. Ganis found that these
brands of lie produce different patterns of brain activity: rehearsed ones are
accompanied by a weaker buzz in so-called action-repression areas, and a stronger
one in memory centers.
[9] Finally, there are ways of fooling a brain scanner, just as there are
countermeasures for other lie-detection techniques. Ganis says, "I’ve done a study
showing that you can play mental tricks with fMRI. You mentally associate the
important events of your life to items that are shown during the test." By bringing
those events to mind at the right time, volunteers could bamboozle the scans, and
slash their accuracy from 100% to just 33%. Although fMRI scanners will
undoubtedly improve, as Ganis says, "If you want a general lie detector, that’s
definitely science fiction right now."
[10] That hasn’t stopped fMRI from being marketed as a tool for lie detection—
two companies, Cephos and No Lie MRI, currently offer such services. Nor has it
deterred brain scans from being presented in courtrooms, with varying success. In
recent years, two US judges have dismissed fMRI-based evidence, but a murder
suspect in India was sentenced to life imprisonment after brain scans supposedly
revealed that she had knowledge about a crime that only the killer could have
possessed.
[11] Possible misuse of this developing technology has raised ethical concerns
about the future of brain-based lie detection. Daniel Langleben from the University
of Pennsylvania, who did much of the pioneering work in this field, recognizes the
limitations of the technique, but thinks that it could be improved to the point where
it could be usefully applied in practical settings. But he worries that the current
doubts will stifle the research necessary to improve the technology.
[12] "Every time you have a negative critical review, it has a chilling effect on
people who want to do this research," says Langleben. For now, we know there are
broad differences between an honest brain and a dishonest one. He explains that to
turn that knowledge into a practical test, "you need a lot of boring validation work.
We need clinical trials, just as for every medical device or test." They would also
assess the effects of age, motivation, mental disorders, medication,
countermeasures and more. They would likely cost tens of millions of dollars, and
would need to include thousands of people—far more than the dozens who take
part in typical fMRI lab studies.
[13] For now, a foolproof lie detector is a far-away goal, but it will be even more
distant if no one can afford to do the necessary research. That, at least, is no lie.
4. If you lie in the following situations, what would produce a stronger
buzz in your memory center?
Your boss demands to know why you were late for work this morning.
Next month, you must explain to the government why your tax payment
was late.
Your friend asks if you’re going to throw a surprise birthday party for
her.
5. The word bamboozle in paragraph 9 is closest in meaning to ________.
avoid
fool
design
6. In which position ([1], [2], or [3]) should this sentence be added to this
text from the passage?
Such tests would try to work out how accurate the scans are in more
realistic settings, and how often they make errors.
[1]
[2]
[3]
7. According to Giorgio Ganis, only about 20% of fMRI scans give an
accurate result.
True
False
Not given
8. The fMRI technique was developed before the technique using
electronic sensors.
True
False
Not given
9. Until now, fMRI technology has only been used in the United States.
True
False
Not given
10. The author implies that if more money were available for research, a
foolproof lie detector could eventually be developed.
True
False
Not given