Nothing but the truth! Imagine living in a world 10 to 15 years from now where every piece of information and every statement can be instantly and automatically checked for truth. Will artificial intelligence expose lies and fake news in the future?
In fact, signals are accumulating from four directions.
- Trained people can recognize a lie from facial expressions, gestures, tone of voice, and the pattern of blood flow in the face. Artificial intelligence can already do this better today.
- Software can also detect and analyze linguistic features, making deceptive statements, whether spoken or written, increasingly recognizable.
- Software can convert speech to text and, of course, read text. It is increasingly capable of assessing the truth of a statement by comparing it with the knowledge stored on the Internet.
- During a lie, certain brain regions are particularly active, which can be visualized with an fMRI.
And in the future, we may be able to use most of this unnoticed via our data glasses or data lenses, which will supplement or even replace our cell phones. So it's not so crazy to assume that lying will become virtually impossible sooner or later. Unless we shield ourselves from all technology, which would itself look suspicious. We are forced to be honest!
Social lies against loneliness
Yet there are many reasons to lie, even legitimate and honorable ones. "Well, how did it taste?" "Great, excellent!" is very often a so-called social lie. Social lies serve to avoid putting unnecessary strain on relationships. They help our friends and relatives feel comfortable with us and keep us from wasting away in loneliness.
But what if your hosts' sensor-equipped smart home indicates, based on your facial expressions, gestures, or the pattern of blood flow in your face, that you've been anything but honest in your answers? It's hard to imagine how much we would argue and how often we would hurt those around us if we always said exactly what we really thought.
Now, not everything that is technologically possible will find widespread application right away. Yet technologies for detecting lies and fake news are already in the world, in labs and research facilities, as prototypes and niche applications. For all the concerns they raise, there are areas of application where they could be very helpful.
Everyday support for people with mental disorders
MIT has unveiled a system consisting of a wearable and an app that can recognize people's emotions during a conversation. The artificial emotional intelligence analyzes speech patterns and the conversation partner's facial expressions, with an accuracy of 83 percent. It is initially intended to support people who suffer from autism or social phobias, that is, people who find it difficult to read others correctly.
How do we fight fake news and alternative facts in the future?
Some people talk about the post-factual age. Misinformation on the Internet can be identified using computational linguistic analysis. The truth of a statement is assessed on the basis of linguistic characteristics and by comparing it with statements stored in countless other places on the Internet, that is, via big-data analytics. With the so-called Semantic Web, the results become even more reliable.
In this way, information circulating on the Internet could be tagged with a truth index that only expresses probabilities but at least provides orientation. Algorithms cannot yet reliably distinguish false claims from true ones, but this is not mere science fiction. Such technology is being developed, for example, in the EU research project Pheme, and numerous research teams around the world are currently taking on the challenge of developing a working solution in the Fake News Challenge.
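The linguistic side of such a truth index can be sketched in miniature: a classifier learns which word patterns tend to appear in reliable versus dubious statements and turns that into a probability-like score. The following is a minimal sketch in plain Python; the training sentences, their labels, and the resulting statistics are invented for illustration and are far from a real fact-checking system.

```python
from collections import Counter
import math

# Toy labeled corpus. Labels (True = reliable, False = dubious) and
# example sentences are invented purely for illustration.
TRAINING = [
    ("scientists report the study was peer reviewed", True),
    ("official data confirm the measured results", True),
    ("the report cites verified government statistics", True),
    ("shocking secret they never want you to know", False),
    ("unbelievable miracle cure doctors hate this trick", False),
    ("anonymous sources claim a shocking secret plot", False),
]

def train(corpus):
    """Count word frequencies per class (a naive-Bayes-style model)."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in corpus:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def truth_score(text, counts, totals):
    """Log-odds that the text resembles the 'reliable' examples.

    Positive means the wording looks like the reliable training
    sentences, negative means it looks like the dubious ones.
    Add-one smoothing keeps unseen words from zeroing things out.
    """
    vocab = set(counts[True]) | set(counts[False])
    score = 0.0
    for word in text.split():
        p_true = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_false = (counts[False][word] + 1) / (totals[False] + len(vocab))
        score += math.log(p_true / p_false)
    return score

counts, totals = train(TRAINING)
print(truth_score("peer reviewed study with verified data", counts, totals))
print(truth_score("shocking miracle trick they never reveal", counts, totals))
```

Real systems like Pheme additionally cross-check claims against external sources and propagation patterns; pure word statistics, as here, only capture the stylistic surface of a statement.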
Just imagine: during every speech or talk-show appearance by a politician or board member, a colored window shows how truthful the claim just made is.
The end of the post-factual age could be dawning. But it's not that simple. Who runs the software? Can we trust Internet companies and governments? It is not only people with a tendency toward paranoia who fear censorship and manipulation here.
Is the perfect crime solver coming?
Classic polygraphs measure psychophysiological parameters such as blood pressure, pulse, respiration, and the electrical conductivity of the skin. However, people are not machines and react quite differently under stress, so it cannot be ruled out that the polygraph will indicate a lie even when the interviewee is telling the truth. In addition, the results must be interpreted by a so-called polygraphist, and however experienced he or she may be, interpretations are fallible. For this reason, the polygraph is not permitted as an aid in criminal investigations in Germany and many other countries.
Will technological advances change that?
Researchers at the University of Michigan have developed software that succeeds in exposing liars 75 percent of the time. First, video recordings of statements made in court were evaluated, that is, real cases rather than laboratory conditions. The system was then trained to look for abnormalities in tone of voice, facial expressions, and gestures that were significantly more common in liars. Now, 75 percent is not 100 percent. Humans, however, fare far worse by comparison, with a hit rate of about 50 percent, no better than chance! So it makes sense, so to speak, to outsource the assessment of one's counterpart to the technology.
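The basic idea of such systems, combining several behavioral cues into one verdict, can be illustrated with a toy scoring function. The cue names, weights, and bias below are entirely made up for illustration and have nothing to do with the actual Michigan model:

```python
import math

# Hypothetical behavioral cues with invented weights. A real system
# would learn these weights from labeled courtroom recordings.
CUE_WEIGHTS = {
    "gaze_aversion": 0.9,
    "vocal_pitch_rise": 0.7,
    "hand_fidgeting": 0.5,
    "scowling": 0.4,
}
BIAS = -1.2  # shifts the baseline toward 'truthful' when no cues fire

def lie_probability(observed_cues):
    """Logistic combination of observed cues into a lie probability."""
    z = BIAS + sum(CUE_WEIGHTS[cue] for cue in observed_cues)
    return 1.0 / (1.0 + math.exp(-z))

print(lie_probability([]))                                   # low
print(lie_probability(["gaze_aversion", "vocal_pitch_rise"]))  # higher
```

The 75-versus-50-percent comparison in the text is precisely about how well such a learned combination of cues separates liars from truth-tellers, compared with unaided human judgment.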
The Canadian start-up NuraLogix even wants to analyze the blood-flow pattern in people's faces with its image-processing software 'Transdermal Optical Imaging' in order to make hidden emotions visible. And this can be done without test subjects even noticing.
It is not yet possible to hide what goes even deeper: in recent years, lie detection using functional magnetic resonance imaging (fMRI) has developed considerably. Neuroscientists are tracking down lies where they originate: in the brain. Companies have been offering such procedures for a good ten years now, for example 'No Lie MRI' from California, which advertises with the claim of being "the first and only direct measure of truth verification and lie detection in human history!" When lying, certain brain regions are more active, and the scan makes this visible.
Has a suspect been to the scene before? For years there has been software that can determine this with 90 percent accuracy. Neurophysicist John-Dylan Haynes from the Bernstein Center for Computational Neuroscience in Berlin has used software that interprets brain activity to determine whether a subject has seen a given room before. This is called 'crime scene recognition'. No verdict has to be handed down on the basis of such technologies, but they could at least help establish the truth.
Studies suggest that the fMRI method is more successful, but ultimately the same uncertainty exists as with the classic polygraph: under what psychological and emotional pressure is the subject? To what extent can test results with many people under laboratory conditions, which are in some respects ‘fake situations’, be transferred to real conditions? What if the liar himself believes what he says to be true? What if the system is lying because it has been manipulated? Who interprets the results? And how does the interpreter feel today?
Hopes and doubts
Lie or truth? Fake news or real news? In the future, new methods of neurological and biochemical measurement of humans and their statements will provide more and more data on these questions. And artificial intelligence has already proven better than the average human at detecting lies and fake news.
At the same time, however, this also means a world in which personal rights are drastically violated by today’s standards. We have to come to terms with that first. Or try to ban all the magic. Experience shows that this has little chance of success.
Despite ethical concerns and legal hurdles, it is very likely that truth technologies will be widely used and that a truth industry will emerge. It is a market of the future. 'Truth engineer' or 'trust engineer' will probably be a profession of the future.
Trust technologies and trust solutions are already a burning need today. If nothing else, the hype around blockchains is a clear signal of this.
Where can truth be promoted and ensured today, strengthening the priceless factor of trust? The massive proliferation of ratings and reviews has punished or even wiped out many fraudulent products and businesses over the past twenty years. What we are considering here is basically the continuation of that.
It's not just about trust in people. We also need to think in the other direction: if artificial intelligence were one day to actually develop something like consciousness, it could purposefully gain advantages over humans by lying and spreading misinformation. After all, humans no longer stand a chance against artificial intelligence in poker. It already bluffs better than human masters.
As a result, I expect that we will create a more honest, truthful and trustworthy world. It really is needed, as recent developments show.
And when the time comes, please turn off the truth app before you ask your guests, after dessert, how the menu tasted.
Video: Will artificial intelligence expose lies and fake news in the future?
Have a bright future!