Artificial intelligence, like any other technology, does not have exclusively positive aspects. Every technology comes with its own set of challenges and threats.
Technology, in this case AI, can also be deliberately misused. That problem is obvious, though, and it distracts from what I see as the underestimated, less obvious challenges. So I want to set it aside for now and look only at the use cases where people are genuinely trying to use AI sensibly.
Discriminatory AI systems
Nearly a decade ago, Amazon trained an AI system to identify applicants with a particularly strong interest in working for a tech company. The idea was to find exactly the people who would fit Amazon. In the end, however, the system sorted out all female applicants. This was obviously not due to a fundamental lack of interest or motivation among women to work at a tech company. On closer inspection, it turned out that the system had been trained on discriminatory data: it could see that men were far more likely to hold the relevant jobs, and it concluded that men would consequently also show higher interest in and motivation for such jobs, and that women were simply not interested or motivated enough. Of course, this does not reflect reality.

The example shows that the data an AI works with is crucial. If the data is discriminatory and encodes a bias, the AI learns that bias too. Anyone working with such systems has to be aware of this problem, because the systems can only be as good as the data they are trained on. So it is our job, the people's job, to provide appropriate data. We need to learn to recognize such discriminatory data and prepare it in a way that gives AI systems the chance to perform what they are actually capable of.
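To make the mechanism concrete, here is a minimal, purely illustrative Python sketch. The data and the "model" are invented for this example and have nothing to do with Amazon's actual system: a naive model that learns only from biased historical outcomes will faithfully reproduce that bias in its scores.

```python
from collections import Counter

# Hypothetical, deliberately biased historical data: (gender, hired) pairs.
# Men dominate the "hired" class because of past hiring patterns,
# not because of actual ability or interest.
history = ([("m", 1)] * 80 + [("m", 0)] * 20 +
           [("f", 1)] * 5 + [("f", 0)] * 45)

# A naive "model" that simply learns P(hired | gender) from the data.
hired = Counter(g for g, h in history if h)
total = Counter(g for g, _ in history)
score = {g: hired[g] / total[g] for g in total}

# The model reproduces the historical bias: otherwise identical
# applicants get very different scores based on gender alone.
print(score)
```

The point of the sketch is that the model never "decides" to discriminate; it only summarizes the data it was given, which is exactly why the quality and balance of that data matter so much.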
Have a bright future!