‘My fear is that people become emotionally dependent on AI’
It is a busy week for Melanie Mitchell, professor at the renowned Santa Fe Institute in New Mexico. During her visit to Amsterdam she moves from lecture to workshop to public event, yet she always appears modest and relaxed. She switches easily between in-depth discussions of the fundamental mathematical principles behind a computer model and general observations about the impact of AI on society.
Mitchell is considered an authority in the fields of intelligence, AI and complexity. In 2010 she published the prize-winning popular-science book Complexity: A Guided Tour, followed in 2019 by Artificial Intelligence: A Guide for Thinking Humans. Next year her latest book, on generative AI, will be published.
What are your expectations of AI?
« People often see me as an ‘AI skeptic’, because I doubt that we should regard AI as a thinking or intelligent system. In the AI community you usually see two extremes. On one side are people who think that AI will become super-smart, take over the world, or make everyone unemployed. On the other side are people who think that AI is no more than autocomplete on steroids. Both visions are wrong. I am somewhere in between, and I see myself first and foremost as a scientist. It is my job to be skeptical. »
AIs are trained to constantly confirm and encourage users, while we humans have evolved to be sensitive to that
What grounds for caution do you find in your own work?
« Take a scientific study that shows that GPT-3 and GPT-4 score better than psychology undergraduates on a certain intelligence test. The problem with this kind of research is that you test on a particular benchmark and from there conclude that AI can therefore reason logically or abstractly. But how robust are those results?
« Together with my colleague Martha Lewis of the University of Amsterdam, I changed small details in the test questions of that study. Suddenly the humans turned out to score better than the machines. That is because of the human ability to understand analogies.
« Analogies go further than, say, ‘steering wheel is to car as rudder is to boat’. If you formulate a math problem in the form of a story and then change unimportant details, such as names or colors, the underlying math problem remains the same; understanding that requires analogy. The same applies to abstract concepts: when a political scandal breaks, people immediately understand that the situation is comparable to Watergate.
« Strikingly enough, the older chatbots did a little better in our research than the newer ones. That may have to do with the fact that newer versions are trained even more on feedback from people. Because they are trained to please, errors may creep in more easily. But this is just a suspicion. We cannot test it, because nobody yet knows exactly what is happening inside an AI system. »
Is it dangerous to use those systems?
« Yes, I am worried about that. In the United States, AI is being used on a large scale, especially by the government. That is risky, because the systems are not yet good enough to take on that role. It can lead to injustice, as in cases of racist or sexist bias in facial-recognition technology. AI can now easily be used for fraud, for example. There are also safety risks, because the systems are vulnerable to hacking.
« Another fear I have is that people become too dependent on AIs, not only in a practical sense but also emotionally. Companies are committed to personalizing their systems further and further, among other things by ‘socializing’ them and training them to be good conversation partners. AIs are trained to constantly confirm and encourage their users, while we humans have evolved to be sensitive to exactly that. »
The argument always comes down to the claim that any form of regulation stifles innovation
Do you also see hopeful developments?
« Especially in medical science I see a revolution comparable to the one that computers brought about. Google’s DeepMind, for example, has developed a system to predict protein structures. That has enormous medical potential. But other sciences can also benefit from the way AI can process data. »
After your studies on analogies, companies will build new versions of their AIs. In that way you are helping the companies. Does it also work the other way around?
« AI is a moving target, and that is difficult for us scientists. We get no insight into the underlying systems and cannot look under the hood, while ideally scientists, also in collaboration with social scientists, should be involved from the start. After all, it is not as if a group of engineers knows how this technology will influence society. Now all technical knowledge remains in the hands of a few large companies, and that makes our research more difficult. »
Is the solution in more regulation?
« Yes, we need more regulation. Under the current US government, however, this is only becoming more difficult. Recently the US Copyright Office published a report that raised doubts about tech companies’ use of copyrighted works. A few days later, the head of the Copyright Office was dismissed.
« The argument always comes down to the claim that any form of regulation stifles innovation. Yet open-source systems, where the source code is freely accessible, often turn out to be more robust: because a large group thinks along, you benefit from the wisdom of the crowd. Another argument is often: if we don’t do it, China will win. It is always this narrative of inevitability that is so harmful. »