April 20, 2025

"What remains when we put responsibility in AI's hands?"



For some reason, the debate about whether AI can really think still rests on the behavioral Turing test. The machine passes if its simulated intelligence can convince human users that they are dealing with another thinking, knowing being.

In this context, that arrangement is rather foolish, because it trades away the question of what thinking really is and completely ignores the epistemological role that consciousness itself plays.

But there is another serious problem with using the Turing test in this way: it rests (somewhat ironically) on a subjective assessment. If the machine can give us the impression that something is going on behind its digital forehead, well, then we are supposed to assume that it can actually think as well.

By this logic, we would in principle have to say that my 8-bit Nintendo could really think, sometime around 1992

That logic leads to difficulties, and it also makes the conclusions vulnerable to spectacular illusions. By this logic, we would in principle have to say that my 8-bit Nintendo could really think, sometime around 1992, because in my childish naivety I believed that the software was making a rational assessment of my skills as a player.

It is a funny example. But the fact is that the same kind of questionable conclusions are now starting to appear just about everywhere when it comes to AI. And not only in public debate, but also within policy development and international legislation. In the EU's AI Act, which came into force last fall, these systems are attributed rational reasoning. Just as I attributed it at the age of seven, and on about equally good grounds.

This attitude recurs across the whole spectrum. At one end, we have lyrical tech entrepreneurs who liken the situation to having made contact with intelligent extraterrestrials, and who find that the technology produces work as good as that of highly educated people (though the same could be said of beehives in nature, whose design humans cannot in principle surpass in terms of temperature and airflow regulation).

But our stupidity has also given rise to AI

And this in turn underwrites the more extreme perspectives. Ben Buchanan, AI adviser in the Biden administration, is convinced that AGI (autonomous AI that surpasses human ability in every respect) will be a fact within a few years.

At the AI summit in Paris at the beginning of the year, American vice president JD Vance advocated radical risk-taking and a minimum of caution, in order to accelerate the AI transformation at all costs.

The tech entrepreneur Elon Musk famously wants to colonize space to secure a future for our soon-to-be-uploaded consciousnesses, where the human and the digital merge.

And the entrepreneur Mo Gawdat, former chief business officer of Google X, Google's semi-secret research lab for AI and other technologies, talks about AI systems as our little digital children that we must raise thoughtfully, because they will soon take over the world and build a "brand new nature" for humans to run around and play in. A paradise where man will finally achieve happiness under the supervision of an AI that knows us better than we know ourselves, and that can solve all the problems our stupidity gives rise to.

But our stupidity has also given rise to AI. And this applies not least to AI as a social, cultural, and market phenomenon. Our declining appreciation of human abilities is what lies behind all these imaginative stories that radically overestimate the technology. The more unfamiliar we become with the full spectrum of humanity's unique abilities, due to hyperspecialization and digital isolation, the more impressed we will be by AI's simulated intelligence.

It can be likened to a transposed "Dunning-Kruger effect": the less you know, the greater the overconfidence you tend to have in your own ability.

At this point, the work of the Hungarian-British scientist and philosopher Michael Polanyi provides a valuable corrective. His theories about "tacit knowledge" show that there are aspects of human thinking that cannot be quantified, yet are completely indispensable, not only for science but for all human knowledge as such. He shows how this tacit knowledge is necessary for evaluating new hypotheses and formulating problems effectively in scientific work, and for the intuitive judgments that underwrite artistic and cultural innovation.

What is relevant in this context is that this dimension of human knowledge cannot be broken down into separate constituents. It rests on immediate perception of complex wholes, a kind of direct access to meaning that cannot be achieved by assembling pieces of information step by step. And our self-learning algorithms and digital "neural networks" simply do not work that way, even in theory.

And in the same way, AI technology risks cutting off the branch on which it sits

On this basis, Polanyi argued that a science built on a reductive view of knowledge risks abolishing itself in the long run, since it undermines the free thinking and the unique human abilities on which science rests. The Soviet science model became a cautionary example for him, expressed especially clearly in the promise that free research would soon be abolished and turned into a pragmatic tool for the objectives of the current five-year plan.

And in the same way, AI technology risks cutting off the branch on which it sits. It is not just that AI technology and its almost revivalist marketing are in the process of recreating these very perspectives on human knowledge. On top of this, the spread of AI-generated substitutes for human creativity and reflection looks set to make us increasingly unfamiliar with the genuine article that would otherwise serve as a basis for critical comparison.

For when we have handed over responsibility for most of our scientific and cultural development to AI technology, and when complacent confidence in the technology means that most people can hardly put together a rational argument without "asking ChatGPT", who will then be able to assess whether the general development is even desirable?

Use an AI tool, of course, is Mo Gawdat's solution.

And suddenly, science has become a tool for fulfilling the five-year plans of the technology giants and the "AI revolution".

Read more articles on DN Debatt.
