Opinion | AI does not become ethical through rules, but through involvement
Suppose a health insurer could use artificial intelligence (AI) to set personalized premiums based on health, behavior and living conditions. The result would be accurate risk assessments. Many policyholders could get a lower premium, but people with a lower socio-economic status in particular would probably have to pay more. Should the insurer deploy such a system?
I have presented this dilemma to all kinds of groups during my years of research into ethical AI. The reaction is almost always that such a system is undesirable because it undermines solidarity. But when I then ask what the insurer would actually do if it had this AI at its disposal, it turns out that respondents expect solidarity and ethics to lose out to competitive pressure and profit incentives.
Who (or what) actually determines whether, and how, AI is used? Now that more and more of our choices are being informed (and ultimately made) by AI, that question should be central when we talk about responsible use of the technology. The biggest challenges lie not at the level of the technology itself, but in the interaction between AI and the environment in which it is used: which choices are made there, according to which logic, and in whose interests?
A technical lens
The current toolbox for responsible AI – full of standards frameworks, design methods, legislation, algorithm registers and impact assessments – views the ethical aspects of AI mainly through a technical lens, as something to be solved within the design of the technology. In doing so, we overlook all kinds of factors around AI, both inside the companies that use the technology and outside them, that are far more decisive for the choices these organizations make.
That organizational practice is already struggling with AI. Organizations develop linearly, but the technology grows exponentially. As a result, new ethical challenges arise at a pace that existing legislation and organizational structures are ill-equipped to handle. Think of issues around ingrained biases and discrimination, dependence on AI systems that produce outcomes nobody can fully explain, and accountability for the undesirable consequences of these semi-autonomous systems.
If we want to give ethics a meaningful role in answering these questions, the occasional discussion of a specific ethical issue is not enough. Nor is it sufficient to delegate responsibility to AI specialists: such a ‘human in the loop’ quickly ends up as a human in a bind.
To get a grip on the ethical aspects of AI, organizations must focus on what I call ‘ethical infrastructure’.
That may seem an odd pairing of concepts. Ethics brings to mind noble but abstract principles; infrastructure, concrete bridges or power grids.
Yet the term exposes an aspect of ethics that often remains underexposed: an organization that wants to act ethically needs structures and processes that are systematically built, maintained and monitored. Think of it this way: if we want to supply an entire company with light, the wiring has to be pulled through every layer of the organization.
Responsible deployment of AI will not come from hoping for idealists in the AI teams
To achieve that, organizations must take a different approach and change three distinct things: decision-making (who has influence and is heard), accountability (who supervises and can correct course), and organizational design (the structure).
Take the cooperative enterprise Mondragon from the Spanish Basque Country, which has a turnover in the billions and is one of the largest companies in the country. There, the employees ultimately vote on the strategic direction and the ethical course. It shows that decision-making can also be organized collectively. In the deployment of AI in areas such as work, healthcare, benefits or fraud detection, representatives of trade unions, for example, could likewise be involved, or at least be enabled to review decisions and correct them where necessary. That is only possible if organizations have permanent ethics functions with influence, resources and fixed moments at which they carry out their checks.
Idealists
We currently lean too heavily on the ‘moral ambition’ idea of ethics: ethics as something individual, which in the case of AI we leave to the developers. But if we want different ethical choices to be made, it will not help to burden the AI developers at the health insurer from the start of this piece with frameworks and moral awareness. Responsible deployment of AI does not come from hoping for idealists in the AI teams, but from structuring organizations so that moral action is structurally supported, now and in the long term.
An ethical infrastructure will not change the logic of market forces overnight. It does not automatically prevent profit incentives from outweighing moral considerations. We simply cannot count on people always acting ethically and on companies setting aside their own interests.
But that is precisely where the power of a good ethical infrastructure lies. What it does do is make ethical aspects visible and open to discussion, and give employees, customers, civil-society organizations and regulators the opportunity to raise questions and exert influence on the direction organizations take with AI. That is how we can make the artificial future a humane one as well.