May 8, 2025

OpenAI’s latest artificial intelligence has begun to hallucinate – Cyprus Newspaper

Artificial intelligence is developing rapidly, but it does not always progress in the right direction. OpenAI’s latest models, o3 and o4-mini, were developed to mimic human reasoning more closely. However, recent research shows that these models produce more misleading information, even though they are smarter.

Since artificial intelligence chatbots first emerged, misleading information, or “hallucinations,” has been a constant problem.

With each new model, hallucinations were expected to decrease. However, OpenAI’s latest findings show that they have actually increased.

In a test about public figures, o3 gave false information in 33 percent of its responses, roughly twice the error rate of the previous model, o1. The more compact o4-mini performed even worse, producing misleading information 48 percent of the time.

Does artificial intelligence think too much?

Previous models were very successful at producing fluent text, but o3 and o4-mini were programmed to reason step by step in order to mimic human logic.

Ironically, this new “thinking” technique may be the source of the problem. Artificial intelligence researchers say that the more a model reasons, the more likely it is to stray down the wrong path.

Unlike older systems that stuck to high-confidence answers, these new models can arrive at false and strange conclusions while trying to bridge complex concepts.

Why are more advanced artificial intelligence models less reliable?

OpenAI attributes the increase in hallucinations not to the reasoning process itself, but to the verbosity and boldness of the models’ narratives. While trying to be helpful and comprehensive, the AI sometimes makes guesses and can confuse theory with reality. The results can be extremely convincing, yet completely wrong.

Real-world risks of artificial intelligence hallucinations

Artificial intelligence carries great risks when used in legal, medical, educational, or government services. In a court document or a medical report, misleading information can lead to disaster.

Lawyers have already been sanctioned for submitting fabricated court citations produced by ChatGPT. What about smaller mistakes in a business report, a school assignment, or a government policy document?

The more artificial intelligence is integrated into our lives, the less room there is for error. The paradox is this: the more useful artificial intelligence becomes, the greater the danger posed by the mistakes it makes.


