mai 2, 2025
Home » Chat GPT was too pale – now the Open AI backs

Chat GPT was too pale – now the Open AI backs

Chat GPT was too pale – now the Open AI backs

When Open AI recently updated one of the AI ​​models behind Chat GPT, the system underwent a personality change.

« Good question! » it could respond to the most trivial questions and pay tribute to the user to have the wisdom to ask about how to boil eggs.

Many described it As a porky and tuning at the top.

« Is it just me or butter chat GPT too much now? All the time it’s’ good question ‘,’ love the deepness, ‘you really go in depth now’. I get flattered, but I can’t handle the falsehood anymore, » wrote A user on the Reddit forum and was immediately supported.

The change seems to be leaning from the end of March, when Open AI updated GPT-4O, one of several AI models one can choose from in Chat GPT. Among other things, the update would make the system more cooperative, according to The news site Ars Technica.

Now the protests have had an effect. In the days, Open AI’s CEO Sam Altman has admitted that the update of GPT-4o had made « its personality for the lism and annoying ». Shortly thereafter, he announced that the update will be withdrawn.

« Sometime I will share what we have learned from this, it has been interesting, » he writes on X.

Excessive pork from language models Like those in Chat GPT is a well -known problem. Research from AI company Anthropic has previously shown what it may be due to.

When AI developers fine-tune their systems, it is often based on feedback from users, sometimes in the form of giving their thumbs up or down to the answers they receive. Sometimes technical systems are also used to evaluate the answers.

But, wrote Anthropic in a reportboth humans and technical systems tend to give higher ratings to the lism than correct answers. Therefore, AI models that are adjusted after such evaluation can start to pull in the porky direction over time.

Open AI confirms in a Blog posts The fact that the « error » was based on the update relying too much on the users’ thumb up or down at the answers that chat GPT gives.

« In this update, we placed too much emphasis on short-term evaluation and did not consider how the interactions with Chat GPT develop over time. As a result, GPT-4o began to give answers that were excessively supportive but also dishonest. »

Read more:

New Strategy for Artificial Intelligence – Think in peace and quiet

Report: Here is the jobs that can be replaced by AI



View Original Source