June 7, 2025

The Oversight Board is concerned about the end of fact-checking – Libération

Meta’s Oversight Board takes a dim view of the softening of moderation on the company’s platforms. On Tuesday, April 22, the body called on the group’s management to assess the possible human rights effects of the removal of its fact-checking program in the United States.

The board, created in 2020, considered that the group’s decision, in January 2025, to end its partnerships with independent verification organizations had been taken « hastily, breaking with normal procedure », according to a report. It was notably accompanied by « no public information about assessments, if any were carried out, of its human rights impact » on the social networks Facebook, Instagram and Threads.

At the start of the year, Meta announced the end of fact-checking in the United States and updated its rules and content-moderation practices. The group decided to remove fewer messages and publications likely to violate its standards, particularly regarding speech targeting minorities. Meta argued that, until then, « too much content was censored when it shouldn’t have been ». Its CEO, Mark Zuckerberg, seeking to return to Donald Trump’s good graces, had argued that these changes were intended to « return to our roots in matters of freedom of expression ».

In response, several organizations warned of the consequences of these changes for minorities, including the LGBTQIA+ community. « It is essential today that Meta identify and address the negative effects that could result for human rights », the board urged, issuing a series of recommendations related to the changes announced in early January.

Among other things, it suggested that Meta « measure the effectiveness of community notes, compared with fact-checking, especially in situations where the spread of false information poses a risk to people’s safety ».

The Californian group has indeed chosen to replace fact-checking with notes written by approved users indicating that a message requires clarification or context, usually with supporting sources attached. The system is a variation of the one used on X (formerly Twitter). Several studies have concluded that such notes have had only a limited effect in preventing the spread of disinformation on that platform.

On Tuesday, the board issued several decisions on content that had been referred to it. Among the most striking cases were two messages posted during the riots in the United Kingdom that followed the murder of three children at the end of July.

The man suspected of being the killer had wrongly been presented as a Muslim asylum seeker, when in fact he was born in Wales to a family from Rwanda. The first of the two messages, generated with artificial intelligence, showed a man chasing an individual identified as a Muslim. The second showed four men, also presented as Muslims, pursuing a young child.

After users flagged the posts, Meta had decided, upon review, to leave them on its platforms. The board ruled the opposite: the messages « presented a risk of imminent harm » to individuals and « should have been withdrawn ». It stressed that the United Kingdom was not designated a risk zone until more than a week after the riots began, and said it was « worried about seeing Meta be too slow to put crisis measures in place ».


