Artificial intelligence spreads disinformation – now it should help expose it
Automated fake news spreads faster than ever on social media, and disinformation has become a means of hybrid warfare. A Salzburg research team is looking for weapons to fight back.
Crickets in chocolate, migrants who eat pets, or a manipulated hurricane: these reports, which caused quite a stir on social platforms, have one thing in common: they are not true. "Information" like this is often invented by social bots, programs that imitate people. In many cases, a critical look still reveals more or less easily that something about the texts, pictures or videos cannot be right.
Digital tutoring
But the artificial intelligence methods that automatically spread false reports are becoming more sophisticated, making fake news increasingly difficult to recognize. A research project led by the Creative Technologies department at the Salzburg University of Applied Sciences is therefore developing tools to uncover disinformation on social media. "Tguard", as the research project is called, is intended to strengthen society's digital resilience and enable citizens, companies and authorities to better prepare for disinformation. After all, according to the Risikobild 2025 risk assessment prepared by the Ministry of Defence, hybrid threats – including disinformation – are among the greatest dangers to society.
In addition to the Department of Creative Technologies, which has built up considerable expertise in recognizing social bots and disinformation in recent years, the project partners include the Austria Institute for European and Security Policy (AIS), the Austrian Institute of Technology (AIT), the Federal Ministry of Defence (BMLV), the software consulting company NEKE-NEKE GmbH and the Austrian Institute for Applied Telecommunications (ÖIAT).
"We want to prevent generative models from creating disinformation campaigns."
Clemens Havas,
FH Salzburg
Educating the population
"Our goal is to prevent generative models from creating disinformation campaigns," says project manager Clemens Havas. Neither Western nor Chinese AI models are completely transparent. While ChatGPT provides insights into usage and security, technical details usually remain unclear – as with many Chinese models. Open models such as Meta's Llama, however, show that extensive transparency is possible.
In a first step, the researchers are collecting data in order to better compare different generative models and how they work. "We want the population to understand which models exist and how they work," says Havas. In the next stage, a secure test environment will be created to recognize fake news and social bots on platforms such as TikTok or YouTube.
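What such a test environment might compute for a single account can be pictured with a very small sketch. The Python snippet below is purely illustrative and is not taken from the Tguard project; the chosen features (posting frequency, account age, share of near-duplicate posts) and the thresholds are assumptions made only to show the idea of scoring bot-like behaviour.

# Purely illustrative sketch (not the Tguard project's actual method):
# a rough bot-likeness score for a single account, based on simple
# behavioural features. Feature choice and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_day: float          # average posting frequency
    account_age_days: int         # how old the account is
    duplicate_post_ratio: float   # share of near-identical posts (0..1)

def bot_likeness(activity: AccountActivity) -> float:
    """Return a rough score between 0 and 1; higher means more bot-like."""
    score = 0.0
    if activity.posts_per_day > 50:        # humans rarely post this often
        score += 0.4
    if activity.account_age_days < 30:     # very young account
        score += 0.3
    score += 0.3 * activity.duplicate_post_ratio  # repetitive content
    return min(score, 1.0)

if __name__ == "__main__":
    suspicious = AccountActivity(posts_per_day=120, account_age_days=10,
                                 duplicate_post_ratio=0.8)
    print(f"bot-likeness: {bot_likeness(suspicious):.2f}")

A real system would of course combine many more signals and learn its weights from data; the point here is only that behavioural patterns can be turned into a score that flags accounts for closer inspection.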
Fake news is everywhere
The project relies on artificial intelligence whose algorithms are trained to expose inconsistencies. The teams combine information from pictures, videos and texts. "This makes it easier to uncover manipulations," explains Havas. Finally, all findings flow into an app for training purposes. It shows how social bots work and what dangers are associated with them. After all, it is about raising awareness that fake news and disinformation can confront us anywhere today, and in very convincing form.
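To make the described combination of modalities concrete, here is a minimal, purely illustrative sketch of a late-fusion classifier in PyTorch. It is not the project's actual model; the encoders are replaced by placeholder embeddings, and the dimensions and fusion strategy are assumptions chosen for demonstration only.

# Purely illustrative sketch (not the project's actual model): a late-fusion
# classifier that combines text, image and video embeddings into a single
# manipulation-likelihood score. Dimensions, fusion strategy and the use of
# placeholder embeddings are assumptions made for demonstration only.
import torch
import torch.nn as nn

class MultimodalFusionDetector(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, video_dim=512, hidden=256):
        super().__init__()
        # Project each modality into a shared space.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        self.video_proj = nn.Linear(video_dim, hidden)
        # Joint classifier over the concatenated projections.
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, text_emb, image_emb, video_emb):
        fused = torch.cat(
            [self.text_proj(text_emb),
             self.image_proj(image_emb),
             self.video_proj(video_emb)],
            dim=-1,
        )
        # After the sigmoid, values closer to 1 indicate likely manipulation.
        return torch.sigmoid(self.classifier(fused))

if __name__ == "__main__":
    model = MultimodalFusionDetector()
    # Placeholder embeddings; in practice they would come from pretrained
    # text, image and video encoders applied to a social media post.
    score = model(torch.randn(1, 768), torch.randn(1, 512), torch.randn(1, 512))
    print("manipulation probability:", score.item())

The sketch only illustrates the general idea of bringing signals from several modalities together in one model, which is what makes cross-checking text against images and video possible in the first place.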