Chatbots: every third answer is wrong
Today, AI chatbots almost always provide an answer. However, the majority of systems have serious quality problems. According to a recent study by NewsGuard, they spread false information in around 35 per cent of cases, often drawing on dubious sources or manipulative media formats.
In the international NewsGuard analysis, ten of the most frequently used chatbots, including ChatGPT, Gemini, Perplexity and Claude, were tested on their responses to current news topics. The average error rate is 35 per cent, almost double that of the previous year. Questions on politics, health and international events are particularly affected. Many systems treat dubious sources like reliable news media and pass on disinformation even when it has been deliberately planted to deceive.
Examples of propagated fake news
The study prominently shows how often AI is manipulated with invented scandals and fabricated political quotes. For example, six out of ten systems falsely confirmed that Moldova's parliamentary president had insulted his people as a "herd of sheep". Behind this was a forgery involving an AI-generated audio recording and a fake news site. In another case, a chatbot spread the rumour that the Ukrainian president owned real estate worth 1.2 billion dollars, a fictitious story based on an alleged whistleblower who never existed.
Differences between the models
While the AI assistant Claude is considered the most reliable, with an error rate of just 10 per cent, Gemini has an error rate of around 17 per cent. Chatbots such as Perplexity, Grok, You.com, Mistral and Meta have error rates between 33 and 57 per cent. Today, the systems respond immediately in almost all cases, even when they lack sufficiently validated information, and thereby accept a loss of quality.
Source situation and willingness to provide information
The increased error rate is mainly due to the models' greater willingness to provide information. Where they used to prefer to say nothing at all, they now always deliver an answer, even one drawn from uncertain or manipulative sources. Chatbots run the risk of uncritically amplifying propaganda networks or social media disinformation, especially when it comes to news from regions with few reputable websites or to political campaigns.
Consequences for information security
The automated, convincing dissemination of misinformation by AI chatbots intensifies the risk dynamics in the media landscape. Fact-checkers warn that disinformation is subtly seeping into everyday life and shaping social awareness, often difficult to recognise and therefore all the more consequential. Quality assurance for AI-supported information systems is thus becoming a key challenge for the future of digital information.