Can You Trust What You See Online? AI Fakes Put Users to the Test

Fact-checkers urge caution as AI disinformation spreads across social media and global events.

Artificial intelligence is rapidly reshaping the information landscape, with increasingly realistic content making it difficult for users to tell what is real and what is not.

On International Fact-Checking Day, researchers and media experts are warning that AI-generated material is no longer confined to niche corners of the internet, but is now embedded in mainstream news cycles, from geopolitical developments to election coverage.

New research published in PNAS Nexus illustrates the extent of the challenge. In a survey of 27,000 participants across 27 European Union countries, respondents were asked to assess the credibility of a series of headlines produced by both humans and AI.

The results showed that nearly half of the AI-generated headlines were perceived as real, slightly exceeding the share for human-written ones. Participants were also more inclined to trust and share AI-generated content when it referred to genuine events, highlighting how easily misleading material can gain traction.

At the same time, respondents indicated they were less likely to share stories they knew were false, suggesting that awareness remains a key factor in limiting the spread of disinformation.

How AI content blends into real news

AI-generated material has evolved significantly in recent years. Early examples often contained visible flaws, such as distorted objects or unnatural movements, but these are now far less common.

Instead, misleading content tends to blend seamlessly with authentic material. Experts say the most reliable indicators are often subtle, including inconsistencies in lighting, background details or continuity within a video. Subjects may also appear unusually polished or lack natural texture, particularly in close-up images.

Checking authenticity through simple tools

Verification remains essential when encountering suspicious content. Reverse image searches can help identify where an image first appeared and whether it has been reused in a different context.
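Reverse image search engines typically match pictures by compact "perceptual" fingerprints rather than exact bytes, so a resized or lightly edited copy still matches the original. As a minimal sketch of the idea, here is an average hash computed over a small grayscale grid; the 8x8 grid, variable names, and sample values are illustrative, not any particular engine's method:

```python
def average_hash(pixels):
    """Compute a simple average hash from an 8x8 grayscale grid.

    pixels: list of 64 brightness values (0-255). Each bit is 1 when
    the pixel is brighter than the grid's mean, giving a 64-bit
    fingerprint that survives resizing and mild edits.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return bin(a ^ b).count("1")

# Illustrative data: two grids that differ in a single pixel.
original = [10] * 32 + [200] * 32
edited = list(original)
edited[0] = 250  # simulate a small edit to one corner pixel
```

Here `hamming_distance(average_hash(original), average_hash(edited))` is 1, so the edited copy would still be flagged as a likely reuse of the original, which is how such searches surface an image's earlier appearances in other contexts.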

Other methods include examining metadata, identifying digital watermarks and using specialised platforms designed to trace manipulated content. Some AI systems embed invisible markers, though these are not always accessible to users.
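Metadata checks work because camera photos usually carry an embedded Exif block recording capture details, and some AI pipelines strip or never write it. As an illustration only (a real check should use a dedicated Exif library, since metadata can also be legitimately removed or forged), the following sketch scans a JPEG's segment markers for an Exif APP1 segment; the sample byte strings are hypothetical minimal files:

```python
import struct

def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Walk JPEG segment markers and report whether an APP1 Exif
    segment is present."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # must start with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:      # not a valid marker: stop
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:             # EOI: end of image
            break
        # Segment length covers the 2 length bytes plus the payload.
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length
    return False

# Hypothetical minimal JPEGs: one with an Exif APP1 header, one stripped.
with_exif = b"\xff\xd8" + b"\xff\xe1" + struct.pack(">H", 8) + b"Exif\x00\x00"
stripped = b"\xff\xd8\xff\xd9"
```

The absence of Exif data proves nothing on its own (many platforms strip it on upload), which is why experts treat metadata as one signal among several rather than a verdict.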

Turning to trusted sources

Fact-checking organisations continue to play a critical role in identifying false or misleading content. European networks and research initiatives regularly publish analyses, debunks and databases that track disinformation trends.

Consulting these sources can provide quick confirmation when a piece of content has already been investigated.

Limits of AI detection tools

A growing number of tools claim to detect AI-generated text, images or video, but their accuracy remains inconsistent. Experts caution that these tools should be used as part of a broader verification process rather than relied upon alone.

Similarly, the absence of a watermark or visible indicator does not guarantee that content is genuine.

Taking time reduces the spread

One of the most effective defences against disinformation is simply slowing down. Misleading content often spreads quickly because users react emotionally and share information without verification.

Pausing to assess the source, check for corroborating reports and review how others are responding can help prevent the spread of false information.

As AI technology continues to advance, experts say maintaining a cautious and informed approach will be essential for navigating an increasingly complex digital environment.

Source: Euronews
