Confronting the crisis of trust in news media

Lyne Mneimneh

Jul 31, 2023

In today's rapidly evolving information landscape, ethical journalism plays a critical role in promoting accountability and evidence-based debate. This article explores how journalists, fact-checkers, and technologists are working together to enhance public trust in factually accurate news media, countering disinformation and the rise of post-truth politics.

Promoting professional standards and norms

Ethical journalism upholds core principles such as independence, impartiality, accountability, minimising harm, and promoting truth and accuracy. Journalists must ensure the information they provide is thoroughly fact-checked, reliable, and sourced from verified channels. By adhering to these principles, journalists combat misinformation and build public trust. This is crucial in a context where social media companies – whose ad-driven business models reward attention-grabbing content over factually accurate content – act as gatekeepers of the information landscape.

To put these principles into practice, individual journalists and media organisations often pledge adherence to codes of conduct, and submit to audits against them, such as the code of principles developed by the International Fact-Checking Network or the Arab Fact Checkers Network Code of Principles.

Another complementary approach involves giving outlets “trustworthiness” scores. The Journalism Trust Initiative is a good example. Launched in 2019 by Reporters Without Borders, the JTI invites media outlets to “voluntarily self-assess their editorial processes, to publish the results and to get independently audited.” Its online assessment tool is based on criteria developed by more than 130 organisations from the media industry, academia, and the tech sector, and ultimately sets a benchmark for journalistic credibility and quality.

Downstream, citizens and civil society organisations play a growing role in ensuring that journalists, media outlets and online platforms promote reliable and trustworthy content.

This role is recognised in the EU’s Digital Services Act of 2022, the bloc’s flagship law tackling disinformation and other harmful online content. Under the DSA, large platforms must create easy-to-access “notice and action” tools for users to flag potentially illegal content such as disinformation. Qualified researchers, fact-checkers, and NGOs can also be designated as “trusted flaggers”, whose notices platforms must process with priority. The DSA’s companion Code of Practice on Disinformation, meanwhile, calls on signatory platforms to offer users tools that improve media literacy and critical thinking.

New technologies reinforcing ethical journalism

In recent months, Google has pitched its generative AI news writing tool, internally codenamed “Genesis,” to major US publishers, including The New York Times, The Washington Post and News Corp. Could such tools also support media literacy and pre- and post-publication fact-checking?

AI holds significant potential in this regard, thanks to its ability to sift efficiently through vast amounts of information, analyse data, detect patterns, and cross-reference sources. This can help journalists and citizens verify facts and identify misinformation, freeing up valuable time for in-depth reporting and analysis.

Through its language analysis tool, for example, Dalil can help identify and analyse deceptive or manipulative communication techniques used in various forms of media, such as political speeches, advertisements, news articles, or social media posts. The tool’s primary objective is to help individuals become more critical consumers of information and recognise when they are being exposed to potentially misleading or biased messages. Still in beta, it operates by detecting several key tactics commonly employed in misleading communications, such as loaded language, name-calling, exaggeration, minimisation and sowing doubt.
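Dalil’s internals are not public, but as a purely illustrative sketch, detection of these tactics could be prototyped with an off-the-shelf zero-shot classifier. The model choice, label wording and score threshold below are assumptions made for illustration, not Dalil’s actual method.

```python
# Illustrative sketch only: Dalil's actual implementation is not public.
# Prototypes manipulation-tactic detection with an off-the-shelf
# zero-shot classifier; model choice and threshold are assumptions.
from transformers import pipeline

# Candidate labels mirror the tactics named in the article.
TACTICS = [
    "loaded language",
    "name-calling",
    "exaggeration",
    "minimisation",
    "sowing doubt",
]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def detect_tactics(text: str, threshold: float = 0.5) -> list[tuple[str, float]]:
    """Return the tactics scored above `threshold` for a piece of text."""
    result = classifier(text, candidate_labels=TACTICS, multi_label=True)
    return [
        (label, round(score, 2))
        for label, score in zip(result["labels"], result["scores"])
        if score >= threshold
    ]

print(detect_tactics("Only a fool would trust these so-called 'experts'."))
```

Scoring labels independently (multi_label=True) matters here, because a single post can employ several tactics at once.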

There is undoubtedly a need to develop such tools further. The Code of Practice on Disinformation, for instance, calls on platforms to offer features to inform users that content they interact with has been rated by an independent fact-checker (Measure 21.1) and to lead users to authoritative sources (Measure 22.7). AI tools could also assist researchers in identifying persistent sources of disinformation in advertising, using advanced algorithms to scrape and analyse the searchable ad-repositories that will be created pursuant to Articles 30 and 39 of the DSA.
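To make the ad-repository idea concrete, here is a minimal, hypothetical sketch. The endpoint URL, response schema and persistence threshold are all invented for illustration, since each platform will define its own Article 39 repository and API.

```python
# HYPOTHETICAL sketch: the endpoint, response schema and threshold below
# are invented; real DSA Article 39 repositories each define their own APIs.
from collections import Counter

import requests

AD_REPO_URL = "https://platform.example/dsa/ad-archive"  # placeholder URL

def fetch_ads(query: str, pages: int = 5) -> list[dict]:
    """Page through a (hypothetical) searchable ad repository."""
    ads = []
    for page in range(pages):
        resp = requests.get(AD_REPO_URL, params={"q": query, "page": page})
        resp.raise_for_status()
        ads.extend(resp.json()["ads"])  # assumed response schema
    return ads

# Flag advertisers that repeatedly run ads matching a known false narrative.
advertiser_counts = Counter(ad["advertiser"] for ad in fetch_ads("miracle cure"))
for advertiser, count in advertiser_counts.most_common(10):
    if count >= 3:  # arbitrary persistence threshold for illustration
        print(f"{advertiser}: {count} matching ads")
```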

The key role of human expertise

While new technologies may bring benefits, they also carry inherent risks. Differentiating truth from misinformation is difficult even for humans, and breaking the complexities of disinformation down into components that can be taught to language models introduces further errors. AI content moderation systems are thus prone to mistakes, blind as they are to the social relations within which deceptive content is enmeshed.

Creating quality training datasets for AI content moderation systems is another challenge. Systems trained on datasets too narrow to include “slang or nonstandard use of certain expressions” can end up censoring legitimate speech. This problem is magnified for Arabic-language content, for which online data is lacking in both quality and quantity. Additionally, AI content moderation systems can replicate and amplify existing biases if they are trained on datasets that inaccurately flag content. Meta’s content moderation systems, for instance, are notorious for falsely flagging content promoting Palestinian human rights.

The pairing of human expertise with AI is therefore the only way to accurately assess claims and mitigate freedom of expression risks. Human input is required to build good-quality training datasets from diverse sources; to manually annotate and audit some of the data; and to evaluate the AI model’s performance. With such risk mitigation measures in place, AI content monitoring and analysis tools can help bed in standards of ethical journalism, better equipping journalists and citizens alike to discern truth from deception.
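As a rough illustration of two of these safeguards, the sketch below audits agreement between two human annotators before training, then scores a model against adjudicated human labels. All labels and figures are hypothetical.

```python
# Illustrative sketch of two human-in-the-loop checks described above:
# (1) auditing annotator agreement, (2) evaluating the model on human labels.
from sklearn.metrics import classification_report, cohen_kappa_score

# Hypothetical binary labels: 1 = misleading, 0 = not misleading.
annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]
annotator_b = [1, 0, 1, 0, 0, 0, 1, 1]

# Low agreement is a signal to revise labelling guidelines before training.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Inter-annotator agreement (Cohen's kappa): {kappa:.2f}")

# Adjudicated human labels serve as the gold standard for model evaluation.
gold = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 0, 1, 1, 0]
print(classification_report(gold, predictions,
                            target_names=["not misleading", "misleading"]))
```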

Interested in partnering with us?

Drop us a line at hello@dalil.io

Dalil is designed and developed by Siren.

© 2024 Siren Analytics. All Rights Reserved.
