The Media Foundation for West Africa (MFWA) has called attention to both the growing risks and the underutilisation of Artificial Intelligence (AI) tools in journalism across the West African subregion, citing a critical lack of regulatory safeguards as a key barrier. The concern came to the fore during a high-level webinar held on April 30, 2025, as part of MFWA’s commemoration of this year’s World Press Freedom Day.
The virtual conference, themed “AI, Press Freedom and the Future of Journalism,” brought together 150 participants, including journalists, media rights advocates, students, and civil society actors. The session was moderated by Dr. Daniel Kwame Ampofo Adjei, Institutional Development & MEL Manager at the MFWA, while the keynote address was delivered by Dora Boamah Mawutor, Director for Freedom of Expression, Tech & Digital Rights at the Foundation.
The event featured insightful presentations from Dr. Theodora Dame Adjin-Tettey, Senior Lecturer at the Durban University of Technology; Mr. Edetaen Ojo, Executive Director of Media Rights Agenda (Nigeria); and Mr. Kwaku Krobea Asante, Manager of MFWA’s Independent Journalism Project.
Speaking on “AI in Newsrooms – A Tool or a Threat,” Dr. Adjin-Tettey emphasized the dual nature of AI in journalism. While AI tools commonly used in newsrooms, such as ChatGPT, Grammarly, Otter.ai, and Canva AI, offer immense potential to improve efficiency, their unregulated nature poses ethical and professional risks.
“AI algorithms are only as good as the data fed into them,” she warned. “Without African-centered AI development and regulatory mechanisms, journalists are rightfully cautious. The technology can be corrupted and weaponised, especially in the realm of disinformation.”
She highlighted the urgent need for the development of AI systems tailored to African realities, noting that current tools often lack the contextual understanding necessary to support robust journalism on the continent.
In his address, Mr. Edetaen Ojo echoed the concerns about AI’s unregulated deployment, warning that its power can be, and is being, abused by authoritarian governments. “AI-powered surveillance systems are becoming increasingly intrusive, with features like facial recognition and biometric tracking being used to monitor journalists,” he said.
He raised alarms over the use of AI in censorship, explaining how machine learning algorithms are often trained to suppress or filter content under the pretext of combating misinformation or extremism. “These tools are not neutral; they can be, and have been, trained to silence dissent,” Ojo asserted. He further noted that many AI-driven content moderation systems operate without transparency or accountability, making them susceptible to manipulation by repressive regimes.
Kwaku Krobea Asante brought to light another critical dimension of the AI threat: disinformation through synthetic media. In his presentation, “AI, Disinformation and Trust in the Media,” he explained how sophisticated AI tools like deepfake software are increasingly being exploited to spread false information and manipulate public opinion.
“These technologies can fabricate highly convincing videos or audio clips of public figures saying or doing things they never did. In politically tense environments, such tools are dangerous weapons that erode public trust, confuse citizens, and undermine democracy,” he said. He added that the fast-paced evolution of AI tools is outpacing current governance frameworks, underscoring the urgency for stakeholders to develop robust policy and technical responses.
Participants at the webinar reached a consensus on the urgent need for comprehensive and responsible AI governance. The following recommendations were made to safeguard press freedom, enhance ethical journalism, and ensure AI tools serve the public interest:
- Regulatory efforts must involve governments, media institutions, civil society, academia, and law enforcement to ensure inclusive and balanced AI governance.
- Surveillance activities should be bound by law and conducted only with court-issued warrants to prevent abuse.
- When surveillance is carried out, there must be legal requirements to disclose findings, especially when they affect public interest.
- AI regulation must include robust protections for whistleblowers who expose unethical or unlawful uses of AI technologies.
- AI tools must be designed with transparency in mind, including the disclosure of algorithmic parameters and decision-making processes.
- Independent civil society organisations should be granted oversight responsibilities to monitor how AI is being used, especially by state actors.
- Digital literacy programmes should be introduced to help citizens understand AI’s impact and protect their rights in digital spaces.
Missed the discussion? The full webinar is available to watch on YouTube.