Deepfakes Push Women Journalists Off Camera
Reporters Without Borders says deepfakes targeting journalists are spreading across countries and platforms, with women accounting for nearly three quarters of recorded victims.

Women made up 74% of the 100 journalists targeted by deepfakes that Reporters Without Borders documented across 27 countries over a two-year period, a pattern the group says is reshaping how reporters work and how audiences judge what they see.
RSF said 13% of the women in its sample were targeted with pornographic deepfakes, attacks designed, it said, to humiliate, discredit and drive them offline. UNESCO has described the same pressure as technology-facilitated gender-based violence and said women journalists face deepfakes, surveillance, disinformation and harassment aimed at silencing them.
The damage now extends beyond reputational harm. RSF reported that Cristina Caicedo Smit, a press freedom reporter at Voice of America, stopped filming for two weeks after fake videos cloned her voice and image to make her appear to attack Donald Trump and Elon Musk while defending USAID. When she returned, RSF said, her team changed production methods to reduce her exposure online.
That is one measure of the shift: the target is not only a journalist’s credibility but also her willingness to appear in public-facing formats at all.
RSF said political deepfakes remain hard to trace and even harder to punish. It cited the case of Slovak journalist Monika Todová, who filed a defamation complaint after a fake audio clip claimed she was planning electoral fraud. RSF said the investigation later stalled because police could not identify the perpetrator.
The BBC reported this week that new legislation in the United Kingdom has made it a criminal offence to create or request deepfake intimate images of adults without consent. BBC Verify said experts now believe deepfakes are becoming harder than ever to spot. Henry Ajder, a generative-AI specialist quoted by the BBC, said many are now “nearly impossible to detect with untrained human eyes and ears.”
That gap between production and detection is giving platforms a larger role in the outcome. RSF said some journalists had success getting Meta to remove fake content, while others reported little response. It said clips often reappear quickly even after takedown. In Argentina, RSF said accounts on X were still resharing material from a deepfake campaign against journalist Julia Mengolini.
UNESCO’s campaign language is more explicit about the social effect. It said a viral deepfake attacks not just a woman journalist’s image but also “her credibility, safety, and voice.” UNESCO cited a 2022 study finding that 73% of women journalists had faced online threats, with one in four suffering offline attacks as a result.
The way the threat is described varies by institution and by country. In some Western policy debates, the focus is on detection tools, watermarking and legal reform. In newsrooms hit by the attacks, the issue is workload, abuse and whether reporters can still trust their own audience not to believe the next fake clip. In countries already polarized by elections or disinformation campaigns, a deepfake is not treated as a tech glitch. It is used as a political weapon.
RSF said fake endorsements, scam advertisements and fabricated political statements are already mixing commercial fraud with public manipulation. South African broadcaster Leanne Manas, it reported, was hit by repeated deepfakes promoting pharmaceutical products and cryptocurrency schemes. Some viewers blamed her for financial losses. Police later appeared at her workplace after a complaint was filed, according to RSF.
The BBC said the best hope identified by several experts is wider use of watermarking and content provenance, which embeds information showing where a piece of media came from and how it was edited. Those systems, the broadcaster said, depend on broad adoption by technology companies and publishers.
RSF is asking platforms to label AI-generated content clearly and governments to create a specific criminal offence for malicious deepfakes. It is also urging newsrooms to adopt technical traceability standards.
The legal response is moving more slowly than the software. The next pressure point will be whether platforms make synthetic media easier to identify before election cycles and breaking-news events produce another round of fake clips that spread faster than any correction.

