Introduction
Across the world, we are witnessing a striking new trend: national leaders are beginning to blame artificial intelligence (AI) for the wave of misinformation that floods social networks, news feeds, and election campaigns. From our perspective, this shift reveals something deeper than a simple technological misunderstanding; it reflects a global struggle to assign responsibility in an era where truth itself can be manufactured. What was once “fake news” is now being rebranded as “AI-generated deception,” and presidents are increasingly using that label to explain away or dismiss controversy.
AI has become both a convenient culprit and a genuine concern. On one hand, generative AI tools can create convincing deepfakes, clone voices, and produce false news stories in seconds. On the other, political figures are using AI’s rise as a rhetorical shield—to deflect blame, sow doubt, or avoid accountability. As misinformation becomes easier to fabricate, the boundary between genuine errors and deliberate manipulation grows dangerously thin.
The new face of political blame
In recent months, leaders from the United States to Latin America and Europe have started invoking AI whenever controversial videos or quotes appear online. U.S. President Donald Trump, for instance, recently dismissed a viral video showing him allegedly throwing objects from a White House window, insisting that it was “created with artificial intelligence,” even though members of his own team confirmed its authenticity. His statement echoed a new global narrative: if something looks bad, blame AI.
This marks a significant rhetorical evolution. A few years ago, politicians denounced “fake news” and blamed journalists or foreign powers for misinformation. Now, the villain has changed. AI has become a faceless scapegoat, one that cannot argue back, sue, or vote. And because the technology is complex and invisible to most citizens, it serves as a perfect excuse: plausible, technical, and impossible to disprove quickly.
How AI became the perfect scapegoat
From our point of view, AI’s rise as a convenient culprit makes sense. Unlike traditional media manipulation, AI offers an element of uncertainty. Deepfakes, voice clones, and synthetic texts are so realistic that people struggle to distinguish them from reality. This confusion benefits those who wish to deny inconvenient truths. When every video could be fake, even the real ones can be dismissed.
As commentators in The Washington Post have pointed out, this phenomenon is creating a “crisis of authenticity.” If citizens start to believe that everything might be generated, truth itself loses weight. In that sense, blaming AI does not merely distort public discourse; it erodes the very foundation of trust on which democracy depends.
The real problem: AI is both the tool and the excuse
The irony, of course, is that AI really is contributing to the misinformation problem. Generative models can produce fabricated images of protests, speeches that were never given, or realistic voice recordings of presidents endorsing false statements. In 2024 and 2025, several countries, including Mexico and Argentina, experienced AI-generated political propaganda during elections—some of it created by local groups, some by foreign actors.
However, conflating the use of AI with the blame for misinformation is dangerous. Technology amplifies problems that already exist; it doesn’t create them in a vacuum. Disinformation thrives not because of machines but because of human intentions—political polarization, lack of regulation, and the economic incentives of attention-driven media. AI is a mirror reflecting those flaws back at us, sharper and faster than before.
The consequences for truth and accountability
When presidents blame AI for misinformation, they are not just deflecting responsibility—they are reshaping public perception of truth. Each time a leader dismisses an unflattering story as “AI-generated,” it plants a seed of doubt that weakens journalistic credibility. In the long term, this creates a new kind of informational chaos: a world where we no longer trust evidence, videos, or even direct quotes.
Legal scholars Robert Chesney and Danielle Citron have warned of what they call the “liar’s dividend”: the advantage gained by wrongdoers when real evidence can be discredited simply by claiming it is fake. If AI deepfakes are everywhere, then anyone can deny anything. This erosion of authenticity doesn’t just affect elections; it affects justice, public health, and every institution that relies on verified truth.
What governments are doing about it
Around the world, governments are beginning to react. The European Union’s AI Act, which entered into force in 2024, includes transparency rules requiring AI-generated and manipulated content to be labeled, along with disclosure obligations for companies that develop generative models. Latin American nations are discussing similar measures, though progress remains uneven. Some countries are considering digital watermarking systems to help verify the origin of videos and images, while others focus on media literacy programs to teach citizens how to recognize AI-generated material.
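To make the labeling idea concrete, here is a minimal sketch of how a provenance label can work in principle, assuming a publisher who tags content when it is created and a verifier who checks the tag later. It is a deliberately simplified illustration, not any deployed standard: real provenance schemes such as C2PA use public-key certificates rather than a shared secret, and the key and function names below are hypothetical.

```python
import hmac
import hashlib

# Hypothetical publisher key for illustration only. Deployed provenance
# schemes (e.g., C2PA) use public-key signatures and certificate chains,
# not a shared secret like this.
PUBLISHER_KEY = b"example-key-not-for-production"

def label_content(content: bytes) -> str:
    """Produce a provenance tag: an HMAC-SHA256 over the raw content bytes."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_label(content: bytes, tag: str) -> bool:
    """Return True only if the tag matches the content exactly as labeled."""
    expected = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    video = b"...synthetic video bytes..."
    tag = label_content(video)
    print(verify_label(video, tag))          # True: label is intact
    print(verify_label(video + b"x", tag))   # False: content was altered
```

The sketch also exposes the scheme’s limitation: a cryptographic label proves that labeled content has not been altered, but it says nothing about content that was never labeled at all, which is part of why regulation alone falls short.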
Still, regulation alone is not enough. The pace of technological evolution far outstrips legal frameworks. As we’ve seen, every time one form of AI manipulation is detected, another emerges. The solution must go beyond policy; it must include education, awareness, and a renewed commitment to truth from both the media and political leaders themselves.
The danger of moral outsourcing
From our perspective, what makes this trend particularly worrying is how easily it allows power to escape accountability. Blaming AI turns a political or moral failure into a technical glitch. It depersonalizes deception. Instead of asking who lied, we begin to ask which software generated it. That shift changes the tone of public debate—and not for the better.
When leaders point fingers at AI, they subtly normalize the idea that truth is relative, that facts are unstable, and that nobody is ever fully responsible. This is not just a communications strategy; it’s a cultural one. It fosters cynicism. It tells citizens that the truth is too complicated to know, that verification is pointless, and that outrage is futile.
Our perspective on what comes next
In our view, the future will not depend on whether AI gets better at detecting fakes—it will depend on whether societies get better at valuing truth. We must rebuild trust in journalism, strengthen independent fact-checking organizations, and invest in digital education that helps people understand how AI works. Transparency in government communication and clear labeling of AI-generated content are essential steps, but they must be accompanied by cultural honesty.
We also believe that political leaders have a moral duty to use AI responsibly—not just as a tool for campaigning but as an example for the public. Admitting mistakes, clarifying misinformation, and refusing to exploit technological confusion should be part of ethical leadership in the digital era.
Conclusion
The tendency of presidents to blame AI for misinformation reveals a deeper human problem: our difficulty accepting responsibility in an age of infinite digital illusion. Artificial intelligence has undoubtedly changed how falsehoods spread, but it has not changed who creates or sustains them. Behind every misleading image or statement is a human decision—a choice to deceive, to deny, or to distract.
From where we stand, AI is neither villain nor savior. It is a mirror reflecting the integrity of those who wield it. If leaders use it as a shield against truth, trust will continue to erode. But if they use it with transparency and accountability, it could become a force for clarity rather than confusion. The challenge is not to stop AI from lying—it is to stop ourselves from hiding behind it.