Introduction
We’re seeing a striking new trend take shape worldwide: national leaders are beginning to blame artificial intelligence for the wave of misinformation that floods social networks, news feeds, and election campaigns. What this speaks to, from our perspective, is more profound than a simple technological misunderstanding; it’s a global struggle to allocate responsibility in a world where truth can be fabricated. What was once “fake news” has been rebranded as “AI-generated deception,” and presidents are increasingly using that label to explain away or dismiss controversy.
AI has turned into both a convenient culprit and a genuine concern. On one hand, generative AI tools can create convincing deepfakes, clone voices, and produce fake news stories in seconds. On the other, political figures are using AI’s recent rise as a rhetorical shield: a way to deflect blame, sow doubt, or avoid accountability altogether. As misinformation becomes easier to fabricate, the line between genuine error and deliberate manipulation becomes dangerously thin.
The new face of political blame
In recent months, leaders from the United States to Latin America to Europe have begun invoking the AI bogeyman whenever controversial videos or quotes surface online. Recently, for instance, former U.S. president Donald Trump appeared in a viral video purportedly throwing objects out of a White House window, a recording he falsely insisted was “created with artificial intelligence,” despite his own team acknowledging it as genuine. His denial fed a now-familiar narrative: if something looks bad, blame AI.
This is a remarkable evolution in rhetoric. Until just a few years ago, politicians ranted about “fake news” and blamed journalists or foreign powers for lying to the public. Now, the villain has changed. AI has become a faceless scapegoat, one that cannot argue back, sue, or vote. And because the technology is complex and invisible to most citizens, it serves as a perfect excuse: plausible, technical, and impossible to disprove quickly.
How AI became the perfect scapegoat
From our viewpoint, AI’s rise as the convenient culprit makes perfect sense. Unlike older forms of media manipulation, AI brings an element of uncertainty: deepfakes, voice clones, and synthetic texts are so hyperrealistic that people genuinely struggle to distinguish them from reality. And herein lies the appeal for anyone wishing to deny inconvenient truths. When any video might be fake, even the real ones can be dismissed.
As experts at The Washington Post have pointed out, the result is a “crisis of authenticity.” When citizens begin to think that anything can be fabricated, the truth itself carries less weight. In that sense, blaming AI does not just distort public discourse; it erodes the very trust democracy is built upon.
The real problem: AI is both the tool and the excuse
But the irony, of course, is that AI really is contributing to the misinformation problem. Generative models can produce fabricated images of protests, speeches that were never given, or realistic voice recordings of presidents endorsing false statements. In 2024 and 2025, several countries, including Mexico and Argentina, experienced AI-generated political propaganda during elections, some of it created by local groups, some by foreign actors.
Blaming AI alone for misinformation, however, is dangerous. Technology amplifies problems that already exist; it does not create them in a vacuum. Disinformation thrives not because of machines but because of human intent, weak regulation, and the economic incentives of attention-driven media. AI is simply a mirror reflecting those flaws back at us, sharper and faster than ever before.

Consequences for truth and accountability
When presidents point fingers at AI as a source of misinformation, they do more than pass the buck; they reshape public perceptions of the truth. With every unflattering story a leader dismisses as “AI-generated,” another seed of doubt is planted in the fragile soil of journalistic credibility. In the long run, it brings about a whole new kind of informational chaos, one in which we can no longer be sure of evidence, videos, or even direct quotes.
Researchers have warned of what they term the “liar’s dividend”: the edge that bad actors gain when actual evidence can be undermined merely by calling it fake. Where AI deepfakes are ubiquitous, anyone can deny anything. This erosion of authenticity doesn’t just affect elections; it affects justice, public health, and every institution predicated on the concept of verified truth.
What governments are doing about it
Around the world, governments are starting to respond. The European Union is implementing its AI Act, which includes rules to label synthetic content and requires companies developing generative models to be transparent about their activities. Latin American countries are discussing similar steps, though progress is very uneven. Some are considering digital watermarking systems that would help verify the origin of videos and images; others are emphasizing media literacy programs that would help citizens learn to identify AI-generated material.
Yet regulation alone is not the answer. The pace of technological evolution outstrips legal frameworks: as we have seen, every time one form of AI manipulation is detected, another emerges. The response needs to go beyond policy into education, awareness, and a fresh commitment to truth by media and political leaders themselves.
The danger of moral outsourcing
The underlying reason this trend is particularly concerning to us is that it allows power to escape accountability with surprising ease: blame shifted toward AI transforms a political or moral failure into a technical glitch. Deceit gets depersonalized. Instead of asking who lied, we start to ask which software generated it. That shift changes the tenor of public debate and not for the better.
When leaders blame AI, they subtly normalize the notion that truth is relative, facts are unstable, and no one is ever fully responsible. It’s not only a communications strategy; it’s a cultural one. It breeds cynicism. It says to citizens that the truth is too complex to know, that the act of verification is hopeless, and outrage is futile.
Our perspective on what comes next
As we see it, the future has little to do with whether AI improves at catching fakes; it has everything to do with whether societies improve at valuing truth. We have to rebuild trust in journalism, fortify independent fact-checking organizations, and invest in digital education that helps people understand how AI works. Transparency about government communication and proper labeling of AI-generated content will be necessary first steps, but they need to be matched by cultural honesty.
We also believe that political leaders have a moral obligation to use AI responsibly: not merely as an effective campaign tool, but as an example to the public. Owning up to mistakes, debunking misinformation, and resisting the exploitation of technological confusion will all fall under the banner of ethical leadership in the digital era.
Conclusion
When presidents blame AI for misinformation, it reflects a deeper problem: our avoidance of responsibility in an age of infinite digital illusion. Artificial intelligence has doubtless altered the way falsehoods spread, but it has not changed who makes them or keeps them alive. Behind every devious image or statement lies a human decision: a choice to deceive, deny, or distract.
Where we stand, AI is neither villain nor savior; it is a mirror reflecting the integrity of those who wield it. If leaders use it to shield themselves from the truth, trust will continue to erode. But if they use it with transparency and accountability, it could be a force for clarity rather than confusion.