Information is power: AI hasn't killed the truth; it has just made the truth irrelevant.

Writing: The Rust of Uncle Bu Dong’s Knowledge

On Sunday, April 5, 2026, U.S. President Trump posted on his social platform Truth Social what sounded like a final ultimatum.

“Tuesday, 8 p.m. Eastern Time!”

If Iran does not reopen the Strait of Hormuz, the U.S. will bomb its power plants and bridges. Two days later, he doubled down: Iran’s “entire civilization will die tonight.” A few hours afterward, he suddenly announced a two-week ceasefire.

Who responds fastest to such posts?

It’s the Iranian embassy in Zimbabwe. They wrote on X: “8 p.m. is not ideal. Could we change it to 1-2 p.m. or 1-2 a.m.? Thank you for your attention to this important matter.”

That closing line, “Thank you for your attention to this important matter,” mimics Trump’s signature style.

A country under bombardment responds to a war threat with a joke.

This is not an isolated case. When responding to Trump’s remark “bomb Iran back to the Stone Age,” the Iranian embassy in Thailand posted an AI-generated image: Trump sitting in a cave wearing animal skins. The South African embassy posted a set of crossed-out U.S. military generals’ photos, with the caption: “The regime change has been successfully completed.” The crossed-out figures are not Iranian generals but U.S. military officials recently dismissed by Defense Secretary Hegseth.

Meanwhile, a team called “Explosive Media” is mass-producing AI Lego animations. Trump, the U.S. military, and the Iranian military all become yellow plastic figures. One rap diss video calls Trump a “loser” and Israeli Prime Minister Netanyahu a “puppet,” racking up millions of views across global social media.

The White House isn’t idle either. Its official X account posted an AI video depicting the Iranian regime as a bowling pin, knocked down by the U.S. military. A senior White House official told Politico:

“Bro, we’re just nonstop creating viral memes.”

Another official added that the White House’s Iran war videos have already gained “over 3 billion exposures” online.

3 billion.

Pause for a second and think. This is a real war. Real bombs, real civilian casualties, real oil supply disruptions. But in the public information space, it looks like a meme contest. A superpower and a bombed country attacking each other on the same platform, in the same language. That language is memes.

This is the information landscape of 2026. Information is no longer a description of reality. It is itself a battlefield.

The essence of information disparity is not about information at all.

This is a cognitive turning point of an era: what was never even an issue before has become a fundamental problem.

The true goal of information flow is not to deceive you.

Most people’s understanding of “fake news” still stays within a naive framework: someone says falsehoods, others believe them, causing harm. So the countermeasures are intuitive: fact-checking, exposing lies, improving media literacy.

But that framework is outdated.

The ultimate aim of modern information warfare has never been to make you believe a specific lie. Its goal is to make the distinction between “truth” and “falsehood” itself become a dimension you no longer care about.

Hannah Arendt, one of the most important political philosophers of the 20th century, author of “The Origins of Totalitarianism,” wrote in 1951:

“The ideal subjects of totalitarian rule are not steadfast Nazis or committed communists, but those whose sense of the difference between fact and fiction, truth and falsehood, has already disappeared.”

She was describing the propaganda machinery of 20th-century totalitarian regimes. Seventy-five years later, AI is turning that prophecy into the everyday experience of every smartphone user on the planet, with unimaginable efficiency.

George Orwell, even earlier, in 1943, wrote in “Looking Back on the Spanish War”:

“The concept of objective truth itself is vanishing from the world. Lies will be recorded in history.”

He was referring to Nazi Germany. But today, the driver of this process no longer needs an authoritarian state. An algorithm is enough.

Garry Kasparov, the world chess champion and later a prominent political activist, put it even more precisely:

“The purpose of modern propaganda is not just to mislead you or push an agenda. It is to exhaust your critical thinking and eliminate the very concept of truth.”

Exhaust. Not defeat. Exhaust. That word is chosen carefully.

AI has already technically undermined the foundation of “distinguishing truth from falsehood.”

What I just described might sound like philosophical speculation. But let’s look at the facts.

Signals in the AI era have already reversed.

In the past, a photo without digital traces meant it was original and unaltered. In 2026, the absence of digital traces may indicate precisely that it was never captured by a camera at all, because it was AI-generated from the start. The entire system of authenticity signals has flipped, like a photographic negative: black and white inverted.

More deadly are the “hybrids.”

Recently, WIRED magazine published an in-depth report titled “The Internet Has Cracked Everyone’s ‘Fake Detection’ Radar,” citing Dutch investigative journalism trainer Henk van Ess. He pointed out that the hardest images to identify are not fully AI-generated but those that are 95% real and 5% tampered.

A photo with genuine metadata, real sensor noise, real physical light and shadow. The tampering exists only in a tiny detail: an added armband on a uniform, an extra weapon in the hand, a subtly replaced face. Pixel-level detection tools will still judge it as real because it is indeed real in most dimensions. The fake part might only be one square inch.

“All previous verification methods were based on the premise: images are records of something,” van Ess said. “Generative media fundamentally shatters this premise.”

Henry Ajder, a deepfake researcher who has advised companies such as Adobe, went further. AI, he said, is no longer something you can spot at a glance; it is embedded in our everyday content. The era of six-fingered hands and garbled text is over. New AI content looks entirely credible.

What about detection tools? In Ajder’s words: “Detection tools should never be the sole basis for judgment.” They are not truth engines. Even the best tools fail often, and most just output an unexplained “confidence score.” 85% real? 62% fake? These numbers tell you nothing.


Meanwhile, the gates of verification are closing.

On April 4, 2026, Planet Labs announced an indefinite ban on satellite imagery of Iran and the Middle East conflict zones. Planet Labs is one of the most relied-upon commercial satellite image providers for conflict reporting worldwide. The ban was implemented at the request of the U.S. government, retroactive to March 9.

U.S. Secretary of Defense Hegseth responded frankly: “Open-source intelligence is not where you confirm the truth.”

In other words: You don’t need to see for yourself; we will tell you what happened.

According to the 2026 AI Traffic and Cyber Threat Baseline Report, automated traffic now accounts for 51% of all internet traffic and is growing eight times faster than human traffic. These bots are not just distributing content—they prioritize low-quality viral material. Synthetic content is racing ahead while verification is still tying its shoes.

On one side, the fake content engine is running at full speed. On the other, the gates of verification are closing. This is not a fair race. One side is accelerating; the other is dismantling the engine.

The ghost of McLuhan: Why “atmosphere” is more powerful than “facts”

By now, many might think the problem is already severe enough. But it’s not. The collapse of technical verification is only the tip of the iceberg. The larger, more difficult part lies beneath the surface.

Uncle Bu Dong has always admired the digital prophet Marshall McLuhan, who in 1964 in “Understanding Media” made a statement that changed communication theory forever: “The medium is the message.”

Most interpret this as “the channel matters.” That’s a serious misreading and underestimation.

McLuhan’s true meaning is: The medium, before you consciously evaluate content, has already reshaped you on a deeper perceptual level.

TV doesn’t need to broadcast a specific program to change you. The form of TV itself changes how humans understand the world. The printing press didn’t need to print a particular book to create nationalism. It made mass dissemination of a common language possible, which in itself fostered national identity.

Today: AI-generated content doesn’t need to deceive you once. It only needs to exist in large quantities, and it shifts your default mental state from “what I see is probably real” to “everything I see might be fake.”

This cognitive shift is itself an effect of information weapons. It affects everyone—regardless of IQ, education, or stance.

Recently, I read an article about the fusion of media and machines, which takes McLuhan a step further, proposing a new formula:

In the era of large language models, the medium is both information and machine.

Natural language now serves as both the interface for human-machine interaction and the underlying infrastructure. Writing is building; building is writing. Code and culture emerge from the same source. This fusion of medium and machine forms a Möbius strip: media creates machines, and machines in turn create media, in an endless loop.

On the battlefield of Iran meme warfare, this equation can be pushed even further.

Why are Iran’s AI Lego videos effective? Not because of the content. The content is crude satirical propaganda with almost zero informational value. Their effectiveness lies in the medium itself being the attack.

AI-generated, platform-native, optimized for sharing. The White House’s war memes follow the same logic. “The president posting memes” is itself an act of information.

It conveys not specific policies but a meta-message: rules no longer exist, seriousness is gone, the “order” you once believed in no longer exists.

Gregory Daddis, a history professor at Texas A&M University who served over 20 years in the U.S. military, explained clearly in an interview: Trump’s social media style almost exclusively serves his domestic political base: “That audience that thinks Kid Rock and Secretary Kennedy working out in a sauna is cool. This is not a serious diplomatic approach.”

But Iran has clearly learned. Phillip Smyth, an expert on Iran proxy organizations, pointed out that AI tools help Iran and China bridge cultural gaps, enabling them to produce propaganda that resonates with Western audiences—even if the creators themselves are unfamiliar with Western culture.

A propaganda video released by China’s CCTV depicts Americans as bald eagles and Iranians as Persian cats. It has been translated, reposted, and covered by mainstream media worldwide.

Media is information, media is machine. And in the context of war, we can add one more: media is weapon.


What are ordinary people receiving from these weaponized media?

A recent Financial Times article offers a counterintuitive view: social media’s influence on opinions is greatly overestimated. “Not everyone who reads the Bible becomes a Christian, not everyone who reads The Guardian becomes left-wing.” People are far more skeptical of random information on social media than of traditional authoritative media.

This seems to defend social media on the surface.

But it actually reveals a deeper truth. What people get on social media is never “information,” but an “atmosphere” (vibes). You don’t need to believe a specific false message. You only need to absorb a certain atmosphere amid hundreds of messages—angry, anxious, nihilistic.

This precisely validates McLuhan’s core insight, in 2026: media exerts its influence not at the level of “content” but at the level of “perceptual structure.” You think you are reading, judging, reasoning. But the medium has already changed how you perceive the world before your judgment even begins. As McLuhan punned, “the medium is the massage.”

You think you are choosing what to believe. But in fact, you are only absorbing atmosphere.

Atmosphere is domination.

What does Kasparov’s “exhaustion” mean in this grand chess game?

Imagine your daily information consumption. Every day you open your phone and see a shocking image. You’re unsure whether it’s real. You want to verify it, but how? Reverse image searches on Google Lens, Yandex, and TinEye return three different results. “No match” no longer means “original”; it may simply mean the image was never captured by a camera. A detection tool gives a 72% confidence score. What does 72% mean? You close the page and keep scrolling.

The next day, another image. The third day, another.

A month later, you won’t become a “more cautious person.” You will become someone who abandons judgment. Not out of stupidity or laziness, but because cognitive resources are limited. This war is designed to drain your entire cognitive bandwidth. Every piece of information demanding you judge “true or false” taxes your attention. The tax rate rises, but your available attention remains the same.

Healthy skepticism sounds like “Let me verify.” But when verification costs skyrocket, skepticism slides into cynicism—“Nothing can be trusted anyway.”

From “everything might be false” to “everything is false” is only one step—and you won’t realize you’ve already taken it.

Arendt’s prophecy completes its loop here. When most of society reaches the state of “nothing can be trusted,” they do not become critical thinkers. They become “ideal subjects.” Not persuaded subjects, but weary subjects. Not believers in lies, but indifferent to truth.

In the comments of that FT article, someone wrote: “Hate, division, anger generate clicks. Zuckerberg, Musk—none of them made billions by encouraging mutual respect.”

Algorithms optimize for transmission, not truth. When emotion fuels the algorithm, atmosphere becomes the true ruler of this era.

Truth has not been defeated. It is just submerged beneath a force it can never compete with: zero cost, infinite speed, and emotion-driven.


Finding anchors in the flood of fake atmosphere

If, having read this far, your reaction is “So what? I can’t fight the algorithm,” that is precisely the atmosphere already working on you.

This is not a universal cure. There is no universal cure. But there are structural strategies that can at least make you less easily swept away.

First: stop asking “Is it true?” Start asking “Where does it come from?”

In WIRED’s report, verification expert van Ess offers a five-step method, focusing on shifting from “verifying content” to “tracing sources.” He calls the crucial step “finding patient zero,” tracing back to where the information first appeared.

Authentic material usually has a line connecting to a specific person: an eyewitness, a photographer, a locatable coordinate. Synthetic content, by contrast, is typically anonymous, polished, born to be shared. It’s like an orphan—no parents, no origin, only a perfect appearance.

Van Ess also offers a simple but effective rule of thumb: “If a photo looks too much like a movie, with perfect lighting and symmetrical composition, that’s the first warning. Real disasters are rarely symmetrical.”
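One small, mechanical piece of van Ess-style provenance checking can actually be sketched in code: inspecting whether an image file carries any embedded metadata at all. The sketch below is illustrative, not from the article; the helper names and the chunk set are my own. It builds a minimal PNG in memory with Python's standard library and lists its chunks. The complete absence of textual or EXIF chunks is exactly the “no parents, no origin” signal described above—and, per the reversal discussed earlier, that absence is now itself informative.

```python
import struct
import zlib

def png_chunks(data: bytes):
    """Yield (chunk_type, length) pairs from a PNG byte stream."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        yield ctype.decode("ascii"), length
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)

def make_minimal_png() -> bytes:
    """Build a 1x1 grayscale PNG with no metadata chunks whatsoever."""
    def chunk(ctype: bytes, body: bytes) -> bytes:
        return (struct.pack(">I", len(body)) + ctype + body
                + struct.pack(">I", zlib.crc32(ctype + body)))
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
    idat = zlib.compress(b"\x00\x00")  # one filter byte + one pixel
    return (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

# Ancillary chunks where provenance text, timestamps, or EXIF would live.
METADATA_CHUNKS = {"tEXt", "iTXt", "zTXt", "eXIf", "tIME"}

img = make_minimal_png()
found = [ctype for ctype, _ in png_chunks(img) if ctype in METADATA_CHUNKS]
print(found)  # [] — no camera, no timestamp, no author: an "orphan" file
```

This automates only one check among many; a real workflow would combine it with reverse image search and human source-tracing, since metadata can also be stripped or forged.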

The era of verifying content is ending; the era of tracing origins is beginning. It is, admittedly, resource-intensive.

Second: impose a “tariff” on your attention.

Not the correct but useless advice of “less phone scrolling.” Instead, consciously add a slowdown checkpoint before information enters your brain.

Specific practices:

Whenever you see content that triggers a strong emotion, give yourself a 24-hour cooling-off period before deciding whether to share it. Most likely, a day later, you will no longer want to: the atmosphere has dissipated, leaving only the information itself, and the information itself is often not worth sharing.

For any major event you plan to believe, cross-verify with at least three independent sources—not three accounts on the same platform (they might share the same source), but three different channels.

Return to long-form text. Books and in-depth reports slow you down by their very nature, something algorithms cannot do. Algorithms compress everything into a 3-second judgment; a book forces you to sit with the same idea for three hours. This is not a loss of efficiency; it is a calibration of thought. It, too, is resource-intensive.

Third: rebuild a small-scale trust network based on genuine relationships.

If the large-scale public information space has been irreversibly “atmospheric,” the most important thing individuals can do is not to be smarter at distinguishing truth from falsehood there, but to find anchors outside that space.

The article on media and machine fusion suggests: when everything visible is devoured, the only “extra gain” is in what resists digitization—offline experiences, intuition, silent moments with old friends.

McLuhan might be right again: electronic media will bring us back to a “tribal” state. You should seriously rebuild your tribe—not the one recommended by algorithms, but the real ones you know, with shared memories, face-to-face conversations.

In a world where everything can be forged, the hardest thing to fake is long-term relationships.


In an accelerating era, be a decelerator.

Returning to the opening scene:

A superpower’s president posts a threat to destroy a civilization in the early morning. The threatened country responds with an AI Lego animation. The White House celebrates bombing with an AI bowling video. Iran’s embassy jokes about the deadline with a meme. 3 billion exposures.

Every participant is accelerating. AI accelerates generation, algorithms accelerate distribution, emotions accelerate contagion.

As WIRED concludes:

“In a system where synthetic content spreads faster than verification, the only real defense may be behavioral: hesitation. Pause before sharing. In a system designed for zero thinking, take a few minutes to reflect.”

In a world of accelerating systems, the most subversive thing an individual can do is slow down. Not necessarily to find the truth, which you may never do, but as a form of resistance.

Refuse to become the “ideal subject” Arendt described. Refuse to let “truth and falsehood don’t matter” become your default.

Orwell said lies will be recorded in history.

Maybe. But as long as someone is willing to pause for three seconds before clicking “forward,” history is still unwritten.
