Pentagon wants better AI tools to enhance its online fakes – report

This post was originally published on RT

The DoD’s Special Operations Command is seeking advanced technology to deploy falsified human behavior, according to The Intercept

The Pentagon’s Joint Special Operations Command (JSOC) wants better tools that can fabricate a living person’s online footprint using advanced generative technologies, The Intercept reported on Thursday, citing a procurement document.

The unclassified acquisition wishlist expresses interest in producing fake imagery, including images of humans with varied facial expressions, virtual environments, and “selfie videos” that can withstand scrutiny by both social media algorithms and real humans. The solutions should also provide audio layers specific to the locations of the simulated footage.

The Pentagon’s use of fake online personas, or “sock puppets,” dates back at least a decade. Such digital constructs are used to spread American propaganda, shape or falsify public opinion, and collect intelligence, according to media reports.

Earlier this year, Reuters exposed a US military operation to undermine public trust in a Chinese vaccine against Covid-19 in the Philippines, a country Washington wants to keep in its orbit while curbing Beijing’s regional influence.

In 2022, the Pentagon ordered a review of its psychological warfare operations after social media giants Facebook and Twitter (now X) reported detecting and banning dozens of bots operated by US Central Command.

The US government has regularly accused its geopolitical rivals, including China, Russia, and Iran, of conducting “malign influence operations” online using AI-generated content. Among other forms of alleged meddling, foreign governments have been accused of attempting to influence US elections.

The purported methods resemble what The New York Times described in June when it exposed an Israeli influence operation targeting American citizens. The campaign, sponsored by Israel’s Ministry of Diaspora Affairs, used AI-generated content to boost narratives favorable to the close US ally, the report claimed.

Daniel Byman, a professor of security studies at Georgetown University, commented on the disparity between US denunciations of its adversaries’ methods and its apparent intention to use the same tactics in its own offensive operations.

The US “has a strong interest in the public believing that the government consistently puts out truthful (to the best of knowledge) information and is not deliberately deceiving people,” he said. “So, there is a legitimate concern that the U.S. will be seen as hypocritical.”
