AI Deepfakes Target Ukraine as Sora Videos Intensify Disinformation War

AI Videos of Ukrainian Soldiers Spread Rapidly Online

A wave of AI-generated videos began circulating in early November, depicting what appear to be Ukrainian soldiers crying, surrendering or pleading for help on the battlefield. At first glance, these clips look authentic and resemble the genuine footage frequently shared from conflict zones. Many viewers did not notice signs of manipulation and accepted the videos as real.

However, analysts quickly identified them as digital fabrications. Some clips used the likeness of Russian livestreamer Aleksei Gubanov, who lives in New York and has never served in the military. He was shocked to see his face superimposed on a soldier in a Ukrainian uniform, begging not to be sent to the front. The videos drew tens of thousands of views before many users realized they were manipulated.

Sora 2 Identified as the Engine Behind Hyperrealistic Deepfakes

NBC News reviewed 21 videos portraying Ukrainian troops in distress and found that at least half were created using Sora 2, OpenAI’s advanced text-to-video generator. These videos included extremely realistic facial movements, lighting and motion, making detection difficult. Some clips showed entire groups of soldiers surrendering, while others featured dramatic monologues with emotional appeal.

Subtle inaccuracies provided clues to their origins. Helmet designs, camouflage textures and facial details were sometimes incorrect, but these inconsistencies were too small for the average viewer to notice. In several cases, the Sora watermark was removed or covered, indicating intentional concealment of the videos’ AI origins.

A Disinformation Operation Designed to Undermine Confidence

These AI videos appear to be part of a coordinated attempt to erode morale and influence global perception of Ukraine’s position in the war. The intent is to portray Ukrainian soldiers as unwilling or defeated, even as real surveys show strong civilian and military resolve.

Ukraine’s Center for Countering Disinformation reported a significant rise in AI-generated videos designed to create confusion and weaken trust in the government and armed forces. Many of the clips showed fabricated confessions or fake battlefield messages, crafted to evoke strong emotional reactions and spread rapidly on social media.


Experts Warn That Deepfakes Are Reaching an Alarming Level of Realism

AI researchers caution that Sora 2 represents a major leap in the sophistication of deepfakes. Traditional detection methods struggle with Sora's output because the videos often lack the telltale errors seen in older AI models. In NewsGuard's testing, Sora generated convincing videos advancing false claims in 80 percent of attempts.

This level of realism poses a serious challenge for journalists, governments and citizens trying to distinguish authentic wartime footage from fabricated content designed to manipulate public opinion.

Rule Violations Expose the Limits of Safety Guardrails

OpenAI states that Sora 2 includes layered protections, such as watermarks, metadata tagging and restrictions against generating deceptive or violent content. However, NBC News discovered multiple videos that violated these rules, including one showing a Ukrainian soldier being shot.

Researchers demonstrated that rephrasing prompts allowed them to bypass certain safeguards. These findings underscore the difficulty of preventing misuse when powerful generative tools are made widely accessible.

Social Media Platforms Race to Remove Deepfakes but Struggle to Contain Reposts

TikTok and YouTube removed several of the misleading videos after they were flagged. TikTok reported that most harmful content is now removed before users see it. Yet reposts quickly resurfaced on X and Facebook, where many remain accessible.

Even when removed from the original platforms, downloaded copies continue spreading through new accounts and reposts. This persistence shows how quickly disinformation evolves and how difficult such content is to eliminate fully once it begins circulating.

Ukraine Remains Resolute Despite Manipulation Efforts

The surge in deepfake videos comes as peace negotiations remain stalled and new polls show that 75 percent of Ukrainians reject Russian terms. Another 62 percent say they are prepared to continue fighting regardless of the conflict’s duration. These figures directly contradict the surrender narratives pushed through AI-generated media.

Ukrainian officials warn that such deepfakes aim to distort reality and weaken international support. Experts agree that as generative AI advances, citizens must become more vigilant and skeptical of emotionally charged videos shared online.
