By: M A. Mahmood
Digital terrorism is at its worst, and if not checked in time and with full power and commitment, it has the potential to eviscerate the national fabric of the country.
A striking example of this was observed on March 16, 2025, when a video emerged on social media featuring a self-styled soldier who claimed to have resigned from the military after witnessing alleged atrocities against civilians. He went on to claim that he had killed 12 civilians on the orders of his superiors.
This video rapidly gained traction online, fueled by various groups that alleged the soldier was subsequently assassinated by intelligence agencies for revealing sensitive military actions.
However, the narrative faced pushback on March 17, 2025, when aligned social media analysts (SMAs) began to debunk the claims. They traced the video back to its original upload on TikTok by an account named ‘Parachinar News.’
Through forensic analysis and further investigation, it was revealed that the soldier depicted in the video was not a real person and that the content itself was AI-generated. The fallout from this deceptive video was significant. Numerous social media accounts erroneously reported that the individual had been killed, allegedly as retaliation for his purported confession.
This false narrative was propagated by accounts associated with anti-state propaganda networks, aiming to sow distrust and resentment toward state institutions. The spread of this claim, devoid of credible evidence, intensified tensions and fueled anti-state sentiment, particularly among specific linguistic and regional communities.
This incident underscores the worrying trend of utilizing AI-generated content as a tool for manipulating public perception and inciting unrest.
In the aftermath of the viral video, a photograph of an injured police constable circulated online, accompanied by unfounded claims that he was the soldier featured in the footage and had been “eliminated” by intelligence agencies.
However, it was subsequently revealed that the constable was injured in an unrelated incident and was receiving medical treatment when the photograph was taken. He later released a video confirming his safety and disassociating himself from the viral content.
This misidentification appears to have been a deliberate attempt to enhance the credibility of the AI-generated video and promote an anti-state narrative. A thorough analysis of the video revealed numerous technical and logical inconsistencies, confirming its creation through AI-driven morphing and deepfake technology. Key observations included:
1. The subject’s beard varied inconsistently throughout the footage; there were moments when the chin looked slightly shaven, while in other frames the beard appeared to regrow, an unnatural occurrence in a continuous recording. This fluctuation points to the use of AI morphing tools, suggesting that the video manipulated pre-existing images to fabricate the subject’s likeness.
2. The video employed a crying filter to amplify emotional appeal; however, it lacked the realistic tear flow and facial expressions that typically accompany genuine distress. The absence of muscle movement in key facial areas, such as the cheeks and forehead, further supports the assertion that the video was artificially generated.

Based on forensic evidence and digital footprint analysis, the viral video has been deemed inauthentic and a piece of AI-generated disinformation; a minimal sketch of the kind of frame-consistency check behind the first observation appears below.
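To make the first observation concrete, the sketch below shows, in Python with OpenCV, one way an analyst might surface such frame-level inconsistencies: it compares the lower-face (beard) region across consecutive frames and flags abrupt changes that a continuous recording should not produce. This is an illustration only; the video filename, the spike threshold, and the use of OpenCV's stock Haar-cascade face detector are assumptions for demonstration, not the actual forensic toolchain used in the investigation.

```python
# Illustrative sketch: a crude frame-consistency check for morphing artifacts.
# Real deepfake forensics relies on far more sophisticated models; this only
# demonstrates the underlying idea of flagging abrupt lower-face changes.
import cv2

# Haar-cascade face detector shipped with opencv-python
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def beard_region_instability(video_path: str, spike_threshold: float = 25.0):
    """Return frames where the chin/beard region changes abruptly
    relative to the previous frame (a rough proxy for morphing)."""
    cap = cv2.VideoCapture(video_path)
    prev_crop = None
    flagged = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            # Lower half of the detected face: the chin/beard area
            crop = cv2.resize(gray[y + h // 2 : y + h, x : x + w], (96, 48))
            if prev_crop is not None:
                # Mean absolute pixel difference between consecutive crops;
                # a genuine continuous recording changes only gradually
                diff = cv2.absdiff(crop, prev_crop).mean()
                if diff > spike_threshold:
                    flagged.append((frame_idx, diff))
            prev_crop = crop
        frame_idx += 1
    cap.release()
    return flagged

# Hypothetical usage with an assumed filename:
# for idx, score in beard_region_instability("viral_clip.mp4"):
#     print(f"Frame {idx}: abrupt lower-face change (diff={score:.1f})")
```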
This incident exemplifies the evolving tactics of modern propaganda, illustrating how deepfake technology can be used to manipulate narratives and escalate tensions.
The phenomenon of deepfake technology represents a concerning form of digital manipulation that cannot be condoned. It is imperative for the state to confront it decisively, with the necessary resources and authority, and to eradicate deepfake-driven digital terrorism as soon as possible to safeguard society.