ISLAMABAD – The evolution of AI has transformed cybercriminals’ tactics. One alarming trend is the use of AI to enhance phishing scams: refining the messages, targeting specific individuals, and making these attacks almost impossible to recognize.
According to a recent Kaspersky study, organizations report that the number of cyberattacks they experienced in the last 12 months increased by nearly half (49%). The most common threat was phishing, with 49% of respondents reporting this type of incident.
With AI becoming a more prevalent enabler for cybercriminals, half of the respondents (50%) anticipate significant growth in the number of phishing attacks. In this article, we examine how AI is used in phishing and why experience alone is sometimes not enough to avoid becoming a victim.
Previously, phishing attacks relied on generic mass messages sent to thousands of recipients in the hope that a few would take the bait. AI has changed this, enabling criminals to script highly personalized phishing emails at scale.
Using publicly available information from social media, job boards, and company websites, these AI-powered tools can generate emails tailored to an individual’s role, interests, and communication style. For example, a CFO might receive a fraudulent email that mirrors the tone and formatting of their CEO’s messages, including accurate references to recent company events. This level of customization makes it exceptionally challenging for employees to distinguish between legitimate and malicious communications.
AI has also introduced deepfakes into the phishing arsenal. These are increasingly being leveraged by cybercriminals to create fake but highly accurate audio and video messages, crafted to reflect the voice and appearance of the executives they seek to impersonate. As deepfake technology continues to advance, it is expected that such attacks will become more frequent and harder to detect.
Cybercriminals can also use AI to evade traditional email filtering systems. By analyzing and mimicking legitimate email patterns, AI-generated phishing emails can slip past security software undetected.
Even experienced employees are falling victim to these advanced phishing attacks. The level of realism and personalization that AI can achieve may override the skepticism that normally keeps seasoned professionals cautious. Moreover, AI-generated attacks often exploit human psychology, leveraging urgency, fear, or authority to pressure employees into acting without double-checking the authenticity of a request.
To defend against AI-driven phishing attacks, Kaspersky recommends that organizations adopt a proactive, multi-layered approach that emphasizes comprehensive cybersecurity. Regular, up-to-date AI-focused cybersecurity awareness training is critical for employees, helping them identify the subtle signs of phishing and other malicious tactics.
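To make the "multi-layered" idea concrete, the sketch below shows a few of the simple heuristic checks that one layer of an email-screening pipeline might apply, mirroring the red flags the article describes (pressure wording, mismatched sender details). This is an illustrative example only; the function name, keyword list, and checks are hypothetical and not taken from any Kaspersky product.

```python
# Illustrative sketch only: naive heuristic checks of the kind a layered
# email-security pipeline might include. All names here are hypothetical.
import re

# Pressure language that exploits urgency, fear, or authority.
URGENCY_TERMS = {"urgent", "immediately", "act now", "wire transfer",
                 "verify your account"}

def phishing_signals(sender: str, reply_to: str,
                     subject: str, body: str) -> list:
    """Return a list of heuristic warning signs found in an email."""
    signals = []

    # A Reply-To domain that differs from the From domain is a classic
    # red flag for spoofed executive emails.
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()
    if reply_domain != sender_domain:
        signals.append("reply-to domain differs from sender domain")

    # Pressure wording designed to rush the recipient into acting.
    text = (subject + " " + body).lower()
    if any(term in text for term in URGENCY_TERMS):
        signals.append("urgency/pressure wording")

    # Links pointing at a bare IP address instead of a named host.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        signals.append("link to bare IP address")

    return signals
```

No single check is reliable on its own, which is the point of layering: each heuristic catches some attacks and misses others, and AI-generated phishing is specifically crafted to defeat pattern-based checks like these, so such filters complement, rather than replace, the awareness training the article recommends.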