The Rise of AI-Driven Cyber Threats: Are We Prepared?

Image from PSD, 2024

Artificial intelligence (AI) is being integrated into virtually every facet of modern society, profoundly changing how we live, work, and interact with the world, from healthcare, finance, and transportation to government and education. AI's influence in cybersecurity is particularly significant as the field evolves to counter increasingly sophisticated threats.

Used defensively, AI plays a proactive and diverse role in cybersecurity. AI-driven applications can monitor network traffic, spot anomalies, and implement countermeasures automatically and in real time, providing faster and more accurate threat detection than traditional methods. However, cybercriminals can also use AI to develop advanced phishing schemes, automate attacks, and bypass security measures. This dual use presents a dilemma: while AI enhances security protocols, it also equips attackers with powerful tools. This blog post explores how AI is being used to accelerate the development of cyber-attacks and maximize their effectiveness.
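To make the defensive side concrete, here is a minimal sketch of one common pattern: an unsupervised anomaly detector trained on baseline traffic that flags flows deviating from the norm. The feature names and numbers are hypothetical stand-ins for real flow telemetry, not any particular product's approach.

```python
# Minimal sketch of ML-based network anomaly detection, assuming flow
# records have already been reduced to numeric features (names hypothetical).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for historical "normal" flow features:
# [bytes_per_sec, packets_per_sec, distinct_ports]
normal_traffic = rng.normal(loc=[500, 40, 3], scale=[50, 5, 1], size=(1000, 3))

# Train an unsupervised detector on baseline traffic only.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new flows: -1 flags an anomaly, 1 looks normal.
new_flows = np.array([[520, 42, 3],      # typical flow
                      [9000, 800, 60]])  # burst far outside the baseline
print(detector.predict(new_flows))       # e.g. [ 1 -1]
```

In practice such a detector would run on streaming telemetry and trigger automated responses, which is what gives AI-based monitoring its speed advantage over manual review.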

AI gives cybercriminals sophisticated tools to create more intricate and rapidly evolving malware, ransomware, and phishing schemes. These AI-driven attacks are not just sophisticated; they are adaptive, often evading conventional security measures with ease. The emergence of deepfakes and AI-generated social engineering schemes further complicates cybersecurity. In a recent survey, 60% of respondents said that human-driven responses to cyberattacks cannot keep pace with automated attacks, underscoring the need to adopt advanced defensive AI technologies to combat this evolving, adaptive threat (MIT Technology Review, 2021).

Hackers are also increasingly leveraging AI-powered applications such as FraudGPT, an unfiltered large language model sold on the dark web that lets cybercriminals generate malicious code, identify vulnerabilities, and target victims with fraud schemes more effectively. It mimics legitimate AI models but is fine-tuned specifically for malicious purposes. Attackers can also employ reinforcement learning, a type of AI that learns to make decisions through trial and error by maximizing rewards, to learn from defensive responses and continuously improve their ability to bypass security measures (De Angelo, 2024).
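Reinforcement learning itself is a general-purpose technique. The toy sketch below illustrates the core loop the paragraph describes: an agent tries actions, receives rewards, and updates its value estimates until it finds an effective policy. It runs on a harmless five-state corridor; nothing here is security-specific.

```python
# Toy Q-learning loop: trial-and-error reward maximization on a 5-state
# corridor (purely illustrative; the environment is a made-up example).
import random

n_states = 5                      # states 0..4; state 4 is the goal
actions = [0, 1]                  # 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def greedy(s):
    best = max(Q[s])              # break ties randomly so exploration works
    return random.choice([a for a in actions if Q[s][a] == best])

for episode in range(300):
    state = 0
    while state != n_states - 1:
        a = random.choice(actions) if random.random() < epsilon else greedy(state)
        nxt = min(state + 1, n_states - 1) if a == 1 else max(state - 1, 0)
        reward = 1.0 if nxt == n_states - 1 else 0.0
        # Nudge the value estimate toward reward + discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

print([round(max(q), 2) for q in Q])   # values rise toward the goal state
```

Substitute "bypassed a control" for "reached the goal" and the same learning dynamic explains why an attack system can keep improving against static defenses.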

Sophisticated AI-powered phishing attacks exhibit several defining characteristics. First, AI can automate manual tasks such as vulnerability identification, data gathering, and generating phishing material, significantly increasing the speed and efficiency of attacks. Phishing messages are carefully crafted to imitate legitimate sources, making their fraudulent nature difficult to detect. Cybercriminals employ two types of phishing attack: spear phishing (personalized) and traditional phishing (mass-scale). AI is making both cheaper and more accessible, increasing the potential for hyper-personalized solicitations. AI tools, particularly large language models like GPT-4, can automate the entire phishing process, reducing costs by over 95% while maintaining high success rates (Heiding et al., 2024).

Second, attackers use AI to create convincing, personalized phishing emails from data harvested from sources like social media profiles, making detection more challenging. Research indicates that 60% of participants fell victim to AI-automated phishing, matching the success rates of expertly crafted messages. The click-through rates observed (37% for GPT-generated emails, 74% for human-crafted ones, and 62% for a mix of AI generation and human editing) highlight the urgency of the threat. This combination of efficacy and cost reduction demonstrates AI's potent capability to craft convincing phishing content at scale (Heiding et al., 2024).

Another type of attack becoming increasingly sophisticated with AI is ransomware. Cybercriminals leverage AI and machine learning to target high-value victims, optimize encryption processes, and demand ransoms more effectively. Key advancements include personalized phishing lures built with AI-generated deepfakes (Blake, 2023). These deepfakes are growing more realistic and harder to detect, and can deceive even experienced observers, threatening organizations and individuals alike (Pointner, 2024). Together, these developments make ransomware attacks more efficient and more dangerous, posing significant challenges for cybersecurity defenses.

Distributed denial-of-service (DDoS) attacks are another area seeing significant impact from AI and automation. These attacks not only cause significant financial losses but also exploit advanced technologies to evade detection, posing a heightened risk across industries. The average cost of a DDoS attack has risen to $6,000 per minute, or roughly $270,000 per attack. AI and automation make these attacks more frequent and more sophisticated, better mimicking legitimate traffic and complicating detection. Manufacturing has experienced a 200% increase in attack size, and healthcare a 129% increase in attack volume (Zayo Group, 2024).
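A simple way to see why mimicry complicates detection is a baseline-deviation detector. The sketch below flags request rates that jump far above a running average; an attacker whose AI-shaped traffic ramps up slowly and stays statistically close to the baseline would slip under exactly this kind of threshold. All numbers and thresholds are illustrative assumptions.

```python
# Sketch of a baseline-deviation detector for request rates, assuming a
# per-source request counter sampled once per second (thresholds illustrative).
def make_ewma_detector(alpha=0.1, sigma_mult=4.0):
    mean, var = None, 0.0
    def observe(rate):
        nonlocal mean, var
        if mean is None:          # first sample seeds the baseline
            mean = rate
            return False
        deviation = rate - mean
        anomalous = deviation > sigma_mult * (var ** 0.5 + 1.0)
        # Update the running baseline only with non-anomalous samples,
        # so a sustained flood cannot "teach" the detector it is normal.
        if not anomalous:
            mean += alpha * deviation
            var = (1 - alpha) * (var + alpha * deviation ** 2)
        return anomalous
    return observe

detect = make_ewma_detector()
for rate in [100, 103, 98, 101, 105, 4000]:   # last sample mimics a flood
    print(rate, detect(rate))
```

A blunt flood trips the threshold immediately; traffic that grows gradually and matches normal variance does not, which is why AI-generated traffic shaping is so effective against simple defenses.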

AI is now also being used to crack user passwords, employing machine learning algorithms that achieve up to 95% accuracy and enhance the efficiency of traditional methods. While most passwords can already be cracked with smart brute-force algorithms, AI-driven methods trained on common patterns and human behavior boost efficiency far beyond traditional brute force (Winder, 2024). Passwords once considered secure can now be compromised within seconds, raising the risks to personal data, financial institutions, healthcare systems, and other critical infrastructure.
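A bit of back-of-envelope arithmetic shows why the AI gain matters. Exhaustively brute-forcing even an 8-character password over the full printable-ASCII keyspace takes many hours at an assumed (hypothetical) rate of 10^11 guesses per second, but models trained on human password habits search only the tiny subset of that space people actually use, which is how "seconds" becomes plausible.

```python
# Back-of-envelope keyspace math behind "cracked within seconds".
# The guess rate is an assumed figure for illustration, not a benchmark.
guesses_per_second = 1e11          # hypothetical cracking-rig throughput

for length in (8, 12, 16):
    keyspace = 95 ** length        # 95 printable ASCII characters
    seconds = keyspace / guesses_per_second
    print(f"{length}-char password: ~{seconds:,.0f} s worst case")
```

The worst case for a random 12-character password is still astronomically large; pattern-trained guessing wins not by searching faster but by searching the right, human-shaped corner of the keyspace first.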

As AI becomes more integrated into critical applications, it also becomes a prime target for adversarial attacks that exploit its vulnerabilities. According to the National Institute of Standards and Technology, these attacks take several forms: evasion (altering inputs after deployment), poisoning (introducing corrupted data during training), privacy attacks (extracting sensitive information from a deployed model), and abuse (inserting incorrect information into a legitimate source that an AI later ingests). Each poses distinct risks to AI reliability and security (NIST, 2024). The ease with which these attacks can be executed underscores the urgent need for robust security measures to protect AI systems from compromise.

Finally, nation-state threat actors are leveraging AI for malicious purposes. OpenAI, in collaboration with Microsoft Threat Intelligence, disrupted five such groups, nicknamed Charcoal Typhoon, Salmon Typhoon, Crimson Sandstorm, Emerald Sleet, and Forest Blizzard. These actors used AI for tasks like research, coding, translation, and phishing, and each had specific tactics: Charcoal Typhoon from China used AI for research and phishing, Salmon Typhoon focused on translations and intelligence gathering, Crimson Sandstorm from Iran worked on web development and researched ways to evade malware detection, Emerald Sleet from North Korea targeted defense experts, and Forest Blizzard from Russia researched satellite and radar technologies (OpenAI, 2024).

Clearly, AI will amplify the cyber threat, and organizations must adopt a proactive, multi-layered cybersecurity approach in response. First, deploying advanced AI-driven security solutions is essential. These tools can perform real-time threat detection, response, and mitigation, monitoring network traffic to identify anomalies and autonomously neutralizing threats before significant damage occurs. AI-based systems continuously learn from new attack patterns, adapting to emerging threats more rapidly than traditional methods.
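As one illustration of "continuously learning from new attack patterns", the sketch below updates a text classifier incrementally as newly labeled samples arrive, rather than retraining from scratch. The sample messages and labels are made up; a real deployment would consume a threat-intelligence feed.

```python
# Minimal sketch of a detector that keeps learning from newly labeled
# attack patterns via incremental updates (all sample data is invented).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # stateless, no fitting needed
model = SGDClassifier()

# Each batch represents a fresh drop of labeled intel: 1 = malicious, 0 = benign.
batches = [
    (["urgent wire transfer request", "routine meeting notes"], [1, 0]),
    (["verify your account now", "quarterly report attached"], [1, 0]),
]

# partial_fit updates the model in place as each batch arrives,
# instead of retraining on the full history every time.
for texts, labels in batches:
    model.partial_fit(vectorizer.transform(texts), labels, classes=[0, 1])

print(model.predict(vectorizer.transform(["please verify your wire transfer"])))
```

The design choice that matters here is the incremental update: the model's knowledge tracks the threat landscape batch by batch, which is the property the paragraph attributes to AI-based defenses.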

Additionally, monitoring and analyzing threat intelligence through collaborations allows organizations to share information about AI-related threats. This shared intelligence aids in identifying new attack vectors and provides insights into evolving tactics, enabling a coordinated and timely response. Enhancing employee awareness and training is equally crucial. Regular education on AI-enhanced phishing and social engineering tactics helps employees recognize and report suspicious activities. Incorporating simulations of AI-driven attacks in training can further prepare employees to identify and respond to sophisticated threats. 

Relying on passwords alone has proven inadequate for protecting sensitive data. The shift towards safer and more user-friendly security measures such as biometrics, two-factor authentication (2FA), and multi-factor authentication (MFA) should be accelerated. Advanced techniques using behavioral biometrics, which analyzes how an individual interacts with their computer or smartphone, can provide even more nuanced security and eliminate the need for passwords. 
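For reference, the time-based one-time passwords behind most authenticator-app 2FA follow RFC 6238 and can be sketched with nothing but the Python standard library. The secret below is a demo value, not a real provisioning flow.

```python
# Minimal TOTP (RFC 6238) code generator using only the standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"                         # demo secret (hypothetical)
print("current code:", totp(SECRET))
# A server would compare the user-supplied code against totp(SECRET) for the
# current and adjacent time steps to tolerate clock drift.
```

Because the code is derived from a shared secret and the current time, a phished password alone is not enough to log in, which is the core value MFA adds.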

Finally, as AI models are developed, they must be hardened with defenses against evasion, poisoning, and privacy breaches. Exposing AI systems to a range of attack techniques during development will strengthen their ability to withstand those same techniques in deployment.
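A minimal sketch of this idea, assuming a toy PyTorch model and random stand-in data: each training batch is first perturbed with the fast gradient sign method (FGSM), a standard evasion technique, so the model learns during development to classify inputs that an attacker has nudged against it.

```python
# Sketch of adversarial training: perturb each batch with FGSM before the
# update, hardening the model against evasion (toy model and random data).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.1                                # perturbation budget (assumed)

x = torch.randn(64, 20)                      # stand-in training batch
y = torch.randint(0, 2, (64,))

for _ in range(100):
    # FGSM: nudge each input in the direction that most increases the loss.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Train on the perturbed batch so the model learns to resist evasion.
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()

print("final adversarial loss:", loss.item())
```

The same pattern generalizes: red-teaming a model with poisoning or extraction attempts during development surfaces weaknesses while they are still cheap to fix.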

References

Ahmed, D. (2024). Employee Duped by AI-Generated CFO in $25.6M Deepfake Scam. HACKREAD. February 5, 2024. Available at: https://hackread.com/employee-duped-ai-generated-cfo-deepfake-scam/

Blake, A. (2023). Hackers are using AI to create vicious malware, says FBI. Digital Trends. July 31, 2023. Available at: https://www.digitaltrends.com/computing/hackers-using-ai-chatgpt-to-create-malware/

Business Wire. (2024). DDoS Attacks Surge 106% from H2 2023 to H1 2024, Reveals New Zayo Data. Business Wire. August 15, 2024. Available at: https://www.businesswire.com/news/home/20240815532680/en/DDoS-Attacks-Surge-106-from-H2-2023-to-H1-2024-Reveals-New-Zayo-Data

CISA. (2023). Contextualizing Deepfake Threats to Organizations. Cybersecurity Information Sheet. September 2023. Available at: https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF

De Angelo, D. (2024). The Dark Side of AI in Cybersecurity — AI-Generated Malware. May 15, 2024. Available at: https://www.paloaltonetworks.com/blog/2024/05/ai-generated-malware/

Dutta, T. S. (2023). FraudGPT: New Black Hat AI Tool Launched by Cybercriminals. Cyber Security News. July 27, 2023. Available at: https://cybersecuritynews.com/fraudgpt-new-black-hat-ai-tool/

FBI. (n.d.). Spoofing and Phishing. Available at: https://www.fbi.gov/how-we-can-help-you/scams-and-safety/common-frauds-and-scams/spoofing-and-phishing

Heiding, F., B. Schneier, and A. Vishwanath. (2024). AI Will Increase the Quantity — and Quality — of Phishing Scams. Harvard Business Review. May 30, 2024. Available at: https://hbr.org/2024/05/ai-will-increase-the-quantity-and-quality-of-phishing-scams

Hsu, T., and S. Lee Myers. (2023). Can We No Longer Believe Anything We See? New York Times. April 8, 2023. Available at: https://www.nytimes.com/2023/04/08/business/media/ai-generated-images.html

Langley, M. (2024). AI-Powered Ransomware: How AI is Revolutionizing Ransomware. Security Daily Review. July 17, 2024. Available at: https://dailysecurityreview.com/ransomware/ai-powered-ransomware/

MIT Technology Review. (2021). Preparing for AI-enabled cyberattacks. MIT Technology Review Insights. April 8, 2021. Available at: https://www.technologyreview.com/2021/04/08/1021696/preparing-for-ai-enabled-cyberattacks/

NIST. (2024). NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems. NIST News. January 4, 2024. Available at: https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems

OpenAI. (2024). Disrupting malicious uses of AI by state-affiliated threat actors. OpenAI Blog. February 14, 2024. Available at: https://openai.com/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors/

Pointner, P. (2024). AI-Enhanced Identity Fraud: A Mounting Threat to Organizations and Users. Cyber Defense Magazine. February 7, 2024. Available at: https://www.cyberdefensemagazine.com/ai-enhanced-identity-fraud-a-mounting-threat-to-organizations-and-users/

PSD. (2024). The Dark Side of AI: AI-Powered Attacks and Their Growing Threat. PSD Group. August 2, 2024. Available at: https://www.psdgroup.com/the-dark-side-of-ai-ai-powered-attacks-and-their-growing-threat/

Toulas, B. (2023). Cybercriminals train AI chatbots for phishing, malware attacks. Bleeping Computer. August 1, 2023. Available at: https://www.bleepingcomputer.com/news/security/cybercriminals-train-ai-chatbots-for-phishing-malware-attacks/

Winder, D. (2024). Smart Guessing Algorithm Cracks 87 Million Passwords In Under 60 Seconds. Forbes. June 19, 2024. Available at: https://www.forbes.com/sites/daveywinder/2024/06/19/smart-guessing-algorithm-cracks-87-million-passwords-in-under-60-seconds/

Zayo Group. (2024). Protecting Your Business from Cyber Attacks: The State of DDoS Attacks. Zayo Group Report. Q1 & Q2, 2024. Available at: https://go.zayo.com/2024_1H_DDoS_Protection_Report

William Lucyshyn

Research professor and the director of research at the Center for Governance of Technology and Systems, in the School of Public Policy, at the University of Maryland.

