WEBINAR
GigaOm Radar Report for PTaaS: How to Make a Smarter Investment in Pentesting

Top 40 AI Cybersecurity Statistics

The latest AI cybersecurity statistics show a rise in the use of artificial intelligence to power phishing, ransomware attacks, crypto-related crime, and other forms of attack.

Organizations are already feeling the impact of AI-generated attacks and anticipate that low-level vulnerabilities will become more common targets for amateur attackers empowered by LLM technology. In response, security teams are turning to AI-powered tools to fight AI with AI.

Here's a roundup of some top AI cybersecurity statistics that illustrate current trends and likely future trajectories.

Cost and Frequency of AI Cyberattacks

  1. Security stakeholders rank the highest AI-powered cybersecurity threat categories as malware distribution, vulnerability exploits, sensitive data exposure from generative AI, social engineering, new unknown and zero-day threats, and reconnaissance for attack preparation (Darktrace).
  2. 74% of IT security professionals report their organizations are suffering significant impact from AI-powered threats (Darktrace).
  3. 75% of cybersecurity professionals had to modify their strategies last year to address AI-generated incidents (Deep Instinct).
  4. 97% of cybersecurity professionals fear their organizations will face AI-generated security incidents (Deep Instinct).
  5. 93% of businesses expect to face daily AI attacks over the next year (Netacea).
  6. 87% of IT professionals anticipate AI-generated threats will continue to impact their organizations for years (Darktrace).
  7. The global cost of data breaches averaged $4.88 million over the past year, representing a 10% increase and an all-time high (IBM).
  8. Organizations most frequently experience social engineering and phishing attacks (reported by 56% of IT professionals), web-based attacks (50%), and credential theft (49%) (Ponemon Institute).

AI Phishing

  1. 40% of all phishing emails targeting businesses are now generated by AI (VIPRE Security Group).
  2. 60% of recipients fall victim to AI-generated phishing emails, a success rate comparable to that of human-crafted phishing emails (Harvard Business Review).
  3. Spammers save 95% in campaign costs using large language models (LLMs) to generate phishing emails (Harvard Business Review).
  4. Phishing attacks cost an average $4.88 million per breach (IBM).

AI Deepfakes

  1. 61% of organizations saw an increase in deepfake attacks over the past year (Deep Instinct).
  2. Deepfake attacks are projected to increase 50% to 60% in 2024, with 140,000 to 150,000 global incidents (VPNRanks).
  3. 75% of deepfakes impersonated a CEO or other C-suite executive (Deep Instinct).
  4. Generative AI is expected to multiply losses from deepfakes and other fraud at a 32% compound annual growth rate, reaching $40 billion annually by 2027 (Deloitte).
  5. Impersonation scams caused $12.5 billion in losses nationally in 2023 (Federal Bureau of Investigation).

AI Ransomware

  1. 48% of security professionals believe AI will power future ransomware attacks (Netacea).
  2. The average ransomware attack costs companies $4.45 million (IBM).
  3. Ransomware detections rose 13-fold in the first half of 2023 as a percentage of total malware detections (Fortinet).

AI Cryptocrimes

  1. Deepfakes will account for 70% of cryptocrimes by 2026 (Bitget).
  2. Cryptocrime losses totaled $5.6 billion nationally in 2023, accounting for 50% of total reported losses from financial fraud complaints (Federal Bureau of Investigation).
  3. Cryptocurrency losses rose 53% from 2022 to 2023 (Federal Bureau of Investigation).

AI-generated Cybersecurity Risks

  1. 60% of IT professionals feel their organizations are not prepared to counter AI-generated threats (Darktrace).
  2. While 79% of IT security executives say they've taken steps to mitigate AI-generated risks, just 54% of hands-on practitioners share that confidence (Darktrace).
  3. 41% of organizations still rely on endpoint detection and response (EDR) strategies to stop AI attacks (Deep Instinct). Previous research has found that over half of organizations say EDR solutions are ineffective against new types of threats (Ponemon Institute).
  4. Despite the limitations of EDR, 31% of organizations plan to increase investment in EDR solutions (Deep Instinct).

AI-powered Cybersecurity Prevention Tools

  1. Only 15% of stakeholders feel non-AI cybersecurity tools are capable of detecting and stopping AI-generated threats (Darktrace).
  2. 44% of organizations can confidently identify ways AI could strengthen their security systems (Ponemon Institute).
  3. 62% of organizations can identify ways machine learning could strengthen their security systems (Ponemon Institute).
  4. 67% of cybersecurity professionals use AI primarily to create rules reflecting known security patterns and indicators (Ponemon Institute).
  5. 50% of organizations say they're using AI to compensate for a cybersecurity skills gap (Ponemon Institute).
  6. 70% of cybersecurity professionals say AI proves highly effective for detecting threats that previously would have gone unnoticed (Ponemon Institute).
  7. 73% of cybersecurity teams want to shift focus to an AI-powered preventive strategy (Deep Instinct).
  8. 53% of security teams say their organization is still in the early stages of adopting AI cybersecurity tools (Ponemon Institute).
  9. 65% of security teams report challenges integrating AI cybersecurity solutions with legacy systems (Ponemon Institute).
  10. Just 18% of security teams say their organization has fully adopted and enacted AI cybersecurity tools (Ponemon Institute).

FAQs

How many AI cyberattacks per day occur?

Reliable counts of AI-specific attacks are not yet available, but the average computer connected to the Internet gets attacked 2,244 times a day, equivalent to roughly once every 39 seconds (University of Maryland).

How many people get hacked by AI annually?

Data breaches compromised 353,027,892 victims in 2023, with the majority of incidents stemming from cyberattacks (Identity Theft Resource Center).

What percentage of AI cyberattacks involve social engineering rather than technical issues?

Social engineering represents the most prevalent form of cyberattack, reported by 56% of organizations (Ponemon Institute). AI now generates 40% of phishing emails targeting businesses (VIPRE Security Group).

What do experts predict for AI cybersecurity in 2025?

93% of security leaders anticipate their organizations will face daily AI attacks by 2025 (Netacea). Phishing, web-based attacks, and credential theft represent the most prevalent attack trends (Ponemon Institute). 95% of security professionals anticipate that adopting AI cybersecurity tools will strengthen their security efforts (Darktrace). Cybersecurity experts anticipate that the use of AI to counter AI cyberattacks will become a long-term battle of "AI vs. AI" (U.S. Department of Defense).

How big is the AI cybersecurity market?

The global AI cybersecurity market was worth $22.4 billion in 2023 and is projected to reach $60.6 billion by 2028, a compound annual growth rate of 21.9% (MarketsandMarkets).
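
The implied growth rate can be sanity-checked with the standard CAGR formula, (end / start)^(1/years) − 1; the small gap from the cited 21.9% comes from rounding in the reported market sizes:

```python
# Sanity-check the implied growth rate: $22.4B in 2023 to $60.6B in 2028.
start, end, years = 22.4, 60.6, 5
cagr = (end / start) ** (1 / years) - 1   # compound annual growth rate
print(f"{cagr:.1%}")  # → 22.0%
```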

How can you prepare for an AI cyberattack?

The Open Worldwide Application Security Project (OWASP) has published AI security guidance identifying the leading development-time and runtime risks posed by AI and LLM systems and recommending corresponding mitigation strategies. Cobalt provides organizations with access to professional pentesters experienced in using OWASP methodology to probe for vulnerabilities.

How can you secure AI assets?

Organizations should regard AI assets as an extension of the attack surface and secure AI data, compute resources, algorithms, and models just as they would secure other digital assets (Darktrace). 

Basic security safeguards include visibility, monitoring, detection and response, access controls, defense-in-depth, risk and vulnerability management, and zero trust. AI-specific security measures include implementing a Testing, Evaluation, Verification, and Validation (TEVV) plan, AI offensive security testing, and Adversarial Machine Learning (AML) procedures.
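
One of the AML procedures mentioned above is probing a model with adversarially perturbed inputs to see how easily its predictions can be flipped. A minimal sketch of the Fast Gradient Sign Method against a toy logistic classifier (the weights, input, and epsilon here are illustrative assumptions, not from any cited source):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic classifier.
    The gradient of the log-loss w.r.t. input x is (p - y) * w,
    so each feature is nudged by eps in the gradient's sign direction."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Toy classifier: labels an input positive when its features sum past 1.
w, b = [1.0, 1.0], -1.0
x, y = [1.0, 1.0], 1.0          # clean input, confidently positive

x_adv = fgsm_perturb(x, y, w, b, eps=0.8)
clean_score = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
adv_score = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(round(clean_score, 3), round(adv_score, 3))  # → 0.731 0.354
```

A small perturbation drops the model's confidence below the decision threshold, which is exactly the kind of fragility AML testing aims to surface before attackers do.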

Pentesting should form part of an effective AI security strategy.


About Jacob Fox
Jacob Fox is a search engine optimization manager at Cobalt. He graduated from the University of Kansas with a Bachelor of Arts in Political Science. With a passion for technology, he believes in Cobalt's mission to transform traditional penetration testing with the innovative Pentesting as a Service (PtaaS) platform. He focuses on increasing Cobalt's marketing presence by helping craft positive user experiences on the Cobalt website.