THREE PEAT
GigaOm Names Cobalt an “Outperformer” for Third Consecutive Year in Annual Radar Report for PTaaS.
THREE PEAT
GigaOm Names Cobalt an “Outperformer” for Third Consecutive Year in Annual Radar Report for PTaaS.

EU AI Regulations: What Security Practitioners Need to Know

The EU Artificial Intelligence Act has entered into force, initiating a series of compliance deadlines that will begin arriving in February 2025. What do the new EU AI regulations mean for security teams?

In this blog, we'll cover what you need to know:

  • We'll start with an overview of the AI Act summarizing its highlights.
  • Next, we'll review the definition of AI systems that are in scope of the new EU AI Act and how it distinguishes AI-enabled applications from traditional apps.
  • Then we'll explore the security implications of EU AI regulations and what they mean for security teams.
  • We'll also summarize the implementation timeline, detailing when the EU AI Act's provisions go into effect.
  • Finally, we'll share how Cobalt's LLM pentesting services can help you achieve compliance with the EU AI Act.

Let's begin with an executive summary of the AI Act:

Overview of EU AI Regulations

A high-level summary of the AI Act breaks down into four main points:

  • The AI Act groups AI into risk-based categories, with the highest-risk categories subject to the strictest regulations.
  • The heaviest responsibilities fall on developers of high-risk AI systems, called "providers".
  • Professional users of high-risk AI systems, called "deployers", also bear lesser responsibilities.
  • Providers of AI models with general applications (general purpose AI models or GPAIs), such as large language learning (LLM) models, have obligations covering technical documentation, use instructions, copyright compliance, and training content disclosure.

Let's unpack these highlights:

AI Risk Categories

The AI Act defines risk categories in order of descending risk, with regulatory requirements decreasing as risk goes down. In a similar fashion to vulnerability severity levels, the act distinguishes four risk groupings for non-GPAI AI systems:

  • Unacceptable risk: AI systems, which are strictly prohibited except for specific exceptions, such as law enforcement or terrorism prevention
  • High-risk: AI systems, the focus of the Act's provisions, subject to heavier regulation
  • Limited-risk: AI systems, subject to lighter transparency requirements
  • Minimal risk: AI systems, not subject to regulation

Unacceptable-risk AI Systems

Unacceptable risk applications involve AI systems used for purposes such as deception, manipulation, or discrimination. Examples include:

  • Systems that manipulate end-users' emotions and behavior
  • Social scoring systems that use profiling to discriminate against users

High-risk AI Systems

High-risk AI systems pose risk to end-users' health, safety, or fundamental rights. They include eight main categories of applications:

  • Biometrics
  • Critical infrastructure
  • Essential public and private services
  • Education
  • Employment
  • Law enforcement
  • Judicial and political administration
  • Immigration

Limited-risk AI Systems

Limited-risk AI systems directly interact with individual end-users ("natural persons"). Examples include:

  • Deepfake audios and videos
  • Biometric recognition systems
  • Emotion recognition systems
  • Chatbots

Minimal-risk AI Systems

Minimal-risk AI systems fall outside the AI Act's scope of regulation. Examples include:

  • AI-enabled spam filters
  • AI-enabled video games

Most AI applications currently on the EU market fall into the minimal-risk category.

GPAI Risk Categories

For GPAI systems, the AI Act distinguishes two risk groupings:

  • Systemic
  • Normal

A GPAI system falls into the systemic risk category if its training uses cumulative compute exceeding 1025 floating point operations (FLOPs). The European Commission also may decide to designate a GPAI system as posing systemic risk.

High-risk AI Provider Obligations

The EU AI Act imposes the strongest requirements on providers of high-risk AI systems. High-risk AI systems and their providers must register with the EU's database and meet detailed compliance obligations. These include:

  • Establishing a risk management system to govern the high-risk application's entire lifecycle
  • Conducting data governance to ensure that datasets used for systems training, validation, and testing are relevant, representative, accurate, and complete
  • Creating technical documentation to show compliance and equip authorities to evaluate compliance
  • Designing automated record-keeping into AI systems to capture events relevant for identifying national level risks and substantial system modifications
  • Providing instructions to support downstream deployers in achieving compliance
  • Designing AI systems so deployers can exercise human oversight
  • Designing systems to achieve sufficient accuracy, robustness, and security
  • Establishing quality management systems to support compliance

Multiple authorities possess authority to enforce these obligations, including EU member states' notifying and market surveillance authorities and an AI Office within the European Commission. When market surveillance authorities detect non-compliance or insufficient compliance to prevent risk, they can compel corrective action or removal of products from the market.

Penalties for non-compliance range from the higher of 35,000,000 euros or up to 7% of company global annual turnover. Penalties for misleading authorities range from the higher of 7,500,000 euros or up to 1% of company global annual turnover.

High-risk Deployer Obligations

Deployers of high-risk AI systems face lesser obligations than providers. These include:

  • Using the systems according to provider instructions
  • Assigning human oversight
  • Ensuring input data relevance
  • Monitoring system operation and immediately notifying providers and authorities of risks
  • Keep system-generated logs for at least six months
  • Informing workers before using high-risk systems
  • Avoiding use of systems not registered in EU databases
  • Complying with data protection assessments
  • Cooperating with authorities

Like providers, deployers can be subject to non-compliance penalties.

Limited-risk Provider and Deployer Obligations

Providers and deployers of limited-risk AI systems must meet transparency obligations. These include:

  • Informing end-users when they're interacting with AI systems, except when obvious or when the AI system supports an application such as crime detection
  • Marking artificially generated content such as deepfakes as synthetic
  • Informing users of emotional recognition or biometric classification, except when used for legal purposes
  • Disclosing AI-altered content, except when used for legal, artistic, or satirical purposes

Non-compliance with transparency obligations can incur penalties.

GPAI Model Provider Obligations

Most general purpose AI model providers must meet four general requirements:

  • Creating and maintaining technical documentation covering training and testing processes and results evaluations, to be provided to authorities upon request
  • Creating, maintaining, and publishing technical documentation to AI system providers who intend to integrate the GPAI into their systems
  • Implementing policies for compliance with EU law on copyrights and related rights
  • Producing and publishing detailed summaries of content used to train GPAI models

The first two requirements can be relaxed for certain free and open-source GPAI providers.

Providers of high-risk GPAI models also must meet additional requirements. These include:

  • Evaluating models using standard protocols
  • Assessing and mitigating systemic risks
  • Tracking and documenting serious incidents and reporting them to the AI Office and national authorities
  • Providing sufficient cybersecurity for AI models and their physical infrastructure

The EU is developing a standard for providers to demonstrate compliance with these obligations. In the meantime, providers can demonstrate compliance by using codes of practice or other means.

Difference between Traditional Applications and AI-enabled

To establish which applications fall under its scope, the AI Act lays out criteria to define AI systems and distinguish them from traditional software systems. Several key characteristics mark AI systems:

  • Autonomous: designed to operate with varying levels of autonomy, exhibiting independence of human input and capability of operating without human intervention
  • Self-learning: capable of adapting after deployment by using self-learning capabilities and changing the system while in use
  • Inferential: capable of pursuing explicit or implicit objectives by using input to infer outputs such as predictions, content, or recommendations that can influence virtual or physical environments

These characteristics distinguish AI-enabled systems from traditional software applications that operate within programmed boundaries without capacity for self-learning or inference.

Security Implications of EU AI Regulations

The EU AI Act places responsibility on cybersecurity professionals and software application teams for helping companies ensure compliance. Team leaders can take several key steps to promote compliance with the AI Act:

  • Catalog risks: review existing software ecosystems to identify any AI systems and categorize them in terms of the EU AI Act's risk categories
  • Establish standards: codify best practices for aligning AI model development and deployment with EU requirements while sustaining scalability
  • Close gaps: identify noncompliance gaps and implement corrective actions

Cybersecurity and software application teams tasked with AI Act should prioritize testing for known AI vulnerabilities. The Open Worldwide Application Security Project (OWASP) has developed an AI security matrix that classifies vulnerabilities systematically by type and impact, along with controls for reducing risks. Broadly, threats and their corresponding controls fall into two major groupings:

  • Development-time threats
  • Runtime threats

Development-time threats may target:

  • AI data, using means such as leaks or tampering (poisoning)
  • AI models, using means such as theft or tampering

Runtime threats include:

  • AI input leaks that compromise data confidentiality, following upon threats that emerge through use such as evasion of security systems, theft of AI models, or sensitive data disclosure
  • AI output attacks that pass on output containing injection attacks
  • Conventional security threats that exploit access controls or plug-in vulnerabilities, such as SQL injection and password guessing

To minimize these threat categories, OWASP recommends implementing five key security strategies:

  • AI governance: establishing AI governance processes and integrating these into information security and software lifecycles
  • Technical IT security controls: supplementing standard controls with AI-adapted controls
  • Data science security controls: applying controls both during development and during runtime
  • Data minimization: limiting amount of data at rest and in transit and duration of storage time in both development and runtime
  • Control behavior impact: monitoring unwanted behavior

These guidelines apply to all AI systems.

GPAI-specific Security Implications

GPAI models and systems require special considerations. The OWASP Top 10 for Large Language Model Applications flags the following vulnerabilities as priorities:

  1. Prompt injections: misusing natural language prompts to issue malicious instructions
  2. Insecure output handling: passing LLM output to other system elements with insufficient validation, sanitization, and handling. Read more about prompt injection attacks.
  3. Training data poisoning: providing tampered raw text to machine learning programs. Explore more on the topic of training data poisoning.
  4. Model denial of service: overloading LLMs with input demanding high resource consumption.
  5. Supply chain vulnerabilities: importing risks from software and data supply chains.
  6. Sensitive information disclosure: manipulating LLMs to reveal confidential information. Read more about secure output handling.
  7. Insecure plug-in design: exploiting plug-ins that are called automatically without execution controls and input validation.
  8. Excessive agency: exploiting LLM-based systems' ability to act as agents in operating on other systems and performing actions in response to input prompts or LLM outputs. Explore more about an excessive agency vulnerability. 
  9. Overreliance: relying on LLM output without oversight controls. Explore more about overreliance vulnerabilities.
  10. Model theft: exploiting vulnerabilities to escalate privileges and copy or exfiltrate models. Read more about LLM model theft.

These vulnerabilities can be addressed in part by applying AI pentesting procedures that assess risks such as prompt injections, jailbreaks, and insecure input handling.

When Does the EU AI Act Take Effect?

The EU AI Act was published on July 12, 2024 and went into effect on August 1, 2024. Enactment triggered an implementation timeline of compliance deadlines that will unfold over a 36-month period. Key dates include:

  • February 2, 2025: Prohibited AI restrictions take effect.
  • August 2, 2025: Enforcement begins for regulations covering notifying bodies, GPAI models, governance, confidentiality, and penalties.
  • June 2, 2026: The remainder of the AI Act goes into effect, including requirements for high-risk AI systems, with certain exceptions for high-risk AI systems used as product safety components.
  • August 2, 2027: Remaining rules for high-risk AI systems used as product safety components go into effect.

This summary covers key dates for the Act's implementation. Other deadlines apply specific to the European Commission, member states, providers, deployers, operators, and large-scale IT systems. Updates will be announced by official EU bodies.

How Cobalt Can Help

Security teams now have 6 to 36 months, from August 2024, to meet EU AI Act compliance deadlines, and the clock is ticking. To help you achieve compliance quickly and prevent vulnerabilities before they're exploited by attackers, Cobalt offers next-generation AI and LLM penetration testing services.

We've crafted our pentesting services specifically to address the complexities and risks of AI-enabled software. To help you protect your organization against today's AI and LLM vulnerabilities, we lend you the expertise of our pentesting community, guided by a core of 30 experienced experts who work directly with security authorities and contribute to OWASP standards. Our experts help your team implement systematic, consistent coverage against today's top threats. We help you test for prompt injection attacks, jailbreaks, insecure output handling, and other common threats.

Our user-friendly setup process and streamlined testing methods let you schedule tests without the hassle of negotiating custom scoping and statements of work. Contact our team for a demo to get started with AI and LLM pentesting today.

SANS AI Survey Report 2024 Cover Image

Back to Blog
About Kyle Schuster
Kyle Schuster is the Senior Enterprise Sales Engineer at Cobalt. He graduated with an associates degree in System and Networking management. With nearly 10 years of technical experience, he helps bring to life Cobalt's mission to transform traditional penetration testing with the innovative Pentesting as a Service (PtaaS) platform. Kyle partners with customers to maximize their success using a modern security testing platform. He also provides valuable insights to guide future product releases within the Cobalt Offensive Security Testing Platform. More By Kyle Schuster