
What exactly is an AI system under the EU AI Act?

This is the starting point for anyone working on AI compliance in Europe. Why? Because it defines whether your software falls under the scope of the regulation.




The legal definition of an AI system

Article 3(1) of the EU AI Act defines an AI system as:


A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.


Under this definition, it's not always clear which types of algorithms are included. Does it apply to all machine learning models? Only deep learning? What about expert systems or simple rule-based logic?


On February 6, 2025, the European Commission published Guidelines on the definition of an AI system. These aim to help providers and other stakeholders determine whether their software qualifies, and how to apply the Act effectively.


Keep in mind that these guidelines are not legally binding. They are expected to evolve with real-world application, new use cases and legal interpretation.


What are the key elements?

As you might have noticed, the definition is intentionally broad. The reason is that Europe needs flexibility to keep up with rapid technological developments in the field. Thus, each system must be assessed based on its specific characteristics.


The definition includes seven key elements:


The 7 elements of an AI system (EU AI Act definition):

1. Machine-based: carries out computations
2. Autonomy: independent operation capability
3. Adaptiveness: learning from experience
4. Objectives: explicit or implicit goal-driven
5. Inferencing: generates outputs from inputs
6. Outputs: predictions, content, recommendations, or decisions
7. Impact: influences environments


Let's shed some light on what they actually mean.

What makes a system machine-based in the context of AI?

An AI system is considered machine-based when it is developed with, and operates on, machines comprising hardware and/or software components. In other words, all core functions of the AI, such as model training, data processing, predictive modeling and automated decision-making at scale, depend on computational processes.


The term machine-based doesn't just refer to traditional computers. It also includes new types of machines like quantum computers or even biological systems, as long as they can process information and perform calculations like a computer. What matters most is that the system can carry out computations, no matter how it's built.


What does it mean that AI systems are designed to operate with varying levels of autonomy?

AI systems can work with different levels of independence from human control. This depends on three main factors:


1. Inference capability

Autonomy and inference are connected. AI systems must be able to make decisions, predictions, or recommendations on their own. This decision-making ability is key to their autonomy.

2. Human intervention or involvement

Human involvement and intervention can be direct (e.g., manual control) or indirect (e.g., monitoring automated processes). The EU AI Act excludes systems that are designed to operate with full manual human involvement and intervention.

3. Interaction with the environment

Autonomy also depends on how an AI system interacts with its environment. It's not just about the technology (e.g., machine learning) but how it works independently in real-world situations.


One example of autonomy is the Netflix or Amazon recommendation engine. These systems rely on human input, such as your viewing history, but they generate recommendations from that data autonomously, suggesting content you never specifically requested.
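The kind of autonomy described above can be illustrated with a minimal content-based recommender sketch. The catalogue, genre tags, and scoring rule are invented purely for illustration: once the viewing history is supplied, the system produces suggestions with no further action from the user.

```python
# Sketch of an autonomous recommendation step: the user supplies only a
# viewing history; the system infers suggestions on its own.
# Catalogue and genre tags are invented for illustration.
CATALOGUE = {
    "Space Saga": {"sci-fi", "adventure"},
    "Courtroom Drama": {"drama", "legal"},
    "Robot Uprising": {"sci-fi", "action"},
    "Baking Show": {"reality", "food"},
}

def recommend(history: list[str], top_n: int = 2) -> list[str]:
    """Score unseen titles by genre overlap with the user's history."""
    seen_genres: set[str] = set()
    for title in history:
        seen_genres |= CATALOGUE.get(title, set())
    unseen = [t for t in CATALOGUE if t not in history]
    # Rank unseen titles by how many genres they share with the history.
    ranked = sorted(unseen, key=lambda t: len(CATALOGUE[t] & seen_genres),
                    reverse=True)
    return ranked[:top_n]

print(recommend(["Space Saga"]))  # "Robot Uprising" ranks first (shares sci-fi)
```

The point is not the toy scoring rule but the division of labour: the human contributes data indirectly, while the system decides what to suggest.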

What does it mean for an AI system to be adaptive?

Adaptiveness refers to an AI system's ability to learn and adjust its behavior over time. This means the system may change how it operates based on new data, and as a result, it could produce different outcomes for the same inputs after adapting.

Adaptiveness in action (illustrative accuracy figures):

1. Initial state: the system starts with base knowledge and initial parameters (accuracy: 85%)
2. Learning phase: it processes new data and feedback to improve performance (accuracy: 92%)
3. Adapted state: behavior is refined based on learned patterns (accuracy: 97%)


However, it's important to note that not all AI systems need to be adaptive or self-learning after deployment. A system can still qualify as an AI system even if it cannot learn or change its behavior automatically once it has been set up: the ability to learn or identify new patterns in data is optional, not required.
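Adaptiveness in the sense above can be sketched with a deliberately trivial online "model" (invented for illustration) whose output for the same input changes after it learns from new observations:

```python
# Sketch of adaptiveness: a trivial online model whose prediction for the
# same input changes after it incorporates new data at deployment time.
# The running-mean "model" is invented purely to illustrate the concept.
class RunningMeanPredictor:
    def __init__(self) -> None:
        self.total = 0.0
        self.count = 0

    def predict(self, _x) -> float:
        # Before any learning, fall back to a default of 0.0.
        return self.total / self.count if self.count else 0.0

    def update(self, observed: float) -> None:
        # Adapt: fold the observed outcome into the model's state.
        self.total += observed
        self.count += 1

model = RunningMeanPredictor()
before = model.predict("same input")  # 0.0
model.update(10.0)                    # learning after deployment
after = model.predict("same input")   # 10.0: same input, new output
print(before, after)
```

A non-adaptive system would return the same value both times; it can still be an AI system under the Act, as the paragraph above notes.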


What are AI system objectives, and how do they work?

AI systems are designed to achieve specific goals, known as objectives. These objectives can be either explicit or implicit:

Explicit objectives

These are clearly defined goals directly programmed by the developer, such as optimizing a cost function, probability, or reward. For example, an AI system might be tasked with minimizing errors or maximizing efficiency.

Implicit objectives

These goals are not directly stated but can be inferred from the system's behavior or the assumptions built into it. These objectives may emerge from how the AI interacts with data or its environment.

Important note: The AI Act (Recital 12) clarifies that the objectives of an AI system might not always align with its intended purpose. While the system's objectives are focused on internal tasks (e.g., accuracy, performance), its intended purpose relates to the external context—such as the specific role it's meant to play in a business setting.


Example: A corporate virtual assistant AI system might aim to answer questions accurately and with few errors (its objectives), but its intended purpose could be to support a specific department's work by following certain guidelines (e.g., document formatting) and integrating into broader workflows.
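An explicit objective of the kind described, such as minimizing a cost function, can be sketched as a few steps of gradient descent on a squared-error cost. The data points and learning rate below are invented for illustration:

```python
# Sketch of an explicit objective programmed by the developer: minimize
# the mean squared error of a one-parameter model y = w * x.
# Data points and learning rate are invented for illustration.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying relationship: y = 2x

def cost(w: float) -> float:
    """Mean squared error of predictions w * x against targets y."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0
for _ in range(200):
    # Analytic gradient of the cost with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # step toward lower cost

print(round(w, 3))  # converges to 2.0
```

Here "minimize the cost" is the explicit objective; the intended purpose (what the fitted model is used for in practice) is a separate question, as Recital 12 makes clear.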


What does it mean that an AI system must be able to "infer how to generate outputs"?

A core feature of an AI system is its ability to infer, that is, to determine how to produce an output based on the input it receives. This is what sets AI apart from traditional software, which follows only fixed, human-defined rules.


According to Recital 12 of the AI Act, inference refers to the process by which an AI system produces predictions, content, recommendations, or decisions that can influence physical or virtual environments. It can also involve the system deriving models or algorithms from data. Inference can happen during the building phase of the AI system or during its use.


To understand how an AI system "infers how to generate outputs," we must look at the techniques used during its building phase. These techniques define whether a system qualifies as an AI system under the AI Act:


Machine Learning (ML) approaches
Supervised Learning

Learns from labeled datasets (input-output pairs)

  • Email spam filters
  • Legal doc classification
  • Credit scoring
Unsupervised Learning

Discovers patterns in unlabeled data

  • Customer segmentation
  • Legal document clustering
  • Anomaly detection
  • Fraud rings detection
Self-Supervised Learning

Generates its own training labels from raw data

  • LLM predicting next word
  • Image models predicting next pixel
  • Autoencoders
Reinforcement Learning

Learns through trial, error and reward feedback

  • LLMs post-training alignment
  • Recommendation algorithms
  • Policy gradient for robotics
Deep Learning

Uses neural networks to learn directly from raw data

  • GenAI
  • Voice transcription
  • Facial recognition
  • Multimodal review

Logic & knowledge-based approaches

Applies encoded rules and expert logic — no learning involved.

  • Code linter
  • Expert systems in healthcare (e.g., clinical decision support)
  • Events and alert trigger rules
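To make the contrast between the two families concrete, here is a minimal sketch (keywords, examples, and scoring are invented for illustration). The first filter encodes human-defined rules directly, as in a logic-based system; the second derives its keyword weights from labelled training data, as in supervised machine learning:

```python
# Logic/knowledge-based: rules fixed by a human, no learning involved.
def rule_based_is_spam(text: str) -> bool:
    return any(phrase in text.lower() for phrase in ("free money", "winner"))

# Learned: derive per-word spam scores from labelled training examples.
def train_word_scores(examples: list[tuple[str, bool]]) -> dict[str, float]:
    scores: dict[str, float] = {}
    for text, is_spam in examples:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0.0) + (1.0 if is_spam else -1.0)
    return scores

def learned_is_spam(text: str, scores: dict[str, float]) -> bool:
    # Classify by the summed learned scores of the words present.
    return sum(scores.get(w, 0.0) for w in text.lower().split()) > 0

examples = [("claim your prize now", True),
            ("meeting notes attached", False),
            ("prize draw winner", True)]
scores = train_word_scores(examples)
print(learned_is_spam("prize inside", scores))  # True: inferred from data
print(rule_based_is_spam("prize inside"))       # False: no rule matches
```

Both filters generate outputs from inputs, but only the second infers its decision logic from data; the first applies rules encoded entirely by a human.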

What falls outside the scope of an AI System?

Traditional Software

Systems based solely on human-defined rules for automated operations. No inference, learning, or modelling involved.

Mathematical Optimization

Systems using ML to accelerate traditional methods without autonomous decision-making.

  • ML-accelerated physics simulations
  • Satellite bandwidth management
  • Function approximation models

Basic Data Processing

Systems using explicit, fixed rules without AI techniques.

  • Spreadsheets & SQL queries
  • Simple statistics software
  • Opinion poll software

Classical Heuristic Systems

Rule-driven systems without dynamic learning capabilities.

  • Chess programs with minimax
  • Trial-and-error systems
  • Pattern recognition systems

Simple Prediction Systems

Basic statistical systems serving as baselines for ML.

  • Stock price averaging
  • Sales demand estimation
  • Support ticket resolution prediction

These systems enhance efficiency but lack the key characteristics of AI systems: they don't make autonomous decisions, learn patterns beyond narrow tasks, or exhibit the adaptability and generalization typical of modern AI.


In what ways can AI-generated outputs impact physical or digital environments?

For a system to qualify as an AI system, it must infer how to generate outputs that influence environments.

Predictions: estimates of unknown values based on known inputs
  • Forecasting traffic in self-driving cars
  • Predicting customer churn risk

Content: creation of new material (text, images, video, etc.) by the system
  • Generative AI like GPT
  • Image/video synthesis

Recommendations: suggestions based on user data, behaviour, or context
  • Recommending products
  • Suggesting actions
  • Potential hire recommendations

Decisions: autonomous conclusions or actions taken by the system
  • Credit scoring automation
  • Autonomous driving decisions

How do AI system outputs influence environments?

Outputs of an AI system can influence both physical and virtual environments. This means that AI systems are not passive; they actively impact the environments where they are used.


In physical environments, AI systems can directly interact with and manipulate real-world objects. For instance, AI-powered robotic arms in manufacturing plants can precisely assemble products or handle materials.


In virtual environments, AI systems can transform digital spaces by processing data streams, automating software operations, and modifying virtual systems. This includes everything from content recommendation algorithms to automated trading systems.



Ready to explore AI safety and governance in depth? Join the Contrasto AI Club, a community of professionals dedicated to advancing responsible AI development and implementation.