EU Artificial Intelligence Act | Deep Dive
1. Introduction
2. Scope of the AI Act
3. Single Purpose AI Systems are differentiated by the associated risk
4. Roles and Obligations according to the Risk Categories
5. General-Purpose AI follows a different risk categorization scheme
6. Regulatory Governance and Enforcement
7. The AI Act grants easements for “sandbox” testing facilities
8. Non-Compliance will come at a high price – significantly more so than GDPR
9. AI Act will come into force step-by-step
10. Glossary
Get in touch
1. Introduction
One of the key political priorities of the EU Commission for the 2019–2024 term was creating “A Europe fit for the digital age.” This ambitious agenda has led to the tabling of over 10 significant digital regulations, addressing areas such as the data economy, cybersecurity, and platform regulation. The AI Act is a crucial puzzle piece within this complex framework of EU digital regulation, which is striving to establish a comprehensive framework that addresses the complexities and potential risks associated with AI systems. While the following article focuses on the AI Act, it should always be viewed within the broader context of the entire EU digital regulatory landscape.

The AI Act introduces a framework aimed at regulating the deployment and usage of AI within the EU. It establishes a standardized process for single-purpose AI (SPAI) systems’ market entry and operational activation, ensuring a cohesive approach across EU Member States. The AI Act, a product safety regulation, adopts a risk-based approach by categorizing AI systems based on their use case, thereby establishing compliance requirements according to the level of risk they pose to users. This includes the introduction of bans on certain AI applications deemed unethical or harmful, along with detailed requirements for AI applications considered high-risk to manage potential threats effectively. Further, it outlines transparency guidelines for AI technologies designated with limited risk. With the risk-based approach, AI ethics are at the heart of the AI Act. Its focus on principles aims to leave the Act adaptable to as yet unknown iterations of AI technologies. However, the public use of general-purpose AI technology prompted the legislator to differentiate between single-purpose AI and general-purpose AI. The AI Act regulates the market entry for general-purpose AI models, regardless of the risk-based categorization of use cases, setting forth comprehensive rules for market oversight, governance, and enforcement to maintain integrity and public trust in AI innovations.

Given its abstract nature, the legislation contains areas that are yet to be fully defined. These are expected to be elaborated on through delegated and implementing acts, guidelines by the EU institutions, as well as harmonized standards developed by European Standardization Organizations. As a result, businesses can expect to receive more detailed guidance in the near future.

The AI Act is being published in the Official Journal of the European Union in June or July 2024, with its entry into force 20 days thereafter. This will mark the beginning of a phased implementation process to put the various rules and obligations of the AI Act into practice. For businesses, this means there is now a critical window to prepare for compliance.
2. Scope of the AI Act
In this chapter, we look at the legal scope of the AI Act and
the technology defined as AI according to the regulation. For
entities in the public sector and businesses across the EU,
understanding these aspects is crucial for ensuring compliance
and fostering AI innovations that respect ethical standards and
societal values.
3. Single Purpose AI Systems are differentiated by the associated risk
The AI Act focuses on how AI is used rather than the technology itself, employing a risk-based framework. This means obligations intensify with the level of risk to users. The Act identifies four specific risk categories, each with its corresponding set of requirements: unacceptable risk, high-risk, transparency risk and other risk. Companies are expected to assess which of the four risk categories the application of their AI systems falls into, as shown in figure 2.
Fig. 2 – AI Act risk levels, with four layers of obligations for entities. [Pyramid figure; recoverable example: transparency risks – AI systems with specific transparency obligations (e.g., chatbots, deepfakes), permitted but subject to transparency obligations.¹]
¹ May still bear regulatory, business and security risks.
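To make the triage concrete, the following minimal Python sketch maps example use cases to the four risk levels. The mapping is our own simplified illustration, not the Act’s legal tests (those are the Art. 5 prohibitions, the Annex I/III high-risk lists, and the transparency provisions); the example use cases are assumptions for demonstration.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk: conformity assessment required"
    TRANSPARENCY = "permitted, subject to transparency obligations"
    OTHER = "no mandatory AI Act obligations (voluntary codes of conduct)"

# Simplified example mapping -- the legal tests live in the Act itself.
EXAMPLES = {
    "social scoring by public authorities": RiskLevel.UNACCEPTABLE,
    "CV screening for recruitment": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.TRANSPARENCY,
    "spam filter": RiskLevel.OTHER,
}

def classify(use_case: str) -> RiskLevel:
    """Return the illustrative risk level for a known example use case."""
    return EXAMPLES.get(use_case, RiskLevel.OTHER)

for case in EXAMPLES:
    print(f"{case}: {classify(case).value}")
```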
3.1 Certain applications which severely impact the rights of individuals are outright banned
Recognizing the advantages of AI, policymakers balanced its possibilities with the core of EU principles, being aware that some AI applications might threaten fundamental values such as human dignity, freedom, equality, democracy, data privacy, and the rule of law. To safeguard these fundamental values, the AI Act prohibits specific AI applications. The ban on such systems will begin after a 6-month grace period following the AI Act’s entry into force.
Facial recognition databases based on untargeted scraping of facial images: Creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
3.2 The focus of the AI Act is squarely set on “High-Risk AI Systems”

3.2.1 Identifying High-Risk AI Systems
The core focus of the AI Act revolves around high-risk AI systems, as most obligations and protective measures outlined in this act refer to high-risk AI applications. High-risk AI systems are those that are deemed to negatively impact safety or fundamental rights of EU citizens and, given that presumed risk, need to be assessed before being put on the market and also throughout their lifecycle.

There are two potential pathways for AI systems to be designated as high-risk under the AI Act. The first one involves inclusion in the specific applications listed in Annex III of the AI Act, while the second one pertains to products mentioned in Annex I. In either scenario, the entity using the AI will have the responsibility to self-determine whether its products or AI systems fall within these defined categories. Given the highly individual nature and rapid evolution of AI systems, these detailed assessments will likely occur on a case-by-case basis.

Notably, the list of high-risk applications may be subject to extensions and frequent updates. For example, the EU Commission retains the right to include new AI applications on the list of high-risk AI applications whenever those are deemed to pose a risk to health, safety, or fundamental rights of EU citizens.

“The EU AI Act represents an important step forward in the governance of artificial intelligence, offering organizations a clear framework for deploy[…] operator. By following existing structures for managing compliance and security risks, organizations can navigate this new regulatory landscape with a robust governance framework and a proactive approach to risk management.”

Is all high-risk really high-risk?
To ensure that only AI presenting a demonstrably elevated risk is classified as high-risk, the EU has introduced opt-out exceptions for providers of certain AI systems in high-risk fields. AI systems that at first glance fall into the high-risk category but do not pose significant risks to health, safety, or fundamental rights may, under justifications, be excluded from the high-risk AI category. Consequently, they are not obliged to fulfill all associated obligations. This exemption applies where:
• The AI system is designed for a specific, narrow procedural task.
• The AI system is designed to enhance the outcome of a human activity that has already been completed.
• The AI system is designed to identify decision-making patterns or deviations from prior decision-making patterns.
• The AI system is designed to perform preparatory tasks for assessments outlined in the high-risk AI use cases listed in the law.
However, AI that is profiling will always be considered high-risk. In order to opt out, operators have to provide documentation and register in advance.
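For orientation, the designation test plus the opt-out can be expressed as a small decision function. This is a sketch of our simplified reading: the boolean flags stand in for case-by-case legal assessments, and the documentation and registration duties are omitted.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    annex_iii_use_case: bool        # listed in Annex III (e.g., recruitment)
    annex_i_safety_component: bool  # safety component of an Annex I product
    performs_profiling: bool        # profiling of natural persons
    narrow_procedural_task: bool    # simplified opt-out ground
    improves_completed_human_work: bool  # simplified opt-out ground

def is_high_risk(s: AISystem) -> bool:
    """Simplified mirror of the two designation pathways plus the opt-out."""
    if not (s.annex_iii_use_case or s.annex_i_safety_component):
        return False                 # neither pathway applies
    if s.performs_profiling:
        return True                  # profiling is always high-risk
    exempt = s.narrow_procedural_task or s.improves_completed_human_work
    return not exempt                # opt-out requires documentation/registration

cv_screening = AISystem(True, False, False, False, False)
print(is_high_risk(cv_screening))    # True
```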
High-risk use cases according to Annex III include, among others:

Biometrics: Remote biometric identification systems, but only with prior authorization for exceptional circumstances listed in Annex II; also biometric categorisation and usage for emotion recognition

Employment, workers management and self-employment: Used for recruitment, for instance analysis & filtering of job applications or evaluation of candidates; also deciding on promotions, termination of work-related contractual relationships or allocation of tasks; used to monitor & evaluate the performance & behavior of a worker

Essential public & private services: Evaluation of eligibility for essential public assistance benefits & services; risk assessment & pricing in case of life & health insurance; evaluation of creditworthiness or establishment of credit score; evaluation & classification of emergency calls as well as establishment of emergency priorities

Administration of justice and democratic processes: Research and interpretation of facts & law; application of law; influence of voting behavior & outcome
3.2.3 AI systems that are Safety Components classify as High-Risk
The AI Act places particular emphasis on AI-embedded safety products and the associated potential risks. The Act specifies in Annex I sectors that are considered high-risk due to their importance for the health and safety of persons when AI is used in safety components. Operators in these sectors must clarify whether an AI regulated by the Act is integrated into a product, constituting a safety component, and whether it is subject to third-party conformity assessment. The rules for high-risk applications according to Annex I will apply 36 months after entry into force. The AI Act regulates the following areas:
Civil aviation²: All airports or parts of airports that are not exclusively used for military purposes as well as all operators, including air carriers, providing services at airports or parts of airports that are not exclusively used for military purposes³

Appliances burning gaseous fuels: Appliances burning gaseous fuels used for, amongst others, cooking, refrigeration, air-conditioning, space heating, hot water production, lighting or washing; also, all fittings that are safety devices or controlling devices incorporated into an appliance

Motor vehicles and trailers; systems, components and separate technical units intended for such vehicles²: Motor vehicles and their trailers, including autonomous driving³

Civil aviation, European air space, aerodromes²: The design and production of products, parts and equipment to control aircraft remotely by a natural or legal person, as well as the design, production, maintenance and operation of aircraft³

Medical devices: Medical devices for human use and accessories for such devices as well as clinical investigations concerning such medical devices and accessories³

In vitro diagnostic medical devices: In vitro diagnostic medical devices for human use and accessories for such devices³

Toys: Products designed or intended for use in play by children under 14 years of age (e.g., connected toys and IoT devices)

Lifts: Lifts permanently serving buildings and constructions for mainly the transport of persons with or without goods

Equipment and protective systems intended for use in potentially explosive atmospheres: Equipment and protective systems intended for use in potentially explosive atmospheres as well as components incorporated within the equipment or protective system³

Radio equipment: Any kind of radio equipment, i.e., anything connected via radio waves (e.g., WiFi, Bluetooth, 5G in laptops, phones, IoT devices)

² The sectors mentioned in Section B are subject to specific articles of the EU AI Act and may experience differences in application.
³ Amongst others.
3.3 Rules for Transparency and Other Risks
For AI applications with limited risk to individuals, the main requirement is to follow certain rules on transparency. However, while some AI systems may only need to adhere to transparency obligations, it is important to note that AI systems in other risk categories are also required to comply with these transparency rules in addition to their specific regulatory requirements. An example of a limited-risk AI system is AI-based chatbots, which require explicit notification before use, ensuring users are aware that their interaction is with a machine and granting them the option to be redirected to human assistance.

AI systems in the remaining “other risk” category face no mandatory requirements; their operators can comply voluntarily with codes of conduct in accordance with ethical and trustworthy AI standards in the Union. The AI Office (see chapter 6) will support and promote the development of such codes of conduct, considering existing technical solutions and industry best practices.

It is important to bear in mind that AI systems with no obligations under the AI Act may still pose business and security risks, as well as regulatory obligations under other EU laws, that should not be disregarded.
4. Roles and Obligations according to the Risk Categories

Operators other than the original provider (such as importers, distributors or deployers) are themselves considered providers of a high-risk AI system in three cases:

1. If they associate their name or trademark with a high-risk AI system that has already been introduced to the market or put into service. However, contractual exemptions can be applied.
2. If they make substantial modifications to a high-risk AI system that has already been placed on the market, and it continues to pose a high risk in its new use.
3. If they alter the intended purpose of an AI system or general-purpose AI that was not originally classified as high-risk AI but becomes high-risk AI due to the new modifications.
4.2 Distinct responsibilities depending on the role of each stakeholder
The following illustration offers a preview of the obligations and tasks that different operators will be required to implement. Each obligation is delineated more comprehensively in the final legal text and may also undergo further detailing. Generally speaking, there is an obligation of mutual recognition between the Member States, so that the assessment done by one national authority has to be recognized by the other. However, each Member State may act upon potential violations.

Providers and deployers of all AI systems must ensure adequate AI literacy among their staff and relevant individuals, considering their technical expertise, experience, education, training, and the intended use of the AI systems, as well as the affected persons or groups. The AI Act emphasizes the importance of AI literacy, aiming to furnish AI system operators with the necessary knowledge and resources to make well-informed decisions about AI systems. This not only involves understanding the accurate application of technical elements during the development phase of AI systems but also extends to knowing the right measures to apply during its use and correct interpretation and usage of the output.
Importer & Distributor
• Verify whether the AI system is in line with the requirements and formalities in the AI Act
• Withdraw, recall or refrain from placing the AI system on the market if it is non-compliant
• Cooperate with competent authorities
• Keep conformity certifications for ten years
• Safety by design: Elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system
• Protective measures: Where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated

1. Pre-Market Phase
This includes a strategy for regulatory compliance, design control and verification, system examination, testing and validation of AI systems, and technical specification.

2. Post-Market Phase
Quality control, reporting of serious incidents, and a post-market monitoring system are all required.

3. Continuous Phase
This involves data management systems and procedures, the risk management system (RMS), communication with authorities, and document and record-keeping, including logging. Resource management, including security of supply, and an accountability framework are also included.
4.3 Two Types of Conformity Assessment under the AI Act
The European Union (EU) has established a New Legislative Framework (NLF) under which certain products must be evaluated before they can be sold in the market. This framework ensures that these products meet specific EU regulations and standards for safety, quality, and performance. As part of the NLF, the AI Act requires “conformity assessments” followed by a “declaration of conformity” as prerequisites for products to enter the market and demonstrate compliance with the respective obligations.

Under the EU AI Act, the conformity assessments for high-risk AI systems can be conducted by the providers themselves or with the support of third parties. For all high-risk AI applications listed in Annex III (e.g., employment, essential public and private services), providers may conduct a self-assessment based on internal controls and issue the declaration of conformity.

All AI systems listed in Annex I (e.g., aviation, automotive, and medical devices) must seek support from third parties. In this case, the conformity assessment must be performed by an accredited “notified body” suitable for the type of AI system being inspected. Notified bodies are conformity assessment bodies that have been notified by the notifying authority. If the AI system is deemed compliant by the notified body, the provider must issue a declaration of conformity.

Only providers of high-risk biometric systems have the option to conduct internal controls or opt for third-party assessment.

Providers who self-assess are presumed compliant if they adhere to harmonized standards. The Commission issued the standardization requests, which will include reporting and documentation deliverables to enhance AI system resource efficiency. These harmonized standards are expected before the application of the respective rules and are already in development, with CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization) leading the process. Providers of high-risk AI systems may benefit from a presumption of compliance with data and data governance obligations if the data used for training their AI systems accurately reflects the specific geographical, behavioral, contextual, or functional settings in which the systems are intended to be used. Under these conditions, providers are generally considered compliant with the obligations mentioned in Article 10, meaning they would not need to undergo the usual rigorous processes of validating and testing data sets for biases and unrepresentative training data.

Additionally, providers who have received a certificate or statement of conformity under a cybersecurity scheme pursuant to the EU Cybersecurity Act are presumed to be compliant with the cybersecurity obligations mentioned in Article 15.

All approved high-risk AI systems will be published in an EU-wide registry.
[Figure: conformity assessment flow. An identified high-risk AI system in the high-risk use cases proceeds via internal controls to the declaration of conformity; for biometric high-risk systems, a notified-body assessment is optional.]
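The route selection described above can be summarized in a short sketch. This is a simplified illustration of our reading of the two assessment types; it omits the documentation steps and the harmonized-standards presumptions.

```python
def conformity_route(annex: str, biometric: bool = False,
                     prefer_notified_body: bool = False) -> str:
    """Pick the assessment route for an identified high-risk AI system
    (simplified): Annex I -> notified body; Annex III -> internal controls,
    with biometric systems free to choose a notified body instead."""
    if annex == "I":
        return "third-party assessment by a notified body"
    if annex == "III":
        if biometric and prefer_notified_body:
            return "third-party assessment by a notified body (chosen option)"
        return "self-assessment based on internal controls"
    raise ValueError("not an identified high-risk AI system")

# Either route ends with the provider issuing the declaration of conformity.
print(conformity_route("III"))                      # internal controls
print(conformity_route("III", biometric=True,
                       prefer_notified_body=True))  # notified body
```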
5. General-Purpose AI follows a different risk categorization scheme
The regulation of general-purpose AI models, which are trained
on vast datasets and capable of performing a wide array of
tasks, proved to be the most contentious aspect of the AI Act
negotiations.
The AI Act adopts a risk-based approach, with high-risk AI systems subject to more stringent requirements. However, the generality and versatility of general-purpose AI make precise risk categorization challenging, as the intended purpose of downstream systems or applications incorporating these systems is often unclear.

To address this issue, the final version of the AI Act introduces a dedicated regime in Chapter V for providers of general-purpose AI models (“foundation models”), rather than the general-purpose AI systems themselves. An AI model is a core component of an AI system, used to make inferences from inputs to produce outputs. Model parameters typically remain fixed after the build phase concludes, making the risks posed by general-purpose AI models easier to estimate and regulate compared to those of complete AI systems. As models and systems are treated separately, a general-purpose AI model itself will not be classified as a high-risk AI system. However, a general-purpose AI system built upon a general-purpose AI model may still fall into one of the established risk categories. For general-purpose AI models, the European policymakers agreed on a two-tiered approach, which consists of obligations for providers of general-purpose AI models with and without systemic impact, i.e., models with high impact. A general-purpose AI model is classified as a high-impact model when it demonstrates a systemic risk through specific technical criteria. This is presumed if the cumulative compute power used during its training exceeds a certain threshold, currently set at 10^25 floating point operations (FLOPs). Alternatively, the Commission may classify it as such if advised by a scientific panel alert, indicating its potential for significant impacts. They use the assessment criteria listed in Annex XIII, which may be adjusted over time to keep pace with technological advancements through delegated acts adopted by the Commission. Providers of general-purpose AI models must adhere to certain standards. To facilitate compliance, the AI Office, in collaboration with relevant stakeholders such as civil society organizations, industry representatives, academia, downstream providers, and independent experts, will encourage and support the development of additional Union-level codes of practice. These codes of practice are voluntary for all companies using general-purpose AI but grant a presumption of conformity to anyone who applies them. The AI Office is tasked with drawing up these codes of practice, monitoring and evaluating them, and being the future recipient of implementation reports.

Providers of AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, whose parameters are publicly available, and which are not considered systemic risk, will have only limited obligations.

General-Purpose AI model means an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.

General-Purpose AI System means an AI application which is based on an underlying general-purpose AI model. This application has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.
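For orientation, the 10^25 FLOPs presumption can be checked against a rough training-compute estimate. The sketch below uses the common “compute ≈ 6 × parameters × training tokens” heuristic, which is our assumption and not part of the Act; actual compute accounting would follow whatever methodology the Commission specifies.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the AI Act

def training_compute_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return training_compute_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 presumption threshold.
print(presumed_systemic_risk(7e10, 1.5e13))  # False
```

Note that staying below the threshold does not settle the matter: the Commission may still designate a model as systemic-risk based on the Annex XIII criteria.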
All general-purpose AI models: Large models and systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, images or computer code, or conversing
• Drawing up and keeping up-to-date technical documentation for the AI Office and national authorities (as listed in Annex XI) and downstream providers (as listed in Annex XII)
• Protecting intellectual property rights, trade secrets and confidential business information
• Enabling understanding about the limitations and capabilities of the GPAI models
• Complying with EU copyright law and disseminating detailed summaries about the content used in training

General-purpose AI models with systemic risk: Foundation models trained with large amounts of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain
• Complying with all requirements applicable to all general-purpose AI models and systems
• Conducting model evaluations
• Assessing and mitigating systemic risks including their sources
• Conducting adversarial testing
• Keeping track of, documenting and reporting serious incidents to the EU Commission
• Ensuring sufficient cybersecurity protection
6. Regulatory Governance and Enforcement

Competence for enforcing the AI Act will be distributed between the newly established AI Office in the European Commission and supervisory authorities in the Member States.
Both the EU Commission and Member States have distinct responsibilities and work together to monitor and enforce the new rules for AI systems and general-purpose AI models. Whereas the EU Commission is mainly responsible for supervision of general-purpose AI models, the Member States’ authorities are responsible for enforcing the AI systems’ risk-based rules as well as coordinating the sandboxes on Member State level. The following chapter provides insights on the specific responsibilities, the governance structure and the interplay of EU Commission and Member States.

6.1 National Level – Member State Enforcement
The Member States create the market surveillance authority (agency level). The market surveillance authority is primarily tasked with enforcing the AI Act at national level.

The market surveillance authorities are responsible for ensuring that AI systems adhere to the prescribed standards and regulations. For example, the market surveillance authority will oversee the correctness of the conformity assessment conducted by high-risk AI providers. In the course of investigations, market surveillance authorities may take necessary actions such as accessing documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, and accessing the source code of high-risk AI. Providers of high-risk AI are obliged to cooperate with the authorities.

The notifying authority will also be responsible for assigning the conformity assessment bodies which, upon proper notification, can qualify as notified bodies (see chapter 4). The notified bodies must comply with several conditions to qualify as one. Such obligations include being established as a legal person under national law, fulfilling organizational requirements to fulfill their tasks, and being independent from the high-risk AI providers.

Additionally, each Member State will have to assign specific responsibilities and authorities to existing or newly established bodies dedicated to protecting fundamental rights concerning AI. These bodies must operate independently and impartially, ensuring that companies adhere to fundamental rights principles in AI development, deployment, and use.

6.2 European Level – EU Enforcement
To streamline and oversee the implementation of the Act, the EU set up the EU AI Office in February 2024, a new entity established by the EU Commission. It has a key role in the implementation of the AI Act. The AI Office is established as a Commission service embedded in DG CONNECT and thus holds more freedoms in its decision-making process and can act in a more dynamic manner. It will be composed of five main departments. Each department will be led by a director responsible for overseeing the implementation of the AI Act. The AI Office shall employ a total of 140 people, including technological experts, lawyers, and policy specialists. Currently, it has over 50 open tasks related to implementing the AI Act. The AI Act mentions a variety of aspects that shall be subject to further implementation by the AI Office through delegated and implementing acts. As the AI Act was kept intentionally on an abstract level, it is highly dependent on further clarification.

While delegated acts relate mostly to amendments to the legislative text, implementing acts are measures of individual application. For instance, the AI Office may modify the list of each Annex of the AI Act by means of a delegated act. A particular task, given the implementation timeline, is the establishment of concrete examples that constitute prohibited AI or specifically do not constitute prohibited AI. These steps aim to ensure the effective implementation of the AI Act and to specify the rules and concepts stipulated in the AI Act.

Moreover, the EU AI Office will be responsible for enforcing the AI Act obligations for general-purpose AI models. In this context, it will develop tools, methodologies, and benchmarks to evaluate the capabilities and reach of general-purpose AI models and identify models with systemic risks, in concert with academia and industry stakeholders. Last but not least, the EU AI Office will host a public registry listing all high-risk AI applications which entered the market.

Next to the AI Office, there are two more EU bodies that will also influence the enforcement of the AI Act. First, the Advisory Forum will be responsible for enhancing collaboration and ensuring comprehensive guidance on AI regulation. This forum comprises a diverse array of stakeholders, including industry experts, civil society representatives, academic scholars, and governmental officials. Appointed by the EU Commission, members of the Advisory Forum offer technical expertise and strategic insights to support the implementation of the AI Act. Secondly, the Scientific Panel of Independent Experts is responsible for advising and alerting the Commission on systemic risks of general-purpose AI.

6.3 Interplay of European and National Enforcement
Comprised of representatives from each Member State, alongside observers such as the European Data Protection Supervisor and the AI Office, the European Artificial Intelligence Board collaborates with relevant stakeholders to ensure consistent and effective application of the regulation. Assigned with tasks ranging from coordinating national competent authorities to issuing recommendations on regulatory matters, the Board plays a critical role in fostering cooperation, sharing expertise, and promoting a good understanding of AI across the EU. Moreover, to effectively address all relevant challenges surrounding the AI Act, the Board will be divided into different sub-committees focusing on, for example, the alignment of sectoral or national legislation. One can expect the different notifying bodies and market surveillance authorities of each Member State to participate in the different sub-committees and represent their respective interests.
[Figure: AI Act governance structure. At EU level, the European Commission hosts the AI Office (supporting the AI Act and enforcing general-purpose AI rules; strengthening development and use of trustworthy AI; fostering international cooperation; cooperating with institutions, experts and stakeholders) and the European Artificial Intelligence Board (Member States as main members; contributes to the coordination of national authorities; assists, advises and supports with opinions and expertise), with the European Data Protection Supervisor participating as observer. At national level, each Member State appoints a Notifying Authority and a Market Surveillance Authority, which report annually to the EU level. Each Member State must establish or designate at least one of each authority as a single point of contact, and the national authorities have to be independent and provided with adequate resources.]
7. The AI Act grants easements for “sandbox” testing facilities
Member States are mandated to establish AI regulatory sandboxes at the
national level within 24 months of the entry into force of the Regulation,
which is expected in Q3 2026. Member States can, however, establish
a joint sandbox or join an already established sandbox. Since the main
objective is to give all EU-based companies the option to participate in a
regulatory sandbox, equal access and equal coverage for the participating
Member States must be provided. Additionally, Member States have the
option to set up regional or local sandboxes. Hence, it is expected that
bigger states may set up several sandboxes to ensure regional or local
support for SMEs. Apart from that, the European Data Protection
Supervisor may also establish an AI regulatory sandbox at EU level.
7.1 AI Regulatory Sandboxes
AI regulatory sandboxes are controlled environments where operators of AI systems can develop, train, test, and validate AI systems before market deployment. They offer a safe space for experimentation, allowing for the exploration of AI applications under the supervision of competent authorities. In the spirit of improving the EU’s Innovative Initiative, regulatory sandboxes stand as a pioneering project, facilitating the development and testing of AI systems within a controlled environment. Additionally, the national competent authorities have to allocate sufficient resources to comply with the requirements mentioned in the AI Act. Each sandbox will have to submit annual reports on its activities, such as best practices, incidents, lessons learned and the set-up of the sandbox, to the EU AI Office.

Both public and private entities can join the sandboxes, after application, to test their AI systems against the obligations of the AI Act. Entities joining the sandbox are guided, supervised and supported in identifying risks relating to fundamental rights, health and safety. Furthermore, each participating entity should be given an exit report detailing the activities carried out in the sandbox and the related results and learning outcomes. This exit report will function as a document to demonstrate compliance with the regulation through the conformity assessment (presumption of conformity) and hence may be a competitive advantage for participating companies.

The AI regulatory sandboxes serve as catalysts for innovation in the AI landscape, offering a structured and supportive environment for the development and testing of AI systems while ensuring compliance with regulatory standards. More importantly, the exit reports for successful participants of regulatory sandboxes serve as a presumption of conformity for the necessary conformity assessment of high-risk AI systems.
8. Non-Compliance will come at a high price – significantly more so than GDPR
The AI Act’s penalty regime is structured based on the nature
of the violation, considering whether it involves unacceptable
systems, high-risk AI or general-purpose AI models, with fines
increasing according to the risk category. Simply put, the higher
the risk category, the higher the fine.
Member States are responsible for establishing rules concerning penalties and ensuring their enforcement. For example, each Member State has the discretion to determine the use of warnings and other non-monetary measures, if any. Furthermore, they must consistently consider the particular interests of SMEs and start-ups. National authorities are also mandated to assess the nature, gravity, and duration of each infringement, as well as whether the entity in question is a repeat offender, when determining the amount of each fine.

Fig. 7 – Fines for operators of AI Systems
1. Up to 35 m. EUR or, for companies, 7% of the global annual turnover (GAT), for non-compliance with the prohibitions
2. Up to 15 m. EUR or, for companies, 3% of the GAT, for infringements of obligations of high-risk AI
3. Up to 15 m. EUR or, for companies, 3% of the GAT, for infringements of obligations of general-purpose AI
4. Up to 7.5 m. EUR or, for companies, 1% of the GAT, for supplying incorrect, incomplete or misleading information

In each case, the higher option applies, unless pertaining to SMEs or start-ups. In addition to monetary fines, national supervisors may forcibly remove non-compliant AI systems from the market.
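In code, the ceiling for a given violation is simply the higher of the fixed amount and the turnover-based amount. This sketch assumes “GAT” means the global annual turnover and reads the SME exception as the lower of the two amounts applying, which is our interpretation of the text above.

```python
FINE_TIERS = {  # violation -> (fixed cap in EUR, share of global annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "gpai_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def fine_ceiling(violation: str, turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine: the higher of the two amounts, or the lower
    of the two for SMEs and start-ups (assumed reading of the exception)."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    pick = min if is_sme else max
    return pick(fixed_cap, turnover_share * turnover_eur)

# A company with EUR 2 bn global annual turnover violating a prohibition:
print(fine_ceiling("prohibited_practice", 2e9))  # 140000000.0, i.e. 7% of GAT
```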
9. AI Act will come into force step-by-step

Twenty days after being published in the Official Journal, the EU AI Act comes into force, marking the start of the official implementation period. However, not all obligations take effect simultaneously; some require immediate action, while others allow for a longer implementation period for operators to comply with the established requirements.

While most provisions will be implemented within the standard 24-month timeframe, some prohibitions and obligations will be enforced sooner, within 6 or 12 months from the Act’s entry into force. Others will have a longer implementation period of up to 36 months. The following illustration outlines key aspects that all operators in the EU market should keep in mind.

AI systems that were placed on the EU market before the entry into force of the AI Act or shortly after may not be directly affected by the EU AI Act or receive an extended implementation period, as stipulated in figure 9. Therefore, general-purpose AI that has entered the market before the entry into force of the AI Act or within the first 12 months after the entry into force has 36 months to implement the requirements of the EU AI Act. And high-risk AI that entered the market before the entry into force or within the first 24 months after the entry into force is not automatically subject to the AI Act. Only upon significant changes to the AI system will it have to apply the rules of the AI Act, though it remains to be seen what qualifies as significant changes and how strictly the Commission will apply this rule.
[Figure: phased implementation timeline from entry into force (August 2024): Member State governance, 3 months (Member States to designate national supervisor); unacceptable risk, 6 months (tentatively Q1 2025); general-purpose AI, Commission guidelines on high-risk AI and Member State information on supervisory contact, 12 months (tentatively Q3 2025); high-risk AI safety components (Annex I), 36 months (tentatively Q3 2027).]
[Figure 9: implementation periods for AI systems placed on the market before the entry into force of the EU AI Act: general-purpose AI, 2 years from entry into force; high-risk AI, not subject to the AI Act unless significant change of the AI system; large-scale IT systems in the area of freedom, security and justice, by 2030.]
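A compliance team could derive the key milestone dates directly from the entry-into-force date. The sketch below uses only the standard library and assumes 1 August 2024 as the entry-into-force date (consistent with the August 2024 shown in the timeline above); the month offsets follow the text and figure.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months (day clamped to the 1st for simplicity)."""
    year, month_index = divmod(d.year * 12 + (d.month - 1) + months, 12)
    return date(year, month_index + 1, 1)

MILESTONES = {  # months after entry into force, per the text and timeline figure
    3: "Member States designate national supervisors",
    6: "prohibitions on unacceptable-risk AI apply",
    12: "general-purpose AI rules apply",
    24: "most remaining provisions apply",
    36: "rules for high-risk safety components (Annex I) apply",
}

for months, label in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months).isoformat()}: {label}")
```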
The European Commission has recently initiated the AI Pact. This initiative is designed to support businesses in voluntarily complying with the AI Act ahead of its legal enforcement in the second quarter of 2026. The AI Pact serves as a collaborative platform, allowing companies to exchange ideas and strategies for adhering to the AI Act’s guidelines. Businesses are currently invited to show their interest in this pact, with a preliminary meeting for stakeholders scheduled for early to mid-2024. By participating, companies will pledge to conform to the AI Act and will detail their compliance efforts. These measures will be collected and made public by the Commission. The Commission’s role includes helping companies understand the AI Act, aiding in their preparation and adjustment, promoting knowledge exchange, and fostering trust in AI technologies.

Furthermore, the CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization) have commenced the process of operationalizing the AI Act through standards. For companies applying or planning to apply AI systems, a proactive approach is essential to guarantee compliance by the expected deadline; entities should have an implementation plan and start as early as possible.

Even if not all the technical details have been clarified yet, the AI Act gives a sufficient impression of the scope and objective of the future regulation. Companies will have to adapt many internal processes and strengthen risk management systems. However, they can build on existing processes within the company and learn from measures from previous laws such as the GDPR. We recommend that companies start preparing now and sensitize their employees to the new law, take stock of their AI systems, ensure appropriate governance measures, install proper risk classification and risk management over AI, and meticulously review AI systems classified as high-risk.
10. Glossary
Wording taken from AI Act
Provider: A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

Downstream provider: A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.

Deployer: A natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

Authorized representative: A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.

Importer: A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.

Distributor: A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.

Operator: A provider, product manufacturer, deployer, authorized representative, importer or distributor.

Biometric data: Personal data resulting from specific technical processing relating to the physical, physiological or behavioral characteristics of a natural person, such as facial images or dactyloscopic data.

Biometric identification: The automated recognition of physical, physiological, behavioral, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database.

Biometric verification: The automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data.

Emotion recognition system: An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.

Biometric categorization system: An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons.

Remote biometric identification system: An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.

Real-time remote biometric identification system: A remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification but also limited short delays in order to avoid circumvention.

Deep fake: AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.
Get in touch
Contact us now to find out more about this legislation
and how we can support you in your AI journey.
Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited (DTTL), its global network
of member firms, and their related entities (collectively, the “Deloitte organization”). DTTL (also
referred to as “Deloitte Global”) and each of its member firms and related entities are legally
separate and independent entities, which cannot obligate or bind each other in respect of third
parties. DTTL and each DTTL member firm and related entity is liable only for its own acts and
omissions, and not those of each other. DTTL does not provide services to clients. Please see
www.deloitte.com/de/UeberUns to learn more.
Deloitte provides industry-leading audit and assurance, tax and legal, consulting, financial advisory,
and risk advisory services to nearly 90% of the Fortune Global 500® and thousands of private
companies. Legal advisory services in Germany are provided by Deloitte Legal. Our people deliver
measurable and lasting results that help reinforce public trust in capital markets, enable clients to
transform and thrive, and lead the way toward a stronger economy, a more equitable society and
a sustainable world. Building on its 175-plus year history, Deloitte spans more than 150 countries
and territories. Learn how Deloitte’s approximately 457,000 people worldwide make an impact
that matters at www.deloitte.com/de.
This communication contains general information only, and none of Deloitte GmbH
Wirtschaftsprüfungsgesellschaft or Deloitte Touche Tohmatsu Limited (DTTL), its global network
of member firms or their related entities (collectively, the “Deloitte organization”) is, by means of
this communication, rendering professional advice or services. Before making any decision or
taking any action that may affect your finances or your business, you should consult a qualified
professional adviser.
Issue 06/2024