
EU Artificial Intelligence Act

Deep Dive
1. Introduction
2. Scope of the AI Act
3. Single-Purpose AI Systems are differentiated by the associated risk
4. Roles and Obligations according to the Risk Categories
5. General-Purpose AI follows a different risk categorization scheme
6. Regulatory Governance and Enforcement
7. The AI Act grants easements for "sandbox" testing facilities
8. Non-Compliance will come at a high price – significantly more so than GDPR
9. AI Act will come into force step-by-step
10. Glossary
Get in touch
Authors

1. Introduction
One of the key political priorities of the EU Commission for the 2019–2024 term was creating "A Europe fit for the digital age." This ambitious agenda has led to the tabling of over 10 significant digital regulations, addressing areas such as the data economy, cybersecurity, and platform regulation. The AI Act is a crucial puzzle piece within this complex framework of EU digital regulation, which is striving to establish a comprehensive framework that addresses the complexities and potential risks associated with AI systems. While the following article focuses on the AI Act, it should always be viewed within the broader context of the entire EU digital regulatory landscape.

The AI Act introduces a framework aimed at regulating the deployment and usage of AI within the EU. It establishes a standardized process for single-purpose AI (SPAI) systems' market entry and operational activation, ensuring a cohesive approach across EU Member States. The AI Act, a product safety regulation, adopts a risk-based approach by categorizing AI systems based on their use case, thereby establishing compliance requirements according to the level of risk they pose to users. This includes the introduction of bans on certain AI applications deemed unethical or harmful, along with detailed requirements for AI applications considered high-risk to manage potential threats effectively. Further, it outlines transparency guidelines for AI technologies designated with limited risk. With the risk-based approach, AI ethics are at the heart of the AI Act. Its focus on principles aims to leave the Act adaptable to as yet unknown iterations of AI technologies. However, the public use of general-purpose AI technology prompted the legislator to differentiate between single-purpose AI and general-purpose AI. The AI Act regulates the market entry for general-purpose AI models, regardless of the risk-based categorization of use cases, setting forth comprehensive rules for market oversight, governance, and enforcement to maintain integrity and public trust in AI innovations.

Given its abstract nature, the legislation contains areas that are yet to be fully defined. These are expected to be elaborated on through delegated and implementing acts, guidelines by the EU institutions, as well as harmonized standards developed by European Standardization Organizations. As a result, businesses can expect to receive more detailed guidance in the near future.

The AI Act is being published in the Official Journal of the European Union in June or July 2024, with its entry into force 20 days thereafter. This will mark the beginning of a phased implementation process to put the various rules and obligations of the AI Act into practice. For businesses, this means there is now a critical window to prepare for compliance.

2. Scope of the AI Act
In this chapter, we look at the legal scope of the AI Act and
the technology defined as AI according to the regulation. For
entities in the public sector and businesses across the EU,
understanding these aspects is crucial for ensuring compliance
and fostering AI innovations that respect ethical standards and
societal values.

2.1 Definition of AI

The EU aimed for a clear definition of AI systems, aligning closely with the work of international bodies like the OECD. This approach seeks to ensure legal certainty and facilitate international convergence and acceptance.

An AI system, as defined in the AI Act, is a type of technology designed to make predictions, content suggestions, or decisions that can impact both physical and virtual environments. It achieves this by using various techniques, including machine learning, whereby it learns from data, and logic-based methods, which follow specific rules or knowledge structures. These systems can have different levels of autonomy and might operate on their own or as part of another product, either integrated into it or functioning separately. The adaptability of an AI system refers to its self-learning ability to change its behavior during use.

Furthermore, the recitals – which clarify the AI Act's regulatory text – specify that the definition of AI does not include basic traditional software or purely rule-based programming created by humans for automatic operations. Despite this, the definition remains wide, covering the majority of systems available on the market.

Finally, the Commission has been tasked with developing guidelines for applying the definition of an AI system, which will provide further guidance on the defining aspects of AI under this regulation.

Definition of AI in the AI Act
"AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."


2.2 The AI Act is an extra-territorial product safety regulation

The AI Act affects all AI system operators (see chapter 10), including private and public organizations of all sizes and sectors that offer AI products or services on the EU market. The primary objective of the AI Act is to promote the uptake of trustworthy AI while ensuring a high level of protection of health, safety and fundamental rights of EU citizens. Moreover, it extends its jurisdiction to non-EU companies entering European markets with AI products. While there are no exemptions for smaller companies, the AI Act acknowledges the unique challenges faced by SMEs. Figure 1 explores the entities affected as well as use cases that are out of scope of the AI Act.

Fig. 1 – Scope of Application

1. Applicability
The AI Act applies to:
• Providers (see chapter 4.1) introducing AI systems in the EU market, regardless of their geographic location.
• Providers and deployers of AI systems outside the EU, if the AI system's output is used within the EU.
• Deployers (see chapter 4.1) of AI systems within the EU.
• Importers and distributors of AI systems in the EU market.
• Manufacturers placing products with embedded AI systems on the EU market under their trademark.

2. Extraterritorial Reach
• The Act affects any business or organization that offers AI systems impacting individuals within the EU, irrespective of the organization's location.
• Public sector bodies and international organizations are out of scope if located outside the EU.

3. Exemptions
Certain use cases as well as entities are not covered by the Act:
• Activities involving the research and development of AI systems before they are released for commercial use or operational deployment.
• Free and open-source software, which is generally not subject to regulation unless it is categorized as an unacceptable or high-risk AI application or a high-impact GPAI model.
• AI systems used for military or defense purposes.
• AI systems designed exclusively for scientific investigation and discovery.
• AI systems that were put on the market before the applicability of the AI Act; they fall under the AI Act only if they undergo substantial modification.
• AI systems used in purely personal, non-professional activity.

3. Single-Purpose AI Systems are differentiated by the associated risk

The AI Act focuses on how AI is used rather than the technology itself, employing a risk-based framework. This means obligations intensify with the level of risk posed to users. The Act identifies four specific risk categories, each with its corresponding set of requirements: unacceptable risk, high-risk, transparency risk and other risk. Companies are expected to undergo a process of assessing how the application of their AI systems falls into the four risk categories, as shown in figure 2.

Fig. 2 – AI Act risk levels, with four layers of obligations for entities (the categories are not mutually exclusive)

• Unacceptable-Risk AI Systems – Prohibited. Covers manipulation of human behavior, opinions and decisions; classification of people based on their social behavior; and real-time remote biometric identification, except for limited exceptions. Example: social scoring.
• High-Risk AI Systems – Permitted subject to compliance with AI requirements and an ex-ante conformity assessment. Covers AI system applications listed in Annex III and AI systems in safety components already subject to a harmonized EU standard (Annex I). Example: recruitment.
• Transparency Risks – AI systems with specific transparency obligations; permitted but subject to transparency obligations. Examples: chatbots, deepfakes.
• AI Systems with Other Risks – Permitted without restrictions.¹ Example: predictive maintenance.

¹ May still bear regulatory, business and security risks.
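
As a rough illustration of this triage, the sketch below (our own encoding; the tier labels and the example mapping are illustrative and not taken from the Act) expresses the four categories from Fig. 2 as a simple lookup:

```python
# Minimal sketch of the AI Act's four-tier triage, assuming an organization
# maintains its own mapping from use cases to risk categories. Note that the
# tiers are not mutually exclusive: transparency duties can stack on top of
# high-risk obligations.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "permitted subject to requirements and ex-ante conformity assessment"
    TRANSPARENCY = "permitted subject to transparency obligations"
    OTHER = "permitted without AI Act restrictions"

# Illustrative mapping based on the examples in Fig. 2.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment": RiskTier.HIGH,
    "chatbot": RiskTier.TRANSPARENCY,
    "deepfake_generation": RiskTier.TRANSPARENCY,
    "predictive_maintenance": RiskTier.OTHER,
}

def triage(use_case: str) -> RiskTier:
    """Look up the tier for a known use case; anything unknown needs a
    case-by-case legal assessment against the prohibitions and Annex I/III."""
    return USE_CASE_TIERS[use_case]

print(triage("recruitment").value)
```

In practice the classification is a legal analysis, not a dictionary lookup; the point is only that every system should be routed to exactly the obligations of its tier.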

3.1 Certain applications which severely impact the rights of individuals are outright banned

Recognizing the advantages of AI, policymakers balanced its possibilities with the core of EU principles, aware that some AI applications might threaten fundamental values such as human dignity, freedom, equality, democracy, data privacy, and the rule of law. To safeguard these fundamental values, the AI Act prohibits specific AI applications. The ban on such systems will begin after a 6-month grace period following the AI Act's entry into force.

Tab. 1 – Prohibited AI Applications

AI-enabled manipulative techniques: Persuasion to engage in unwanted behaviours, nudging for subversion and impairment of autonomy, decision-making and free choices, causing or reasonably likely to cause harm. Excluding: common and legitimate commercial practices, e.g., advertising.

Biometric categorisation: Use of individual biometric data, such as face or fingerprint, to deduce or infer political opinion, trade union membership, religious or philosophical beliefs, race, sex life or sexual orientation. Excluding: labelling, filtering or categorisation of biometric datasets.

Social scoring: Valuation or classification of natural persons or groups based on multiple data points related to social behaviour and leading to negative treatment. Excluding: lawful evaluation practices of natural persons done for a specific purpose in compliance with national and Union law.

Real-time remote biometric identification: In publicly accessible spaces for the purpose of law enforcement. Excluding the exceptions mentioned in Annex II.

Risk assessments of natural persons: Assessing persons' traits and characteristics to predict the risk of committing a criminal offence. Unless assessing the involvement of a person objectively and verifiably linked to a crime.

Facial recognition databases based on untargeted scraping of facial images: Creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Identifying emotions in workplace and education: Identifying or inferring emotions or intentions of natural persons on the basis of their biometric data related to places of work and education institutions. Excluding: medical reasons such as therapy and safety reasons such as pilot tiredness assessment.

3.2 The focus of the AI Act is squarely set on "High-Risk AI Systems"

3.2.1 Identifying High-Risk AI Systems
The core focus of the AI Act revolves around high-risk AI systems, as most obligations and protective measures outlined in this act refer to high-risk AI applications. High-risk AI systems are those that are deemed to negatively impact the safety or fundamental rights of EU citizens and, given that presumed risk, need to be assessed before being put on the market and also throughout their lifecycle.

There are two potential pathways for AI systems to be designated as high-risk under the AI Act. The first one involves inclusion in the specific applications listed in Annex III of the AI Act, while the second one pertains to products mentioned in Annex I. In either scenario, the entity using the AI will have the responsibility to self-determine whether its products or AI systems fall within these defined categories. Given the highly individual nature and rapid evolution of AI systems, these detailed assessments will likely occur on a case-by-case basis.

Notably, the list of high-risk applications may be subject to extensions and frequent updates. For example, the EU Commission retains the right to include new AI applications on the list of high-risk AI applications whenever those are deemed to pose a risk to the health, safety, or fundamental rights of EU citizens.

"The EU AI Act represents an important step forward in the governance of artificial intelligence, offering organizations a clear framework for deploying AI systems responsibly. Ensuring compliance and mitigating risks involves conducting thorough assessments of AI risk classifications, maintaining a comprehensive inventory of AI assets, and clearly defining the roles and responsibilities of each operator. By following existing structures for managing compliance and security risks, organizations can navigate this new regulatory landscape with a robust governance framework and a proactive approach to risk management."

Is all high-risk really high-risk?
To ensure that only AI presenting a demonstrably elevated risk is classified as high-risk, the EU has introduced opt-out exceptions for providers of certain AI systems in high-risk fields. AI systems that at first glance fall into the high-risk category but do not pose significant risks to health, safety, or fundamental rights may, with justification, be excluded from the high-risk AI category. Consequently, they are not obliged to fulfill all associated obligations. This exemption applies where:
• The AI system is designed for a specific, narrow procedural task.
• The AI system's purpose is to enhance the outcome of a human activity that has already been completed.
• The AI system is designed to identify decision-making patterns and deviations but does not replace or influence prior human assessments without appropriate human review.
• The AI system is designed for preparatory tasks related to assessments outlined in the high-risk AI use cases listed in the law.
However, AI that performs profiling will always be considered high-risk. In order to opt out, operators have to provide documentation and register in advance.
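
The opt-out conditions in the textbox above behave like a checklist with a single override; a minimal sketch (our own field names, assuming a simple self-assessment record) could look like this:

```python
# Sketch of the high-risk opt-out check described in the textbox above.
# Field names are our own; the four exemption conditions and the profiling
# override come from the text. Documentation and advance registration are
# still required to use the opt-out.
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    narrow_procedural_task: bool
    enhances_completed_human_activity: bool
    detects_patterns_without_replacing_human_review: bool
    preparatory_task_only: bool
    performs_profiling: bool

def qualifies_for_opt_out(system: AnnexIIISystem) -> bool:
    if system.performs_profiling:
        return False  # AI that performs profiling is always high-risk
    return any([
        system.narrow_procedural_task,
        system.enhances_completed_human_activity,
        system.detects_patterns_without_replacing_human_review,
        system.preparatory_task_only,
    ])

# A CV-ranking tool that profiles candidates can never use the opt-out:
cv_ranker = AnnexIIISystem(False, False, False, False, performs_profiling=True)
print(qualifies_for_opt_out(cv_ranker))  # False
```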


3.2.2 High-Risk Applications according to Annex III
The AI Act designates specific contexts, outlined in Annex III, in which AI applications are more likely to pose heightened risks to consumers. Any application falling within Annex III categories that potentially threatens health, safety, fundamental rights, the environment, democracy, or the rule of law is automatically considered high-risk. Consequently, the algorithms and decision-making processes of these AI systems demand robust protections to mitigate potential harm, unless they fall within one of the exceptions mentioned in the textbox on exception assessment in chapter 3.2.1. The rules for high-risk applications according to Annex III will apply 24 months after entry into force.

Tab. 2 – High-Risk AI Applications Annex III

Biometrics: Remote biometric identification systems, but only with prior authorization for exceptional circumstances listed in Annex II; also biometric categorisation and usage for emotion recognition.

Critical infrastructure: Safety components in the management & operation of critical digital infrastructure (e.g., road traffic, supply of water/gas/heating/electricity), as defined in the CER Directive.

Education and vocational training: Admission to institutions at all levels as well as assessment of received educational level; also AI use for evaluation & steering of learning outcomes or monitoring & detection of prohibited behavior during tests.

Employment, workers management and self-employment: Used for recruitment, for instance analysis & filtering of job applications or evaluation of candidates; also deciding on promotions, termination of work-related contractual relationships or allocation of tasks; used to monitor & evaluate the performance & behavior of a worker.

Essential public & private services: Evaluation of eligibility for essential public assistance benefits & services; risk assessment & pricing in case of life & health insurance; evaluation of creditworthiness or establishment of a credit score; evaluation & classification of emergency calls as well as establishment of emergency priorities.

Law enforcement: Used in polygraphs or to assess the risk of becoming a victim of criminal offences; evaluation of evidence reliability & prosecution of criminal offences; profiling during detection, investigation & prosecution of criminal offences; risk assessment of (re-)offending based on profiling and assessment of behavioral & criminal traits.

Migration, asylum and border control management: Used in polygraphs or assessment of security risks; also if used for examination of asylum, visa & residence permit applications or detection, recognition & identification of individuals.

Administration of justice and democratic processes: Research and interpretation of facts & law; application of law; influence of voting behavior & outcome.
3.2.3 AI systems that are Safety Components classify as High-Risk
The AI Act places particular emphasis on AI-embedded safety products and the associated potential risks. The Act specifies in Annex I sectors that are considered high-risk due to their importance for the health and safety of persons when AI is used in safety components. Operators in these sectors must clarify whether an AI regulated by the Act is integrated into a product, constituting a safety component, and whether it is subject to third-party conformity assessment. The rules for high-risk applications according to Annex I will apply 36 months after entry into force. The AI Act regulates the following areas:

Tab. 3 – High-Risk AI Applications Annex I

Civil aviation²: All airports or parts of airports that are not exclusively used for military purposes, as well as all operators, including air carriers, providing services at airports or parts of airports that are not exclusively used for military purposes.³

Agricultural and forestry vehicles²: Tractors as well as trailers and interchangeable towed equipment.³

Two-, three-wheel vehicles and quadricycles: All two- or three-wheel vehicles and quadricycles.

Marine equipment²: Equipment placed or to be placed on board of EU ships.

Personal protection equipment: Personal protective equipment designed and manufactured to be worn or held by a person for protection against one or more risks to that person's health or safety, as well as some interchangeable components and connection systems for the equipment.

Appliances burning gaseous fuels: Appliances burning gaseous fuels used for, amongst others, cooking, refrigeration, air-conditioning, space heating, hot water production, lighting or washing; also, all fittings that are safety devices or controlling devices incorporated into an appliance.

Rail system²: Rail system including vehicles, infrastructure, energy and signaling systems.³

Motor vehicles and trailers; systems, components and separate technical units intended for such vehicles²: Motor vehicles and their trailers, including autonomous driving.³

Civil aviation, European air space, aerodromes²: The design and production of products, parts and equipment to control aircraft remotely by a natural or legal person, as well as the design, production, maintenance and operation of aircraft.³

Medical devices: Medical devices for human use and accessories for such devices, as well as clinical investigations concerning such medical devices and accessories.³

In vitro diagnostic medical devices: In vitro diagnostic medical devices for human use and accessories for such devices.³

Machinery: Machinery, interchangeable equipment and lifting accessories (e.g., robots).³

Toys: Products designed or intended for use in play by children under 14 years of age (e.g., connected toys and IoT devices).

Recreational craft and personal watercraft: Recreational craft as well as propulsion engines installed on watercraft.³

Lifts: Lifts permanently serving buildings and constructions, mainly for the transport of persons with or without goods.

Equipment and protective systems intended for use in potentially explosive atmospheres: Equipment and protective systems intended for use in potentially explosive atmospheres, as well as components incorporated within the equipment or protective system.³

Pressure equipment: The design, manufacture and conformity assessment of pressure equipment and assemblies.

Radio equipment: Any kind of radio equipment, i.e., anything connected via radio waves (e.g., WiFi, Bluetooth, 5G in laptops, phones, IoT devices).

Cableway installations: New cableway installations designed to transport persons, modifications of cableway installations requiring a new authorization, and subsystems and safety components for cableway installations.

² The sectors mentioned in Section B are subject to specific articles of the EU AI Act and may experience differences in application.
³ Amongst others.
3.3 Rules for Transparency and Other Risks
For AI applications with limited risk to individuals, the main requirement is to follow certain rules on transparency. However, while some AI systems may only need to adhere to transparency obligations, it is important to note that AI systems in other risk categories are also required to comply with these transparency rules in addition to their specific regulatory requirements. An example of a limited-risk AI system is an AI-based chatbot, which requires explicit notification before use, ensuring users are aware that their interaction is with a machine and granting them the option to be redirected to human assistance.

Other-risk AI systems do not have any obligations under the AI Act. This classification could encompass a considerable portion of existing AI applications across various sectors, including spam filters, AI-enabled video games, and inventory management systems. All operators can comply voluntarily with codes of conduct in accordance with ethical and trustworthy AI standards in the Union. The AI Office (see chapter 6) will support and promote the development of such codes of conduct, considering existing technical solutions and industry best practices.

It is important to bear in mind that AI systems with no obligations under the AI Act may still pose business and security risks, as well as regulatory obligations under other EU laws, that should not be disregarded.


4. Roles and Obligations according to the Risk Categories
4.1 Providers, deployers, distributors and importers are accountable
The responsibilities outlined in the AI Act vary significantly based on the specific circumstances of each AI operator (see chapter 10). The rules and obligations can differ depending on the type of role and use case involved. Moreover, while the AI Act outlines the obligations, these will be further specified in the coming years through delegated and implementing acts of the Commission, as well as harmonized standards and the work of other working groups.

Consequently, it is essential for each operator to identify their relevant risk category. The AI Act organizes operators into four principal categories: provider, deployer, distributor, and importer. Each category is held to distinct obligations under the AI Act and is defined as follows:

Fig. 3 – Definition of Roles

Provider: A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.

Deployer: Any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

Distributor: Any natural or legal person within the supply chain, besides the provider or importer, that delivers an AI system to the Union market.

Importer: Any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union.

Special case for providers:


If any of the following scenarios occurs, any operator (such as a distributor or deployer)
may transition into a "provider" and, consequently, be obligated to fulfill the responsibil-
ities associated with high-risk AI and general-purpose AI as a provider:

1. If they associate their name or trademark with a high-risk AI system that has already
been introduced to the market or put into service. However, contractual exemptions
can be applied.

2. If they make substantial modifications to a high-risk AI system that has already been
placed on the market, and it continues to pose a high risk in its new use.

3. If they alter the intended purpose of an AI system or general-purpose AI that was not
originally classified as high-risk AI but becomes high-risk AI due to the new modifications.
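
Read as a decision procedure, the three triggers above can be sketched as follows (a simplified illustration with our own flag names, ignoring the contractual exemptions mentioned in the first trigger):

```python
# Sketch of the "special case for providers": an operator becomes a provider,
# with the corresponding obligations, if any of the three triggers applies.
def becomes_provider(rebrands_high_risk_system: bool,
                     substantially_modifies_high_risk_system: bool,
                     repurposes_system_into_high_risk: bool) -> bool:
    return any([
        rebrands_high_risk_system,                # trigger 1: own name or trademark
        substantially_modifies_high_risk_system,  # trigger 2: still high-risk in new use
        repurposes_system_into_high_risk,         # trigger 3: new purpose is high-risk
    ])

# A deployer that white-labels a high-risk recruitment tool under its own brand:
print(becomes_provider(True, False, False))  # True: provider obligations apply
```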

4.2 Distinct responsibilities depending on the role of each stakeholder
The following illustration offers a preview of the obligations and tasks that different operators will be required to implement. Each obligation is delineated more comprehensively in the final legal text and may also undergo further detailing. Generally speaking, there is an obligation of mutual recognition between the Member States, so that the assessment done by one national authority has to be recognized by the others. However, each Member State may act upon potential violations.

Providers and deployers of all AI systems must ensure adequate AI literacy among their staff and relevant individuals, considering their technical expertise, experience, education, training, and the intended use of the AI systems, as well as the affected persons or groups. The AI Act emphasizes the importance of AI literacy, aiming to furnish AI system operators with the necessary knowledge and resources to make well-informed decisions about AI systems. This not only involves understanding the accurate application of technical elements during the development phase of AI systems but also extends to knowing the right measures to apply during its use and the correct interpretation and usage of the output.

Fig. 4 – High-level Overview/Summary of Obligations of High-Risk AI Systems

Provider:
• Risk management system
• Data governance
• Quality management
• Technical documentation
• Record keeping and document keeping
• Provision of information to deployers
• Human oversight
• Accuracy, robustness and cybersecurity
• Automatically generated logs
• Transparency

Deployer:
• Apply provider's instructions for use of the AI system
• Guarantee human oversight
• Validate input data to ensure its suitability for the intended use
• Continuous monitoring of the AI system's activity
• Report any malfunctions, incidents, or risks to the AI system's provider or distributor promptly
• Save logs if under their control
• Fundamental rights impact assessment for certain systems

Importer & Distributor:
• Verify whether the AI system is in line with the requirements and formalities in the AI Act
• Keep conformity certifications for ten years
• Withdraw, recall or refrain from placing the AI system on the market if it is non-compliant
• Cooperate with competent authorities


One of the main obligations for providers of high-risk AI systems is setting up a Risk Management System (RMS) around the respective system, covering all phases of an AI system's lifecycle. According to the AI Act, the following steps should be ensured:

• Safety by design: Elimination or reduction of risks identified and evaluated pursuant to paragraph 2, in as far as technically feasible, through adequate design and development of the high-risk AI system.

• Protective measures: Where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated.

• Information for safety: Provision of information required pursuant to Article 13 (Transparency and provision of information to deployers) and, where appropriate, training to deployers.

Another important requirement is setting up a Quality Management System which shall also encompass the AI system's entire lifecycle with regard to the following factors:

1. Pre-Market Phase
This includes a strategy for regulatory
compliance, design control and verification,
system examination, testing and validation
of AI systems, and technical specification.

2. Post-Market Phase
Quality control, reporting of serious inci-
dents, and a post-market monitoring sys-
tem are all required.

3. Continuous Phase
This involves data management systems
and procedures, RMS, communication
with authorities, and document and
record-keeping, including logging. Resource
management, including security of supply,
and an accountability framework are also
included.

4.3 Two Types of Conformity Assessment under the AI Act
The European Union (EU) has established a New Legislative Framework (NLF) under which certain products must be evaluated before they can be sold in the market. This framework ensures that these products meet specific EU regulations and standards for safety, quality, and performance. As part of the NLF, the AI Act requires "conformity assessments" followed by a "declaration of conformity" as prerequisites for products to enter the market and demonstrate compliance with the respective obligations.

Under the EU AI Act, the conformity assessments for high-risk AI systems can be conducted by the providers themselves or with the support of third parties. For all high-risk AI applications listed in Annex III (e.g., employment, essential public and private services), providers may conduct a self-assessment based on internal controls and issue the declaration of conformity.

All AI systems listed in Annex I (e.g., aviation, automotive, and medical devices) must seek support from third parties. In this case, the conformity assessment must be performed by an accredited "notified body" suitable for the type of AI system being inspected. Notified bodies are conformity assessment bodies that have been notified by the notifying authority. If the AI system is deemed compliant by the notified body, the provider must issue a declaration of conformity.

Only providers of high-risk biometric systems have the option to conduct internal controls or opt for third-party assessment.

Providers who self-assess are presumed compliant if they adhere to harmonized standards. The Commission has issued the standardization requests, which will include reporting and documentation deliverables to enhance AI system resource efficiency. These harmonized standards are expected before the application of the respective rules and are already in development, with CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization) leading the process.

Providers of high-risk AI systems may benefit from a presumption of compliance with data and data governance obligations if the data used for training their AI systems accurately reflects the specific geographical, behavioral, contextual, or functional settings in which the systems are intended to be used. Under these conditions, providers are generally considered compliant with the obligations mentioned in Article 10, meaning they would not need to undergo the usual rigorous processes of validating and testing data sets for biases and unrepresentative training data.

Additionally, providers who have received a certificate or statement of conformity under a cybersecurity scheme pursuant to the EU Cybersecurity Act are presumed to be compliant with the cybersecurity obligations mentioned in Article 15.

All approved high-risk AI systems will be published in an EU-wide registry.

Fig. 5 – Third-Party Assessment vs Internal Controls: identified high-risk AI reaches the declaration of conformity via one of two routes – high-risk use cases (Annex III) via internal controls; high-risk safety components (Annex I) via third-party assessment by a notified body; high-risk biometric systems may optionally choose either route.

4.4 Distinctive Aspects of Fundamental Rights Impact Assessment for High-Risk AI Systems
Before deploying high-risk AI systems, bodies governed by public law, private operators providing public services, as well as those involved in evaluating creditworthiness and risk assessment for life insurance policies must conduct a fundamental rights impact assessment. This involves describing the system's intended use and frequency of use, identifying affected individuals or groups, assessing potential risks, outlining human oversight measures, and detailing risk mitigation strategies. Once completed, deployers must inform the market surveillance authority of the assessment results. If a data protection impact assessment has already been conducted, it should be integrated with the fundamental rights impact assessment. To aid deployers in fulfilling these obligations, the EU AI Office will develop a questionnaire template for simplified implementation. Member States may assign or establish institutions to oversee the protection of fundamental rights, as further explained in chapter 6.1.

5. General-Purpose AI follows a different risk categorization scheme
The regulation of general-purpose AI models, which are trained
on vast datasets and capable of performing a wide array of
tasks, proved to be the most contentious aspect of the AI Act
negotiations.

The AI Act adopts a risk-based approach, with high-risk AI systems subject to more stringent requirements. However, the generality and versatility of general-purpose AI make precise risk categorization challenging, as the intended purpose of downstream systems or applications incorporating these systems is often unclear.

To address this issue, the final version of the AI Act introduces a dedicated regime in Chapter V for providers of general-purpose AI models ("foundation models"), rather than the general-purpose AI systems themselves. An AI model is a core component of an AI system, used to make inferences from inputs to produce outputs. Model parameters typically remain fixed after the build phase concludes, making the risks posed by general-purpose AI models easier to estimate and regulate compared to those of complete AI systems. As models and systems are treated separately, a general-purpose AI model itself will not be classified as a high-risk AI system. However, a general-purpose AI system built upon a general-purpose AI model may still fall into one of the established risk categories.

For general-purpose AI models, the European policymakers agreed on a two-tiered approach, which consists of obligations for providers of general-purpose AI models with and without systemic impact, i.e., models with high impact. A general-purpose AI model is classified as a high-impact model when it demonstrates a systemic risk through specific technical criteria. This is presumed if the cumulative compute power used during its training exceeds a certain threshold, currently set at 10^25 floating point operations (FLOPs). Alternatively, the Commission may classify it as such if advised by a scientific panel alert indicating its potential for significant impacts, using the assessment criteria listed in Annex XIII, which may be adjusted over time through delegated acts adopted by the Commission to keep pace with technological advancements.

Providers of general-purpose AI models must adhere to certain standards. To facilitate compliance, the AI Office, in collaboration with relevant stakeholders such as civil society organizations, industry representatives, academia, downstream providers, and independent experts, will encourage and support the development of additional Union-level codes of practice. These codes of practice are voluntary for all companies using general-purpose AI but grant a presumption of conformity to anyone who applies them. The AI Office is tasked with drawing up these codes of practice, monitoring and evaluating them, and being the future recipient of implementation reports.

Providers of AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, whose parameters are publicly available, and which are not considered systemic risk, will have only limited obligations.

General-Purpose AI model means an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.

General-Purpose AI System means an AI application which is based on an underlying general-purpose AI model. This application has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.
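
For orientation, the 10^25 FLOP presumption can be sanity-checked with the widely used rule of thumb of roughly six FLOPs per parameter per training token for dense transformer training; the sketch below is our illustration, not a method prescribed by the Act:

```python
# Rough estimate of training compute against the systemic-risk presumption
# threshold, using the common ~6 FLOPs per parameter per training token
# approximation (an assumption of this sketch, not part of the AI Act).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6.0 * n_parameters * n_training_tokens

def systemic_risk_presumed(n_parameters: float, n_training_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_training_tokens) >= \
        SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens: ~6.3e24 FLOPs,
# just below the threshold; doubling the training data would cross it.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, systemic risk presumed: {systemic_risk_presumed(70e9, 15e12)}")
```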


Tab. 4 – Obligations of General-Purpose AI Models

General-Purpose AI Models: Large models and systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, images or computer code, or conversing. Obligations:
• Drawing up and keeping up to date technical documentation for the AI Office and national authorities (as listed in Annex XI) and downstream providers (as listed in Annex XII)
• Protecting intellectual property rights, trade secrets and confidential business information
• Enabling understanding of the limitations and capabilities of the GPAI models
• Complying with EU copyright law and disseminating detailed summaries about the content used in training

High-Impact General-Purpose AI Models ("systemic risk"): Foundation models trained with large amounts of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain. Obligations:
• Complying with all requirements applicable to all general-purpose AI models and systems
• Conducting model evaluations
• Assessing and mitigating systemic risks, including their sources
• Conducting adversarial testing
• Keeping track of, documenting and reporting serious incidents to the EU Commission
• Ensuring sufficient cybersecurity protection
• Reporting on energy efficiency and estimated energy consumption for training

Providers of free and open-source GPAI models only have to provide a detailed summary about the content used for training and abide by EU copyright laws. If deemed a GPAI model with systemic risk, all obligations apply.

6. Regulatory Governance and Enforcement

The competences for enforcing the AI Act will be distributed between the newly established AI Office in the European Commission and supervisory authorities in the Member States.

Both the EU Commission and Member States have distinct responsibilities and work together to monitor and enforce the new rules for AI systems and general-purpose AI models. Whereas the EU Commission is mainly responsible for the supervision of general-purpose AI models, the Member States' authorities are responsible for enforcing the AI systems' risk-based rules as well as coordinating the sandboxes at Member State level. The following chapter provides insights on the specific responsibilities, the governance structure and the interplay of the EU Commission and Member States.

6.1 National Level – Member State Enforcement
The Member States create the market surveillance authority (agency level). The market surveillance authority is primarily tasked with enforcing the AI Act at national level.

The market surveillance authorities are responsible for ensuring that AI systems adhere to the prescribed standards and regulations. For example, the market surveillance authority will oversee the correctness of the conformity assessment conducted by high-risk AI providers. In the course of investigations, market surveillance authorities may take necessary actions such as accessing documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, and accessing the source code of high-risk AI. Providers of high-risk AI are obliged to cooperate with the authorities.

The notifying authority will also be responsible for assigning the conformity assessment bodies, which upon proper notification can qualify as notified bodies (see chapter 4). The notified bodies must comply with several conditions to qualify as such. These include being established as a legal person under national law, fulfilling organizational requirements to carry out their tasks, and being independent from the high-risk AI providers.

Additionally, each Member State will have to assign specific responsibilities and authorities to existing or newly established bodies dedicated to protecting fundamental rights concerning AI. These bodies must operate independently and impartially, ensuring that companies adhere to fundamental rights principles in AI development, deployment, and use.

6.2 European Level – EU Enforcement
To streamline and oversee the implementation of the Act, the EU set up the EU AI Office in February 2024, a new entity established by the EU Commission. It has a key role in the implementation of the AI Act. The AI Office is established as a Commission service embedded in DG CONNECT and thus holds more freedoms in its decision-making process and can act in a more dynamic manner. It will be composed of five main departments, each led by a director responsible for overseeing the implementation of the AI Act. The AI Office shall employ a total of 140 people, including technological experts, lawyers, and policy specialists. Currently, it has over 50 tasks open related to implementing the AI Act. The AI Act mentions a variety of aspects that shall be subject to further implementation by the AI Office through delegated and implementing acts. As the AI Act was intentionally kept on an abstract level, it is highly dependent on further clarification.

While delegated acts relate mostly to amendments to the legislative text, implementing acts are measures of individual application. For instance, the AI Office may modify the list of each Annex of the AI Act by means of a delegated act. A particular task, given the implementation timeline, is the establishment of concrete examples of what constitutes prohibited AI and what specifically does not. These steps aim to ensure the effective implementation of the AI Act and to specify the rules and concepts stipulated in it.

Moreover, the EU AI Office will be responsible for enforcing the AI Act obligations for general-purpose AI models. In this context, it will design and develop tools, methodologies, and benchmarks to evaluate the capabilities and reach of general-purpose AI models and identify models with systemic risks, in concert with academia and industry stakeholders. Last but not least, the EU AI Office will host a public registry listing all high-risk AI applications which have entered the market.

Next to the AI Office, there are two more EU bodies that will also influence the enforcement of the AI Act. First, the Advisory Forum will be responsible for enhancing collaboration and ensuring comprehensive guidance on AI regulation. This forum comprises a diverse array of stakeholders, including industry experts, civil society representatives, academic scholars, and governmental officials. Appointed by the EU Commission, members of the Advisory Forum offer technical expertise and strategic insights to support the implementation of the AI Act. Secondly, the Scientific Panel of Independent Experts is responsible for advising and alerting the Commission on systemic risks of general-purpose AI.

6.3 Interplay of European and National Enforcement
Comprised of representatives from each Member State, alongside observers such as the European Data Protection Supervisor and the AI Office, the European Artificial Intelligence Board collaborates with relevant stakeholders to ensure consistent and effective application of the regulation. Assigned tasks ranging from coordinating national competent authorities to issuing recommendations on regulatory matters, the Board plays a critical role in fostering cooperation, sharing expertise, and promoting a good understanding of AI across the EU. Moreover, to effectively address all relevant challenges surrounding the AI Act, the Board will be divided into different sub-committees, focusing on, for example, the alignment of sectorial or national legislation. One can expect the different notifying bodies and market surveillance authorities of each Member State to participate in the different sub-committees and represent their respective interests.

Fig. 6 – Regulatory Governance Structure

EU level:
• European Commission – AI Office: supports the AI Act and enforces the general-purpose AI rules; strengthens the development and use of trustworthy AI; fosters international cooperation; cooperates with institutions, experts and stakeholders.
• European Artificial Intelligence Board: Member States are the main members, with the European Data Protection Supervisor participating as observer; contributes to the coordination of national authorities; assists, advises and supports with opinions and expertise.
• Advisory Forum: provides technical expertise; membership represents a balanced interest group, appointed for 2–4 years.
• Scientific Panel of Independent Experts: consists of independent scientists with AI expertise; supports the AI Office with risk advisory and contributes to tool and methodology development and general advisory; the expert pool can be accessed by Member States.

Member State level:
• Each Member State must establish or designate at least one notifying authority and one market surveillance authority as a single point of contact; the national authorities have to be independent and provided with adequate resources.
• The notifying authority appoints and notifies the notified bodies for third-party conformity assessments.

7. The AI Act grants easements for "sandbox" testing facilities

Member States are mandated to establish AI regulatory sandboxes at the national level within 24 months of the entry into force of the Regulation, i.e., by Q3 2026. Member States can, however, establish a joint sandbox or join an already established sandbox. Since the main objective is to give all EU-based companies the option to participate in a regulatory sandbox, equal access and equal coverage for the participating Member States must be provided. Additionally, Member States have the option to set up regional or local sandboxes. Hence, it is expected that bigger states may set up several sandboxes to ensure regional or local support for SMEs. Apart from that, the European Data Protection Supervisor may also establish an AI regulatory sandbox at the EU level.

7.1 AI Regulatory Sandboxes
AI regulatory sandboxes are controlled environments where operators of AI systems can develop, train, test, and validate AI systems before market deployment. They offer a safe space for experimentation, allowing for the exploration of AI applications under the supervision of competent authorities. In the spirit of improving the EU's innovative initiative, regulatory sandboxes stand as a pioneering project, facilitating the development and testing of AI systems within a controlled environment. Additionally, the national competent authorities have to allocate sufficient resources to comply with the requirements mentioned in the AI Act. Each sandbox will have to submit annual reports on its activities, such as best practices, incidents, lessons learned and the set-up of the sandbox, to the EU AI Office.

Both public and private entities can join the sandboxes – after application – to test their AI systems against the obligations of the AI Act. Entities joining the sandbox are guided, supervised and supported in identifying risks relating to fundamental rights, health and safety. Furthermore, each participating entity should be given an exit report detailing the activities carried out in the sandbox and the related results and learning outcomes. This exit report will function as a document to demonstrate compliance with the regulation through the conformity assessment (presumption of conformity) and hence may be a competitive advantage for participating companies.

The AI regulatory sandboxes serve as catalysts for innovation in the AI landscape, offering a structured and supportive environment for the development and testing of AI systems while ensuring compliance with regulatory standards. More importantly, the exit reports for successful participants of regulatory sandboxes serve as a presumption of conformity for the necessary conformity assessment of high-risk AI systems.


7.1.1 Real-world Testing of High-Risk AI Systems Outside of the Regulatory Sandbox
While regulatory sandboxes offer controlled environments for initial testing and validation, real-world testing complements these efforts by providing insights into real-world performance, usability, and user feedback, ultimately contributing to the responsible and effective deployment of AI technologies. Real-world testing of AI systems outside regulatory sandboxes offers providers the opportunity to test high-risk AI systems listed in Annex III. Such testing requires adherence to a detailed real-world testing plan approved by market surveillance authorities. Providers must ensure compliance with Union and national law, including ethical considerations. Testing can be conducted independently or in collaboration with prospective deployers, with informed consent from participants. Moreover, strict conditions govern testing duration, data protection, oversight, and reversibility of AI system decisions, and any incidents must be reported promptly to market surveillance authorities. Before applying the AI system to individuals, informed consent from subjects is essential, detailing the nature, objectives, duration, rights, and contact information for further inquiries. Finally, it is important to note that providers bear liability for damages arising from testing activities.

7.2 Measures Supporting SMEs and Start-ups to Meet Act Standards
The AI Act aims to simplify certain aspects of regulatory requirements for SMEs and start-ups. Member States are tasked with implementing measures to support SMEs and start-ups in navigating the regulatory landscape of AI. This includes granting them priority access to AI regulatory sandboxes, organizing tailored awareness-raising and training activities, establishing communication channels for advice and inquiries, as well as facilitating their participation in the standardization process.

Furthermore, penalties for breach of obligations shall be adjusted based on specific factors such as the size and market presence of SMEs and start-ups.

Additionally, the EU AI Office plays a role by providing standardized templates, maintaining an information platform, conducting awareness campaigns, and promoting best practices in public procurement procedures related to AI systems. These efforts aim to empower SMEs and start-ups to comply with regulations and thrive in the AI ecosystem.

8. Non-Compliance will come at a high price – significantly more so than GDPR
The AI Act’s penalty regime is structured based on the nature
of the violation, considering whether it involves unacceptable
systems, high-risk AI or general-purpose AI models, with fines
increasing according to the risk category. Simply put, the higher
the risk category, the higher the fine.

Member States are responsible for establishing rules concerning penalties and ensuring their enforcement. For example, each Member State has the discretion to determine the use of warnings and other non-monetary measures, if any. Furthermore, they must consistently consider the particular interests of SMEs and start-ups. National authorities are also mandated to assess the nature, gravity, and duration of each infringement, as well as whether the entity in question is a repeat offender, when determining the amount of each fine. Of the fixed amount and the turnover-based amount, the higher option applies, unless the fine pertains to SMEs or start-ups. In addition to monetary fines, national supervisors may forcibly remove non-compliant AI systems from the market.

Fig. 7 – Fines for operators of AI systems
1. Up to EUR 35 m or, for companies, 7% of the global annual turnover (GAT), for non-compliance with the prohibitions
2. Up to EUR 15 m or, for companies, 3% of the GAT, for infringements of the obligations for high-risk AI
3. Up to EUR 15 m or, for companies, 3% of the GAT, for infringements of the obligations for general-purpose AI
4. Up to EUR 7.5 m or, for companies, 1% of the GAT, for supplying incorrect, incomplete or misleading information
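
Numerically, each tier caps the fine at the higher of a fixed amount and a share of global annual turnover, with the lower option applying to SMEs and start-ups, as noted above; a small sketch of the arithmetic (our own labels):

```python
# Sketch of the fine ceilings in Fig. 7. For large companies the HIGHER of the
# fixed cap and the turnover-based cap applies; per the text above, for SMEs
# and start-ups the lower option applies instead.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # up to EUR 35 m or 7% of GAT
    "high_risk_obligations": (15_000_000, 0.03),  # up to EUR 15 m or 3% of GAT
    "gpai_obligations": (15_000_000, 0.03),       # up to EUR 15 m or 3% of GAT
    "misleading_information": (7_500_000, 0.01),  # up to EUR 7.5 m or 1% of GAT
}

def max_fine(violation: str, global_annual_turnover: float,
             is_sme: bool = False) -> float:
    fixed_cap, turnover_share = FINE_TIERS[violation]
    turnover_cap = turnover_share * global_annual_turnover
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A company with EUR 2 bn global annual turnover violating a prohibition:
print(f"EUR {max_fine('prohibited_practices', 2_000_000_000):,.0f}")  # EUR 140,000,000
```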

9. AI Act will come into force step-by-step
Twenty days after being published in the Official Journal, the EU AI Act comes into force, marking the start of the official implementation period. However, not all obligations take effect simultaneously; some require immediate action, while others allow for a longer implementation period for operators to comply with the established requirements.

While most provisions will be implemented within the standard 24-month timeframe, some prohibitions and obligations will be enforced sooner, within 6 or 12 months from the Act's entry into force. Others will have a longer implementation period of up to 36 months. The following illustration outlines key aspects that all operators in the EU market should keep in mind.

AI systems that were placed on the EU market before the entry into force of the AI Act or shortly after may not be directly affected by the EU AI Act or may receive an extended implementation period, as stipulated in figure 9. General-purpose AI that entered the market before the entry into force of the AI Act or within the first 12 months after entry into force has 36 months to implement the requirements of the EU AI Act. High-risk AI that entered the market before the entry into force or within the first 24 months after entry into force is not automatically subject to the AI Act; only upon significant changes to the AI system will it have to apply the rules of the AI Act, though it remains to be seen what qualifies as significant changes and how strictly the Commission will apply this rule.

Fig. 8 – Implementation Timeline AI Act

Legislative process:
– Legislative proposal published by the EU Commission: 21 April 2021
– European Parliament & Council: opinions formed and political agreement reached
– Entry into force: August 2024

Implementation period (counted from entry into force):
– Member State governance (Member States to designate national supervisors): 3 months
– Unacceptable risk (prohibitions): 6 months, tentatively Q1 2025
– General-purpose AI; Commission guidelines on high-risk AI; Member State information on supervisory contacts: 12 months, tentatively Q3 2025
– All other use cases (high-risk AI under Annex III, Member State sandboxes, etc.): 24 months, tentatively Q3 2026
– High-risk AI safety components (Annex I): 36 months, tentatively Q3 2027
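For planning purposes, the tentative milestone dates in figure 8 can be derived mechanically from the month offsets. The following Python sketch assumes entry into force on 1 August 2024; the exact date follows from publication in the Official Journal, so treat the output as indicative only.

from datetime import date

# Assumption for illustration: entry into force on 1 August 2024.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Offsets in months from entry into force, per Fig. 8.
MILESTONES = {
    "Member State governance (designate national supervisors)": 3,
    "Prohibitions on unacceptable-risk AI": 6,
    "General-purpose AI & Commission guidelines": 12,
    "All other use cases (high-risk AI, Annex III; sandboxes)": 24,
    "High-risk AI safety components (Annex I)": 36,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day stays the 1st, so no clamping)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

for milestone, offset in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, offset).isoformat()}: {milestone}")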


Fig. 9 – Implementation Timeline AI Act – Special Cases

For AI systems placed on the market before the entry into force of the EU AI Act:
– Large-scale IT systems in the area of freedom, security and justice: implementation period until 2030
– General-purpose AI: 2 years from entry into force
– High-risk AI: not subject to the AI Act unless there is a significant change to the AI system
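These special cases amount to a small decision rule. The Python sketch below encodes figure 9 under our own category labels; note that "significant change" has not yet been defined by the Commission, so the boolean flag is a placeholder for that open question.

# Hedged sketch of the legacy-system rules in Fig. 9. Category labels are
# our own; what counts as a "significant change" is still to be defined.

def legacy_obligation(category: str, significant_change: bool = False) -> str:
    """Implementation rule for AI systems already on the market before the
    AI Act entered into force, per Fig. 9."""
    if category == "large_scale_it_freedom_security_justice":
        return "Must comply by 2030"
    if category == "general_purpose_ai":
        return "Must comply within 2 years from entry into force"
    if category == "high_risk_ai":
        return ("Subject to the AI Act once significantly changed"
                if significant_change else "Not subject to the AI Act")
    raise ValueError(f"Unknown category: {category!r}")

print(legacy_obligation("high_risk_ai", significant_change=True))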

The European Commission has recently initiated the AI Pact. This initiative is designed to support businesses in voluntarily complying with the AI Act ahead of its legal enforcement in 2026. The AI Pact serves as a collaborative platform, allowing companies to exchange ideas and strategies for adhering to the AI Act's guidelines. Businesses are currently invited to show their interest in this pact, with a preliminary meeting for stakeholders scheduled for early to mid-2024. By participating, companies will pledge to conform to the AI Act and will detail their compliance efforts. These measures will be collected and made public by the Commission. The Commission's role includes helping companies understand the AI Act, aiding in their preparation and adjustment, promoting knowledge exchange, and fostering trust in AI technologies.

Furthermore, the CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization) have commenced the process of operationalizing the AI Act through standards. For companies applying or planning to apply AI systems, a proactive approach is essential to guarantee compliance by the expected deadline: entities should have an implementation plan and start as early as possible.

Even if not all the technical details have been clarified yet, the AI Act gives a sufficient impression of the scope and objective of the future regulation. Companies will have to adapt many internal processes and strengthen risk management systems. However, they can build on existing processes within the company and learn from measures taken for previous laws such as the GDPR. We recommend that companies start preparing now: sensitize employees to the new law, take stock of AI systems, ensure appropriate governance measures, install proper risk classification and risk management over AI, and meticulously review AI systems classified as high-risk.

10. Glossary
Wording taken from the AI Act

Provider: A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

Downstream provider: A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.

Deployer: A natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

Authorized representative: A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.

Importer: A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.

Distributor: A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.

Operator: A provider, product manufacturer, deployer, authorized representative, importer or distributor.

Biometric data: Personal data resulting from specific technical processing relating to the physical, physiological or behavioral characteristics of a natural person, such as facial images or dactyloscopic data.

Biometric identification: The automated recognition of physical, physiological, behavioral, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database.

Biometric verification: The automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data.

Emotion recognition system: An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.

Biometric categorization system: An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons.

Remote biometric identification system: An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance, through the comparison of a person's biometric data with the biometric data contained in a reference database.

Real-time remote biometric identification system: A remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification but also limited short delays in order to avoid circumvention.

Deep fake: AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.


Get in touch
Contact us now to find out more about this legislation
and how we can support you in your AI journey.

CENTRAL EUROPE REGIONAL LEADS
Jan Michalski, Partner, Central Europe GenAI Leader – jmichalski@deloittece.com
Simina Mut, Partner, Deloitte Legal Central Europe Leader – smut@deloittece.com
Gregor Strojin, Deloitte Legal Central Europe AI Regulatory CoE Leader – gstrojin@deloittece.com

ALBANIA
Ina Cota, Manager – icota@deloittece.com
Ened Topi, Senior Manager, Deloitte Legal – etopi@deloittece.com

BOSNIA & HERZEGOVINA
Elma Delalic-Haskovic, Manager – edelalic@deloittece.com
Zerina Pacariz, Manager – zpacariz@deloittece.com

BULGARIA
Mila Goranova, Senior Consultant – mgoranova@deloittece.com
Adelina Mitkova, Senior Manager, Deloitte Legal – amitkova@deloittece.com

CROATIA
Zrinka Vrtarić, Attorney at law in cooperation with Deloitte Legal – zvrtaric@kip-legal.hr
Ratko Drča, Director – rdrca@deloittece.com

CZECH REPUBLIC
Jaroslava Kracunova, Partner, Deloitte Legal – jkracunova@deloittece.com
Jakub Holl, Director – jholl@deloittece.com

ESTONIA, LATVIA, LITHUANIA
Maksims Naumovs, Data Modernization and Analytics Offering Lead at Deloitte Central Europe, AI & Data Director – mnaumovs@deloittece.com
Romans Taranovs, AI & Data Director – rtaranovs@deloittece.com

HUNGARY
dr. Lili Albert LL.M., Senior Associate, Deloitte Legal – lialbert@deloittece.com
Gergő Barta, Ph.D., Senior Manager, AI Risk & Compliance – gbarta@deloittece.com

KOSOVO
Donika Ahmeti, Senior Manager – dahmeti@deloittece.com
Ardian Rexha, Senior Manager, Deloitte Legal – arrexha@deloittece.com

POLAND
Mateusz Ordyk, Partner, Deloitte Legal – mordyk@deloittece.com
Scibor Lapies, Partner – slapies@deloittece.com

ROMANIA
Simina Mut, Partner, Deloitte Legal Central Europe Leader – smut@deloittece.com
Andrei Paraschiv, Partner – anparaschiv@deloittece.com

SERBIA
Stefan Ivic, Partner – stivic@deloittece.com
Miroslava Gaćeša, Director – mgacesa@deloitteCE.com

SLOVAKIA
Dagmar Yoder, Partner, Deloitte Legal – dyoder@deloittece.com
Pavol Szabo, Senior Managing Associate, Deloitte Legal – pszabo@deloittece.com

SLOVENIA
Ana Kastelec, LL.M., Attorney at law, Local Partner in Law Firm Deloitte Legal Reff – Branch in Slovenia – akastelec@deloittece.com
Lan Filipič, Director – lfilipic@deloittece.com

UKRAINE
Andrii Krasnyi, Director – akrasnyi@deloittece.com
Vadym Matuzenko, Senior Manager – vmatuzenko@deloittece.com
Authors

Dr. Till Contzen, Partner | Legal, Intangibles, Data & Technology (Head for Germany), Deloitte Legal Germany
David Thogmartin, Partner | Risk Advisory, aiStudio | AI & Data Analytics, Deloitte Germany
Torsten Berge, Director, Algorithm & AI Assurance Lead DE, Deloitte Germany
Mosche Orth, Manager, Deloitte EU Policy Centre
Zoe Marie Lohoff, Algorithm Assurance, Deloitte Germany

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited (DTTL), its global network
of member firms, and their related entities (collectively, the “Deloitte organization”). DTTL (also
referred to as “Deloitte Global”) and each of its member firms and related entities are legally
separate and independent entities, which cannot obligate or bind each other in respect of third
parties. DTTL and each DTTL member firm and related entity is liable only for its own acts and
omissions, and not those of each other. DTTL does not provide services to clients. Please see
www.deloitte.com/de/UeberUns to learn more.

Deloitte provides industry-leading audit and assurance, tax and legal, consulting, financial advisory,
and risk advisory services to nearly 90% of the Fortune Global 500® and thousands of private
companies. Legal advisory services in Germany are provided by Deloitte Legal. Our people deliver
measurable and lasting results that help reinforce public trust in capital markets, enable clients to
transform and thrive, and lead the way toward a stronger economy, a more equitable society and
a sustainable world. Building on its 175-plus year history, Deloitte spans more than 150 countries
and territories. Learn how Deloitte’s approximately 457,000 people worldwide make an impact
that matters at www.deloitte.com/de.

This communication contains general information only, and none of Deloitte GmbH
Wirtschaftsprüfungsgesellschaft or Deloitte Touche Tohmatsu Limited (DTTL), its global network
of member firms or their related entities (collectively, the “Deloitte organization”) is, by means of
this communication, rendering professional advice or services. Before making any decision or
taking any action that may affect your finances or your business, you should consult a qualified
professional adviser.

No representations, warranties or undertakings (express or implied) are given as to the accuracy or completeness of the information in this communication, and none of DTTL, its
member firms, related entities, employees or agents shall be liable or responsible for any loss
or damage whatsoever arising directly or indirectly in connection with any person relying on this
communication. DTTL and each of its member firms, and their related entities, are legally separate
and independent entities.

Issue 06/2024
