Regulating AI in the UK
July 2023
If you would like more information on this report, or if you would like to
discuss implementing our recommendations, please contact our policy
research team at hello@adalovelaceinstitute.org.
Executive summary
‘Regulating AI’ means addressing issues that could harm public trust in
AI and the institutions using it, such as data-driven or algorithmic
social scoring, biometric identification and the use of AI systems in law
enforcement, education and employment.
3 ‘Three Proposals to Strengthen the EU Artificial Intelligence Act’ (Ada Lovelace Institute 2021)
https://www.adalovelaceinstitute.org/blog/three-proposals-strengthen-eu-artificial-intelligence-act/
4 Department for Science, Innovation & Technology and Office for Artificial Intelligence, Establishing A Pro-Innovation Approach to AI Regulation (2023)
https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement
5 ‘Data Protection and Digital Information (No. 2) Bill - Parliamentary Bills - UK Parliament’ https://bills.parliament.uk/bills/3430
6 ‘Tech Entrepreneur Ian Hogarth to Lead UK’s AI Foundation Model Taskforce’ (GOV.UK)
https://www.gov.uk/government/news/tech-entrepreneur-ian-hogarth-to-lead-uks-ai-foundation-model-taskforce
7 ‘UK to Host First Global Summit on Artificial Intelligence’ (GOV.UK)
https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence
Our recommendations fall into three categories, reflecting our three tests
for effective AI regulation in the UK: coverage, capability and urgency.
Coverage
AI is being deployed and used in every sector, but the UK’s diffuse legal and regulatory network
for AI currently has significant gaps. Clearer rights and new institutions are needed to ensure that
safeguards extend across the economy.
Challenge: Legal rights and protections. New legal analysis shows safeguards for AI-assisted decision-making don’t properly protect people.

Recommendation 1: Rethink the elements of the Data Protection and Digital Information Bill that are likely to undermine the safe development, deployment and use of AI, such as changes to the accountability framework.

Recommendation 2: Review the rights and protections provided by existing legislation such as the UK General Data Protection Regulation (GDPR) and the Equality Act 2010 and – where necessary – legislate to introduce new rights and protections for people and groups affected by AI to ensure people can achieve adequate redress.

Challenge: Regulatory gaps. The Government hasn’t addressed how its proposed AI principles will apply in many sectors.

Recommendation 5: Set out how the five AI principles will be implemented in domains where there is no specific regulator and/or ‘diffuse’ regulation, and also across the public sector.
Capability
Regulating AI is resource-intensive and highly technical. Regulators, civil society organisations and
other actors need new capabilities to properly carry out their duties.
Challenge: Scope and powers. Regulator mandates and powers vary greatly, and many will be unable to force AI users and developers to comply with all the principles.

Recommendation 6: Introduce a statutory duty for regulators to have regard to the principles, including strict transparency and accountability obligations.

Recommendation 7: Explore the introduction of a common set of powers for regulators and ex ante, developer-focused regulatory capability.

Recommendation 8: Clarify the law around AI liability, to ensure that legal and financial liability for AI risk is distributed proportionately along AI value chains.

Challenge: The regulatory ecosystem. Other actors such as consumer groups, trade unions, charities and assurance providers will need to play a central role in holding AI accountable.

Recommendation 10: Create formal channels to allow civil society organisations, particularly those representing vulnerable groups, to meaningfully feed into future regulatory processes, the work of the Foundation Model Taskforce and the AI Summit.

Recommendation 11: Establish funds and pooled support to enable civil society organisations like consumer groups, trade unions and advisory organisations to hold those deploying and using AI accountable.

Recommendation 12: Support the development of non-regulatory tools such as standards and assurance.
Urgency
The widespread availability of foundation models such as GPT-4 is accelerating AI adoption and
risks scaling up existing harms. Government, regulators and the Foundation Model Taskforce need
to take urgent action.
Challenge: Leadership. Priorities for AI development are currently set by a relatively small group of large industry players.

Recommendation 17: Ensure the AI Summit reflects diverse voices and an expansive definition of ‘AI safety’.

Recommendation 18: Consider public investment in, and development of, AI capabilities to steer applications towards generating long-term public benefit.
Introduction
…mechanisms if harms occur, and enjoyment of their benefits. Regulation
will need to be carefully designed to avoid entrenching the power of
existing players – in an already consolidated digital landscape9 – and to
create space for the UK to be competitive.

The UK approach rests on two main elements: AI principles that existing
regulators will be asked to implement, and a set of new ‘central functions’
to support them to do so.

Box 1: What do the public want from AI regulation?

In June 2023 the Ada Lovelace Institute published the results of a nationally
representative survey of UK public attitudes to 17 types of AI-powered
technologies.10
The survey found that most members of the British public are concerned
about risks from a broad range of AI systems, including those that contribute
to employment decisions, determine welfare benefits, or power in-home
devices that can infringe on privacy. Concerns cited ranged from the potential for
AI to worsen transparency and accountability in decision-making to the risk of
personal data being shared inappropriately.
• 62% said they would like to see laws and regulations guiding the use of AI
technologies
• 59% said that they would like clear procedures in place for appealing to a
human against an AI decision
• 56% want to make sure that ‘personal information is kept safe and secure’
• 54% want ‘clear explanations of how AI works’.
When asked who should be responsible for ensuring that AI is used safely,
people most commonly chose an independent regulator, with 41% in favour.
Support for this differs somewhat by age: 18–24-year-olds are most likely to
say companies developing AI should be responsible for ensuring it is used safely
(43% in favour), while only 17% of people aged over 55 support this.13
9 Ada Lovelace Institute, Rethinking data and rebalancing power (2022) https://www.adalovelaceinstitute.org/report/rethinking-data/
10 Ada Lovelace Institute and The Alan Turing Institute, How do people feel about AI? A nationally representative survey of public
attitudes to artificial intelligence in Britain (2023) https://www.adalovelaceinstitute.org/report/public-attitudes-ai/
11 Ibid.
12 Ada Lovelace Institute and The Alan Turing Institute (n 10).
13 Ibid.
To address this, the Government has signalled its intention to begin the
development of a more comprehensive regulatory framework for AI. In
2023 alone it has published a consultation on a policy paper – A pro-
innovation approach to AI regulation,15 begun to assemble a £100m
Foundation Model Taskforce,16 and announced that Britain will host a
global summit on AI Safety.17 Box 2 provides more information on the
UK’s journey towards comprehensive AI regulation.
These initiatives will shape the UK’s – and potentially the world’s –
approach to AI governance for years to come, and so getting them
right matters. We have analysed the Government’s proposals closely to
understand whether they will achieve these aims. Drawing on extensive
desk research, workshops with experts from across industry, civil society
and academia, and independent legal analysis from law firm AWO,19
the remainder of this report outlines the Government’s plans and puts
forward recommendations for how they can be improved.
Box 2: The UK’s journey towards comprehensive AI regulation

• the 2017 publication of ‘Growing the artificial intelligence industry in the UK’,
an independent review commissioned by government and carried out by
Professor Dame Wendy Hall and Jérôme Pesenti20
• the establishment in 2018 of the AI Council to advise the Government on AI
policy and ethics21
• the passage in 2018 of the Data Protection Act, which transposed the
European Union’s General Data Protection Regulation (GDPR) into UK law
• the publication in 2021 of the National AI Strategy, which outlines the
Government’s vision for the development of AI in the UK22
The AI principles
The European Union has proposed the Artificial Intelligence Act (AI Act),
which is likely to become law in 2024.31 The Act is a comprehensive piece of
legislation aimed at ensuring AI is safe and beneficial. This law employs a risk-
based approach and sets different regulatory requirements according to how
dangerous a particular AI technology can be. There are three categories of risk:
For more information on the European AI Act, you can read the Ada Lovelace
Institute’s extensive research in this area.32
Canada, through its proposed Artificial Intelligence and Data Act, takes a similar
approach to the European Union.33 Canada will not ban any AI applications
outright and will instead require AI developers to establish mechanisms that
minimise risks and improve transparency, ensuring AI applications respect anti-
discrimination laws and that their decision-making processes are clear.
31 ‘EU AI Act: First Regulation on Artificial Intelligence | News | European Parliament’ (6 August 2023)
https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
32 ‘Ada in Europe’ https://www.adalovelaceinstitute.org/our-work/europe/
33 ‘The Artificial Intelligence and Data Act (AIDA) – Companion Document’ (Innovation, Science and Economic Development Canada
2023) https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
Brazil’s Senate has put forward a draft AI regulation that also has clear parallels
to the approach of the EU AI Act.34
The United States has yet to propose a nationwide AI regulation. However, the
government has issued a ‘Blueprint for an AI Bill of Rights’ – a set of non-binding
guidelines to promote safe and ethical AI use.35 These guidelines include better
data privacy and protections against unfair decisions by AI systems. At the same
time, individual states and city authorities are developing their own AI regulatory
measures.
China has enacted many AI-relevant regulations since 2021, including a law for
personal data protection, an ethical code for AI and most recently guidelines on
the use of generative AI.36 Chinese laws grant users transparency rights to ensure
they know when they interact with AI-generated content and the option to switch
off AI recommendation services. Measures against ‘deepfakes’ – AI-generated
content that is realistic but false – are also in place. However, many of the existing
laws only apply to private companies that use AI and not to the Chinese state.
Other major economies, like Japan, India and Australia, have issued guidelines on
AI but have yet to pass any AI-specific legislation.37, 38, 39
42 Department for Science, Innovation & Technology and Office for Artificial Intelligence, (n 15).
43 Ibid.
We think that AI safety should mean keeping people and society safe from the
range of risks and harms that AI systems cause today – helping to mitigate those
harms, and providing appropriate redress and contestability when they do occur.
Broadly, AI harms can be grouped into four categories:
In some cases these harms are common and well-documented44 – such as the
well-known tendency of certain AI systems to reproduce harmful biases – but in
others they may be unusual and speculative in nature. Some commentators have
argued that powerful AI systems may pose extreme or ‘existential’ risks to human
society, while others have condemned such claims as lacking a basis in evidence.
At the Ada Lovelace Institute, we contend that this current polarisation masks
a more reassuring conclusion: that the solutions to both sets of risks will stem
from the same institutional capabilities, particularly the ability for regulators to
look ‘upstream’ at AI developers. It will be important for the definition of ‘AI safety’
used by the Government, the Foundation Model Taskforce and the AI Summit to
be an expansive one, reflecting the wide variety of harms that are arising as AI
systems become more capable and embedded in society.
44 For example, see: ‘Welcome to the Artificial Intelligence Incident Database’ https://incidentdatabase.ai/
Meeting the challenge of regulating AI

The success or otherwise of the UK’s approach to AI regulation will be
judged on how effective it is at addressing AI harms, and – in the event
that they occur – ensuring that those affected can seek appropriate
redress or contestability. The Government’s chosen mechanism for this
is the AI principles, which – if implemented effectively – will help to deliver
these outcomes.
Our research has identified three ‘tests’ that will determine their success: coverage, capability and urgency.
The remainder of this report provides further detail on these tests and
sets out 18 recommendations for how the UK Government, regulators
and the Foundation Model Taskforce can meet them.
Coverage – protections that extend across the economy

The Government’s proposed framework devolves implementation of the
AI principles to existing regulators, with the support of ‘central functions’.

AI harms can occur across the economy, and the mitigations afforded
by the AI principles should extend across the whole economy too. We
are concerned, however, that the coverage of the regulatory system as
proposed by the Government will be uneven.
Gaps in coverage
We asked AWO to consider three scenarios in which the use of AI could result in
unintended harms. These were:
The table below indicates the level of legal protection that AWO
identified in each sector,48 assessed against five questions:

• Are there legal requirements that the decision-maker must consider in advance?
• Is it likely that a regulator would prevent the AI harm through enforcement of those requirements?
• Would the individual be able to find out about and evidence the harm?
• Is there a legal right to redress for the harm?
• Is it practical for individuals to enforce any legal rights to redress?
Scenario 1 (Employment)
‘It is not realistic to expect the ICO and EHRC as cross-cutting regulators
to enforce the UK GDPR and EA with a completeness that will reliably
protect against AI harms. They do not have sufficient powers, resources,
or sources of information. They have not always made full use of the
powers they have.’49
49 Ibid.
The Ada Lovelace Institute has conducted extensive research on the governance
of biometrics in the UK, including:
Biometrics is one of several areas in which new rights and protections may need
to be introduced in order to ensure effective governance of AI in the UK, as
suggested in Recommendation 2.
50 Ada Lovelace Institute, ‘Independent Legal Review of the Governance of Biometric Data in England and Wales’
https://www.adalovelaceinstitute.org/project/ryder-review-biometrics/
51 Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/
52 Ada Lovelace Institute, Countermeasures: The need for new legislation to govern biometric technologies in the UK (2022)
https://www.adalovelaceinstitute.org/report/countermeasures-biometric-technologies/
This is in contrast with regulators who are mandated to balance different interests
and often take a particular view on questions of law or policy. To operate effectively,
an AI ombudsman would require access to sector-specific expertise, and would
therefore need to work closely with sector-specific regulators and ombudsmen.
54 ‘Projects Selected for the Regulators’ Pioneer Fund (2022)’ (GOV.UK)
https://www.gov.uk/government/publications/projects-selected-for-the-regulators-pioneer-fund/projects-selected-for-the-regulators-pioneer-fund-2022
55 Department for Science, Innovation & Technology and Office for Artificial Intelligence, (n 15).
There are a number of ways that the Government could do this. It could,
for example, expand the remit and functions of existing regulators
to ensure that sectors are adequately covered. It could also consider
introducing a ‘backstop regulator’ linked to the AI ombudsman, to
implement and enforce the AI principles in contexts and sectors that are
not comprehensively regulated at present.
The wider public sector is typically regulated horizontally via these frameworks,
and services like health and social care are subject to specific regulators like
the Care Quality Commission. However, there are many aspects of the public
sector that do not have specific regulators – for example, benefits and tax
administration by central government departments. These services can have
significant impacts on people’s lives, and in many of them AI is already being
extensively used (e.g. the use of AI in fraud prevention).
Capability – an empowered and well-resourced regulatory ecosystem

Regulating AI effectively is a resource-intensive technical challenge.
Not all regulators have the power to require organisations to publish
particular data or provide it to users of their services: education
regulators, for instance, notably lack this power.

Beyond statutory powers and responsibilities, regulators will also need
significant expertise – notably in technical domains – and new funding
to discharge their new AI responsibilities. The Government intends to
provide pooled expertise through the central functions. We welcome
this commitment – which was a key recommendation of Regulate to
innovate56 – but are doubtful that it will be sufficient unless the cross-
cutting and sectoral regulators with responsibility for regulating AI also
receive significant new resources.
The concept of ‘co-governance’ was absent from the White Paper, but
in practice effective regulation depends on collaboration between the
Government, regulators, and a wide variety of organisations representing
users and affected persons.
This is a reality at a national level, where civil society organisations
(CSOs) can speak with a unified voice, and at a localised level, where
these organisations can help to hold organisations deploying or using AI
to account, support individuals to navigate redress mechanisms, and
report incidents of harm.
For this reason, we were disappointed to see that initial Government
communications on the Foundation Model Taskforce and AI Safety
Summit omitted any reference to civil society expertise or participation,
and would welcome a commitment to meaningful involvement of these
groups.
Where regulators or the central functions identify AI risks that are poorly
mitigated or unmanaged by existing regulation, a policy response will be
required from Government. The process for the reporting of these risks,
and the Government’s consideration and response to them, should be
formalised in a notification and reporting process, ideally with some level
of public transparency to ensure accountability for responding.
The Government should consider the case for legislation that would
equip regulators with a common set of AI powers that would put them
on an even footing in addressing AI. We are aware of ongoing research
at The Alan Turing Institute which seeks to map existing regulator
powers and identify gaps. This could complement additional work by
Government or the Foundation Model Taskforce to identify gaps in
relation to foundation models specifically, as set out in Recommendation
14.
• Stronger transparency powers for (all) regulators that enable them to clearly
access, monitor and audit specific technical infrastructures, code and data
underlying a platform or algorithmic system, and could include proactive
notification to regulators of the development of higher-risk systems.
• Transparency rights that grant individuals access to more meaningful
information about decisions and underlying systems (e.g. the logic behind a
specific decision made about an individual), which would strengthen individuals’
ability to seek redress in practice; these rights should apply to both automated
and partially automated decision-making.
• Reconsideration of changes to the UK GDPR accountability framework
that reduce and de-standardise recording of information relevant for data
subjects exercising their transparency rights or seeking redress.
• Further rollout of the Algorithmic Transparency Recording Standard61 across
the public sector.
• Transparency labelling for AI-generated content, including chat/voice-
based products or artificially generated content that could deceive content
consumers in relation to real-world people and events.
58 Ian Brown, Allocating Accountability in AI Supply Chains (Ada Lovelace Institute 2023)
https://www.adalovelaceinstitute.org/resource/ai-supply-chains/
59 Ada Lovelace Institute and The Alan Turing Institute (n 10).
60 ‘CDEI Publishes Research on AI Governance’ (GOV.UK)
https://www.gov.uk/government/publications/cdei-publishes-research-on-ai-governance
61 ‘Algorithmic Transparency Recording Standard Hub’ (GOV.UK)
https://www.gov.uk/government/collections/algorithmic-transparency-recording-standard-hub
62 ‘NHS England » Voluntary, Community and Social Enterprise (VCSE) Health and Wellbeing Alliance’
https://www.england.nhs.uk/hwalliance/
63 ‘FCA Open Finance Call for Input - Lab Response’ (Finance Innovation Lab)
https://financeinnovationlab.org/insights/open-finance-response/
• Developing the skills base. The technology sector will need teams,
roles and staff with the skills to conduct risk and impact assessments.
In particular, many methods involve identifying and coordinating
diverse stakeholders, and the use of participatory or deliberative
methods that are not currently widespread in the technology sector,
but are more established in other domains such as participatory
research, policy, design, academic sociology and anthropology.
64 ‘AI Standards Hub – The New Home of the AI Standards Community’ (AI Standards Hub) https://aistandardshub.org/
For more information on these activities, and how the Government and regulators
can facilitate them, read the Ada Lovelace Institute’s recent research paper.65
65 Ada Lovelace Institute, AI risk: Ensuring effective assessment and mitigation across the AI lifecycle
https://www.adalovelaceinstitute.org/report/risks-ai-systems/
Urgency – taking action before it’s too late

Foundation models, sometimes called ‘general-purpose AI’ or ‘GPAI’, are
powerful AI systems that are capable of a range of general tasks (such as
text synthesis, image manipulation and audio generation).

The third factor is sufficient urgency on current and emerging risks.
The Government envisions a timeline of at least a year before the
first iteration of the new AI framework is implemented, with further
time needed to evaluate its effectiveness and address any emerging
limitations.

Foundation models are already being used to add novel features to
applications ranging from search engines (like Bing) and productivity
software (like Office365) to language learning tools (such as Duolingo
Max) and video games (such as AI Dungeon). In some cases they are
accessible through widely available application programming interfaces
(APIs), which enable businesses to integrate them into their own services.
This widespread availability means that they can be integrated into
products, services and organisational workflows more easily than many
other types of AI.

Contact tracing apps can be taken as a test of public acceptance of
powerful technologies that entail sensitive data and are embedded in
everyday life.
The first of these relates to where foundation models are located in the AI value
chain. As discussed above, existing regulators focus on outcomes, meaning – in
practice – that they’re only equipped or incentivised to look at technology at the
point of use or commercialisation.
Foundation models (like GPT-4) are often the building blocks behind specific
technological products that are available to the public (like Bing), and themselves
sit upstream of complex value chains. This means that regulators may struggle to
identify whether a harm from a product is best remedied by the deployer of the
tool, or if responsibility should lie with the upstream foundation model developer.
Determining which organisations in a value chain are most able to address AI-
related harms is a challenge and can create uncertainty around legal liability
for negative outcomes. We contend that granting regulators ex ante powers
(Recommendation 7) and clarifying liability rules (Recommendation 8) will help to
address this.
The second is that the development of foundation models is predominantly
happening in the companies that already hold the majority of power in the
digital economy.

Over time, most AI expertise has been acquired by industry: in 2004, only 21% of
AI PhDs went to work in industry; by 2020, almost 70% were employed there. It is
therefore no coincidence that those at the frontier of developing foundation models
and their applications are the same tech platforms that have been dominating the
digital ecosystem for the past decade: Google, Microsoft and Meta.
The rise of foundation models may in turn further entrench the existing market
power of these global corporations, making them difficult for a single, small
regulator to challenge – as well as perpetuating wider competition challenges and
market harms.
Finally, there exists a wide spectrum of different release strategies for foundation
models,67 ranging from fully closed or internal use only to downloadable and fully
open source. Models released in a relatively controlled or staged manner may in
some respects be easier to govern, whereas open-source models pose challenges
in terms of regulatory control and liability.
These challenges are not insurmountable, given sufficient time and resource.
We urge the Government to immediately allocate significant resource and future
Parliamentary time to enable a robust, legislatively supported approach to
foundation model governance as soon as possible (Recommendation 13), and to
take a number of steps in the immediate term to support better governance of
foundation models (Recommendations 14–17).
67 Solaiman I, ‘The Gradient of Generative AI Release: Methods and Considerations’ (arXiv, 5 February 2023)
http://arxiv.org/abs/2302.04844
We also contend that there is a need to review the opportunities for more
proactive enforcement of existing UK law and regulation that addresses
the risks of foundation models (notably the UK GDPR, the Equality Act
2010 and the intellectual property regime). At present, the compliance
of many widely available foundation models with these legal regimes is
questionable.
Another potential pilot project could develop benchmarks and evaluations to test
for the potential harms and risks foundation models may raise in deployment.
These benchmarks and evaluations can be aimed at two layers:
Compute is a critical input into AI progress, and much more easily monitored
than other inputs such as data or talent. Beginning to collect and act on
information about compute usage would make it easier in future to systematically
identify potentially high-risk capabilities ahead of time, supporting the
Government to more effectively direct regulatory attention and risk assessment
to those capabilities.
For more information on potential monitoring activities, read the Ada Lovelace
Institute’s recent report Keeping an eye on AI.70
70 Ada Lovelace Institute, Keeping an eye on AI: Approaches to government monitoring of the AI landscape (2023)
https://www.adalovelaceinstitute.org/report/keeping-an-eye-on-ai/
71 Centre for the Governance of AI, ‘Proposing a Foundation Model Information-Sharing Regime for the UK | GovAI Blog’
https://www.governance.ai/post/proposing-a-foundation-model-information-sharing-regime-for-the-uk
Addressing AI safety will require legislative time and resource, and in the
shorter term, the voluntary cooperation of industry. Achieving this will be
more feasible if most major economies set the same expectations, and
so reaching these agreements – for example on reporting requirements,
as discussed in Recommendation 18 – should be a priority for the
Summit.
The current ‘AI moment’ is a critical inflection point for these challenges:
as AI uptake rapidly increases, societies risk unwittingly locking
themselves into a set of technologies, and economic dynamics, that are
not necessarily optimal.
75 Ibid.
76 Ada Lovelace Institute (n 1).
Conclusion
As this report sets out, robust domestic policy will underpin the
fulfilment of this ambition: otherwise, the system proposed by the
Government risks being undermined by challenges relating to the
coverage of the UK regulatory system, the capability of regulators and
other actors to discharge their functions, and failure to act now on
urgent and critical risks.
The recommendations set out in this report reflect the Ada Lovelace
Institute’s current thinking on how these challenges can be overcome.
We will continue to work with the Government, regulators, civil society
organisations, politicians from all parties and the wider policy community
to develop approaches to policy and practice and help to ensure that AI
regulation in the UK works for people and society. If you would like more
information on this report, or if you would like to discuss implementing
our recommendations, please contact our policy research team at
hello@adalovelaceinstitute.org.
Acknowledgements and
methodology
Roundtable attendees
Access Now
AI Law Hub
AWO
Brookings Institution
Centre for Long Term Resilience
Centre for the Study of Existential Risk
Chatham House
Collective Intelligence Project
Conjecture
Connected by Data
DeepMind
DefendDigitalMe
Distributed AI Research Institute
Equality and Human Rights Commission
Form Ventures
Centre for the Governance of AI
Hertie School
HuggingFace
Information Commissioner’s Office
Legal Education Foundation
Mozilla
Newcastle Law School
Office for AI
Ofcom
Open Data Institute
Open Future
Royal Society
Stiftung Neue Verantwortung
The Future Society
The Tony Blair Institute for Global Change
Trades Union Congress
Which?
Worker Info Exchange
Yale University
About
The mission of the Ada Lovelace Institute is to ensure that data and AI
work for people and society. We believe that a world where data and
AI work for people and society is a world in which the opportunities,
benefits and privileges generated by data and AI are justly and equitably
distributed and experienced.
Website: www.adalovelaceinstitute.org
Twitter: @AdaLovelaceInst
Email: hello@adalovelaceinstitute.org
Permission to share: This document is published
under a Creative Commons licence: CC-BY-4.0