GENERATIVE AI REGULATION AND CYBERSECURITY
A GLOBAL VIEW OF POLICYMAKING
TABLE OF CONTENTS
INTRODUCTION
GENERATING EFFECTIVE GOVERNMENT ACTION
CONCLUSION
INTRODUCTION
In an era marked by unprecedented technological advancements, the explosion of generative artificial intelligence (GenAI) into the public consciousness stands out. It has the potential to change how people live, learn, and work, and has already shifted the paradigm in cybersecurity. It is therefore not surprising that governments around the world are looking closely at GenAI and how it will impact the lives of their citizens. Some have already begun to regulate or otherwise oversee the usage and development of GenAI tools, while others are moving more cautiously, focusing their efforts on discovery and research. In both cases, GenAI (or foundation) models pose particular regulatory challenges given their adaptability and range of use.
1. Disclosure: Cyber incident reporting requirements vary widely across the globe, with organizations required to disclose an incident anywhere from 4 to 48+ hours after discovery. With GenAI increasing the speed of both offensive and defensive cyber operations, governments may feel pressure to shorten the window for these disclosures moving forward, which may limit an organization's ability to provide human oversight in assessing and remediating the incident in the critical hours after discovery.
2. Attribution: The ability to determine responsibility in cyberspace will be complicated by GenAI technologies, as adversaries have more tools to hide their identities and activities. This will be true for forensics professionals in both government and industry as obfuscation applications of GenAI models mature.
3. Data Jurisdiction: The rise of offshore data centers has raised several questions about data privacy, jurisdiction, collection, and storage mechanisms. Governments are already working to limit the input of individuals' information into GenAI models. We expect similar conversations about data jurisdiction to continue regarding the outputs and outcomes of these models.
4. Leadership Accountability and Liability: Historically, indictments of individual leaders for cybersecurity-related wrongdoing have been relatively rare; however, they are becoming more frequent.1 The emerging Chief AI Officer role may see similar legal or criminal exposure for any incomplete cybersecurity-related or risk-related disclosures.
5. Cyber Assistance: Given the global reach of both cyber- and GenAI-related harms, we anticipate a greater investment in and desire for international capacity building programs, geared toward both remediating attacks and proactively hardening cyber defenses.

1. "SEC Charges SolarWinds and Chief Information Security Officer with Fraud." U.S. Securities and Exchange Commission, 31 Oct. 2023, www.sec.gov/news/press-release/2023-227.
[Figure: international GenAI policy initiatives, including the United Nations GDC Process, the G7 Hiroshima AI Process, and the United Kingdom's Bletchley Declaration, alongside topics such as incident reporting]

POLICYMAKING APPROACH
• Goal based: An authority sets out an objective, rather than exact rules, specifications, or standards.
• Risk based: An authority defines regulations based on its assessment of risks and mitigations.
2. "New UN Advisory Body Aims to Harness AI for the Common Good." United Nations, 26 Oct. 2023, news.un.org/en/story/2023/10/1142867.
3. Von der Leyen, Ursula. "Statement by President Von Der Leyen on the Political Agreement on the EU AI Act." European Commission, 9 Dec. 2023, ec.europa.eu/commission/presscorner/detail/en/statement_23_6474.
4. "AI Pact." European Commission, 15 Nov. 2023, digital-strategy.ec.europa.eu/en/policies/ai-pact.
Once the law takes effect, companies that are not in compliance with the Act will face fines ranging from 1.5% to 7% of global sales or up to 35 million euros, whichever is greater. The newly formed European AI Office within the European Commission will oversee coordination among European authorities, as well as implementation and enforcement of the rules on general-purpose AI.
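As a rough illustration of the "whichever is greater" structure described above, the sketch below computes an upper-bound penalty from a hypothetical company's global sales. The 7% rate and 35 million euro floor are taken from the figures in the preceding paragraph; the function name, inputs, and example amounts are illustrative assumptions, not text from the Act.

```python
# Illustrative sketch only: approximates the "percentage of global sales or a
# fixed amount, whichever is greater" penalty structure described above.
# Actual fines depend on the violation tier and the final text of the Act.

def estimated_max_penalty_eur(global_sales_eur: float,
                              rate: float = 0.07,
                              fixed_floor_eur: float = 35_000_000) -> float:
    """Return the larger of a percentage of global sales and a fixed amount."""
    return max(rate * global_sales_eur, fixed_floor_eur)


if __name__ == "__main__":
    # Example: a firm with 2 billion EUR in global sales at the top 7% tier.
    print(estimated_max_penalty_eur(2_000_000_000))  # 140000000.0
    # Example: a smaller firm, where the 35 million EUR floor dominates.
    print(estimated_max_penalty_eur(100_000_000))    # 35000000.0
```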
5. Foo Yun Chee. "What Are Europe's Landmark AI Regulations?" Reuters, 9 Dec. 2023, https://www.reuters.com/technology/what-are-europes-landmark-ai-regulations-2023-12-09/.
6. "G7 Hiroshima Leaders' Communiqué." G7 Hiroshima, 20 May 2023, www.g7hiroshima.go.jp/documents/pdf/Leaders_Communique_01_en.pdf.
7. "G7 Hiroshima Process on Generative Artificial Intelligence (AI)." OECD, 7 Sept. 2023, www.oecd.org/publications/g7-hiroshima-process-on-generative-artificial-intelligence-ai-bf3c0c60-en.htm.
8. "G7 Hiroshima AI Process: G7 Digital & Tech Ministers' Statement." University of Toronto, 7 Sept. 2023, www.g8.utoronto.ca/ict/2023-statement.html.
9. "Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI System." G7 2023 Hiroshima Summit, www.mofa.go.jp/files/100573471.pdf.
10. "Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems." G7 2023 Hiroshima Summit, www.mofa.go.jp/files/100573473.pdf.
11. "G7 Leaders' Statement on the Hiroshima AI Process." G7 2023 Hiroshima Summit, 30 Oct. 2023, www.mofa.go.jp/files/100573466.pdf.
12. "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The White House, 30 Oct. 2023, www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
13. "DHS CISA and UK NCSC Release Joint Guidelines for Secure AI System Development." U.S. Cybersecurity and Infrastructure Security Agency, 26 Nov. 2023, www.cisa.gov/news-events/news/dhs-cisa-and-uk-ncsc-release-joint-guidelines-secure-ai-system-development.
14. "Blueprint for an AI Bill of Rights." The White House, Oct. 2022, www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
15. "AI Risk Management Framework." U.S. National Institute of Standards and Technology, 26 Jan. 2023, airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF.
16. "NIST AI Public Working Groups." U.S. National Institute of Standards and Technology, airc.nist.gov/generative_ai_wg.
17. "A Pro-innovation Approach to AI Regulation." Gov.UK, 3 Aug. 2023, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.
18. "About the AI Standards Hub." AI Standards Hub, aistandardshub.org/the-ai-standards-hub/.
19. "A Pro-innovation Approach to AI Regulation, Section 334." Gov.UK, 3 Aug. 2023, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#section334.
20. "The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023." Gov.UK, 1 Nov. 2023, www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
21. "Full Translation: China's 'New Generation Artificial Intelligence Development Plan' (2017)." DigiChina, Stanford University, 1 Aug. 2017, digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/.
22. "Translation: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) – April 2023." DigiChina, Stanford University, 12 Apr. 2023, digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/.
SINGAPORE
Singapore has taken a measured approach to the global race to regulate AI, with government officials confirming "we are currently not looking at regulating AI" as of July 2023.24 In lieu of formal regulation, Singapore is advocating for responsible AI measures, testing, and guidance for individuals and enterprises, building on strategic planning conducted prior to widespread AI access in late 2022, which includes the following pillars:
23. Marr, Bernard. "China's AI Landscape: Baidu's Generative AI Innovations In Art And Search." Forbes, 27 Sept. 2023, www.forbes.com/sites/bernardmarr/2023/09/27/chinas-ai-landscape-baidus-generative-ai-innovations-in-art-and-search/?sh=58991e4e419a.
24. Chiang, Sheila. "Singapore Is Not Looking to Regulate A.I. Just Yet, Says the City-State's Authority." CNBC, 19 Jun. 2023, www.cnbc.com/2023/06/19/singapore-is-not-looking-to-regulate-ai-just-yet-says-the-city-state.html.
25. Thong, Josh Lee Kok. "AI Verify: Singapore's AI Governance Testing Initiative Explained." Future of Privacy Forum, 6 Jun. 2023, fpf.org/blog/ai-verify-singapores-ai-governance-testing-initiative-explained/.
26. "Africa: AU's Malabo Convention Set to Enter Force After Nine Years." Data Protection Africa, 19 May 2023, dataprotection.africa/malabo-convention-set-to-enter-force/.
27. "African Union Convention on Cyber Security and Personal Data Protection." African Union, 27 Jun. 2014, au.int/sites/default/files/treaties/29560-treaty-0048_-_african_union_convention_on_cyber_security_and_personal_data_protection_e.pdf.
28. "Klobuchar Fighting AI Voice Cloning." Amy Klobuchar, United States Senator, 7 Nov. 2023, www.klobuchar.senate.gov/public/index.cfm/2023/11/klobuchar-fighting-ai-voice-cloning.
The ease and efficiency that make GenAI popular with the general public apply equally to those who would use it for nefarious purposes. Attackers don't need to know how to code to use AI to generate ransomware and dangerous hacking tools. Schemers who might otherwise struggle with language barriers can now generate phishing text that convincingly impersonates the people their targets trust most. Grooming, trafficking, and sexual abuse can all be facilitated by AI-generated fake profiles and believable chats. And terrorists and extremists can generate effective propaganda and spread misinformation for rapid, targeted recruiting.
CONSIDER TECHNOLOGY SAFEGUARDS AND FEASIBILITY
The full uses and applications of AI will never be easily defined, since the possibilities for utilization increase as the technology develops. Any regulatory and legal safeguards proposed must therefore be flexible enough to keep up with technological advances. Many initiatives stipulate that users should be able to opt out of engagement with AI systems and have access to a human alternative where appropriate. The need for reasonable and appropriate AI safeguards is clear, but determining, implementing, and enforcing those safeguards poses significant challenges with this rapidly developing technology.
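As a minimal sketch of what the opt-out safeguard with a human alternative mentioned above could look like in practice, the snippet below routes a request either to an AI responder or to a human queue based on a stored user preference. The function names, preference field, and placeholder behavior are hypothetical assumptions for illustration, not drawn from any specific regulation or product.

```python
# Minimal, hypothetical sketch of an "opt out of AI engagement" safeguard:
# users who have opted out are routed to a human alternative instead of an
# AI-generated response. All names and data structures are illustrative only.

from dataclasses import dataclass


@dataclass
class UserPreferences:
    ai_opt_out: bool = False  # set when the user declines AI engagement


def handle_request(prefs: UserPreferences, message: str) -> str:
    """Route a request to a human agent or an AI responder based on opt-out."""
    if prefs.ai_opt_out:
        return queue_for_human(message)
    return generate_ai_reply(message)


def queue_for_human(message: str) -> str:
    # Placeholder: a real system would enqueue the request for human staff.
    return f"Queued for human review: {message}"


def generate_ai_reply(message: str) -> str:
    # Placeholder: a real system would call a GenAI model here.
    return f"[AI draft reply to] {message}"


if __name__ == "__main__":
    print(handle_request(UserPreferences(ai_opt_out=True), "Close my account"))
    print(handle_request(UserPreferences(ai_opt_out=False), "Close my account"))
```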
ESTABLISH STANDARDS
Standards serve as the operational bedrock for AI, intricately
weaving systems, processes, and tools into a cohesive fabric.
Much like the ubiquitous Wi-Fi standard that effortlessly unites
diverse devices worldwide, AI standards are poised to define the
future landscape of innovation.
29. Borner, Peter. "Consent Fatigue." The Data Privacy Group, 13 Jun. 2022, thedataprivacygroup.com/blog/consent-fatigue/.
CONCLUSION
Governments and regulatory bodies will continue to consider GenAI's implications throughout 2024 and beyond. Effective governance and regulation will require finding a balance between hope and fear. While that balance is being sought, these technologies will continue to change our lives, both for better and for worse.
30. "FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI." The White House, 21 Jul. 2023, www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
31. "Ensuring Safe, Secure, and Trustworthy AI." The White House, 21 Jul. 2023, www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.
32. "FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI." The White House, 12 Sept. 2023, www.whitehouse.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.