
E-Book: First Global Report on AI in Legal Practice

The LLI Whitepaper presents the first global report on the state of artificial intelligence (AI) in legal practice, highlighting the challenges law firms face in 2023, including declining productivity and rising operational costs. It emphasizes the growing role of AI in addressing these challenges, with a focus on generative AI tools that can enhance efficiency and reduce costs. The report also discusses the need for responsible AI implementation and the potential economic impact of AI on the legal industry, estimating significant annual savings and productivity gains.


LLI Whitepaper

First Global Report on


the State of Artificial
Intelligence in Legal
Practice

Edited by: Michał Jackowski, Michał Araszkiewicz

In cooperation with: Adam Zadrożny, Andrzej Porębski

With expert commentary by Federico Costantini, Gijs van Dijck,


Martin Ebers, Enrico Francesconi, Saptarshi Ghosh, Jakub Harašta,
Marc Lauritsen, Tomer Libal, Juliano Maranhão, Masha Medvedeva,
Ugo Pagallo, Ken Satoh, Burkhard Schäfer, Giovanni Sileno,
Jaromír Šavelka, Minghui Xiong, Xiao Chi, Vern R. Walker,
Bernhard Waltl, John Zeleznikow, Tomasz Żurek.
DOI: 10.38023/5501c854-14eb-4529-b6b9-44343e477f44
TABLE OF CONTENTS

1. Introduction ............................................................................................ 5

1.1. Foreword ............................................................................................... 5

1.2. A picture of the law firm market in 2023 ............................................ 5

1.2.1. Declining financial results and customer optimism .............. 5

1.2.2. A decrease in the productivity of lawyers and an increase in the cost of operating law firms ........................................... 6

1.2.3. Summary ................................................................................... 7

1.3. The role of artificial intelligence in meeting the challenges facing law firms ......... 8

1.3.1. AI - basic concepts ................................................................... 8

1.3.2. AI in the legal services market in past reports ...................... 9

1.4. Authors and methodology of the report ........................................... 10

1.4.1. Editorial team ............................................................................ 10

1.4.2. Report methodology ............................................................... 17

2. Current state of AI implementation in law firms ................ 22

2.1. What part of lawyers’ tasks can be replaced by AI .......................... 22

2.2. AI adoption in law firms ........................................................... 23

2.3. Criteria for the selection of AI tools by law firms .................. 24

2.4. Where lawyers get their AI knowledge .................................. 26

2.5. What tools do law firms use .................................................... 27

2.6. Impact of generative AI on technological development ..... 28

2.7. Caution in the process of implementing AI in law firms ....... 28

LLI WHITEPAPER | Nº 3 (EN) | 2023 | 3


3. AI’s impact on the legal industry labor market .................. 31

4. How to implement AI in a law firm responsibly .................. 35

4.1. Lack of proper standards as an obstacle to AI implementation ....................................................................... 35

4.2. AI’s impact on solving legal industry problems - predictions ................................................................................ 35

5. Expert Comments to AI report .................................................... 43

6. Summary and conclusions ............................................................... 122

7. Bibliography .......................................................................................... 125



1. INTRODUCTION

1.1. FOREWORD
This report presents the results of the survey “AI in Law Firms 2023”, together with
expert commentary provided by leading researchers working in the field of AI and
Law, Legal Informatics and the Law of AI. Many of them are affiliated with organiza-
tions that have been delving into the nexus between artificial intelligence and law
for over three decades. Despite this long-standing focus, the recent surge in the
adoption of artificial intelligence over the past year prompted a decision to expand
the research scope to encompass empirical considerations. Our objective was to
understand the current tangible impact of AI on the realm of legal practice. To this
end, we surveyed over 200 law firms globally, representing nearly 100,000 legal pro-
fessionals. We sought to ascertain their perspectives on the rapid proliferation of
AI, exploring how they employ AI tools, their concerns, perceived limitations, and
the opportunities they identify. Subsequently, this report was reviewed by a select
group of researchers who provided insights into our findings and formulated their
own views on the future development of AI tools’ implementation in legal practice.
This publication harmoniously melds the empirical and statistical outcomes with a
scientific exploration of AI’s evolution in the legal sector.

1.2. A PICTURE OF THE LAW FIRM MARKET


IN 2023
1.2.1. Declining financial results and customer optimism

Law firms entered 2022 after two years of crisis related to the COVID-19 pandemic. Evidently, they learned to cope with its effects both internally, in their teams, and externally, in their client relationships, which were not only maintained but improved. The year 2022 brought good financial results for law firms.

We are now at the end of 2023, and it has clearly brought many more challenges: slowing demand, less client optimism, rising costs, declining team productivity, and inflation, which is a global trend. Added to this are technological changes and the explosive popularity of generative artificial intelligence, which prompted this report. Generative AI is changing the work of law firm clients. As clients see for themselves how generative artificial intelligence improves productivity and efficiency and reduces operating costs, they may come to require the same of their lawyers, starting with in-house departments and extending to law firms. A significant reshuffling of the legal market hierarchy is anticipated as a result. Firms that adapt to this trend can significantly improve their market position. On the other hand, using artificial intelligence-based tools also carries numerous risks.

According to the Thomson Reuters Institute’s 2023 “Report on the State of the Legal Market,” the last quarter of 2022 already showed a decline in demand for legal services, especially among the largest firms and particularly for transactional work (for example, the AmLaw 100 reported a 9% decline in demand for transactional work in Q4 2022). The year 2023 has continued this trend.

Another indicator that points to some stagnation in the industry is profits per equity partner (PPEP), which, having reached a record high in 2021, declined in 2022 for the first time since 2009.

In the second half of 2022 and into 2023, we also see a decline in optimism among
law firm clients about increasing spending on in-house counsel. Spending is ex-
pected to decline in banking, finance and insurance, and, according to some in-
house counsel, also in litigation and mergers and acquisitions.

Mid-sized firms were the only market segment to see an increase in demand. The aforementioned report attributes this to clients seeking high quality at the better rates offered by smaller, more efficient firms.

1.2.2. A decrease in the productivity of lawyers and an increase in


the cost of operating law firms

In 2022, law firms grew and, especially in the first half of the year, competed hard for talent. The result was the highest increase in direct spending on salaries since 2008, as well as additional costs associated with business development and the return of teams to offices after the pandemic. After the fourth-quarter 2022 revenue decline described above and the deterioration in performance, law firms faced a need to cut costs in 2023. Hence the layoffs that began in 2023. They primarily affect junior lawyers and associates, trimming some teams by a dozen or so per cent.



Along with the increase in employment costs, we are seeing another factor behind the decline in law firm profitability. In 2022, firms experienced record-low employee productivity. For the past few years, the average number of hours worked per employee has been trending downward; in 2022, however, it dropped sharply to an average of 119 monthly hours. The reasons for the decline are difficult to pin down, but they may be related to remote work and the difficulty of returning to offices (in 2022, more than 40% of young lawyers said they were reluctant to return), as well as post-pandemic problems such as professional burnout or a sense that the relationship with the employer has become purely transactional, with professionalism no longer valued. Depending on the group surveyed and the firm’s actions toward its employees, up to 37% of all lawyers report this experience.

The decline in productivity has admittedly been accompanied by increases in the rates law firms charge. In the US, the increase averaged 4.8% in 2022. Meanwhile, the average annual inflation rate in the US over the same period was 5.0%. Rate growth has thus been outpaced by inflation, exacerbating the financial challenges facing law firms. Law firms in other parts of the world are in a similar situation. The inflationary cost increase is accompanied by the need to negotiate new inflation-adjusted rates, which do not always keep pace with the inflation rate.
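To see why 4.8% nominal growth against 5.0% inflation amounts to a real decline, the inflation-adjusted change can be computed from the two figures above (a rough illustration; the percentages are those cited in this section, the calculation itself is standard):

```python
# Real (inflation-adjusted) change in law firm rates, using the
# figures cited above: 4.8% nominal growth vs. 5.0% inflation.
nominal_growth = 0.048
inflation = 0.050

# Deflate the nominal growth by the price level increase.
real_growth = (1 + nominal_growth) / (1 + inflation) - 1

print(f"Real change in rates: {real_growth:.2%}")  # -0.19%
```

In other words, despite a headline increase of nearly five per cent, firms effectively earned about a fifth of a percentage point less per hour in real terms.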

1.2.3. Summary

Law firms in 2023 therefore face a number of challenges:

– rate increases below the inflation rate

– a sharp, higher-than-inflationary increase in the cost of employees and other


costs of doing business

– a decrease in the efficiency of lawyers’ work, which, combined with the
billable-hours model, results in a decrease in firm revenue

– adverse client reactions to continued price increases and clients’ search
for more efficient billing models.

In 2024, firms need to respond to these challenges to compete effectively in a technologically evolved legal services market, and the use of generative AI is one of the solutions they see.



1.3. THE ROLE OF ARTIFICIAL INTELLIGENCE
IN MEETING THE CHALLENGES FACING
LAW FIRMS
1.3.1. AI - basic concepts

Artificial intelligence has been with us for several decades. However, it was in 2022 and especially 2023 that we experienced an explosion in the popularity of generative AI after the AI winter.

ChatGPT, Bard, GitHub Copilot, Midjourney, AnyLawyer and other generative artificial intelligence tools that attracted public attention in 2023 are the result of significant investments in recent years that have advanced machine learning and deep learning. Over the past decade, artificial intelligence penetrated our lives gradually. However, it was mainly visible in B2B relations and did not break through into the public consciousness, except for moments when we watched its breakthrough victories in chess or in Go with AlphaGo. It was not until late 2022 that ChatGPT came into widespread use, reaching more than 100 million users and more than 1.6 billion page views in June 2023. In addition, plug-ins for ChatGPT, its competitors (such as Bard and Claude), and applications based on large language models, using AI for efficiency improvements in almost every sphere of life, came into widespread use. In the legal world, these include tools such as Harvey, Spellbook, Lexion and AnyLawyer.

Large language models contain elaborate artificial neural networks, inspired by the billions of interconnected neurons in the human brain, and are part of what is called deep learning. They can process enormous, diverse sets of unstructured data and perform more than one type of task. Tools based on generative AI can work with text, images, sound, video, and computer code. They can create, summarize, edit, transform, and classify material. The fundamental change we are seeing, however, is the combination of this capability with the ability to communicate with the tools in ordinary language: they interact with users in a human-like manner. Consequently, project work now involves a new stakeholder beyond the human team: AI and the tools built with it.

Following the June 2023 McKinsey & Company report, “The economic potential of generative AI: The next productivity frontier”, we can cite a number of AI-related challenges facing businesses. Generative AI requires the training of large language models, and the computing power required to do so can become a bottleneck in its development. The second challenge is the lack of regulation and the need to operate in a regulatory grey area. However, new regulations were already being adopted in the second half of 2023. China’s Interim Measures for the Management of Generative Artificial Intelligence Services went into effect on August 15, 2023. Canada is working on the AI and Data Act as part of Bill C-27, and the European Union has been conducting trilogue negotiations on the AI Act since June 2023. The UK has created the Digital Regulation Cooperation Forum, comprising four digital regulatory agencies. In the US, the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence was issued on October 30, 2023. The regulatory trend can therefore be inferred: future regulations will require AI to be more transparent, honest, accountable and trained on data obtained from legitimate sources. Such requirements could significantly increase its costs. Nevertheless, generative AI funding, while still a fraction of total AI investment, is significant and increasing, reaching a total of $12 billion in the first five months of 2023 alone. Venture capital and other private outside investment in generative artificial intelligence grew at an average annual rate of 74% from 2017 to 2022.

A huge development of tools has accompanied this fast financial track. GPT-3 was released in 2020. In November 2022, GPT-3.5 and ChatGPT, which uses this language model together with a fine-tuning process designed specifically for conversation modelling, were released. Four months later, OpenAI released GPT-4 with much greater capabilities, including multimodal capabilities enabling it to process images as well as text. In March 2023, Anthropic released an LLM called Claude, which just two months later had roughly ten times greater text-processing capacity. In May 2023, Google released PaLM 2, the engine of the Bard chatbot, allowing Google customers to collaborate with generative AI; it was implemented in the European Union in July 2023.

A considerable advantage of generative artificial intelligence is its adoption across almost the entire world. Significant outside private investment is still concentrated in the US, but technology adoption is happening everywhere. This includes the legal industry, where the first companies using AI (Della, Kira, Legal Sifter, Luminance) or generative AI (Spellbook, Harvey) were established in the US, while adoption by others is occurring primarily in Europe and Asia (AnyLawyer).

1.3.2. AI in the legal services market in past reports

Few studies to date address the issue of AI adoption in the legal services market.
The aforementioned McKinsey & Company 2023 report predicts that the services
most susceptible to the impact of AI are those related to sales, marketing, cus-
tomer operations and software engineering. It is in these spheres that this technol-
ogy can add value to the entire organization by revolutionizing internal knowledge



management systems. It can help employees acquire and share knowledge stored in the organization. Employees can access it continuously by formulating queries to internal databases as they would ask a human being, and the technology can remain constantly engaged in dialogue. This gives teams quick access to relevant information, enabling them to make more informed decisions and develop effective strategies efficiently.
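The idea of querying an internal knowledge base in ordinary language can be sketched very roughly as follows. This is a deliberately simplified illustration: real systems use vector embeddings and a large language model for the retrieval and dialogue steps, and the `retrieve` function, the `kb` contents and the document names here are all hypothetical; a plain word-overlap score stands in for the retrieval step.

```python
# Toy sketch: match a plain-language question to the most relevant
# internal document by counting shared words (illustrative only).
def retrieve(query: str, documents: dict[str, str]) -> str:
    """Return the name of the document sharing the most words with the query."""
    query_words = set(query.lower().split())

    def score(text: str) -> int:
        return len(query_words & set(text.lower().split()))

    return max(documents, key=lambda name: score(documents[name]))

# Hypothetical internal knowledge base of a law firm.
kb = {
    "engagement-letter-policy": "standard terms for client engagement letters",
    "billing-guidelines": "billable hours recording and invoicing rules",
}

print(retrieve("how do I record billable hours", kb))  # billing-guidelines
```

The point of the real systems described above is precisely that the query can be phrased as one would ask a colleague, rather than in a database query language.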

McKinsey & Company estimates that the impact on the legal industry will also be significant. It puts the potential impact of implementing generative AI at nearly $100 billion annually, representing about 15-20% of legal services spending in companies where AI would be implemented. In addition, according to the report’s authors, artificial intelligence has the potential to generate about $180-260 billion in additional value in the legal industry, with the banking industry a particular focus and relatively high potential for change in the insurance, telecommunications, real estate and energy industries.
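As a back-of-the-envelope consistency check of these figures (an illustration only; the dollar amount and the percentage range are the McKinsey report's, the arithmetic is ours), the base of legal services spending implied by the estimate can be computed:

```python
# If ~$100B of annual savings represents 15-20% of legal services
# spending where AI is implemented, the implied spending base is
# savings divided by that share.
annual_savings = 100e9               # ~$100 billion, per the McKinsey estimate
share_low, share_high = 0.15, 0.20   # 15-20% of spending

implied_base_high = annual_savings / share_low   # larger base at the 15% share
implied_base_low = annual_savings / share_high   # smaller base at the 20% share

print(f"Implied spending base: ${implied_base_low/1e9:.0f}B to ${implied_base_high/1e9:.0f}B")
```

That is, the estimate implies an addressable legal-services spending base of roughly half a trillion to two-thirds of a trillion dollars annually.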

A study was also conducted by Above the Law in 2023, which surveyed 275 lawyers
about their attitudes toward artificial intelligence. More than 80% of the lawyers
surveyed agreed that generative artificial intelligence would create “transforma-
tive efficiencies” in research and routine tasks. Lawyers also shared their views on
the possibility of AI replacing some legal professions. Some 71% said generative
AI could replace document review lawyers within the next decade, and 68% said it
could have a similar impact on law librarians. Some 41% said paralegals could be
replaced in the next 10 years.

From these two reports, one can conclude that AI adoption in the legal industry carries great potential. However, it has not yet been sufficiently explored. This prompted the authors to conduct their own study and prepare the first report to comprehensively discuss the state of AI adoption in the legal industry, lawyers’ attitudes toward these changes, and predictions regarding potential changes.

1.4. AUTHORS AND METHODOLOGY OF


THE REPORT
1.4.1. Editorial team

The report was prepared by an interdisciplinary team and is not affiliated with any organization. The e-book was prepared in cooperation with the Liquid Legal Institute. The editors are:



1. Michał Jackowski, professor of constitutional law, Ph.D. at Wrocław University, attorney-at-law and tax advisor, co-founder of DSK Law Firm, one of Poland’s largest tech law firms, co-founder of AnyLawyer, a start-up implementing generative AI in law firms and large organizations, and member of the International Association for Artificial Intelligence and Law and ITechLaw.

2. Michał Araszkiewicz, PhD in legal theory, assistant professor at the Department of Legal Theory at Jagiellonian University in Kraków, attorney-at-law, author of over 100 scientific publications, involved in the activities of the Executive Committee of the International Association for Artificial Intelligence and Law and the JURIX Foundation.

The report was prepared in cooperation with:

1. David Cambria, Managing Director of Legal Business Solutions at PwC, Chief Services Officer at Baker McKenzie, director of global operations at leading law firms, recipient of many Corporate Counsel awards, including Best Law Department, founder of leading corporate counsel organizations such as the Association of Corporate Counsel and CLOC (Corporate Legal Operations Consortium), and lecturer at the University of Chicago Law School.

2. Federico Costantini, (M.D. in Law, Ph.D. in Philosophy of Law) is Associate Pro-


fessor of Legal informatics in the Department of Law at the University of Udine
(Italy). He has published several contributions on data protection, AI ethics and
DLT legal issues, and has been invited as a speaker at national and international
conferences. He is currently involved in the Cost Action 19143 (GDHRNet), on
the protection of human rights in the online context, and in the TRUTHSTER
project (https://truthster.io), concerning the certification of media contents
via blockchain. He is a member of the NuMe Lab http://nume.uniud.it, the
AI4HRC Lab http://clp.dimi.uniud.it/projects/ai4hrc and the Italian Distributed
Ledger Technology Working Group (https://dltgroup.dmi.unipg.it). Having served as Data Protection Officer of the University of Udine for almost a year, he currently serves as a legal expert on two Internal Review Boards (DI4A, DMIF).

3. Gijs van Dijck integrates legal, empirical, and computational analysis in order to
improve the description, application, understanding, and evaluation of the law.
He has taught courses on tort law, contract law, property law, empirical legal
research, and computational legal research. Gijs has published in top journals
including the Journal of Empirical Legal Studies and the Oxford Journal of Le-
gal Studies. He has been a speaker at various conferences, including ones at
Oxford, Harvard, Yale, Duke and Cornell. He was a visiting scholar at Stanford
University in 2011. Gijs is a Professor of Private Law, director of the Maastricht



Law and Tech Lab, Principal Investigator at Brightlands Institute for Smart Soci-
ety (BISS), and researcher at M-EPLI.

4. Martin Ebers, President of the Robotics & AI Law Society (RAILS), Germany, and
Professor of IT Law at the University of Tartu, Estonia. Moreover, he is a per-
manent fellow at the law faculty of the Humboldt University of Berlin, and co-
director of the German Institute for Energy and Competition Law in the Public Sector. In 2022, Martin was awarded a five-year grant from the Wallenberg Foundation (WASP-HS, the Wallenberg Program on Humanities and Society for AI and Autonomous Systems – Guest Professorship at Örebro University, Sweden) to conduct research on “Private Rule-making and European Governance of AI and Robotics”. His latest books include, among others, Algorithms and Law (Cambridge University Press, 2020), Contracting and Contract Law in the Age of Artificial Intelligence (Hart Publishing, 2022), and the Stichwortkommentar Legal Tech (Nomos Publishing, 2023).

5. Enrico Francesconi is a Research Director at IGSG-CNR, the Institute for Legal Informatics and Judicial Studies of the National Research Council of Italy, and currently a Policy Officer at the European Parliament. His main research interests include Semantic Web technologies and AI techniques for the legal domain. He has previously served as Policy Officer at the European Commission – DG Publications Office. He is President of the Nominating Committee of the International
Association for Artificial Intelligence and Law (IAAIL) and Member of the Steer-
ing Committee of the Jurix Foundation. He was IAAIL President for the period
2020-2021 and Member of the IAAIL Executive Committee (2014-2021). He is
Section Editor on Ontology and Knowledge Representation of the Artificial In-
telligence and Law journal (Springer), co-Editor in Chief of the Journal on Open
Access to Law (Cornell University, Law School), Scientific Advisory Board Mem-
ber of Law, Governance and Technology Series (Springer). He was Conference
Chair of the Fourteenth International Conference on Artificial Intelligence and
Law (ICAIL 2013, Rome), Program Chair of the 34th International Conference
on Legal Knowledge and Information Systems (Jurix 2022), Program co-Chair
of the International Conference on Electronic Government and the Informa-
tion Systems Perspective (EGOVIS). He is contract professor of Information Re-
trieval and Semantic Web Technologies at the Computer Science Faculty of the
University of Florence.

6. Saptarshi Ghosh (http://cse.iitkgp.ac.in/~saptarshi/) is an Associate Professor of Computer Science and Engineering at the Indian Institute of Technology, Kharagpur. His research interests include legal analytics, social media analytics, and algorithmic bias and fairness. He obtained his Ph.D. in Computer
Science from IIT Kharagpur, and was a Humboldt Postdoctoral Fellow at Max
Planck Institute for Software Systems, Germany. He has published more than



100 research papers, including several papers in top conferences (including
WWW, SIGIR, CIKM, ICWSM, CSCW) and journals (including ACM TWEB, IEEE
TCSS, AI and Law). He was awarded the Institution of Engineers (India) Young
Engineer Award 2017-18 in the Computer Engineering discipline. He presently
leads a Max Planck Partner Group at IIT Kharagpur, that focuses on topics re-
lated to Algorithmic bias and fairness. His works have been awarded at the top
Law-AI conferences, including the Best Paper Award at JURIX2019 and the Best
Student Paper Award at ICAIL2021.

7. Jakub Harašta is an assistant professor at the Institute of Law and Technology,


Faculty of Law, Masaryk University. He graduated in law (master’s and doctoral
degrees) and security studies (master’s degree). He was a visiting research fel-
low at Minerva Center for the Rule of Law under Extreme Conditions (University
of Haifa, 2015) and a visiting postdoctoral fellow at the Center for Cyber Law
& Policy (University of Haifa, 2018). His short-term research stays included Ex-
eter Law School (UK), NATO Cooperative Cyber Defence Centre of Excellence
(Estonia), and Max-Planck-Institut zur Erforschung von Gemeinschaftsgütern
(Germany). In his research, Jakub focuses mainly on legal informatics and cy-
bersecurity, tackling regulation, compliance and the broader implications of
emerging and disruptive technology.

8. Marc Lauritsen, president of Capstone Practice Systems, is a Massachusetts


lawyer and educator with an extensive background in practice, teaching, man-
agement, and research. He helps people work more effectively through knowl-
edge systems. He has taught at five law schools, done pathbreaking work on
document drafting and decision support systems, and run several software
companies. Marc is a fellow of the College of Law Practice Management, past
co-chair of the American Bar Association’s eLawyering Task Force, and the au-
thor of The Lawyer’s Guide to Working Smarter with Knowledge Tools.

9. Tomer Libal is an Ethical AI Research Scientist at the University of Luxembourg and an Assistant Professor at the American University of Paris. As a Principal
Investigator, he leads projects like the FNR CORE’s “Examples Based AI Le-
gal Guidance (ExAILe)”, focusing on AI systems for legal research. Addition-
ally, Tomer is the CEO of Enidia AI, a pioneering startup dedicated to offering
ethical AI solutions to the legal community. The core mission of Enidia AI is to
bridge the gap between the sophisticated capabilities of AI technology and
the intricacies of regulated legal services. Before specializing in AI for law, he
worked at France’s INRIA and Microsoft Research on knowledge representa-
tion and proof assistants. He also teamed up with CEA on software property
validation and was a development team lead at Quigo Technologies. He’s a
frequent contributor to AI and Law forums like ICAIL and JURIX, serving both
as a committee member and invited speaker in conferences and workshops.



10. Juliano Maranhão is Associate Professor of Legal Theory at the University of São Paulo Law School, where he obtained his Ph.D. in Law. He was a visiting scholar at the University of Utrecht (2005) and the Goethe University of Frankfurt (2016-2019). He won the Donald Berman Award of the International Association for Artificial Intelligence and Law and was a member of the executive committee of that association from 2019 to 2023. Maranhão is an Alexander von Humboldt Fellow Researcher and the Director of the Lawgorithm Association of Research on Artificial Intelligence and Law.

11. Masha Medvedeva is an Assistant Professor in Legal Technologies with a dual appointment at eLaw – Center for Law and Digital Technologies and the Department of Business Studies at the Faculty of Law, Leiden University, the Netherlands. Medvedeva is a legal tech engineer with a background in developing
systems for predicting court decisions. Her work lies within analysing and im-
plementing solutions for automation in the legal domain, as well as analysing
products that are currently on the market. Much of her research is focused on
the limitations of legal technologies and their impact on law and legal practice,
and she is one of the authors of the Typology of Legal Technologies. Her cur-
rent research focuses on collection, creation and evaluation of resources for
research in natural language processing in the legal domain.

12. Ugo Pagallo, a former lawyer and current professor of Jurisprudence at the Uni-
versity of Turin (Italy), is Vice-President of the Italian Association of Legal Infor-
matics. Author of thirteen monographs, a hundred essays in scholarly journals
and book chapters, he has been a member of many international projects and
research, collaborating with such institutions as the European Commission, the
World Health Organization, and the Japanese government. His main interests
are Artificial Intelligence (AI) & law, network theory, governance, human-robot
interaction, and information technology law.

13. Andrzej Porębski, a researcher at the Faculty of Law and Administration of the Jagiellonian University, holds MA degrees in IT and Econometrics, Law, and Sociology. A data analyst and statistician, he is conducting a research project funded by the National Science Centre, Poland, entitled “The Understandability Requirement of Machine Learning Systems Used in the Application of Law”. He prepared the report’s statistical part.
14. Ken Satoh is a professor in the Principles of Informatics Research Division, National Institute of Informatics, and at Sokendai (The Graduate University for Advanced Studies), Japan. He has been working on the logical foundations of AI,
especially non-monotonic reasoning. He is currently investigating juris-infor-
matics (aiming at the amalgamation of informatics and law, as bio-informatics
amalgamates informatics and biology). He is a member of the steering com-



mittee of JURISIN workshops (International Workshop on Juris-informatics) and
the COLIEE competition (Competition on Legal Information Extraction and En-
tailment).

15. Burkhard Schäfer is a Professor of Computational Legal Theory at the University of Edinburgh, and a member of the SCRIPT Centre for IT and IP Law, which he
also led as Director from 2010 to 2022. His main field of research is the interface
between computer technology, science and the law, in particular questions of
formalization of legal reasoning, law compliance by design and legal expert
systems. He has published over 120 papers in the field of legal expert system
design, the semantic web, and legal responses to new technologies from a
comparative perspective. He is a member of the “legal technologist” accredi-
tation panel of the Law Society of Scotland, and served recently as a member
of the Ethical Digital Nation expert group of the Scottish government.

16. Giovanni Sileno is an Assistant Professor at the Socially Intelligent Artificial Sys-
tems research group at the University of Amsterdam, and a member of the Civ-
ic AI Lab. With a background in electronic engineering, a PhD in AI & Law, and
postdoc studies in cognitive systems and data-sharing infrastructures, he has
been working in various fields related to AI and Computer Science research,
such as computational legal theory, agent-oriented programming, cognitive
modelling, computational policy design and operationalization.

17. Jaromír Savelka is a research associate at the School of Computer Science, Carnegie Mellon University. He focuses on applications of natural language
processing and machine learning in diverse areas, including education and law.
Dr. Savelka has published in Q1 journals and presented at top-tier international
conferences on topics ranging from a network analysis of public institutions to
legal information retrieval. Currently, he is investigating applications of large
language models (LLMs) in computing education and the semantic processing
of legal texts. He is particularly interested in how LLMs can be leveraged to
support students in programming classes and to help instructors in authoring
computer science educational materials (texts, assessments, learning objec-
tives). Additionally, Dr. Savelka is exploring the potential of LLMs in assisting
legal professionals in analyzing large collections of legal documents.
18. Minghui Xiong holds a master’s degree in Logic from Southwest University and a Ph.D. in Logic from Sun Yat-sen University. He currently holds the chair of digital jurisprudence as Qiushi Distinguished Professor at the Zhejiang University Guanghua Law School. He is head of the ZJU Law & AI Laboratory. His research
focuses on the connections between logic, law, and artificial intelligence, as
a contribution to logics for automatic legal reasoning and argumentation. He
is vice president of the Chinese Society of Logic, vice director of the Artificial
Intelligence Logic Committee of the Chinese Association for Artificial Intel-
ligence, and member of the Executive Committee of the Computation Law
Committee of the China Computer Federation. Among his editorial activities,
he has been a member of the editorial board of Argumentation since 2013. He also
regularly serves as a PC member and participates in important international
conferences and academic events such as ICAIL, JURIX, ECA, etc.

19. Xiao Chi is a first-year Ph.D. student in the field of digital law at Zhejiang Univer-
sity. She obtained a Bachelor of Science degree in Mathematics from the University of Liverpool and a Master of Science degree in Scientific and Data Intensive Computing from University College London. During
her master’s program, she focused on research related to epistemic graphs
and completed a dissertation titled “A Filtering-based General Approach to
Learning Rational Constraints of Epistemic Graphs”. Xiao joined the ZJU Law &
AI Laboratory after enrolling in the doctoral program, and her doctoral super-
visor is Professor Minghui Xiong. She has participated in several seminars on
explainable artificial intelligence, where she discussed and wrote papers with
professors of logic and computer science. Xiao has also participated in several
conferences in the fields of Logic, AI, and Law. Her doctoral research focuses
on legal informatics, especially on persuasion systems and mediation systems.

20. Vern R. Walker is Professor Emeritus of Law at the Maurice A. Deane School of
Law at Hofstra University (New York). He has a Ph.D. (philosophy) from the Uni-
versity of Notre Dame, and a J.D. (law) from Yale Law School. He was a partner
in the Washington, D.C., law firm of Swidler & Berlin (working extensively with
expert witnesses and scientific evidence). At Hofstra Law, his courses included
scientific evidence, torts, and administrative law, and he founded and directed
the Research Laboratory for Law, Logic and Technology (LLT Lab). He has pub-
lished extensively on legal reasoning and factfinding, on the use of scientific
evidence in legal proceedings, and on the use of computational and artificial
intelligence approaches to legal analysis. He designs computer software for
representing legal knowledge, mining argumentation from legal documents,
and modeling legal reasoning. His most recent publication is the book, Beyond
Language: A Philosophical Journey (Wipf & Stock, 2021).

21. Bernhard Waltl, a computer scientist and expert at the intersection of law and
computer science, is a specialist in artificial intelligence and NLP. He works on
different topics in the field of legal operations, legal tech, and legal innovation.
He co-founded the Liquid Legal Institute in 2018. He is a firm believer that collaboration is the key to a successful, people-centric and digital future.

22. Adam Zadrożny, assistant professor at the National Center for Nuclear Studies, Poland, lecturer in Natural Language Processing in Cognitive Studies at the University of Warsaw, co-founder and Head of AI of a start-up implementing generative AI in legal firms and large organizations, who consulted its major parts.

23. John Zeleznikow has conducted research and taught in Australian, US, French,
Dutch, Israeli, Belgian, German, UK, Estonian, Spanish and Polish universities
for fifty years. He is the author of four research monographs and 105 refereed journal papers. He has an h-index of 38 with 4,784 citations. Professor Zeleznikow has also won over A$8.5 million in competitive research grants. These include ten Australian Research Council Grants, three European Union Grants
and Dutch, French, Scottish and Spanish research grants. He has successfully
supervised 20 PhD students and 6 postdoctoral fellows. Over the past thirty
years, Professor Zeleznikow has focused on how Artificial Intelligence can be
used to enhance legal decision-making. His research findings have been uti-
lised by legal and mediation firms, West Midlands Police (UK), CONSOB (Ital-
ian Stock Exchange Regulator), Victoria Legal Aid and Relationships Australia.
He has performed pioneering research on using machine learning and game
theory to support legal decision-making.
24. Tomasz Zurek is currently employed as a post-doc researcher at the Informatics Institute, University of Amsterdam. He is also an Associate Fellow of the T.M.C. Asser Institute in The Hague, and an assistant professor at the Institute of Computer Science at Maria Curie-Skłodowska University in Lublin, Poland (currently on leave). He holds an MA in management (1999) and a Ph.D. in computer science (2004). His dissertation concerned the utilisation of artificial intelligence
in banking. His current scientific interests focus on the representation of legal
knowledge and modelling of legal reasoning and argumentation, especially
the modelling of informal ways of reasoning. He is the author of over 50 peer-reviewed papers, a member of the program committees of the main AI and Law conferences, and a member of the International Association for Artificial Intelligence
and Law.

1.4.2. Report methodology

1.4.2.1. Research group


We decided to conduct a survey to explore the actual sentiment of lawyers toward
artificial intelligence (AI) and to assess the adoption of this technology in the legal
industry. In addition, we wanted to identify challenges and predictions related to
the use of AI in legal practice. Our survey included firms operating around the
world, representing a variety of fields and comprising a diverse group of employ-
ees.

We wanted to conduct a survey that was as representative as possible. We con-
ducted more than 1,000 interviews with lawyers from all over the world, which led to more than 200 law firms completing the survey. The people who filled out the surveys were either managing partners or Chief Innovation / Information / Knowledge Officers, i.e. people with a comprehensive view of AI adoption and approaches to innovation in their organizations, most often directly involved in their firm’s AI implementation processes.

The survey covered 203 companies, representing a total of nearly 100,000 employ-
ees, including some 50,000 lawyers. We analyzed data collected in different parts
of the world to get a representative global picture. Our goal was to understand
lawyers’ views on AI and to assess the extent to which the technology is already
present in the legal industry.

In order to see more accurately how changes are shaped in each group, we divided
companies into five classes, taking into account the number of employees in each
class. The first group included companies with 1-10 employees, which accounted
for 22.7% of the study sample. The second group was companies with 11-39 em-
ployees, which accounted for 26.6% of the study sample. The third group was com-
panies with 40-99 employees, representing 20.2% of the study sample. The next
group was companies with 100-999 employees, which represented 13.3% of the
surveyed sample. Finally, the fifth group included companies with 1,000 employees
or more, representing 17.2% of the surveyed sample.
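For reference, the size-group shares above can be translated into approximate firm counts with a few lines of arithmetic. This is only an illustrative sketch of that calculation (the group labels are ours); small rounding differences are expected:

```python
# Size-group shares reported in the survey (sample of 203 firms).
sample_size = 203
shares = {
    "1-10 employees": 22.7,
    "11-39 employees": 26.6,
    "40-99 employees": 20.2,
    "100-999 employees": 13.3,
    "1,000+ employees": 17.2,
}

# Convert each percentage share into an approximate firm count.
counts = {group: round(sample_size * pct / 100) for group, pct in shares.items()}

for group, n in counts.items():
    print(f"{group}: {n} firms ({shares[group]}%)")

# The five shares should cover the whole sample.
print("Total:", sum(counts.values()), "of", sample_size)
```

Rounded to whole firms, the five groups account for roughly 46, 54, 41, 27 and 35 firms respectively, which together cover the full sample of 203.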

The results of our survey showed that the average percentage of lawyers who deal
with artificial intelligence in their work is 24.5%, with a median of 15.2%. These num-
bers are indicative of the degree to which lawyers are involved in using AI as part
of their work. It seems that this technology is becoming more and more present in
the legal industry, but there are still differences between companies with different
numbers of employees.

In addition, we divided the companies into five categories based on their main
fields of activity. The first category is firms specializing in litigation, which repre-
sented 7.4% of the surveyed sample. The second category is corporate firms, which
represented 23.2% of the surveyed sample. The third category was companies
working with new technologies, which accounted for 9.4% of the surveyed sample.
The fourth category included companies that described themselves as general-
ists, handling all matters, accounting for 13.3% of the surveyed sample. The fifth
category was companies with other business profiles, which together accounted for
46.8% of the surveyed sample.

We also included a geographic breakdown of companies in the survey, distinguish-
ing between those based in the US and those outside the US, mainly in the Eu-
ropean region. We wanted to explore whether, with the US being the site of the
largest investments in AI, there is a significant difference in AI adoption in the legal
industry in the US and the rest of the world, and whether US-based companies
view the future of artificial intelligence and its challenges differently. 30.5% of the
companies surveyed were based in the US, while the rest, or 69.5%, represented
companies outside the US.

1.4.2.2. Description of the study

We divided the study into two parts. The first part is a statistical report. It aims to find out how law firms use AI and what attitudes they have toward generative artificial intelligence. The survey provides valuable information from both a business
and scientific perspective. The results obtained provide a better understanding of
the actual sentiment of lawyers toward artificial intelligence, as well as an assess-
ment of the degree of adoption of this technology in the legal industry. The survey
is also focused on challenges and predictions related to the use of AI in legal prac-
tice. In turn, this information can be used by companies and researchers to make
decisions on how to move forward with the implementation of AI in legal practice
and to identify areas for further research and development.

We began the survey by asking in what spheres artificial intelligence could change
the work of lawyers. So we started with a question about the proportion of mun-
dane and repetitive tasks in companies that could be improved by using artificial in-
telligence. In question one, we wanted to know the percentage of such tasks com-
pared to tasks requiring in-depth legal knowledge and strategic thinking. Then, in
the second question, we asked companies to identify the most common mundane
and repetitive tasks that their lawyers perform on a regular basis and that could
be streamlined with artificial intelligence. We offered a list of possible tasks, such
as document review, contract drafting, contract revision, legal research, contract
analysis, due diligence, e-discovery, intellectual property management, compli-
ance monitoring, case management and others.

In another question, we asked companies whether they had implemented any tools
or solutions based on artificial intelligence. We offered three possible answers: yes,
no, or we are currently exploring options. If the answer was yes, we asked about the fields or areas of practice where the company has implemented AI tools or solutions.
We offered a list of possible areas, such as contract analysis, legal research, infor-
mation retrieval, automation/document generation, e-discovery, intellectual prop-
erty management, regulatory compliance and risk management, dispute outcome
prediction/risk assessment, dispute resolution support, client solutions and case
management system.

The next questions were about the practical aspects of using AI tools. We asked
what specific artificial intelligence tools the company uses, if any. We were also
interested in how many lawyers in the company use these AI tools or solutions, and
how long the company has been using them. In addition, we asked whether the
company conducted a pilot program or trial period before fully implementing AI
solutions, and whether the tools are used primarily for internal organizational tasks
or for tasks related to client work.

Next, we asked whether lawyers at the company are free to choose which technol-
ogy they want to use, or whether any technology must be approved in advance by
the company. We were also interested in how large the company’s AI innovation
department is, and whether the company employs AI specialists, such as legal en-
gineers or prompt engineers.

We also asked whether the company advises clients on implementing AI solutions in their organizations and whether the company conducts its own research and de-
velopment work related to artificial intelligence in the legal field. These questions
were aimed at determining whether, in addition to the practical implementation of
artificial intelligence, the practice of law participates in its theoretical and research
development.

Another question asked about a company’s openness to adopting AI technology. We offered a rating scale from 1 to 5, where 1 meant the least openness and 5
meant the most openness. We wanted to determine how a rather conservative in-
dustry approaches such a modern and rapidly developing technology.

In a follow-up question, we asked companies to identify the most important factors when evaluating AI solutions suitable for their company. We offered a list of
possible factors, such as cost, ease of use, integration with existing systems, data
security and privacy, support and training offered by software vendors, vendor
reputation, customizability, explainability and others.

Finally, we asked companies where they look for new artificial intelligence ideas and
solutions. We offered a list of possible sources, such as scientific conferences, aca-
demic publications, law schools, bar events, legal technology/artificial intelligence
events, blogs, newsletters, Twitter, LinkedIn, networking with other professionals
and online communities. We’ve also left space to enter other sources.

The next part of the survey was about what lawyers think about the challenges
that await them in the face of AI development and also predictions relating to this
sphere. In this section of the survey, we asked about the challenges facing lawyers
in the era of artificial intelligence: legal, ethical and organizational. In the next
question, we asked how AI can help lawyers meet these challenges. Respondents
were also given the opportunity to express their predictions about the impact of
artificial intelligence on the legal industry in the next 3 years. We asked what kind of
artificial intelligence system they would like to implement in their law firm.

The second part of the report features seventeen commentaries from esteemed
scientists who have long studied the relationship between artificial intelligence and
law. These commentaries respond to the report’s findings, highlighting both sur-
prising elements and those that align with their prior intuitions. We reference these
expert opinions, and in our conclusion, we integrate them to formulate conclusions
that may be incorporated into future editions of this report.

2. CURRENT STATE OF AI
IMPLEMENTATION IN LAW FIRMS

2.1. WHAT PART OF LAWYERS’ TASKS CAN BE REPLACED BY AI
We asked law firms about the extent to which their employees are burdened with
mundane and repetitive tasks compared to tasks requiring in-depth legal knowl-
edge and strategic thinking.

The average percentage of mundane and repetitive tasks in the surveyed compa-
nies was 38.2%. This means that more than a third of the work carried out in law firms
is routine tasks. However, these responses vary depending on the main focus of the
business. In litigation firms, the percentage of repetitive activities replaceable by AI rises to an impressive 52%. This means that in firms specializing in litigation, mundane and repetitive tasks account for more than half of the work.

Responses also vary depending on the size and structure of the company. In me-
dium-sized companies (with 11 to 99 lawyers) and in companies where more than
50% of employees are lawyers, the share of mundane and repetitive tasks is lower.
In such companies, the average percentage share of these tasks is less than 38.2%.
This may suggest that in larger firms with more resources and capabilities, mun-
dane and repetitive tasks are better distributed and automated, allowing lawyers
to focus on tasks requiring specialized knowledge and skills.

The most frequently indicated mundane and repetitive tasks that could be stream-
lined through the use of artificial intelligence were legal research, document review
and contract drafting. As many as 79.7% of respondents selected legal research as
a task that could be streamlined using artificial intelligence. In addition, 72.1% of
respondents identified document review as a task that could be streamlined, and
55.8% of respondents identified contract drafting. More than 40% of respondents
see potential in implementing AI in due diligence, case management, compliance
monitoring, contract analysis, and contract proofreading.

These results indicate the potential of using artificial intelligence in legal work. Au-
tomating these mundane and repetitive tasks can bring many benefits, such as sav-
ing time and resources, improving accuracy and efficiency, and being able to focus
on more complex and demanding tasks that require in-depth legal knowledge and
strategic thinking.

2.2. AI adoption in law firms

More than half of the surveyed law firms are already implementing artificial intelli-
gence (AI). More specifically, 51% of firms have already implemented AI-based tools
and solutions, while 12% are currently exploring options in this area. Only 37% of
firms said they are neither implementing AI-based tools nor exploring such options.

The results varied by geography and by the size of the companies surveyed.
Analysis of the data showed that companies in the United States are more likely
to use AI tools than companies outside the US. The smallest and largest compa-
nies are the most likely to use the new technology. Mid-sized companies show less
tendency to implement the technology. While this is not surprising when it comes
to large companies (100+ lawyers), which are structured to have a separate unit re-
sponsible for innovation and resources for implementation, the high propensity of
small companies is surprising. This seems to be the result of a grassroots revolution entering the legal world through the firms that are the most flexible and are fighting hardest for market share. As indicated earlier, medium-sized firms were the least likely to lose market share in 2022-2023, having already achieved better efficiency through traditional methods than large firms; their lower propensity to invest in technology may stem from this. Failing to invest and follow this path of modernization may be a risky decision in the long run.

We asked participants to rate their openness to adopting AI technologies on a
scale of 1 to 5, with 1 being the lowest and 5 being the highest. The data shows
that most firms have a high level of openness to AI in their operations. The median
score was 4, suggesting that most companies are very open to implementing the
technology. In addition, the most frequently declared value was 5, confirming high
openness to AI. 68.5% of companies declared one of these two highest values,
while only 12.8% of companies declared one of the two lowest values. 18.7% of
companies declared a middle value.

The survey results are in line with expert predictions, given that many companies
are already using or exploring AI tools. To better understand the factors influenc-
ing openness to AI, a statistical analysis was conducted to clarify which company
characteristics have the greatest impact on this variable. The study found that
openness to AI correlates most closely with company size. The smallest companies
(1-10 employees) and those with more than 100 employees showed, on average,
the highest openness to AI implementation. In contrast, medium-sized companies
(11-99 employees) had the lowest openness. However, these differences were not
significant, averaging no more than one point.

Based on these results, it can be concluded that most law firms are open to im-
plementing AI technology. The implementation of this technology is particularly
popular in the United States and among very small and very large firms. In contrast,
medium-sized firms show less tendency to use AI. It is interesting to note that open-
ness to implementing AI was also declared by those companies that have neither
yet implemented it nor are currently looking for solutions. This could mean that
even these companies anticipate that AI adoption is inevitable.

2.3. Criteria for the selection of AI tools by law firms

We asked law firms what they consider when choosing tools that use artificial intelligence. The answers are not surprising. The key for lawyers is data security and pri-
vacy, protection of sensitive client information and the secrecy of the legal profes-
sion. The increase in cyber-attacks and privacy and data breaches has made data
protection all the more of a priority for law firms. Choosing AI tools that guarantee
a high level of security and ensure data confidentiality is an understandable deci-
sion for law firms. This is confirmed by the fact that as many as 79.6% of respondents
included this factor as a key consideration when choosing AI tools.

Another important factor is cost. The survey found that 75.5% of respondents
stressed the importance of cost when choosing AI tools. This is certainly a response
to the already described economic challenges of the legal industry. AI tools that
offer favorable financial terms and at the same time meet the requirements of law
firms are preferred by the majority of respondents.

The last factor considered by more than half of respondents is the ease of use of
AI tools. This option was marked by 61.7% of companies. Law firms want the tools
they use to be intuitive and easy to use. They require them to be accessible to all
team members, regardless of their technological expertise. All of this is aimed at
streamlining daily operations and increasing productivity. Therefore, AI tools that
offer a simple and intuitive user interface are preferred by law firms.

Responses that should also be considered relevant are ease of integration with ex-
isting systems (almost half of the responses), and explainability and customizability
(both considered relevant by more than 30% of respondents). However, the results
indicate that the market is quite immature. Companies do not see a significant role
for vendors. Only 26% consider vendor support important, and only 18% consider
vendor reputation important.

In conclusion, the choice of AI tools by law firms is mainly dictated by data security
and privacy, cost and ease of use. Law firms are aware of the need to protect client
data and choose tools that are able to provide a high level of security. At the same
time, cost control and ease of use are of great importance for work efficiency. The
conclusion of these statistics is that law firms are trying to find the right balance
between these factors to choose AI tools that meet their needs and ensure success
in today’s competitive legal environment.

2.4. Where lawyers get their AI knowledge

We asked lawyers where they get their knowledge in an era of the explosion of generative AI and the need to upgrade their skills in this area. The question is also important for understanding how the legal industry is adapting to these new technological trends.

The most popular source of information about AI for lawyers is networking or con-
tact with other professionals in the industry. As many as 68.8% of surveyed lawyers
admitted that they gain information on AI through conversations and contacts with
colleagues. This is extremely important, as networking enables the exchange of ex-
perience and knowledge with other lawyers who are already working with AI tools.
This allows lawyers to learn more about the practical applications of AI in the legal
field and what benefits it can bring to their firms.

Another important source of information about AI for lawyers is AI-related events. Re-
search has shown that as many as 59.3% of lawyers use such events to learn more about
AI and its applications in the legal industry. Conferences, seminars and workshops are
ideal opportunities to gain knowledge from AI experts. At these events, lawyers have
the opportunity to listen to presentations, participate in panel discussions and ask ques-
tions, allowing them to better understand the topic and be able to apply AI to their work.

Surprisingly few lawyers use the academic world to gain knowledge about AI. Only
a little over 30% use academic conferences and publications, and even fewer use
bar events (29%) and law schools (14%). The Internet likewise remains a marginal source of information, aside from LinkedIn, which accounts for about 33% of indications.

2.5. What tools do law firms use
We asked a group of respondents who represented companies using artificial intel-
ligence about the fields or practice areas in which their companies have chosen to
implement AI-based tools or solutions.

Analysis of the results showed that the most frequently indicated areas for imple-
menting AI-based tools are document automation, which was indicated by 39%
of respondents, and legal research, which was indicated by 34% of respondents.
Respondents are least likely to indicate the use of AI-based tools for managing
intellectual property, predicting litigation outcomes and risks, compliance and risk
management, and dispute resolution.

The statistical models also revealed some correlations between the re-
spondents’ answers and the characteristics of their companies. Virtually all e-dis-
covery choices were made by US-based companies. In addition, the largest compa-
nies with 1,000 or more employees were significantly more likely to choose contract
analysis, while they were least likely to choose information retrieval, compared to
other size groups. In contrast, the information retrieval option was typically chosen
by companies with 40 or fewer employees.

The smallest companies were significantly more likely to choose legal research in the context of implementing AI tools.

The most important conclusion, however, is that there is still no dominant AI tech-
nology used by law firms. This indicates the immaturity of the market and the still
huge space to be developed for technology companies.

2.6. Impact of generative AI on technological development

We asked companies that use AI tools how long they have been using them. The
vast majority of companies (59.2%) have been using such tools for less than a year.
The results also showed that 18 companies (17.5%) each had been using AI for 1-2 years and for 2-5 years. A surprisingly small number of companies, only 6 (5.8%),
indicated that they have been using these solutions for more than five years.

Companies that use the tools longer are primarily larger companies with 100 or
more employees. The effect is even stronger for companies with 1,000 or more
employees. Smaller companies, especially those with 1 to 40 employees, were less
likely to use AI tools for a long time. Other company characteristics, such as indus-
try or geographic location, had no significant effect on the length of use of AI tools.

The answers to this question lead to several conclusions. First, they indicate that
the explosion of generative AI has influenced the spread of artificial intelligence. It
is the spread of this technology over the past year that has prompted law firms to
adopt AI in their operations. Secondly, they confirm that teams of a few dozen are
already successfully implementing AI-based tools. Finally, the results of this survey
suggest that larger firms have more financial capacity and resources to invest in
AI-based technologies over the long term, and in these firms, the use of AI tools
appears to be more established and widespread.

In the context of technological development, these results suggest that larger companies have an advantage in realizing the potential of generative AI. However,
it is important to note that over time, incremental experience and technological
development may also make smaller firms able to deploy AI tools for longer peri-
ods of time. There is a need for further research to better understand the factors
determining the length of use of AI tools in law firms, in order to promote even use
of these technologies across the sector.

2.7. Caution in the process of implementing AI in law firms

Our survey of law firms reveals that these companies are undertaking the implementation of artificial intelligence (AI) with caution and care. According
to the results, 58% of firms have chosen to pilot AI solutions before full adoption,
which indicates their desire to test and understand the potential benefits and risks
of the technology. Additionally, 50% of companies are monitoring the implementa-
tion of these tools very closely. Interestingly, however, half of the companies are
leaving the implementation of artificial intelligence for a bottom-up revolution,

28 | LLI WHITEPAPER | Nº 3 (EN) | 2023


meaning that employees have the opportunity to propose and implement innova-
tive AI-based solutions.

It is noteworthy that 49.5% of respondents believe that the technology used by lawyers in the firm must be pre-approved by the firm (partners, CIO, etc.). For a third of companies (32%), lawyers' free choice of technology is coupled with the need for pre-approval. In contrast, 18.5% of companies give lawyers freedom of choice in this regard. These figures testify to the awareness of law firms, which are trying to maintain control over the application of AI in their work while giving employees some flexibility.

Analyzing the results by firm size, the larger the firm, the higher the percentage of "more restrictive" responses. In small firms (fewer than 40 employees), 38.3% allowed lawyers to choose technology at will, while in firms with 100 or more employees, such answers did not appear even once. In the largest firms (1,000 employees or more), as many as 23 out of 25 responses (92%) were "technology must be pre-approved," while in the other size groups this response occurred in a range of about 23% to about 48% of cases. This suggests that larger firms are more inclined to limit lawyers' freedom to choose technology.

The survey also found that 43% of companies reported the presence of an AI innovation department. The median size of such a department was 2-3 people, and most firms have departments with 1-2 employees, indicating that these are not separate units but rather individual specialists working on technology within the firm. This suggests that most law firms prefer a flexible approach to AI innovation, relying on individual experts who are responsible for developing and implementing the technology. At the same time, firms recognize the challenges and benefits of AI and are investing in it.

It is also noteworthy that 13% of companies employ specialists dedicated to artificial intelligence, such as legal engineers or prompt engineers. This confirms the growing importance of AI in the legal sector and the need for specialists who will be responsible for using this technology in legal practice.

Several conclusions can be drawn from the above results. First, law firms are taking cautious steps in the AI implementation process, starting with a pilot or trial period. Second, larger firms are more willing to impose restrictions on lawyers' choice of technology. Third, most companies prefer a flexible approach to AI innovation, relying on individual specialists. Finally, the growing employment of dedicated AI specialists indicates the growing importance of this technology in the legal sector.



These findings suggest that law firms understand both the potential and risks of AI
and are trying to strike a balance between the freedom to choose the technology
and the control over its use. A cautious approach to AI implementation can help
law firms take advantage of the technology’s benefits while minimizing potential
risks and threats.



3. AI'S IMPACT ON THE LEGAL INDUSTRY LABOR MARKET

The impact of artificial intelligence (AI) on the labor market is a research topic attracting the attention of many experts. Two reports published in June 2023 are worth citing here: "AI at Work" by the Boston Consulting Group and "The economic potential of generative AI: The next productivity frontier" by McKinsey & Company. Both analyze this impact on various aspects of lawyers' work.

According to the AI at Work report, employees are now more optimistic about the impact of artificial intelligence on their work than they were five years ago. In particular, generative AI seems to be seen as a tool that will save time and promote innovation in various legal roles. However, the level of excitement varies by seniority and country. Those at the top of the organizational hierarchy are more positive about the technology, while frontline employees express more concern. In addition, employees are concerned that companies are not taking steps to implement AI responsibly. This includes issues related to the implementation of procedures and regulations in this regard, as well as insufficient attention to upgrading the skills of staff unprepared to work with AI. Two figures are worth quoting: 36% of employees say their current job will be replaced by AI, and 86% say they will require additional training and upskilling as AI changes their jobs.

The McKinsey & Company report, meanwhile, focuses on the potential impact of generative artificial intelligence on knowledge work, particularly on decision-making and collaboration activities that previously had the lowest automation potential. Previous generations of automation technologies were effective at automating data management tasks, such as data collection and processing. Generative artificial intelligence, however, with its ability to understand and use natural language, increases the potential to automate precisely these kinds of activities, and is therefore likely to have the greatest impact on knowledge work.

The McKinsey & Company report also shows that many job activities that involve communication, supervision, documentation and general human interaction have the potential to be automated using generative artificial intelligence. This could accelerate the transformation of work in professions such as education and technology, where the potential for automation was expected to emerge later. The report also shows that business and legal professionals using generative AI see twice as much potential to automate their operations as those not using AI.

The survey conducted by our team also touched on the impact of AI on the legal labor market. We did not focus on this issue, but it clearly resonated in several questions. When we asked lawyers about the biggest challenges in the era of artificial intelligence, we got a range of responses, among which were legal issues (80.2%), privacy and security (66.8%), and AI accuracy and reliability (63.9%). However, AI's impact on the labor market, i.e., layoffs of workers or changes in job roles, was selected by only 11.4% of respondents, making it the least-cited challenge. This suggests that lawyers are not concerned about AI having a significant impact on the labor market.

What do you perceive as the top challenges facing lawyers in the age of AI?

Table 22. Frequency table for the multiple-response question about lawyers' top challenges in the age of AI

Answer (n = 202)             Number    % of respondents    % of choices
Ethical issues                   60         29.7%              10.0%
Privacy & security              135         66.8%              22.6%
AI accuracy & reliability       129         63.9%              21.6%
Adaptation to AI                 57         28.2%               9.5%
Legal issues                    162         80.2%              27.1%
Labor market                     23         11.4%               3.8%
Lack of explainability           31         15.3%               5.2%
Other                             1          0.5%               0.2%



Figure 24. Top challenges facing lawyers in the age of AI (n = 202)

On the other hand, in response to a question about the anticipated impact of AI on the legal industry in the next 3 years, respondents pointed to several issues that will undoubtedly affect the way lawyers work. First, AI tools will become an important part of the legal industry's workflow, according to 53.2% of respondents. This suggests that lawyers will have to adapt to new technologies and learn to use AI-based tools.

Second, according to 53.2% of respondents, paralegal tasks such as data collection and processing will be automated; lawyers will need to retrain and adapt to these changes, which may affect their current roles.

Third, there is a forecast of high demand for AI lawyers: according to 38.8% of respondents, lawyers specializing in AI will be more likely to find jobs and grow professionally.

The survey shows that lawyers are not afraid of the impact of artificial intelligence
on the job market. They see the need to retrain and adapt to new tools and develop
AI-related specialties. However, the labor market is not considered a major chal-
lenge in the context of the introduction of artificial intelligence. Lawyers see AI as
an opportunity to improve efficiency and as a tool that can streamline their work.



In addition, the survey shows that lawyers see an end to some of the simple ways of doing things, such as current paralegal tasks, which have great potential for automation using artificial intelligence. This means that lawyers will have to redefine their roles and focus on more advanced aspects of legal practice, such as decision-making, collaboration and management.

Responses to the question, “What are your predictions for the impact of artificial
intelligence on the legal industry in the next 1 and 3 years?” (n = 201)



4. HOW TO IMPLEMENT AI IN A
LAW FIRM RESPONSIBLY

1.5. Lack of proper standards as an obstacle to AI implementation

We asked lawyers about the challenges to be faced in implementing AI. The most frequently cited were legal issues (legal liability and regulation; 80.2% of respondents chose this answer), privacy and security (66.8%), and the accuracy and reliability of artificial intelligence (63.9%). These responses were selected significantly more often than ethical issues (29.7%), the need to adapt to AI (28.2%), lack of explainability (15.3%) and the labor market issues already described (11.4%). Interestingly, answers were similar regardless of the size and location of the surveyed firms, and the differences across statistical groups were small.

1.6. AI’s impact on solving legal industry problems - predictions

Lawyers are not afraid of new technologies. When asked about the opportunities these technologies bring, they see great potential in artificial intelligence as an answer to future challenges.



Most respondents agree that implementing AI will streamline repetitive tasks and improve productivity (87%). More than half also answered that AI is an opportunity to reduce human error (56.5%). The remaining responses point less clearly to how AI can help lawyers meet the challenges of the future: 31.5% indicate that technology will help scale the business and the firm, and more than 20% answered that the technology enables data-driven decision-making (28.5%), promotes innovative approaches within the firm (28%), provides better access to information (24%) and inspires creative approaches to law (22%). Only 15% of responses recognize that AI can improve collaboration with clients.

When asked how they foresee the impact of AI on lawyers' work in the next 3 years, more than half indicate that tools using artificial intelligence will become an important part of lawyers' workflows (53.2%). The same share of responses indicate that paralegals' tasks will be automated during this period (53.2%). Half of the lawyers are confident that there will be widespread adoption of artificial intelligence in various areas of legal practice (49.8%).

A significant portion of respondents also expects, within 3 years, legal and ethical debates about artificial intelligence and its role in the legal world (45.3%), the aforementioned demand for lawyers familiar with AI (38.8%), and innovations related to AI implementation (33.8%).



We then asked law firms what resources they needed to successfully implement AI in their organizations. More than half of those surveyed responded that it was necessary to develop internal policies and guidelines for the use of artificial intelligence (61.7%). This appears to be a response to concern about a lack of regulation and accountability: lawyers do not like to operate in a sphere without clear rules and expect such self-regulation from their organizations. A large number of responses (38.8%) also expressed such a need on the part of corporations.

Second, lawyers feel they need additional education and training: 52.2% saw such a need for both legal and non-legal teams. Small firms and the largest firms indicated this most often.

More than 42% of those surveyed require improved security and privacy/data protection in connection with AI. The least frequently indicated options were investment in AI research and development (29.9%) and cooperation with technology partners (38.8%). This leads to the conclusion that the majority of law firms expect to receive off-the-shelf solutions, although almost a third of firms are willing to invest in new technologies, which is a sizable share despite being the least common choice. It should come as no surprise that this option was indicated least often by small firms.



Lawyers also answered what kind of AI system they would implement in their organization, provided the system was secure and accurate. Here, lawyers again showed great openness to AI tools. Across 203 surveys, 1,142 solutions were indicated, an average of 5-6 per respondent. The most preferred AI systems were a document generator (84.2%), a document verification tool (71.4%), a document summarization tool (69.5%), a case law analysis tool (63.5%), and a compliance and risk management system (59.1%). The fewest companies indicated a negotiation support system (21.7%) and a legal argument assistant (27.1%).



Finally, we asked law firms what one tool they consider the most essential, one that could solve the most critical needs, assuming no budgetary or other constraints. Nearly 42% of respondents indicated a document generator. Other answers were selected much less frequently (from a few to ten percent), but it is worth noting that a total of 58% of the responses refer to working with documents (verification, summarizing, generating, changing). This indicates that lawyers see document work as the key area for automation.

Glossary:

AI winter: A stage in the evolution of artificial intelligence when the pace of research and innovation slows down, often marked by a reduction in funding and a general disillusionment with the field's ability to deliver on its promises. This can have significant implications for businesses, investors, and the wider technology landscape.

Artificial intelligence (AI): The ability of software to perform tasks that traditionally require human intelligence.

Application programming interface (API): A way to programmatically access (usually external) models, data sets or other software elements.
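To make the API concept concrete, here is a minimal Python sketch that assembles a request to a hypothetical text-generation endpoint; the URL, key, and payload fields (`prompt`, `max_tokens`) are illustrative placeholders, not any real provider's interface:

```python
import json
import urllib.request

# Hypothetical endpoint and key -- placeholders, not a real service.
API_URL = "https://api.example.com/v1/generate"
API_KEY = "sk-placeholder"

def build_request(prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Assemble an HTTP POST request for a (hypothetical) text-generation API."""
    payload = json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_request("Summarize the attached NDA in three bullet points.")
print(req.get_method())                   # POST
print(json.loads(req.data)["max_tokens"])  # 256
```

Because only the request is constructed and nothing is sent, the sketch runs without network access; a real integration would pass the request to `urllib.request.urlopen` and parse the JSON response.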



Artificial neural networks (ANNs): Networks consisting of interconnected layers of software calculators called "neurons." These networks can absorb huge amounts of input data and process it through multiple layers that extract and learn data features.

ChatGPT plug-in: A specific application or extension that leverages the ChatGPT models, which are state-of-the-art natural language processing models developed by OpenAI. These plug-ins can be integrated into various platforms and software to enable advanced text generation and comprehension capabilities.

Deep learning: A subset of machine learning that uses deep neural networks, which are layers of interconnected "neurons" whose connections have parameters or weights that can be trained. It is particularly effective for learning from unstructured data, such as images, text and audio.

Early and late scenarios: The extreme scenarios of our job automation model. The "earliest" scenario adjusts all parameters to the extreme probabilities, resulting in faster development and implementation of automation, while the "latest" scenario adjusts all parameters in the opposite direction. Reality is likely to be somewhere in between these two scenarios.

Foundation models (FMs): Deep learning models trained on massive amounts of unstructured, unlabeled data that can be used for a wide range of tasks out of the box or customized for specific tasks through fine-tuning. Examples of such models include GPT-4, PaLM, DALL-E 2 and Stable Diffusion.

Generative AI: Artificial intelligence that is typically built using foundation models and has capabilities that previous AI did not have, such as the ability to generate content. Foundation models can also be used for non-generative purposes (for example, to classify user sentiment as negative or positive based on call transcriptions), while offering significant improvements over earlier models. For simplicity, when we refer to generative AI in this report, we include all use cases for foundation models.



Graphics processing units (GPUs): Computer chips that were originally developed for creating computer graphics (such as in video games) and are also useful in deep learning applications. In contrast, traditional machine learning and other analytics typically run on central processing units (CPUs), usually referred to as the computer's "processor."

Labor productivity: The ratio of GDP to the total number of hours worked in the economy. Increases in labor productivity are due to increases in the amount of capital available to each worker, the education and experience of the workforce, and technological improvements.
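Since the definition above is simply a ratio, a toy calculation (with made-up figures, not real economic statistics) illustrates it:

```python
# Toy figures, purely illustrative -- not actual economic statistics.
gdp = 1_200_000_000_000.0        # annual GDP, in dollars
hours_worked = 30_000_000_000.0  # total hours worked in the economy

labor_productivity = gdp / hours_worked  # output produced per hour worked
print(labor_productivity)  # 40.0 dollars of GDP per hour
```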

Large language models (LLMs): A class of foundation models that can process huge amounts of unstructured text and learn relationships between words or parts of words, called tokens. This allows LLMs to generate natural language text, performing tasks such as summarization or knowledge extraction. GPT-4 (which underlies ChatGPT) and LaMDA (the model behind Bard) are examples of LLMs.

Machine learning (ML): A subset of artificial intelligence in which a model gains capabilities after being trained on, or shown, many sample data points. Machine learning algorithms detect patterns and learn how to make predictions and recommendations by processing data and experiences rather than receiving explicit programming instructions. The algorithms also adapt and can become more effective in response to new data and experiences.

Modality: A high-level category of data, such as numbers, text, images, video and audio.

Structured data: Tabular data (for example, organized in tables, databases or spreadsheets) that can be used to effectively train certain machine learning models.



Transformers: A relatively new neural network architecture that relies on self-attention mechanisms to transform a sequence of input data into a sequence of output data, while focusing its attention on the important parts of the context around the input data. Transformers do not rely on convolutions or recurrent neural networks.

Tuning: The process of adjusting a pre-trained foundation model to achieve better performance on a specific task. It involves a relatively short period of training on a labeled data set that is much smaller than the data set on which the model was initially trained. This additional training allows the model to learn and adapt to the nuances, terminology and specific patterns found in the smaller data set.



5. EXPERT COMMENTS ON THE AI REPORT

It has long been emphasized that enhancing communication between legal businesses interested in or already implementing computational solutions, including AI systems, and the relevant research communities is highly needed. We have therefore invited researchers specializing in the problems arising at the intersection of computational technology on the one hand and legal theory and practice on the other, in particular in Legal Informatics, Information Technology Law and, specifically, the area of AI and Law, which focuses on the theoretical aspects and practical development of intelligent tools supporting the performance of legal tasks.

The AI and Law community has been active since the 1980s and has accumulated broad expertise in the application of manifold formal models and computational technologies in legal contexts: legal expert systems, rule- and case-based models of legal reasoning, models representing and supporting legal argumentation in general, ontologies for law, negotiation support systems and, more recently, Machine Learning-enhanced models for predictive justice, legal text analytics, summarization of legal documents and generation of legally relevant text with Large Language Models. The activities of this research community are promoted on a global level by the International Association for Artificial Intelligence and Law (iaail.org), under the auspices of which the International Conference on Artificial Intelligence and Law (ICAIL) is organized every two years (the last two editions were the 19th edition in Braga, Portugal, in 2023 and the 18th edition, held entirely online due to the Covid-19 pandemic, in Sao Paulo, Brazil, in 2021). The conference is an in-cooperation event with ACM-SIGAI, which publishes the proceedings, and with the AAAI (Association for the Advancement of Artificial Intelligence). Another important organization is the JURIX Foundation for Legal Knowledge Based Systems (jurix.nl), incorporated in the Netherlands, with which the annual JURIX International Conference on Legal Knowledge and Information Systems, whose proceedings are published by IOS Press, is affiliated. The last two editions took place in Saarbrücken, Germany (the 35th edition, in 2022) and Vilnius, Lithuania (the 34th, in 2021). While neither the IAAIL nor the JURIX Foundation coordinates research as such, the conferences they organize enable the community to meet on a regular basis and exchange ideas and criticism. Journal-level research of the community is published principally in the Artificial Intelligence and Law journal, which has recently marked its 30th anniversary, documented by a special issue comprising four extensive co-authored overview papers devoted to particular decades of the research's evolution and, additionally, topical overviews (Governatori et al. 2022, Sartor et al. 2022, Villata et al. 2022, Araszkiewicz et al. 2022). Another venue for research presentations in the field is the JURISIN International Workshop on Juris-informatics, organized by the Japanese AI and Law community.

Thanks to these structures, the AI and Law community has accumulated multidirectional, deep knowledge of various applications of information technology in the field of law, encompassing theoretical models and practical prototype implementations, and developing knowledge of the methodologies, advantages, challenges and limitations of particular AI-based techniques and approaches. It is therefore worthwhile that awareness of the breadth and depth of research in the field of AI and Law should increase in the legal sector.

Of course, the digitalization of legal work does not concern the use of ML-based technologies or computational models of legal reasoning alone; it also involves taking into consideration the legal issues (which may and do vary across jurisdictions and branches of law), business process modelling and management, the integration and coordination between AI technologies and other implemented IT solutions, the security and confidentiality of information, etc. Scientific events and communities exist that cover this wide scope: the IRIS (Internationales Rechtsinformatik Symposion), organized annually at the University of Salzburg, is worth mentioning in this context, not least because its proceedings volumes are published by Weblaw, the publisher of this Report. RAILS (Robotics and AI Law Society), in turn, is a community which focuses on the development of legal frameworks for the responsible development of intelligent systems, one that "facilitates technical developments, avoids discrimination, ensures equal treatment and transparency, protects fundamental democratic principles and ensures that all parties involved are adequately participating in the economic results of the digitalization" (https://ai-laws.org). Last but certainly not least, we would like to mention the Liquid Legal Institute (LLI) (https://www.liquid-legal-institute.com), in cooperation with which this report is being published and disseminated. The LLI is an open, collaborative, interdisciplinary platform focused on innovative thinking about the legal profession in context, particularly with regard to how technological progress may enhance or transform the models of legal services provision. Founded in 2018, the LLI is based in Munich. The LLI platform connects over 1,000 members and experts (individuals, SMEs, large corporations) who are directly or indirectly involved in the digital transformation of law. The LLI actively advocates for the digitization of law and particularly for the utilization of innovation in legal practice.

This part of the report is therefore designed to turn the attention of the legal business to the opportunities which may result from increased communication between the sector and the abovementioned expert communities, taking into account their differentiated, although to an extent overlapping, scopes of interest and expertise.



Such communication may not only facilitate the responsible development and implementation of different AI tools in law firms, but may also help firms avoid dead ends and reinventing the wheel. On the other hand, the scientific community may also profit from knowledge about the current state of AI implementation in law firms and from the opinions on opportunities, challenges and limitations voiced in the legal business sector.

The invited experts were asked to prepare brief commentaries on the survey re-
sults. They were presented with an executive summary of the statistical analysis
of results and the full text of the statistical report. We did not impose any specific
structure on commentaries, but, in order to attain a degree of uniformity, we sug-
gested the following overall approach:

The first part may comprise the reaction to the findings of the survey. Are the results
conforming to your expectations, or do you find some of them surprising? How do
you assess the overall state of implementation of AI in the surveyed law firms? Do
you have any commentary with regard to the differences associated with the loca-
tion of the law firm, its size, or area of expertise? What is your opinion concerning
how law firms address the risks and benefits of AI in the workplace? What is your
opinion about their expressed preferences and needs concerning the technology?

The second part of the commentary could contain, for instance, your predictions concerning the applications of AI technology in legal practice and the associated risks and potential benefits, as well as recommendations on what should be done (or avoided) in the coming years in connection with these processes. For
be done (or avoided) in the coming years in connection with these processes. For
instance, you may indicate the existence of tools and projects addressing the ex-
pressed needs of the law firms, the expected progress in relevant scientific fields
(and associated implementation works), the need for initiation of dialogue between
certain stakeholders and platforms for such a dialogue, development of standards
and good practices etc. Please do feel free to also address different topics following
your area of expertise.

In the end, we received eighteen commentaries from experts representing different generations, scientific backgrounds, research interests and experience. Some of the researchers followed the suggested structure closely, and some decided to apply their own ordering of comments. Some preferred to accompany their commentary with references and others refrained from doing so. However, each of the submitted commentaries brings a series of essential insights into the results of the survey, on many levels: the applied methodology, the assessment of the results, or the broader context enabling a deeper understanding



of either the results or of future challenges. A detailed discussion of these insights against the background of relevant ongoing research would require a separate, extensive study. However, we allowed ourselves to conduct a lightweight qualitative analysis of the commentaries, in order to investigate whether it is possible to systematize the main themes of the opinions. In doing so, we followed the general principles of thematic analysis: first, we labelled the commentaries with codes and then, proceeding iteratively, grouped them into four overarching themes, which may be listed as follows:

(1) Prevalence of AI Implementation in Law Firms,

(2) Types of Risks Perceived by Law Firms,

(3) Relatively Limited Concern for Explainability of AI Tools, and

(4) Automation of Repetitive or Mundane Tasks.

Prevalence of AI Implementation in Law Firms

The majority of experts concur that the integration of AI in legal firms is apparently no longer in its infancy but is becoming entrenched in standard practice. This survey result is often perceived as somewhat surprising. A substantial percentage, around 50.74% of legal entities, have embraced AI technologies in some form, with another 11.82% actively exploring its possibilities, painting a picture of a sector that is proactively engaging with technological advancements.

However, the specifics of AI utilization in these practices remain ambiguous. While half of the respondents affirm the use of AI, discerning the particular technologies they have incorporated remains elusive. Several commentators have highlighted the primacy of AI in tasks like document automation, legal research, and information retrieval. This is seen as a direct response to the increasing demands of the profession and the recognition of AI's capabilities in streamlining processes, especially in larger firms that grapple with voluminous, repetitive tasks.

Comparisons with previous studies, particularly from non-Anglophone regions, indicate a positive shift towards AI adoption. For instance, while a 2016 German study portrayed a lukewarm reception to legal tech, more recent observations mark a pronounced change. The legal world, traditionally viewed as conservative, is demonstrating a more aggressive uptake of technological solutions. This is especially apparent in larger firms, where the potential for scale and efficiency is most pronounced. However, smaller entities are also showing significant enthusiasm, suggesting that the potential benefits of AI are not seen as exclusive to large-scale operations.

46 | LLI WHITEPAPER | Nº 3 (EN) | 2023


Yet, alongside this progressive trend, there’s an underlying caution, also reflected
in the relatively short duration many firms have been engaged with AI – with a ma-
jority having used it for less than a year. This could also be attributed to the recent
surge in AI’s popularity, notably with platforms like ChatGPT and other large lan-
guage models, which are both user-friendly and readily accessible.

Furthermore, the influence of external pressures, particularly from clients seeking
more innovative service providers, cannot be discounted. The onus to appear
“cutting-edge” or to differentiate in a competitive market might be driving some of
this AI adoption, even if firms are not entirely ready or clear about its full potential.

Medvedeva’s observation adds a layer of nuance to this narrative. While the sur-
vey points to about 50% of law firms harnessing AI, this figure could be underrep-
resentative. It’s plausible that many legal firms employ AI-powered search tools
without categorizing them as ‘AI’ due to a deep integration or perhaps a misunder-
standing of the technology. This disconnect could also be attributed to the survey’s
framing or the respondents’ interpretation, suggesting that AI’s actual prevalence
might be even more widespread than indicated.

In conclusion, the legal sector is unmistakably amidst a technological evolution.
While the integration of AI is evident and growing, its depth, breadth, and the
motivations behind its adoption present a nuanced picture. As AI continues to
advance and tailor itself to specific industries, its role in the legal world is
poised for further exploration and expansion.

Risks Concerning the Implementation of AI in Law Firms

The implementation of AI in law firms brings forward both opportunities and
challenges. There are significant concerns about experimenting with new AI
solutions: Costantini highlights that jumping into new technologies without proper
testing can lead to unexpected problems and significant financial implications.
Furthermore, there is a surprising degree of autonomy given to individual lawyers
in some firms, posing potential risks to client data security and the overall quality
of work.

However, there is a silver lining: there seems to be growing awareness about
these issues, as reflected in the priority given to guidelines and training
programmes. But the true capability to address them might be concentrated within
larger firms with robust IT structures. These observations suggest that while
there is awareness of the challenges, there is a disparity in preparedness across
different-sized firms.



There’s a shared sentiment, voiced by both Costantini and van Dijck, that the le-
gal sector’s commitment to safeguarding sensitive information is paramount. With
data breaches becoming increasingly common, understanding the intricacies of AI
algorithms and their implications becomes crucial.

Ebers delves into the legal framework, discussing the European AI Act’s role in
regulating AI. The Act differentiates between high-risk and low-risk systems, im-
posing more stringent requirements on the former. Notably, the Act requires that
AI system providers inform users when they are interacting with an AI tool, but not
necessarily the end recipients of its outcomes. The debate on transparency obliga-
tions is crucial, especially when considering the trust between clients and lawyers.

Bias and transparency issues arise when dealing with AI, as pointed out by Ghosh.
AI systems, being trained on historical data, can inherit biases, and the lack of clar-
ity on their workings makes them hard to understand and trust completely. Espe-
cially for critical human decision-making tasks, these issues become paramount.

One interesting insight noted by Harašta is that many companies dive head-first
into implementing AI solutions without any trial period. This practice is
particularly prevalent in smaller firms, potentially due to the lack of internal
processes. This hasty implementation is ironic: many believe that AI inherently
brings efficiency, but without understanding current processes, it is hard to truly
gauge improvements. Zeleznikow also observes that while 57.8% of companies claim
to have trialed AI solutions before full adoption, a notable 42.2% have skipped
this preliminary step. This suggests that a large percentage of firms are readily
adopting AI without prior testing, indicating a rapid and confident embrace of the
technology.

Libal emphasizes the concerns related to legal issues and the trust placed in AI
tools, suggesting that many may not recognize the true risks these tools pose,
particularly in terms of liabilities and insurance. By contrast, Savelka mentions
the inherently risk-averse nature of the legal profession, which mirrors the
cautious approach taken towards AI. However, the potential legal liabilities
related to AI decisions, especially when they affect clients, cannot be overstated.

Walker finds it intriguing that while many law firms recognize the importance of
the legal issues surrounding AI, there seems to be limited attention given to
quality assurance, vendor reputation, and training. This raises questions about
how firms are ensuring the effectiveness and reliability of their AI tools.

In conclusion, while AI offers transformative potential for law firms, there’s a
pressing need to address concerns about its implementation. Issues of transparency,
bias, legal regulations, and the need for a structured approach to AI adoption
stand out. The legal sector, being dynamic, will continue to evolve alongside AI,
but ensuring that this evolution is both responsible and ethically sound is of
utmost importance.

Relatively Limited Concern for Explainability

In the rapidly evolving landscape of Artificial Intelligence (AI), explainability
and transparency have become focal points of discussion, especially within the AI
community. Yet, when we delve into the realm of legal firms as represented in the
survey results, a distinct narrative seems to emerge. These firms, navigating the
intricate challenges and promises of AI, have demonstrated a unique set of
priorities, often diverging from the academic discourse.

One of the most pronounced concerns for law firms is the potential liabilities as-
sociated with AI use, as highlighted by Costantini. A staggering 80.2% are preoc-
cupied with this, yet only a minor 15.3% appear attentive to the technical causes
behind these liabilities, such as the lack of explainability. This potential for legal
complications is further compounded when we consider the forthcoming regula-
tions, such as the European AI Act proposed in 2021. As Ebers points out, this act,
which adopts a risk-based approach, makes significant distinctions between high-
risk and low-risk AI systems. While the Act has provisions that ensure AI transpar-
ency for users, it seems to have a loophole. The end clients, the very individuals af-
fected by AI’s decisions, might remain in the dark about the machine’s involvement
in their cases.

Libal brings to light an intriguing observation: the seeming trust legal firms place
in AI tools. Over 60% of the firms surveyed expressed concerns about privacy, se-
curity, and accuracy. In contrast, explainability lagged behind at less than 16%. This
disparity suggests that firms may be more inclined to trust the outputs of AI with-
out delving deeply into the mechanics behind these outputs. Such an inclination
resonates with Zeleznikow’s observation that academics and researchers appear
to be more engrossed with the intricacies of explainability than their practitioner
counterparts.

Żurek takes this further by proposing that the difference in perception of
explainability between researchers and legal firms might arise from their inherent needs.
Legal firms predominantly seek AI’s assistance for more repetitive and mundane
tasks and not necessarily for intricate legal reasoning. This perspective frames AI
as an augmentation of traditional legal activities rather than a revolutionary force
set to redefine the profession.



Pagallo’s insights offer a reflective stance, drawing attention to the gap between
scholarly pursuits and the day-to-day realities of lawyers. While the academic
sphere is rife with debates on trustworthy AI and algorithmic transparency, prac-
titioners have their sights set on immediate challenges, like navigating the tricky
terrains of liabilities and adhering to regulations. Yet, amidst these differences, Pa-
gallo underscores an undeniable trend: the continued and growing embrace of AI
tools in law firms’ operations.

Savelka raises ethical flags, cautioning about the dangers of blind trust in AI predic-
tions. Biased data and non-transparent AI tools can lead to skewed legal strategies,
posing ethical dilemmas. Additionally, the advent of AI also brings with it fears of
job displacement, especially for roles traditionally occupied by junior legal profes-
sionals.

In conclusion, the conversation on AI’s explainability and transparency, though
rich and varied, unfolds differently across academia and legal practice. While both
realms acknowledge the significance of these themes, the pressing challenges of
liability, privacy, and accuracy often overshadow them for legal firms. The
unfolding regulations, like the European AI Act, will undeniably play a pivotal
role in shaping this narrative in the coming years.

Automation of Repetitive or Mundane Tasks

The contemporary landscape of the legal profession exhibits a considerable demand
for the automation of tasks, especially those that are repetitive or mundane in
nature. For instance, Costantini underscores in his commentary that repetitive
tasks make up a substantial proportion of a legal firm’s workload, close to a
third. Such a high prevalence not only exhausts intellectual resources but also
escalates internal costs for law firms, especially those engaged in litigation.
Francesconi’s and Ghosh’s reflections bring forth a consensus that the legal
profession views AI as a potent tool for streamlining repetitive tasks, refining
efficiency, and curbing human errors. Drawing from the statistical analysis,
Lauritsen, Medvedeva, and Walker spotlight legal research and document review as
the dominant repetitive tasks in law firms. A staggering majority, approximately
80%, perceive these tasks as mundane, amplifying the potential and space for AI
intervention.

However, as van Dijck elucidates, caution is urged: the line between repetitive and
mundane tasks may not be so clear-cut. Automation’s capability to handle tasks
previously relying on human judgment raises intricate questions. Harašta’s
commentary provides a nuanced perspective, addressing the imminent impacts of AI
on paralegal tasks. While there exists a robust appeal for automating repetitive
tasks, offering lawyers the latitude to delve into high-value tasks, one must
proceed with circumspection. The allure of automation doesn’t automatically
translate into feasibility, and not all tasks, even if repetitive, are conducive
to automation.

The recurring themes outlined above of course do not exhaust the diversity and
scope of opinions, observations and criticism provided in the researchers’
comments. We have noted, however, that these topics constitute a set of core issues
perceived by the majority of the commentators, and may therefore provide a
convenient starting point for a detail-oriented reading of the original
commentaries, which are presented below in alphabetical order of the authors’ names.

Importantly, the references to the numbers of questions and to the tables are
made to the full statistical analysis, available at [link].

Federico Costantini, University of Udine, Italy

Analysis

The report is remarkable both for the number of firms involved and for the out-
comes.

A first remark can be made regarding ‘repetitive tasks’ (question 4.1.1.), which
account for practically one-third of the respondents’ workload. This is a
well-known problem, not only in the legal sector, and it evidently constitutes a
waste of intellectual resources and time.

It is significant that this figure is higher in law firms that practise litigation,
because it means that disputes are not only a social cost but also an internal
expense for legal practices.

In this sense, the impact of AI would have a dual benefit, because it directly de-
creases internal costs and indirectly lowers litigation costs, yet it does not seem so
simple.

Going into more detail (question 4.1.1.2), respondents believe that the impact of
AI would most benefit three areas: legal research, document review, and contract
drafting. Of these, however, only two are directly related to judicial activity,
while the third essentially concerns consultancy. In fact, this seems to be a
‘conservative’ answer, because AI integrates database research tools that have
already been known for decades. It is only an improvement of existing tools; there
is no breakthrough innovation. Similarly, the fact that the use of AI in case
management (53.7 per cent) is not particularly high indicates that current
applications perhaps already meet the needs of the firms dealing with them.

It is useful to relate this figure to the answers given by law firms that have
already adopted AI (question 4.3.1.1). In fact, the data overlap (document
automation 39.0%, legal research 34.0%, information retrieval 24%). I assume that
the difference also depends on the technological solutions adopted and on the
economic assessments made based on what was available on the market.

It is very interesting that AI has already been adopted (question 4.1.2) by
practically half of the respondents (50.74%), albeit at different levels. If one
considers this figure together with those who are exploring this technology
(11.82%), it emerges that the use of AI in law firms is in fact an established
reality. This is confirmed by question 4.1.3, where adding up the percentages of
openness to AI (30.54% + 37.93%) shows that most respondents are inclined to adopt
this technology.
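The arithmetic behind these adoption and openness figures is simple addition; a minimal editorial sketch using only the percentages quoted in the text (the variable names are ours):

```python
# Survey percentages cited in the commentary (questions 4.1.2 and 4.1.3).
already_adopted = 50.74      # firms that have adopted AI in some form
exploring = 11.82            # firms actively exploring AI

# Combined share of firms engaging with AI in some way.
engaging = already_adopted + exploring   # 62.56%

# Openness to AI (question 4.1.3): the two positive answer bands summed.
openness = 30.54 + 37.93                 # 68.47%

print(f"Engaging with AI: {engaging:.2f}%")
print(f"Inclined to adopt: {openness:.2f}%")
```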

The question on drivers of interest (question 4.2.3.2) manifests an attitude that
is perhaps too pragmatic, focusing on the use to be made of the technology by the
individual user, and not taking into consideration the incorporation of the
technology into the organisation (vendor support and training issues, 32.2%).
Perhaps there is still a prejudice that an AI system is a ‘product’ that is
purchased once and for all and does not require maintenance and constant
monitoring, especially if it is a model based on machine learning technologies.

The question on sources of information (question 4.2.3.3) is interesting because
it particularly notes the relevance of personal contact (75.3%) and events
dedicated to the topic (66.1%). It is significant that large law firms approach AI
through academic conferences, a sign of greater prudence (and perhaps also of the
possibility of devoting more economic resources to innovation).

The figure for openness to AI (question 4.1.3) could be explained by the fact that
smaller firms are quicker to adapt to change, while larger ones can organise them-
selves better. This suggests that AI could be a determining factor in the evolution
of the legal professions market, given the aggregation of small and medium-sized
firms into larger conglomerates.

I find the answer to question 4.3.1.3, concerning experimentation before adoption,
very interesting; it is the most surprising one. Experimentation with new
solutions requires a substantial economic investment, but it is equally obvious
that otherwise even greater and, above all, unexpected problems arise. This
reflects a lack of prudence, a quality that every jurist should cultivate.

The answer to question 4.3.1.4 could mean that, when it occurs, the adoption of AI
within a firm tends to be pervasive and is reflected throughout the business. The
lower percentage referring to clients could mean that the environment in which the
firm operates is not technologically advanced.

The answer to question 4.3.1.5 is also interesting. The fact that a large
percentage of law firms leave it up to the individual lawyer to self-organise is a
risk to the security of the client’s data and to the quality of the work performed.
There does, however, seem to be awareness of this problem, as can be seen from
question 5.1.3, to which respondents indicated guidelines (61.7%) and training
programmes (29.9%) as priorities. This means that there is awareness of the
underlying issues related to the introduction of this technology, but that in fact
widespread and homogeneous knowledge can only be had in large practices that are
able to structure themselves in this way. This answer can be linked to question
4.3.1.6, concerning the size of IT departments. In fact, few of the respondents
have dedicated structures, and in any case the amount of personnel is generally
very small (question 4.3.1.7).

The answer to the question on the future of the legal profession is interesting (ques-
tion 5.1.1) because it reveals a somewhat short-sighted attitude. First of all, one is
concerned about possible liabilities (80.2%) but does not pay attention to possible
technical causes (lack of explainability, 15.3%). But above all, among the problems
of AI, there is limited sensitivity to the impact on employment (labor market, only
15.8%). Respondents seem not to evaluate the impact on their own position. Yet,
in another question (question 5.1.2), ‘paralegal’ work is expected to be automated
soon (53.2%). In other words, there is a push towards process automation – and a
great excitement for its achievements – but no awareness of the indirect conse-
quences of it.

Another interesting element is related to expectations (question 5.1.1.2). Asking
AI to bring greater efficiency to organisational processes (87 per cent) means
demanding from technology something that can also be achieved without it, with
traditional organisational effort. AI, in this sense, is a surrogate for business
organisation. This factor emerges from a comparison with American firms, which
prefer to focus on collaboration with the client, perhaps because they have a more
established practice in this regard. Similarly, calling for AI to reduce human
error (56.6%) is legitimate, but the fact is that repetitive tasks are part of
professional growth: error is avoided through practice. Legal practice is also
“learning by doing”.



Synthesis

The recent extraordinary advances in artificial intelligence, popularized by the
advent of ChatGPT, are likely to bring about a profound change in our society,
with irreversible and unexpected consequences, as is widely believed, but perhaps
different from what is expected, both in the timeframe in which this will take
place and in the magnitude of the impact. I have come to think that the
transformation will be similar to what happened with Covid-19, rather than to what
is depicted in a ‘Terminator’ scenario. During the pandemic, we witnessed a global
phenomenon anticipated by certain signals that were overlooked both individually
and collectively, only for it to suddenly reveal itself in its actual magnitude
and profoundly affect our lives. Personal expectations, business strategies, and
social policies were overwhelmed by a disruptive force. Many of us had no choice
but to undergo such effects, and some suffered permanent damage to personal life
or career prospects.

From the report I analysed, I believe that the advent of AI in law firms will have
a similar impact. Although there is a certain awareness of the importance of the
phenomenon, and also of the propensity to adopt these technologies, there is in-
evitably no clear perception of its actual scope.

It seems to me that there is a pragmatic attitude in the adoption of new
technologies, dictated by a short-term vision and by the fact that the solutions
on the market are reliable only for limited functions. On the other hand, it has
to be realised that the activities of law firms, especially those mainly involved
in litigation, depend on the level of digitalisation of the courts and, partially,
of clients. In some countries, innovation is also limited by the context.

From a certain point of view, the expected benefits of AI can also be achieved
without it. I refer to the management of ‘repetitive tasks’. I think that internal
processes can be optimised even without the use of AI (not all of them, of
course); on the other hand, some activities, perceived as ‘bureaucratic’ or
‘time-wasting’, are unavoidable within complex organisations and therefore
difficult to eliminate. Moreover, from personal experience, I cannot disregard the
fact that professional skills are accumulated through the exercise of activities
that are perceived as ‘repetitive’ only once one has reached a higher professional
level. Limiting too much, or even suppressing, certain tasks through automation
means interfering with the gradualness of the individual jurist’s professional
growth. This also poses the problem of training the jurists of the future and
raises the problem of ‘automation complacency’.



Moreover, I sense that the advent of AI in law firms is being considered without an
assessment of the risks related to the individual jurist’s employment. More consid-
eration is given to the benefit to the organisation as a whole, less to the impact on
human resources. This emerges clearly with respect to ‘repetitive tasks’. From the
answers to the questions, it seems that one underestimates the fact that the auto-
mation of document management will inevitably lead to a reduction not only of ‘pa-
ralegal’ personnel - as envisaged by the respondents - but also of more specialised
professionals. The emergence of professionals dedicated exclusively to AI-related
issues may not solve this problem.

A further assessment can be made with reference to the size of law firms. In fact,
the advent of AI is a further factor in the split between small law firms - possibly
specialized ‘boutique firms’ even in LegalTech issues - and large law firms. Medi-
um-sized firms cannot keep up with innovation because they are not agile enough
or sufficiently organised. I do not know whether this process actually benefits the
community.

On a further note, it is revealing that respondents are aware of the need for
regulation, or at least guidance, on how to work using AI. However, this kind of
regulation cannot be left to individual law firms, but should be the subject of
consideration by the whole legal community, possibly with a worldwide discussion.
From this point of view, the ABA’s initiative to set up a working group on the
topic seems appropriate: https://www.americanbar.org/groups/leadership/office_of_the_president/artificial-intelligence/.

Gijs van Dijck, Maastricht University, The Netherlands

PART I

Conformity and Surprises in Survey Results

The survey findings indicate that approximately 10-20% of tasks within law firms
are considered repetitive. While this statistic might appear consistent with initial
expectations, repetitive tasks may not equate to mundane ones—the interplay be-
tween human judgment and AI capabilities in handling these tasks warrants further
exploration.

State of AI Implementation

The report underscores a pivotal insight: nearly half of the respondents have
incorporated AI into their operations. However, the nature of this AI
implementation remains enigmatic due to a lack of specification regarding the
types of AI utilized. Understanding the AI technologies these firms adopt would
illuminate their strategic direction and technological maturity.

Implications of Firm Characteristics on Implementation

The correlation between company size and openness to AI adoption, as revealed by
ordinal regression analysis, is intriguing and potentially concerning. Larger
organizations exhibiting higher openness to change may perpetuate industry
disparities. The implications of such discrepancies could reverberate through the
legal sector, creating a divide between technologically advanced firms and those
lagging.

Risk-Benefit Assessment and AI

The emphasis on data security and privacy in evaluating AI technologies aligns
with broader industry concerns. This cautious approach reflects a conscientious
recognition of AI’s potential implications for sensitive legal information. The
legal sector’s prioritization of data integrity is not only a reflection of its
responsibility to clients; it may also show its commitment to upholding the
ethical foundations of the profession.

Information Sources and Preferences

The prominence of academic outlets as significant sources of information on AI is
intriguing. High scores attributed to conferences and publications reveal a
reliance on scholarly insights to navigate the complexities of AI integration. The
legal community’s reliance on academic channels underscores the importance of
collaborative knowledge dissemination in promoting informed decision-making.

Technological Needs and Preferences

The breakdown of preferences for specific AI tooling paints a fascinating picture
of law firms’ immediate technological needs. The prominence of document-related
technology, particularly document generators, document verification tools, and
summarization tools, suggests an emphasis on optimizing document-related
workflows. Equally noteworthy is the demand for case law analytics tooling,
signifying the growing recognition of AI’s potential insights in legal research
and argumentation. However, when asked directly, respondents rate the necessity of
case law analytics lower.

I would have been interested in more US / non-US comparisons. Finally, I find it
difficult to draw conclusions from the various regressions due to the limited
number of predictors.



PART II

The recent survey findings casting a spotlight on the significance of security and
privacy considerations in AI implementation within the legal domain underscore a
critical dimension of the technology’s integration. As law firms increasingly turn to
AI to enhance efficiency and decision-making, the heightened emphasis on safe-
guarding data integrity and privacy reflects the legal sector’s recognition of its
ethical responsibilities and the delicate nature of legal information.

AI’s transformative potential in the legal landscape extends beyond mere effi-
ciency gains; it can reshape how legal professionals approach research, document
analysis, and even predictive legal outcomes. However, the knowledge that vast
amounts of sensitive data underpin these advancements magnifies the importance
of data security. Legal practitioners are entrusted with their clients’ confidential
information and have the ethical duty to uphold privacy rights. As such, a proactive
approach to integrating robust security measures is non-negotiable.

Equally striking is the role of academic publications and conferences as primary
sources of information for AI tooling in the legal domain. This phenomenon
highlights the legal community’s dedication to evidence-based decision-making, a
cornerstone of the profession. While AI promises a realm of possibilities, its
efficacy in legal contexts hinges on the rigor of its underlying research and the
transparency of its methodologies. Academic outlets offer a vetted space for
disseminating empirical findings, providing practitioners with insights grounded
in data-driven analysis.

This confluence of security concerns and evidence-based decision-making prompts a
compelling question: will evidence-based AI emerge as an indispensable
consideration for legal practitioners moving forward?

The answer appears affirmative. In an era where data breaches and privacy viola-
tions attract headlines and regulatory scrutiny, the legal community’s commitment
to safeguarding sensitive information is an ethical and legal imperative. Integrating
AI in the legal sector necessitates a profound understanding of the algorithms’ op-
erations and their potential implications for privacy and data integrity.

Moreover, the reliance on academic publications for AI tooling underscores the
value of a research-driven approach to AI implementation. As legal practitioners
embrace AI solutions, the need for comprehensive, unbiased evidence to
substantiate their decisions will inevitably gain prominence. Just as a legal case
relies on well-founded evidence, integrating AI into legal processes should adhere
to a similar evidence-based ethos.



Prof. Dr. Martin Ebers and Susanne Rönnecke, RAILS e.V.

Implementation of AI tools is a response to clients’ demand for more work for less money

AI tools help to increase efficiency on repetitive tasks. Nearly one third of a
law firm’s work consists of such tasks, which reveals a high potential for
automation. In contrast to a German study (How Legal Technology Will Change the
Business of Law, January 2016), which found that fewer law firms than expected had
implemented legal technology, this study found that about half of the respondents
have already implemented or plan to implement AI tools. Nevertheless, one third is
not planning to do so.

It is not surprising that smaller firms, which work on more general cases and have
more repetitive tasks, are implementing AI tools. However, the finding is
surprising with regard to the cited German study, which saw small firms as facing
the highest risk of being displaced by legal technology, since they handle more
general cases and standardized tasks that are now being replaced by technology. It
recommended that small firms specialize and implement AI tools to increase their
productivity without increasing costs.

In addition, it is not surprising that big firms have already implemented AI tools
and are very open to their use. Their clients require more transparency on fees
and seamless collaboration with their in-house staff. They also require more work
for less money. Law firms whose business model is still based on high hourly rates
were forced to implement legal technology in order to reduce costs for the client.
Furthermore, those firms have more capacity, financially and in manpower, to
develop, test and implement new technologies. As the cited German study shows,
they are more likely to invest in legal-tech start-ups or to develop their own
solutions.

The tasks with the highest potential for AI are legal research, document review
and contract drafting, case management, and compliance monitoring. The study shows
that small firms are not replacing their document work with AI tools but use such
tools to enhance legal research. In contrast, big firms use AI tools mostly for
contract analysis, which makes sense considering that they mostly deal with
nonstandard, complex cases.

As the German study also suggests, the difference between the US market and other markets might be based on the characteristics of the different legal systems. In common law, the specific rules of document disclosure in discovery lead to a high

58 | LLI WHITEPAPER | Nº 3 (EN) | 2023


adoption of AI solutions for document analysis. Other reasons might be the larger legal-tech market, access to venture capital, and the prevalence of the English language and its convenience for natural language processing.

The use of AI changes the roles in law firms

The use of AI tools shows that the roles in law firms will change, which corresponds to the findings of the 2016 German study by the Bucerius Law School and Boston Consulting Group. As repetitive tasks are automated, firms will need less general supportive staff and fewer junior lawyers and generalists. Between 30 and 50% of the work of young lawyers consists of tasks that can be automated.1 They are mostly trained on due diligence, document review, document generation and legal research. However, experienced lawyers will remain important for dispute outcome prediction, compliance and risk management, and IP management, tasks for which, according to the findings of this study, firms rarely use AI tools. The role of the lawyer shifts toward that of a project manager. In addition, the profession will require more technical skills, since the lawyer has to understand the tools used.

The German study suggests that even the role of general supportive staff will change, requiring less legal education and more technical and management skills.
The current pyramid structure, with a high ratio of junior lawyers per partner, will convert into a rocket structure with a low ratio of junior lawyers per partner. The German study estimates that the ratio of junior lawyers to partners might decline by up to three quarters. The law firm will, however, be supplemented by non-legal staff such as project managers and legal technicians.

The automation of entry-level jobs forces law firms to find a way to train their young lawyers so that they gain the required project management skills and technical literacy. In addition, law schools must supplement their curricula in order to supply the legal market with lawyers who have the knowledge and skills to succeed in their new roles. They need to provide courses in project management and legal technology, comprising classes in case management, database management, statistics, analytics, and digital communication.2

1 How Legal Technology Will Change the Business of Law, January 2016, available at: https://www.law-school.de/fileadmin/content/lawschool.de/de/units/abt_education/Studienseite/Studien/Legal_Tech_Report_2016.pdf.
2 Ibid.

The greatest challenges for implementing AI tools are legal issues

Liability for the use of AI tools and their regulation were overwhelmingly named as
the greatest challenges for law firms when considering implementing such tools.

A regulatory framework for the use of legal technology is an essential step to create legal certainty and reduce the risks of deploying AI tools. The European AI Act, proposed in April 2021 and currently entering the trilogue, might be a step in the right direction.

The European AI Act regulates AI systems using a risk-based approach. The Act imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. High-risk systems have to comply with data quality requirements, documentation and traceability requirements, transparency obligations, the need for human oversight, and accuracy and robustness requirements. In contrast, low-risk systems face only limited transparency obligations. The AI Act Draft qualifies only those legal AI systems which are used by judicial authorities as high-risk systems. This scope is broadened by the Parliament's report to administrative authorities and to use on behalf of those authorities. AI tools implemented by law firms will thus only qualify as high-risk systems under the amendments suggested by the Parliament if they use them on behalf of public authorities. Most
of the AI tools named in the survey are low-risk systems. The provider of an AI
system which is intended to interact with natural persons shall ensure that it is designed and developed in such a way that natural persons are informed that they
are interacting with an AI system, unless this is obvious from the circumstances
and the context of use (Art. 52 AI Act Draft). The AI Act only binds the provider of
AI tools to flag the use of AI to natural persons using the systems. So, if a lawyer
uses a document generator or an AI tool for legal research, the provider is only
obliged to inform the lawyer about using an AI system. The client, who is actually
affected by the outcome of the deployment of the AI system, does not need to
be informed. However, as the survey reveals, AI systems which are directly used
for client collaboration are and will remain rare. Nevertheless, the EU should con-
sider broadening the transparency obligation so that natural persons who become
subject to an AI-supported decision have to be informed. Considering the mutual trust between client and lawyer, it should be considered malpractice if the lawyer does not disclose the use of an AI system when the outcome of the case is significantly influenced by its deployment. This might not be true for AI tools for legal research. Also, systems that are used to organize workflows or to manage cases or compliance merely affect the internal organization of the firm and not the outcome for
the client. But tools for document analysis or a document generator contain the
risk of inaccuracy by the system and should, thus, lead to an obligation to reveal
their use.

Moreover, during the legislative procedure the European Council introduced new
rules regarding general purpose AI systems. A general purpose AI system is an
AI system that, irrespective of how it is placed on the market or put into service,
including as open source software, is intended by the provider to perform gener-
ally applicable functions such as image and speech recognition, audio and video
generation, pattern detection, question answering, translation and others. Those
rules are a reaction to the recent hype around large language models like GPT-3 and their wide range of possible fields of use. One famous incident illustrates the risks of those systems when they are used in a legal context: an American lawyer used ChatGPT for his case research. Unfortunately, the cases ChatGPT cited were made up; they never existed. The system does not check the factual correctness of its texts, and the user cannot find out which documents it was trained on in order to assess its legal competence. The EU Council suggests that those systems be treated like high-risk AI systems unless the provider explicitly excludes the use in high-risk AI systems. Thus, the use of those systems for the applications named in the survey would only be accompanied by transparency and documentation requirements.

Conclusion

As the survey shows, the work of a law firm has a lot of potential for automation by AI tools. The survey also shows that most law firms have already recognized the possibilities and implemented AI tools for the most repetitive tasks. The technological change answers the clients' demand for higher productivity at lower cost. However, law firms must be aware that the role of the lawyer is changing and demands more project management and technical skills. In addition, the tasks that are being automated have until now been executed by junior lawyers and paralegals. Thus, the structure of the firm is going to change, requiring more non-legal staff. The education of law students and the training of young lawyers have to focus more on acquiring management and technical skills. Another key factor for the successful implementation of AI tools in law firms is a regulatory framework that provides legal certainty regarding liability issues. Unfortunately, for most of the systems mentioned in the survey, the AI Act imposes the mere obligation to flag the use of AI systems when interacting with humans. However, the sensitive relationship between lawyer and client requires a regulatory framework for the use of AI systems. There is still a great need for action by legislators.

Enrico Francesconi, Italian National Research Council, Institute of Legal Informatics and Judicial Studies, Italy

1. Reaction to the findings of the survey

The report represents a wide and very relevant analysis of the appeal and penetration of AI technologies in the world of law firms.

The results of the survey confirm the expectations about the interest of law firms in the applications of AI technologies in the legal domain, especially of large companies, which can invest more resources to train and qualify their personnel.

The state of play of AI technologies in the surveyed law firms of all sizes reveals a good level of implementation of such tools, in particular in the fields of document automation and legal research.

At the same time, less penetration of AI tools has been found in domains like intellectual property management, dispute outcome and risk prediction, compliance and risk management, as well as dispute resolution. These results show that AI tools are especially used by law firms for mundane or repetitive tasks, while more advanced applications of legal reasoning and inference, widely discussed in the scientific literature (in terms of legal compliance or predictive justice services), still seem under-represented.

The interest in e-discovery among law firms in common-law countries, like the US, is not surprising, considering their need to analyze similar cases to support arguments in a current case. Moreover, it is also not surprising that big law firms are more inclined than small-size firms to use AI tools, even those with complex user interfaces, considering the number of employees that can be dedicated to the related tasks.

Moreover, the survey reveals that the largest companies require employees to be pre-authorized to use AI tools: this shows how sensitive the use of AI is considered by large companies.

2. Predictions concerning the applications of AI technology in legal practice and the associated risks and potential benefits, as well as recommendations related to what should be done (or avoided) in the coming years in connection with these processes

The survey revealed that AI applications are considered important by law firms, firstly to streamline repetitive tasks and improve efficiency, as well as to reduce
human errors. Those are definitely the most direct challenges to be addressed in office automation in general, and for law firms in particular.

Nevertheless, AI tools aim to address more ambitious challenges, like automation in legal reasoning, which can have a beneficial impact on several tasks of interest for law firms: document summarization, predictive justice, alternative dispute resolution, advanced legal information retrieval, automatic legal compliance checking, and detection of contradictions in legal texts.

The interviewed companies seem not completely aware of the full potential of AI tools in the legal domain, as far as AI and Law research can offer, but they properly underlined the “ethical and legal debates” that the use of such technologies might raise.

For these reasons, the need is clearly underlined for a policy and related guidelines able to provide a governance framework for the application of AI tools in legal matters.

Recently the European Commission addressed such challenges in the “White Paper On Artificial Intelligence. A European approach to excellence and trust”. This White Paper is targeted at promoting AI technologies in the public and private sectors. The document is conceived around four main pillars: creating excellence and testing centres that can combine European, national and private investments; fostering new public-private partnerships in AI, data and robotics; promoting the adoption of AI by the public sector; and creating an Ecosystem of Trust, i.e. a regulatory framework for AI (e.g. data protection, privacy, non-discrimination).

In this framework, law firms can play an important role in providing use cases and feedback about the usage of AI tools in the legal professions, as well as in identifying ethical issues which can hamper the application of AI technologies in the legal domain.

Saptarshi Ghosh, Indian Institute of Technology, Kharagpur, India

Associate Professor, Department of Computer Science and Engineering,

Indian Institute of Technology Kharagpur

http://cse.iitkgp.ac.in/~saptarshi

Part 1: Analyzing the findings of the survey

The report on “AI in Legal Business” provides a comprehensive glimpse into the current state of AI integration within the legal sector. The study is divided into two phases: the first involves a survey of 203 law firms, and the second engages researchers worldwide to analyze the key findings on several aspects such as AI adoption, openness to change, challenges of lawyers, and future predictions. The sample characteristics section showcases the diversity of the legal industry in terms of firm size, lawyer count, AI engagement, primary activities, and geographical location (US / non-US).

As per the report, the legal profession includes a significant portion of mundane and repetitive tasks, with the average being 38% of the workload [as per Table 5 in the Report]. This aligns with our understanding of legal work as described by legal professionals in India as well. However, companies that specialize in “litigation” have a higher average of these repetitive tasks, about 14% higher than other sub-fields [Section 4.2.2 in the Report], which might be due to the nature of litigation processes. The fact that nearly 80% of respondents believe AI could enhance “legal research” [as per Figure 10 in the Report] indicates a growing recognition of technology’s benefits in making repetitive tasks more efficient.

Interestingly, the location of the law firm does not seem to significantly affect the share of repetitive tasks. However, there are notable differences when it comes to the size and specialization of firms. Medium-sized companies, those with 11 to 99 employees, stand out by having fewer repetitive tasks on average. Similarly, firms in which lawyers make up at least 50% of the staff have a lower average of repetitive tasks. [Section 4.2.2]

The smallest companies (1-10 employees) and the largest ones (100+ employees), along with companies in the U.S., are more likely to have implemented AI. The reduced likelihood of medium-sized companies (11-99 employees) adopting AI is somewhat surprising. This suggests that the relationship between AI adoption and
company size is not linear and might be influenced by other factors, including the
cost of developing and deploying AI solutions.

Also, more than half of the surveyed companies have already integrated AI tools into their operations, and an additional 11.8% are actively exploring AI options. This suggests that AI implementation is becoming the norm rather than the exception in the legal sector. The fact that a significant share of companies (68.5%) are adopting or exploring AI reflects the growing recognition of AI’s potential [Table 9]. The overall trend of openness to AI indicates a promising path toward harnessing technology for legal innovation.

Regarding the evaluation of AI solutions, it is not surprising that “data security and privacy” tops the list, chosen by 79.6% of respondents, which reflects a responsible approach to adopting new technologies and aligns with the emphasis on protecting sensitive legal information. As expected, “cost” (75.5%) and “ease of use” (61.7%) are also prominent factors, implying that firms are carefully weighing the financial implications of AI integration and looking for solutions that can be seamlessly integrated into their workflow. Unexpectedly, “vendor-related issues” like reputation and support are ranked lower, selected by only 18.4% and 26% of respondents respectively [Table 10]. That US firms value vendor reputation more, however, suggests they place particular importance on trusting technology providers [Section 4.2.3].

“Networking” and “AI events” emerge as the top sources for new AI ideas and
solutions, with 68.8% and 59.3% of respondents selecting them, respectively. This
aligns with the understanding that the exchange of ideas and participation in in-
dustry-specific events are common avenues for staying updated on technological
advancements. However, there are notably low preferences for “newsletters” and
“online communities”, with only 4% and 3% of respondents choosing them; this
indicates that firms are prioritizing direct interactions and real-time engagement
with AI developments [Table 11].

Also, firms of different sizes look for AI ideas in different sources. Smaller companies are more likely to rely on sources like social media (e.g., Twitter) and blogs. Interestingly, larger firms are inclined to engage with AI events, indicating a strategic focus on industry events for innovation. “Academic publications” are favored by medium-sized firms, showcasing a commitment to scholarly resources [as per Section 4.2.3].

As expected, larger companies (with 100+ employees, and even more so those with 1000+) tend to use AI tools for a longer duration compared to smaller companies, particularly those with 1 to 40 employees [Section 4.3.2].

According to the respondents, “document automation” (39%) and “legal research” (34%) emerge as the prime domains of AI application. This reflects a progressive approach to enhancing operational efficiency. Despite the limited adoption in certain areas like “intellectual property management”, “dispute outcome & risk predictions”, and “compliance & risk management”, the overall trend highlights a promising movement toward AI integration [Table 12].

Many companies (59 out of 103) run pilot programs or trial periods before implementing AI, which indicates that they are careful about handling the possible risks that come with adopting it. Nearly half use AI for both “internal tasks” and “client work” (47.6%), which makes sense. But it seems a bit surprising that fewer companies use AI just for “client work” (14.6%), as it could help improve services for clients [Tables 14, 15 in the Report].

The substantial majority (59.2%) of companies have used AI-based tools for less than a year, indicating a gradual yet steady transition. However, about half of the companies exhibit more cautious technology adoption, with a notable requirement for pre-approval (49.5%) among their lawyers [Table 16]. Interestingly, smaller firms appear to offer more autonomy to their lawyers in selecting technology. The presence of AI innovation departments is significant (in 44 out of 103 firms), suggesting a proactive approach to staying at the forefront of AI advancements, particularly in larger firms. Even though the number is small, the presence of dedicated AI positions (in 13 out of 103 firms) shows a commitment to leveraging specialized expertise for effective AI integration [Tables 18, 19].

Also, several challenges were identified by legal practitioners regarding AI adoption. Legal issues, including “liability and regulation”, are the most commonly cited challenge (80.2%), which is not unexpected. Simultaneously, the emphasis on “privacy and security” (66.8%) and “AI accuracy and reliability” (63.9%) reflects the need to address potential pitfalls in AI models. It is noteworthy that certain challenges, such as “labor market concerns” (11.4%) and “lack of explainability” (15.3%), emerged as less pronounced. Medium and larger firms report a slightly wider range of difficulties, which could be because of the more complex operational situations they handle. Still, the inclination of 31.5% of respondents toward AI’s potential in scaling their business signifies an awareness of AI’s transformative capacity in extending business reach and impact [Table 22].

Part 2: Applications of AI in legal practice, the associated risks and potential benefits

As per the report, AI-powered tools are already being used to automate tasks such
as “document automation”, “legal research”, “contract review”, etc. In the coming
years, AI is expected to have an even greater impact on legal practice, with the potential to revolutionize the way lawyers work. Regarding the impact of AI on the legal industry within the next 1 to 3 years, 53.2% of respondents agreed that AI tools would become integral to legal workflows, which highlights the industry’s recognition of AI’s transformative potential [Table 24].

There are some potential benefits of AI in legal practice, such as:

(1) Increased efficiency: AI can automate many of the time-consuming and re-
petitive tasks that lawyers currently perform. This will free up lawyers to focus
on more complex and strategic work.

(2) Improved accuracy: AI can analyze large amounts of data more quickly and
accurately than humans. This can potentially lead to better decision-making
and more favorable outcomes for clients.

(3) Increased access to justice: AI tools can make legal services more affordable and accessible to people who would not otherwise be able to afford them. Access to a law practitioner is expensive in many countries around the world. If an AI system can at least give preliminary guidance to ordinary people on simpler legal problems, they would be able to access justice much more easily than at present.

According to the report, the most preferred AI systems by the respondents were:
“document generators” (84.2%), “document summarization tools” (69.5%), “case
law analytics tools” (63.5%), and “compliance & risk management systems” (59.1%).
Among these, the respondents chose the “automated document generator” (41.9%) as the most necessary AI tool. This strong preference shows how crucial efficient document creation is in legal work. Document generators are highly valued, especially in medium and large firms (from 43.9% to 57.1%), and are more favored by US firms (54.8% vs. 36.2% for non-US firms). On the other hand, small and non-US firms emphasize “compliance & risk management systems” [Table 26].

The least chosen AI tools are “negotiation support systems” (21.7%) and “legal argument assistants” (27.1%). However, “case management systems” stand out as most appealing to firms with 100-999 employees as well as to litigation-focused firms seeking to handle their cases better, while medium-small firms (11-39 employees) find them relatively less attractive [Figure 28].

However, there are also some risks associated with the use of these AI tools in legal
practice:

(1) Bias: AI systems are most often trained on historical data created by humans, and this data can be biased. This means that AI systems can also become biased, which could lead to unfair outcomes for certain types of clients.

(2) Lack of transparency: It is difficult for lawyers to understand how AI systems work, and the internal workings of AI models are often undisclosed. Hence, the application of AI tools for tasks that involve crucial human decisions requires careful examination.

It is important to address these risks in order to ensure that AI is used safely and ethically in legal practice. Among the variety of AI tools, “predicting case outcomes” is favored by only about 7% of responses. This highlights the need for a careful approach to AI’s role in predicting case results. Notably, the prediction of AI-related risks for law firms’ insurance (15.4%) gets limited attention, reflecting the focus on AI benefits rather than risks.

In particular, I would like to add some comments regarding the application of Large Language Models (LLMs) in the legal domain. The integration of LLM-based AI tools, like ChatGPT, Bard, etc., into the legal industry has gained huge momentum since 2022. With applications ranging from the writing of agreements and contracts to legal research, document automation, and more, LLMs have the ability to expedite a variety of legal practices. LLMs can rapidly generate documents and gather information, relieving legal professionals of repetitive and mundane tasks. Moreover, they can expedite research and case preparation with their ability to comprehend complex legal texts quickly. However, there are some important factors to keep in mind: when used in certain contexts, LLMs also bring up issues and dangers. For instance, there may be inherent risks when applying LLMs for judgment prediction. The outputs generated by LLMs, if used unwisely, could bias human decision-making. The difficulty arises from the fact that the intrinsic biases of these models are often undisclosed or inadequately understood. Hence, while LLMs offer invaluable support in tasks like drafting and research, their applications in tasks involving critical human decisions demand heightened scrutiny.

Here are some additional recommendations for the coming years:

(1) Lawyers should be trained more on how to use AI tools and how to identify and
mitigate bias.
(2) There should be a dialogue between lawyers, technologists, and policymakers
to develop standards and best practices for the use of AI in legal practice.
(3) Common people should be educated about the potential benefits and risks of
AI in legal practice.
(4) AI could also be used to create more personalized legal advice and services.

Jakub Harašta, Faculty of Law, Masaryk University, Czech Republic

INTRODUCTION

I will start this commentary with a personal confession: As a rule, I get over-excited
by scientific achievements, while at the same time, I am sceptical about the real-
life impact they might have in the future. The development of artificial intelligence
and its implications for the legal industry is no exception. As a researcher, I enjoy
reading about recent advancements in legal information retrieval, legal question
answering or argument modelling. However, as a lecturer, I often bore my students by emphasizing the flaws standing in the way of practical application, frequently related to problematic scalability, unclear contribution to legal practice or low acceptance of any given technology by professionals.

The confession brings me to the following statement: All who read, teach or re-
search legal tech and related changes in conducting legal practice have strong
opinions on the issue. These opinions are often based on anecdotal evidence from
consultations, research activities, and dialogues (not research interviews!) with
people of similar interests.

In the grand scheme of things, we often lack solid quantitative data.3

The authors of the report – Michał Jackowski and Michał Araszkiewicz – finally allowed us to confront our opinions with meticulously collected data. Such an opportunity is rare and must be used to build narratives interpreting the data vis-à-vis one’s experience. I cannot overstate how important the data are and how grateful we should be that someone collected them. Only because of the hard work of the authors can we indulge ourselves in searching for meanings, both in the current state of play and in future developments.

NEW FRIENDS...

While reading the report, I was genuinely surprised by the results related to three
survey questions:

1. Prevalence of AI-based tools or solutions.

2. Resources required for successful adoption of AI-based tools or solutions.

3 Qualitative analyses are not scarce. Unfortunately, the potential for generalization of their results is limited. See e.g., SOUKUPOVÁ, Jana. AI-based Legal Technology: A Critical Assessment of the Current Use of Artificial Intelligence in Legal Practice. Masaryk University Journal of Law and Technology, 2021, vol. 15, no. 2, pp. 279–300.

3. The type of AI system desired by law firms.

First, the data suggest artificial intelligence tools are more prevalent in legal prac-
tice than expected. Approximately half of the sample (103 firms) uses AI-based
tools or solutions, with document automation, legal research and information
retrieval being the leading fields of practice where law firms implemented these
tools. Additionally, companies often use their tools for internal organizational tasks
and tasks related to client work.

From my experience in consultations with law firms’ representatives, the urge to ‘do AI’ is intense. However, representatives often add ‘just not now’ in the same breath. They often state that ‘others’ (e.g., clients) push them towards being ‘more cutting-edge’ and ‘more innovative’. The pressure is often counterproductive, as naturally conservative lawyers4 avoid ‘innovation for innovation’s own sake’ and seek clear added value in implementing AI-based tools or solutions.

The data leads me to believe that some companies are either readier or braver
to embrace the change brought forth by AI. The readiness may be influenced by
general market readiness (especially in the USA), data availability issues (English
vs. under-represented languages), fiercer competition, and pressure to optimise
companies’ activities. Additionally, embracing the change may be motivated by
trying to appear ‘more cutting-edge’ to draw in a specific clientele. Data suggest
that larger companies were significantly more likely to use AI tools longer relative
to smaller companies, which unfortunately sheds little to no light on the factors
behind the decision to implement.

The second surprise concerns the resources or support lawyers require to adopt
and implement AI technologies successfully. Most answers were related to internal
AI usage guidelines, training programs and security/privacy improvements. R&D
investment placed dead last. The result shows me that lawyers are not concerned
with the maturity of the existing technology but mainly with safe ways of using it.
To me, this shows awareness of the field and related issues. When I talk with lawyers (whether attorneys or judges), they often cite reservations about AI performance, which they deem either significantly inferior to that of humans performing the same task or impossible to compare to human performance in a meaningful way.

The survey shows more mature thinking about AI in the legal industry. We have technology that may be used meaningfully (especially with the meteoric rise of Large Language Models). However, we still have to figure out how to implement it in any

4 See e.g., BROOKS, Chay, Cristian GHERGES and Tim VORLEY. Artificial Intelligence in the Legal Sector: Pressure and Challenges of Transformation. Cambridge Journal of Regions, Economy and Society, 2020, vol. 13, no. 1, pp. 135–152.

law firm’s everyday workflow. According to the survey, this can be done through guidelines or training programs. It is a clear need addressed from lawyers to law firms, from individuals to institutions.5 The survey suggests a clear connection between the higher exposure to AI-based tools or solutions in companies with 1000+ lawyers and their ability to maintain robust internal mechanisms for developing guidelines and providing training. I expected institutional inertia to play a significant role here, with larger law firms being less flexible to innovate. However, the survey suggests the opposite: larger law firms can dedicate more internal resources to, e.g., training.

The third surprise relates to a question about the type (function) of AI system
lawyers would like to implement under the assumption that it is highly accurate and
safe. Leading answers include a document generator, a document verification tool,
a document summarization tool, and a case law analytics tool. Such results again
show the state of discussion as relatively mature, as these issues are strongly
related to the often-repeated lower-level tasks that every lawyer needs to address
daily. In my interpretation of the results, the more complex tools appeared less
often. I believe that automation must start from lower-level support tasks to ease
the cognitive load on individual lawyers. Tools that make reading a document more
pleasant by highlighting the most-cited parts, or that summarize a case so a
paralegal can assess whether it is worth reading, are essential. If reliable, these
lower-level tools can positively impact technology acceptance by building or
enhancing trust.6 However, when presenting this opinion to attorneys and judges,
I was often confronted with pushback calling for complex supportive technologies
over the automation or support of lower-level tasks. Fortunately for me (and
unfortunately for my audiences), the survey supports my deeply held belief.

… AND OLD ACQUAINTANCES.

Aside from the abovementioned surprises, I want to draw attention to some unsur-
prising results, which, in my opinion, signify issues either preventing AI from being
implemented to its full potential or downplaying some of its potential large-scale
impacts. These come in three areas:

1. The haphazard way in which AI is implemented.

2. The expected impact focuses mainly on paralegal-level work (often perceived
as busy work).

5 For example, judges in the Czech Republic and Slovakia often complain that they are given access to legal information retrieval tools or databases (because either the court or the Ministry of Justice pays for licenses) but are left struggling to develop the needed skills without assistance.
6 I extend the argument that ‘knowledge enhances trust’, see BARYSE, Dovile. People’s Attitudes towards Technologies in Courts. Laws, 2022, vol. 11, no. 5.



3. Only a little attention is dedicated to AI’s impact on the labor market.

Firstly, the data suggest AI-based tools and solutions are often deployed with
little attention to how these tools benefit law firms. As mentioned above, in my
experience, many firms want to ‘do AI’. However, this desire is often not framed by
a sufficient understanding of the firm’s processes. Such an approach does not allow
the technology to be used to its full potential.

The survey suggests that over 40% of companies fully implemented their AI solutions
without engaging in a pilot program or trial period. Such a result is hardly
surprising, especially among smaller companies. Mid-size and smaller companies
often lack the staff to develop and manage internal processes (see the text above
on developing internal guidelines and staff training, which stems from the same
underlying condition). The initial investment in AI-based tools and solutions is
often significant. Meaningful inclusion in the firm’s workflow must be an absolute
priority. However, this aspect of legal tech is often overlooked, as both my
personal experience and the survey data suggest.

Such an approach stems from the idea that AI-based technology allows tasks to be
performed more efficiently, which 87% of respondents in the survey believe. On the
other hand, I think this perception is inaccurate. To be more efficient, one has to
know what is being done, with what investment (in time or money), and with what
efficiency. It is unreasonable to expect an AI-based solution to automatically
enhance performance if that performance was never measured before. Efficiency comes
from understanding processes, which small and medium-sized law firms often struggle
with.

Secondly, the impact of AI is still expected mainly on paralegal-level work. The
survey shows that more than 50% of respondents predict the leading effects of
AI-based tools and solutions will be the automation of paralegals’ tasks within the
next 1 to 3 years. While such a finding is hardly surprising, it is problematic to a
certain extent. The idea of technological development freeing lawyers from
repetitive tasks, allowing them to engage in jobs with higher added value, is
appealing. However, the fact that specific tasks are repetitive does not mean they
are easy to automate. Conversely, the fact that particular tasks are easy to
automate does not make them typical for paralegals.

The survey gives us a rather unsurprising result. However, we should be careful
about reading too much into it, as I believe paralegals were not represented
amongst the respondents of this survey. Paralegals have to demonstrate resilience
and persistence to climb up the ladder.7 Their effort is not busy work with little
to no actual value but a complex activity8 (albeit often susceptible to
automation9). Paralegals often do not get the credit they deserve, and the
narrative of easy and imminent automation of their tasks is harmful. After all,
most of law firms’ management positions are held by ex-paralegals.

Finally, I reach the final unsurprising outcome of the survey: the little attention
paid to the impact of AI-based tools and solutions on the labor market. In the
survey, only 11.4% of respondents perceived this as one of the top challenges
facing lawyers in the age of AI. The low level of attention is not surprising, as I
encounter it daily when discussing AI with attorneys, judges or students. As a
rule, individuals believe that most of their work has high added value and cannot
be easily automated. In response, I often point out that most of us could automate
at least 20% of our work-related activity.10 If we do this for five employees, we
free up enough capacity to absorb one additional person’s workload. In other words,
we have freed up 1.0 FTE that is no longer needed to handle the same total
workload. Automation will inevitably lead to lawyers losing their jobs.
Public-sector employers especially (but not solely) will be hard-pressed to cut
their budgets. The impact on the labor market will be significant. The survey
supports my opinion that this issue is under-represented in public discourse
despite its glaring importance.
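The back-of-the-envelope arithmetic behind this claim can be written out in a few lines; the 20% automatable share and the team of five are the illustrative figures used above, not survey data:

```python
# Illustrative calculation only: the 20% share and the headcount of five
# are the example figures from the text, not results from the survey.
headcount = 5
automatable_share = 0.20      # each person can automate ~20% of their work

# Freed capacity across the team, expressed in full-time equivalents.
freed_fte = headcount * automatable_share
print(freed_fte)              # prints 1.0
```

The same total workload can then be handled with one fewer full-time position, which is the point the paragraph makes.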

WHAT NEXT?

The text above sums up the biggest challenges of developing and deploying AI-
based tools and solutions.

Currently, the discussion is often controlled by buzzwords and the perception of
AI as the answer to all the problems of humankind. On the other hand, notable
results presented at conferences often lack clear practical benefits. The situation
makes the discussion opaque and turns the effort to build realistic expectations
into lessons in clairvoyance. Our effort should be dedicated to more transparent
communication of AI benefits in existing institutional processes and of the role of
AI-based tools and solutions in developing new ways to ‘do things’. The focus on

7 See the need to rethink how young lawyers acquire skills, in SUSSKIND, Richard. Tomorrow’s Lawyers. Third Edition. Oxford: Oxford University Press, 2023, pp. 230–232.
8 With even the task of ‘retrieving relevant documents’ being multi-layered, see e.g., VAN OPIJNEN, Marc and Cristiana SANTOS. On the concept of relevance in legal information retrieval. Artificial Intelligence and Law, 2017, vol. 25, no. 1, pp. 65–87.
9 See WEBB, Michael. The Impact of Artificial Intelligence on the Labor Market. SSRN, 2020. Available at https://dx.doi.org/10.2139/ssrn.3482150, p. 40.
10 See the similar argument raised by SUSSKIND, Richard. Tomorrow’s Lawyers. Third Edition. Oxford: Oxford University Press, 2023, p. 97.



the efficiency of AI without understanding what is to be improved is a mirage, not
a goal.

Additionally, automation will have a significant impact on the labor market. The
narrative of freeing up the highly educated workforce is automatically seen as
beneficial. However, it will send ripples throughout the field of law, affecting
the market of legal service providers,11 providers of pro bono services,12 and the
universities providing specialized legal education.13 As a society, we must become
better at addressing disruptive technological challenges with a high impact on
society. More attention must be paid to the labor market impacts of large-scale AI
deployment (both in law and generally).

Marc Lauritsen, Capstone Practice Systems, USA

Michal Jackowski and Michal Araszkiewicz (along with their fellow researchers) have
done a great service by arranging this survey and building statistical models to
interpret the results, using a logistic regression approach.
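The report does not spell out its model specification, but a logistic regression of the kind mentioned here estimates how a predictor (say, firm size) shifts the probability of a binary outcome (say, AI adoption). As a rough sketch with entirely hypothetical data (the firm sizes, adoption labels, and learning settings below are invented for illustration, not taken from the survey):

```python
import math

# Hypothetical data (not from the survey): log10 of firm headcount
# against whether the firm reports using AI (1 = yes, 0 = no).
sizes = [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]
adopted = [0, 0, 0, 1, 1, 1]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """One-feature logistic regression fitted by batch gradient descent."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # residual for this firm
            grad_w += err * x
            grad_b += err
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

w, b = fit_logistic(sizes, adopted)
p_small = sigmoid(w * 1.0 + b)  # predicted adoption probability, ~10 staff
p_large = sigmoid(w * 3.0 + b)  # predicted adoption probability, ~1,000 staff
```

On this toy data the fitted slope is positive, so the model assigns a higher adoption probability to larger firms; the survey’s actual models would of course use its real responses and more predictors.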

The survey appears to have been well designed and executed, and provides a
useful benchmark of attitudes and practices early in the generative AI revolution
(the spring of 2023). The report is thoughtful and painstaking. It highlights a broad
range of opinions, views, and behaviors in the business and legal community.

Most who actively follow the legal tech scene have visibility into just a small number
of firms, so the great range of organization types and sizes is most welcome. Nearly
as many firms with 1-10 employees as those with 1,000 or more responded, covering
a mixture of US and non-US firms.

Over half of the respondents reported already using AI, and people are clearly not
hesitant to talk about their ideas and experiments.

11 See SUSSKIND, Richard. Tomorrow’s Lawyers. Third Edition. Oxford: Oxford University Press, 2023, pp. 111–113, or HONGDAO, Qian, Sughra BIBI, Asif KHAN, Lorenzo ARDITO, and Muhamad Bilawal KHASKHELI. Legal Technologies in Action: The Future of the Legal Market in Light of Disruptive Innovations. Sustainability, 2019, vol. 11, no. 4, and also REPLOGLE, Tyler J. The Business of Law: Evolution of the Legal Services Market. Michigan Business & Entrepreneurial Law Review, 2017, vol. 6, no. 2, pp. 287–304.
12 See SUSSKIND, Richard. Tomorrow’s Lawyers. Third Edition. Oxford: Oxford University Press, 2023, pp. 138–141, or THOMPSON, Darin. Creating New Pathways to Justice Using Simple Artificial Intelligence and Online Dispute Resolution. International Journal of Online Dispute Resolution, 2015, vol. 2, no. 1, pp. 4–53.
13 See CONNELL, William and Megan HAMLIN BLACK. Artificial Intelligence and Legal Education. The Computer & Internet Lawyer, 2019, vol. 36, no. 5, pp. 14–18, or SAVELKA, Jaromir, Matthias GRABMAIR and Kevin ASHLEY. A Law School Course in Applied Legal Analytics and AI. Law in Context, 2020, vol. 37, no. 1, pp. 134–174.



Notable points

I found these points most notable:

1. The most frequently indicated tasks that AI could enhance were legal research
(79.7% of respondents), document review (72.1%), and contract drafting (55.8%).

2. The most frequently indicated fields of AI tool implementation were document
automation (39%) and legal research (34%).

3. The most frequently indicated use of AI was streamlining repetitive tasks and
improving efficiency (87%). The second most common choice, chosen by more
than half, was human error reduction (56.5%). The rarest choice was enabling
better collaboration between legal professionals and clients (15%).

4. The most preferred AI uses were document generation (84.2%), document
summarization (69.5%), case law analytics (63.5%), and compliance & risk
management (59.1%).

5. The most frequently indicated factors to consider were data security and pri-
vacy (79.6%), cost (75.5%), and ease of use (61.7%).

6. The most frequently indicated challenges were legal issues (legal liability and
regulation - 80.2%), privacy and security (66.8%), and AI accuracy and reliability
(63.9%), with different types of firms perceiving these top challenges similarly.

7. The future impacts of AI chosen by large proportions of respondents were:
AI-enabled tools becoming an essential part of legal workflows (53.2%), automated
paralegal tasks (53.2%), and widespread adoption of AI across various legal
practice areas (49.8%). The option “ethical and legal debates” was chosen by 45.3%.

8. The most frequently indicated resources/support options, selected by more than
half of the respondents, were: the development of internal policies and guidelines
for AI usage (61.7%) and comprehensive training programs for lawyers and staff
(52.2%).

9. Only 13 entities said that they employ people in positions specifically dedi-
cated to AI, such as legal engineers or prompt engineers.

10. 31.5% of the respondents expect AI to help scale their businesses.

Interesting correlations appeared, e.g.:

– Companies in the U.S. were typically more likely to use AI technology, as were
the smallest companies (1-10 employees) and those with 100+ employees.
Medium-sized (11-99 employees) companies implemented the technology less
frequently.

– Openness to innovation appears to correlate with company size.

– Vendor reputation was a much more important consideration for firms from
the USA.

– Ease of use is less important for bigger companies (100 employees and more)
than smaller ones.

– In the case of the largest firms (1,000 employees and more), as many as 92% of
the answers were “technology must be preapproved”, while in the other size
groups this answer was between ca. 23% and ca. 48% of the cases. The larger
the company, the higher the proportion of “more restrictive” responses.

– There are significant differences between respondents from the USA and those
from other countries regarding which AI tool is most necessary. For instance,
automated document generation was deemed necessary by 54.8% vs 36.2%.

Words Words Words

Lawyers and paralegals are wordsmiths, who spend most of their time reading,
writing, listening, and speaking. In other words, consuming and producing texts. So
it’s not surprising that about 58% of the survey responses relate to tools for work-
ing with documents. The use most frequently identified was automated document
generation (41.9%).

We lawyers tend to live in Word, Outlook, and similar tools. Texts are pervasive.
One early observer suggested that: “It should not be too surprising if law ends up
leading the parade in the work-as-text movement, and if legal technologists
increasingly find themselves understanding the law’s constitutive processes in
documentary terms. Text (inevitably open-textured) and technique, after all, define
the context within which the architects of legal technology must operate.”14

Quibbles

Despite the recent explosion of self-declared generative AI experts, the topics
explored in this survey are likely new to many of the respondents. That may help
explain a certain lack of imagination I detect in the answers.

14 LAURITSEN, Marc. Technology Report: Building Legal Practice Systems with Today’s Commercial Authoring Tools. Artificial Intelligence and Law, 1992, vol. 1, pp. 87–102.



There’s of course a danger of making too much of impressionistic responses to a
survey of this sort. And I have just two minor complaints:

Asking questions like these – “What percentage of your firm’s workload comprises
mundane and repetitive tasks, as opposed to tasks that require in-depth legal
knowledge and strategic thinking?” and “Can you identify the most common mundane
and repetitive tasks that your legal professionals handle on a regular basis that
AI could enhance?” – reinforces a questionable narrative that even the latest AI
tools are mostly suited for ‘mundane’ work. What counts as ‘routine’ is, of course,
itself a tricky question.15

It seems that effectiveness was not among the possible answers to the question:
“What factors do you consider most important when evaluating AI solutions for
your firm?”

What’s to come

We’re still very much in an exploratory phase, the opening act of a rapidly evolving
drama about law practice. Our infatuations will likely remain superficial for a while.

Flourishing in this era will require new intimacy between humans and machines. I’ve
suggested thinking about that in terms of phenomenology.16 And I recently read
a brilliant discussion about the implications of AI for scientific reasoning that can
largely be applied to the work of lawyers.17

We will need to figure out how best to model legal workers’ inner cognitive worlds
and choreograph their interactions with the outer worlds of computational models.
That seems to require attention to the nuances of human experience when working
with machine intelligence.

Should be fun!

15 See e.g. Computational Intelligence and the Paradoxes of Legal Routine (1990).
16 Toward a Phenomenology of Machine-Assisted Legal Work (2018).
17 HOPE, T., DOWNEY, D., WELD, D.S., ETZIONI, O. and HORVITZ, E. A Computational Inflection for Scientific Discovery. Communications of the ACM, 2023, vol. 66, no. 8, pp. 62–73. Available at https://dl.acm.org/doi/pdf/10.1145/3576896.



Tomer Libal, University of Luxembourg, Enidia AI

The survey examined the adoption of AI in law firms of diverse sizes, types, and lo-
cations, with nearly 30% located in the USA. Although the survey does not explicitly
define AI, it’s inferred that the term refers to tools rooted in computational statis-
tics, as this is usually the definition understood by legal professionals. These tools
range from processing documents and information retrieval to chatbots.

The survey initially investigated which legal tasks could be automated by AI, em-
phasizing repetitive and mundane tasks, which account for roughly 35% of the aver-
age firm’s workload.

The primary tasks in this category were legal research, document review, and con-
tract drafting. Although contract drafting was only ranked third in terms of rep-
etition and monotony, tools designed for document generation piqued the most
interest among law firms. Over 40% of firms in general showed interest, with the
figure rising to almost 55% in the USA. Conversely, there was a significantly higher
interest in compliance and risk assessment tools outside the USA (>12%) compared
to within the USA (<2%). These discrepancies suggest that factors beyond the na-
ture of the tasks influence the adoption of AI in law firms. The survey also confirmed
the industry’s emphasis on automating mundane and repetitive tasks, with a stag-
gering 87% of firms prioritizing this AI benefit.

Additionally, the survey reveals that repetitive tasks are more prevalent in large
firms, particularly in the litigation domain. In contrast, firms where a significant
majority of the staff are lawyers (>50%) tend to have fewer repetitive tasks, with
a difference of approximately 11%.

The results indicated a pronounced interest in the use of AI tools, with over 60%
of the companies showing enthusiasm, particularly among small firms (with fewer
than 10 members) and large firms (with more than 100 members). Of the companies
already leveraging AI tools, a minority have been doing so for more than a year
(less than 25%). This trend aligns with the recent surge in popularity surrounding
AI, notably with platforms like ChatGPT and generative AI.

The primary factors influencing AI adoption are security and privacy, cost, and us-
ability.



Concurrently, the predominant concern is related to legal issues, with over 80%
of respondents indicating this. This suggests that, as of now, no AI tool distinctly
stands out in terms of regulations and legal compliance.

A somewhat unexpected finding is the limited interest in tools designed to
facilitate communication between professionals and clients, with only around 15%
of respondents expressing interest. This observation is further corroborated by
the mere 7% of companies that have already implemented such solutions. While such
tasks are scarcely viewed as mundane or repetitive, it’s notable that certain
client communication tasks like client onboarding and meeting scheduling are
technologically well-established. Moreover, they align closely with the primary
concerns and challenges of AI as outlined in this survey. In addition, it’s worth
noting that these tools garnered more attention in the USA, suggesting cultural
differences might play a role in this disparity.

While privacy, security, and accuracy were highlighted as significant challenges
by over 60% of respondents, explainability ranked much lower, with less than 16%
considering it a primary concern. This is somewhat unexpected, given that
explainability is intrinsically linked to accountability – a factor one would
presume to be of utmost importance to legal professionals.

The prioritization of accuracy, absent a parallel emphasis on explainability,
implies a greater trust in the infallibility of the tools. In essence, this
suggests that the necessity to validate the outputs of these tools is perceived
as less vital than the innate trust in their operation.

From this perspective, legal professionals seem to prefer tools that might occa-
sionally produce minor errors—like those used in certain types of legal research or
document analysis—over tools that must be absolutely error-free and, as a result,
necessitate subsequent review (e.g., tools that generate legal documents). This ob-
servation appears contradictory, especially when considering that document gen-
eration tools are ranked highly in importance by legal professionals. The nuanced
priorities of these professionals present an intriguing paradox that might warrant
further exploration in the realm of AI tool adoption in the legal sector.

One should emphasize that it’s inherent to statistical-based AI tools to have a mar-
gin of error. To eliminate these errors, one must resort to deterministic computa-
tional methods or introduce a human review process to validate the AI’s outputs.



The fact that over 56% of respondents cited the reduction of human errors as a
primary benefit of AI integration is particularly thought-provoking when viewed in
conjunction with the earlier observations. This underlines the intricate relationship
between various AI applications and their perceived benefits and limitations within
the legal field.

In the case of legal research, the broad scope of AI’s search capabilities can indeed
diminish the likelihood of human oversights, despite the inherent statistical errors
associated with such searches. This is because the vastness of the databases and
the efficient processing power of AI can uncover nuanced or obscure details that
might elude human researchers. In contrast, for document generation, while the
efficiency gains are evident, it remains less clear how AI could mitigate human er-
rors. The precision required for legal documents, combined with the complexities
and subtleties of language, poses challenges that may not be as pronounced in the
realm of legal research.

Moreover, the fact that potential risks posed by AI to legal insurance are not widely
regarded as a significant hurdle (with only around 15% of law firms identifying it as
a concern) further complicates the dialogue. It suggests that while firms are wary
of errors in the content of legal documents, they might not necessarily associate
those errors with increased liability or risks significant enough to impact insurance.

In sum, these findings emphasize the importance of distinguishing between AI
tools where minor inaccuracies might be permissible and those where the margin
for error is minuscule.

Differentiating between these two categories of tools might better guide legal pro-
fessionals in their AI integration strategies, ensuring that the technology’s applica-
tion aligns with the unique demands and standards of each specific task.

When we consider AI tools where small error margins might be permissible, we’re
primarily looking at applications where the sheer volume of data makes human pro-
cessing impractical or highly time-consuming. Legal research and certain types of
document analysis fit neatly into this category. For these tasks, the computational
might of AI, which can swiftly sift through and analyze vast datasets, becomes its
most salient strength. In these scenarios, while a human touch might yield more
nuanced results, the scale of the task makes the efficiency and broad scope of AI
invaluable. The trade-off between quality and quantity is strategic and deliberate.

Conversely, when it comes to tasks where precision trumps all else—such as legal
document generation—the stakes are considerably higher. In these contexts, even
minor inaccuracies can have significant repercussions, be they legal, financial, or
reputational. The output needs to be of the highest quality, with quantity or speed
being secondary considerations. Here, the AI tools deployed must either be
incredibly sophisticated, ensuring minimal error rates, or they must be integrated
within a framework that includes rigorous human review.

In conclusion, the subjective bifurcation of AI tools into these two categories—
based on the relative importance of quality versus quantity—provides a meaningful
lens through which to assess the adoption and integration of AI in the legal realm.
It highlights the importance of choosing the right tool for the right task and
underscores the evolving relationship between humans and AI in professional
settings.

By applying this lens to some of the tools’ characteristics and challenges men-
tioned in the survey, we can expand subjectively on the results. For the first class of
tools, which might be used for broad data processing like general legal research,
explainability isn’t as critical. They are meant for tasks where slight imperfections
are tolerable, given the sheer volume of data processed. Here, the primary goal is
efficiency and coverage. Moreover, aspects like legal compliance, ethics, and pin-
point accuracy are not as crucial compared to the second class.

Tools in the second class are geared towards high-stakes tasks where precision is
paramount, such as legal document generation. Here, explainability is of utmost
importance. Users need to understand the AI’s decision-making process to ensure
it is consistent with legal and ethical standards. Other challenges, especially legal
compliance and accuracy, also come to the forefront given the tasks’ sensitivity.

When looking at the longevity and maturity of AI tools in the legal field, it’s plau-
sible to infer that tools from the first class, with their fewer challenges and well-
established technologies, have been in use for a more extended period among the
survey participants. However, the landscape is evolving with the advent of genera-
tive AI. Despite being a recent entrant, there’s a growing number of companies
offering legal document generation tools harnessing this technology. But given
that these tools fall into the second class, they inherently come with concerns about
legal compliance, ethics, and accuracy, as they’re still relatively unproven.

Similarly, case law analytic tools, with their noted demand by over 63.5% of law
firms, would also fit into the second class due to their precision-centric nature.
The rise of platforms like ChatGPT being used by legal professionals underscores
the need for rigorous regulation and accuracy. The emphasis law firms place on
legal challenges — constituting over 80% of their concerns and accounting for
about 27% of all concerns across firms — further attests to the importance and
sensitivity of these tools.

The respondents’ decision to concentrate on repetitive and mundane tasks is
logical, as these are traditionally the areas where automation has shown the most
immediate benefits. The findings of the survey provide strong backing for this
focus. However, the evolving landscape of AI, particularly the advancements in
generative AI, opens the door to a broader range of possibilities.

The latest applications of generative AI in creative domains, such as communication
and marketing, showcase its potential to transcend the conventional boundaries of
automation. Enhanced versions of generative AI, including innovations like
chain-of-thought algorithms, could potentially revolutionize more nuanced areas of
law. Imagine a scenario where legal brainstorming sessions are complemented by
AI, bringing to the table vast amounts of data-driven insights and perspectives in
real time.

Looking at the broader picture, AI adoption might not be solely driven by the na-
ture of tasks or geographical considerations. The evolving job market and the skills
AI can potentially replicate play a crucial role in determining AI’s applicability. For
instance, the declining demand for legal secretary roles over the past two decades
provides a telling example. While other factors certainly play a role in such trends,
the increasing capabilities of AI tools cannot be overlooked.

By automating tasks traditionally performed by non-lawyers, firms can achieve
significant cost savings. As the technological landscape evolves, it’s evident
that the role of AI in the legal sector is poised to expand beyond routine tasks,
ushering in a new era of enhanced productivity and innovation.

Juliano Maranhão, University of São Paulo Law School, Brazil

Director of the Lawgorithm Institute of Research on Artificial Intelligence and Law

1. Impressions of the findings

The emergence of Artificial Intelligence tools applied to legal practice has
attracted great attention in the last decade, with the creation of a market of
legaltechs and lawtechs in many countries. The services provided by these
companies, the majority of them startups with innovative solutions but scarce
structure, have raised questions among chambers of lawyers and bar associations,
concerned with the threat to the legal profession, with compliance with the
requirement of expertise to perform legal services and, in some countries, with
the extent to which available online service platforms would violate marketing
restrictions or other ethical obligations for legal services.

Soon it became clear, though, that such a threat was not effectively present. The
first reason is that many of the available systems and services do not deliver on
the high expectations created around the technology and its capabilities, in part
due to the present hype and some exaggerated marketing by lawtechs, particularly
startups still not equipped with enough structure to provide long-term services.
The second reason is the understanding that the present tools are mainly
complementary to lawyers’ activities and not substitutes for human practice.

Against this background there is still much enthusiasm for the technology within
the legal field, and a market for consultancy on legal solutions and technology in
legal operations has appeared in many countries, since law firms and the public
sector have become aware of the risks of these investments and the limitations of
the solutions. However, although some companies have published reports on the
development of these markets, there is to date no empirical study revealing the
actual state of affairs in the adoption of AI by law firms that is methodologically
well-grounded and free of commercial interests. The present research is a first
step in filling this gap, already bringing very interesting findings.

The first of them is that although there is a widespread belief among law firms that the technology will be a great asset for the delivery of legal services in the near future, the actual adoption of AI is still in its infancy. For instance, only approximately half of the firms surveyed actually deploy AI systems, and a significant part use them only for internal affairs, not as part of a legal service delivered to clients. We also have to take into account that the boundaries between AI systems and simple automation are not always clear to laypeople (including the surveyed lawyers and partners). Just a few law firms have departments dedicated to implementing such innovative tools, the number of employees involved is still low on average, and almost none have employees fully dedicated to this activity. Besides, most applications are focused on repetitive tasks, such as document automation, legal research, information retrieval, case management and contract analysis (which is also usually limited to gathering relevant information from contracts), and most have been adopted recently (1-2 years).

The scenario shows that there is room for investment, but law firms are still cautious in procuring these solutions, even though most of them trust in the relevance of these tools for the near future. The reasons for this are also shown in the results of the survey. There are concerns about data security and the confidentiality of information; the costs are still high, which also relates, I believe, to the actual capacity of lawtechs to deliver what is promised in their marketing strategies; and there is the question of ease of use, since interfaces based on traditional machine learning methodologies are still technical and demand training of the legal team.

The opportunities are promising, and the survey also provides key information to lawtechs and companies willing to invest in this field.

2. Some projections

The research was launched and information gathered mostly with reference to AI tools using traditional machine learning models. But we have quite recently experienced the emergence of large language models, which has brought about the widespread use of generative AI, such as ChatGPT offered by OpenAI. The emergence of these systems raises two main points:

First, the former belief that AI systems are only going to perform repetitive tasks is now challenged, with a report by Goldman Sachs predicting the loss of 40% of legal positions in legal offices. Whether this is an accurate prediction will depend on the adaptation of law firms to the new tools that will be available, and it is likely that law firms will accelerate investment in, and evaluation of, tools based on foundational models fine-tuned and specialized for legal applications.

Second, due to user-friendly interfaces, the application of these systems is already widespread, and law firms will need to invest in consultancy and define governance principles for the use of AI by the legal staff. The application of these systems to generate specialized legal documents is still inadequate, with many limitations and inaccuracies. This demands a strict policy of human revision of generated legal documents. However, new research is showing that large language models may be successfully fine-tuned and trained on legal documents for specific legal tasks, with a dramatic improvement (reduction of mistakes). Very soon, a new generation of more trustworthy tools, based on fine-tuned and specialized applications, will be available, reducing the cost of investment and making use easier, which were two of the greatest reasons for hesitation by law firms shown by the research. There is still concern with data privacy and security, but it is likely that this problem can be overcome.

Hence, it seems that future inquiries about the actual use of these tools in law firms will show a more widespread adoption of systems implemented to perform more complex tasks, both internally and in the delivery of services to clients. Such predictions demand, though, further inquiry and empirical research to check their actual implementation. It is also interesting to verify which tools are deployed in courts, since law firms will also have to adapt to them. Another aspect of concern for future research is the degree of investment in policies for the responsible implementation of AI systems and in rules of governance and training among the legal staff, considering the tendency towards widespread use, not only of tools made available by law firms to their employees and partners but also of general systems which can be easily accessed and used.

Masha Medvedeva, University of Leiden, The Netherlands

Cautionary Tales and AI Realities: Deliberations on Integrating Technology in Law

Masha Medvedeva – m.medvedeva@law.leidenuniv.nl

eLaw – Center for Law and Digital Technologies & Department of Business Studies, Leiden University

The rapid growth of interest in artificial intelligence (AI) has led to its integration into various industries, including the legal sector. Historically, the legal field has been conservative in adopting technology due to the high cost of error and liability issues, but recent advancements in AI have gained significant attention. The survey indicates that approximately 50% of law firms utilize AI in their practice, with a significant portion having implemented AI within the past year. This recent adoption suggests that it might be driven by the growing popularity of Large Language Models (LLMs), such as ChatGPT and GPT-4, whether because law firms are more open to using such systems or because of their wider availability.

The survey covered firms of different sizes and from different locations, as well as with different proportions of lawyers at the firm. The analysis of the responses suggests that by far the most common mundane tasks performed at the firms are legal research and document review (Figure 1). Correspondingly, the most common answer to how AI could help lawyers overcome challenges is efficiency at repetitive tasks (Figure 6). However, when asked what they would like to implement in their firms, respondents showed the most interest in document generation, followed by document verification tools (Figure 9). This interest in document generation further confirms that the interest in adopting new legal technology might stem from the growing performance of LLMs; after all, GPT-4 has been claimed to be able to pass the bar.18

18 https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/



In fact, we see more and more legal tech companies that use language models as the foundation of their service; see, for instance, Casetext's CoCounsel19 or Uncover.20 However, the tendency to use LLMs for legal text generation is concerning, especially seeing that many law firms express interest in implementing it.

This inclination, while driven by the growing capabilities of such models, introduces significant apprehensions. Generative AI models operate based on statistical relationships between words within training data and lack any actual understanding of the world or, in fact, law. Given this architecture, they are also known to ‘hallucinate’, i.e., make information up. Even though these systems can be fine-tuned on more specific (e.g., legal) data and combined with other models,21 and there have been attempts to reduce such hallucinations,22 creating a completely accurate system is doubtful, and the integration of AI systems that do not have a foundation in factual or legal principles could profoundly impact legal practice.

In response to the survey question “What factors do you consider most important when evaluating AI solutions for your firm?”, the most common answers were data security and privacy, cost, and ease of use. Strikingly, performance was not among the answers, which is extremely concerning, though perhaps it was not included in the survey options. While it is not surprising that keeping data private is a big concern, it is at least as crucial to consider how well a system will perform with the specific data that the law firm deals with. What type of data was it tested on? Will it work as well on the firm's data? Can the system handle changes in laws, keep up with legal precedents, and understand shifts in how laws are interpreted? What types of errors does it make? Are they the same as humans make, since the system is trained on human data? Or are they different mistakes that might be harder to detect and mitigate? What are the potential costs of the system making mistakes, like generating incorrect text,23 misunderstanding information, or missing important details? Who is liable for those mistakes? These are all important questions to consider when selecting an AI solution.

The survey findings hint at a potential oversight: only half of the respondents say that the technology used by lawyers is required to be pre-approved before use,

19 https://casetext.com/
20 https://www.uncoverlegal.com/
21 Xavier Daull and others, ‘Complex QA and Language Models Hybrid Architectures, Survey’ (arXiv, 7 April 2023) http://arxiv.org/abs/2302.09051 accessed 15 Aug 2023.
22 Lewis and others (n 33); Baolin Peng and others, ‘Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback’ (arXiv, 8 March 2023) http://arxiv.org/abs/2302.12813 accessed 15 Aug 2023.
23 See, for instance, this report on the case about a lawyer using ChatGPT to prepare a court filing https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html



suggesting that law firms might not be fully appreciating the associated risks when embracing AI technology. This oversight may stem from an unwarranted assumption that highly accurate and secure AI systems are readily accessible. In reality, these systems often fall short of the desired accuracy and safety standards. This underscores the need for a comprehensive risk assessment during technology integration.

This is also why I am not sure whether the question about what systems law firms would like to implement, assuming that such a system is highly accurate and safe, is an entirely fair one. It is interesting to see what law firms would like to have in an ideal world (or if they had a genie), in order to see what the most desirable technology to develop would be. However, while the allure of the most common answer - document generation - is significant, the current accuracy and safety of such systems leave room for concern. While there are attempts to make them more accurate, achieving this in the near future is unlikely, at least not using LLMs; there are other ways of generating documents, from simply providing a template to filling it in with information retrieved from documents, that may not require such unreliable systems.
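The template-based alternative mentioned above can be sketched in a few lines. This is my own illustration, not a tool from the survey, and all names and field values are invented; the point is that fields (in practice produced by an information-extraction step) fill a fixed template, so no text outside the template can be hallucinated.

```python
# Illustrative sketch of template-based document generation.
from string import Template

# A fixed clause template: only the named fields can vary.
CLAUSE = Template(
    "This agreement is made between $party_a and $party_b on $date, "
    "for a total consideration of $amount."
)

# Invented example values; in practice these would come from an
# information-extraction step over the source documents.
fields = {
    "party_a": "Acme Ltd.",
    "party_b": "Example GmbH",
    "date": "1 January 2024",
    "amount": "EUR 10,000",
}

document = CLAUSE.substitute(fields)
print(document)
```

Note that `Template.substitute` raises a `KeyError` if a field is missing, a fail-loudly behaviour arguably preferable for legal drafting to a generative model that would silently invent the missing value.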

Information about the types of AI tools already implemented in law firms is insightful, yet it also raises some questions. Defining what falls under Artificial Intelligence has always been a complicated task. From Figure 10, representing ‘The most necessary AI tools’, it is clear that a wide range of systems are included, and they are likely to use different types of underlying technologies, although most are probably based on machine learning. While only half of the surveyed firms claimed to use AI, this is quite unlikely to be true, considering that most law firms today use AI-powered search systems. It is not clear whether this oversight is due to a misunderstanding of how search technology works, to an integration so deep that it is no longer seen as ‘AI’, or simply to the survey's answer choices. It is worth noting that although search may be a widely used AI system, it also carries risks and can have an impact on legal practice. For instance, decisions made in its design, the specific annotations, the ranking algorithms, and even the presence of errors during data sourcing, collection, or processing can have a significant influence when used at a large scale.24

Future progress in scientific fields related to AI will likely yield improvements in various AI technologies, including generative models. However, it is important to recognize that perfection will not be achieved, and over-reliance on imperfect systems could have negative consequences by potentially influencing the nature of legal advice and fostering a passive and ineffective methodology in certain facets of legal
24 L. Diver, P. McBride, M. Medvedeva, A. Banerjee, E. D’hondt, T. Duarte, D. Dushi, G. Gori, E. van den Hoven, P. Meessen, M. Hildebrandt, ‘Typology of Legal Technologies’ (COHUBICOL, 2022), available at https://publications.cohubicol.com/typology



practice. This could ultimately result in a failure to fulfill lawyers’ ethical and professional responsibilities and reshape the legal landscape in unintended ways.25

AI’s incorporation into legal practice is inevitable, but its adoption should be prudent. Legal professionals must recognize the scope and limitations of AI-enabled technologies. The survey suggests that law firms have very few employees responsible for AI innovation and adoption, and many law firms do not have any. This implies that they are likely limited in their ability to assess the risks of adopting specific technologies, even though they might rely on them in their practice. Proactive engagement in discussions, including not only lawyers and software providers but also specialists in legal technology who understand the risks and ethical issues that implementing such technologies carries, as well as in policy formulation about AI’s integration, is vital. Neglecting this involvement could lead to a situation where flawed AI, despite its inherent limitations, becomes ingrained within the very fabric of legal institutions and legal practice.26

Ugo Pagallo, Turin University, Italy

Law Firms Meet AI: Opportunities & Challenges Put in Perspective

Michał Jackowski and Michał Araszkiewicz have provided a detailed picture of the extent to which law firms have adopted AI tools around the world. I have taken their statistics at face value, and I think their work shows many interesting things. To deepen these ‘interesting things’, e.g., how the interplay between lawyers and philosophers may evolve, the gist of the figures provided by Jackowski and Araszkiewicz can be summed up in three points. Although clear differences exist among law firms due to their size - five sizes in the report - and region, e.g., US and EU, there are significant convergences and trends regarding openness, practices, and concerns.

First, there is indeed “a high openness among the surveyed entities to adopting AI technologies in their firm” and moreover, “a little more than a half of the companies have already implemented AI-based tools and solutions (103 firms, 50.7%).”

Second, the most popular AI-based tools or solutions are document automation
(39%) and legal research (34%). This outcome is confirmed by what law firms think
“about how AI can best help lawyers overcome these challenges.” As shown by Fig. 6,
“the most frequently indicated use of AI” certainly is not revolutionary. Rather, the

25 https://www.cohubicol.com/blog/casetext-cocounsel-openai-typology/
26 https://www.lawscot.org.uk/members/journal/issues/vol-68-issue-08/chatgpt-and-the-future-of-law/



overall idea is to optimize what law firms actually do by streamlining repetitive tasks
and improving efficiency (87%), or reducing human error (56.5%).

Third, we have a picture of what law firms deem the most relevant challenges facing lawyers. The top three challenges are (i) legal issues, i.e., liability and regulation (80.2%); (ii) privacy and security (66.8%); and (iii) AI accuracy and reliability (63.9%). Interestingly, some of the areas most debated in meetings and scholarly journals are relegated to the end of the list, e.g., lack of explainability (20.3%), after ethical issues (36%) and adaptation (34.4%).

It is apparent that the problems under scrutiny in scholars’ papers - on, say, trustworthy AI and the transparency of algorithms - are different from the problems of practitioners and lawyers (for a long while, I have been both). As a theoretician, over the past two decades, I have focused on the impact of AI systems on the law and on problems of ‘adaptation’, in particular, how technology reshapes or transforms pillars of the legal system; as a lawyer, the focus is often on more practical issues, such as preventing troubles of liability and regulation brought about by the use of AI tools. These differences between theory and praxis, scholars and lawyers, are here to stay; however, they should not obscure the dynamics of the process under scrutiny, i.e., how law firms are increasingly adopting AI tools in their workflow.

We already knew about such trends from the disruption of ChatGPT in 2023 and the adoption of some models by big firms in the US and UK. Considering the hazards of generative AI, e.g., Large Language Models producing misleading information that can lead to less well-informed users, it is noteworthy that new technological developments also catch on in legal business. Thanks to the statistical results of Jackowski and Araszkiewicz, we now have clearer ideas about the adoption of AI by law firms. All in all, it would be interesting - and even necessary - to repeat the experiment and update the results within, say, three years. My conjecture is that some results will be different. They regard, of course, not only the number of companies that have implemented AI-based tools and solutions, but also some relevant challenges facing lawyers, such as adaptation and ethical issues. Here, the results may be different because of the growing impact of AI on human societies, including the functioning of courts and the administrative corpus of legal systems. Jackowski and Araszkiewicz offer the reference work to check this new frontier of legal business.

Ken Satoh, National Institute of Informatics, Sokendai, Japan

1. Reaction to the findings of the survey:

(1) Are the results conforming with your expectations, or do you find some of them surprising?

LLI WHITEPAPER | Nº 3 (EN) | 2023 | 89


The findings say that the most frequently indicated mundane and repetitive task AI could enhance was legal research, and this conforms to my expectations; the trend of generative AI is in this direction. I am a bit surprised that contract proofreading does not rank so high, given that there are many legal tech companies working on this application. The findings say that the most important factors for evaluating AI solutions are data security and privacy, cost and ease of use. This result is understandable, but I am a bit surprised that other compliance factors, such as infringement of copyright, were not mentioned.

(2) How do you assess the overall state of implementation of AI in the surveyed law
firms?

The current trend of implementation supporting legal research and information retrieval is very natural, since AI technology mainly reduces such repetitive tasks.

(3) Do you have any commentary with regard to the differences associated with the
location of the law firm, its size, or area of expertise?

It is interesting to know that there are differences between US companies and non-US companies. If this were related to the difference between case-based countries and rule-based countries, then the same trend seen in US companies should be found in UK companies. So I am curious about the trend of UK companies.

(4) What is your opinion concerning how law firms address the risks and benefits of
AI in the workplace?

I believe that legal compliance will be a major concern when we use AI technology, since the EU is now considering an “AI law” which regulates AI tools to protect human rights.

(5) What is your opinion about their expressed preferences and needs concerning
the technology?

In Figure 9 of the Executive Summary Report, the three most preferred technologies are “document generator”, “document verification tool” and “document summarization tool”. So basically lawyers need document handling systems. This is understandable, since legal activities are based on document management. The current emergence of generative AI and LLMs could contribute to these demands, but we need to solve the problems discussed in the next section.

2. My predictions concerning the applications of AI technology in legal practice and the associated risks and potential benefits:

90 | LLI WHITEPAPER | Nº 3 (EN) | 2023


The sudden emergence of generative AI and LLMs has had a strong impact on various domains, including the legal domain. Allen & Overy, the 2nd-largest law firm in the UK and 7th-largest on Earth, is partnering with Harvey after a 3-month trial of its AI lawyer product.

Harvey will empower more than 3,500 of A&O’s lawyers across 43 offices operating
in multiple languages with the ability to generate and access legal content with
unmatched efficiency, quality and intelligence.

https://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey

Another success story is a report saying that GPT-4 passed the multiple-choice portion of the exam and both components of the written portion, exceeding not only all prior LLMs’ scores, but also the average score of real-life bar exam takers, scoring in the 90th percentile.

https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/

In my opinion, it would be possible to use GPT for relevant information retrieval, since it predicts the most relevant words in the given context. However, if we use GPT for judgments in statute-based countries, there is no guarantee that GPT will give a correct result, since GPT is not designed for logical reasoning.

In Japan, we use multiple-choice questions as part of the bar exam, and our research group creates data for COLIEE (Competition on Legal Information Extraction and Entailment) for retrieving relevant articles in the Japanese civil code given one legal question (named task 3 in COLIEE) and checking the entailment of one legal question given relevant articles (named task 4). We experimentally applied ChatGPT to solve task 4 and found that the correct rate for entailment is 60 percent. To solve the entailment task, the solver must perform logical reasoning, but the result shows that ChatGPT is not so good at logical reasoning.

Moreover, if we do not provide appropriate training data, GPT may give a false answer, and it will take more time to check it than for a human to make the answer. Problems of this kind have already occurred.

1. In May 2023, a lawyer who used ChatGPT submitted to the court a fictitious precedent created by ChatGPT.



“Lawyer apologizes for fake court citations from ChatGPT”

https://edition.cnn.com/2023/05/27/business/chat-gpt-avianca-mata-lawyers/index.html

2. In June 2023, two American authors filed a lawsuit against OpenAI in San Francisco federal court, alleging that OpenAI misused their work to “train” ChatGPT in violation of copyright law.

“Lawsuit says OpenAI violated US authors’ copyrights to train AI chatbot”

https://jp.reuters.com/article/ai-copyright-lawsuit-idCAKBN2YF17R

3. Brian Hood, Mayor of Hepburn Shire Council, was involved in a scandal related to foreign bribery within his company, but was never found guilty. However, ChatGPT says “In 2012, he pled guilty to one count of bribery and was sentenced to four years in prison,” which is a clear defamation.

“Australian mayor readies world’s first defamation lawsuit over ChatGPT content”

https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/

Therefore, my recommendations for the usage of current generative AI would be the following.

We need to provide accurate training data to avoid hallucination.

Training data must be legally/ethically appropriate to avoid litigation by other people.

To avoid logically incorrect reasoning, we should separate information extraction and reasoning based on extracted information, by providing manually proved reasoning modules and neural-based information retrieval. (For example, see Ha-Thanh Nguyen, Wachara Fungwacharakorn, Fumihito Nishino, Ken Satoh, “A Multi-Step Approach in Translating Natural Language into Logical Formula”, JURIX 2022: 103-112.)
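The separation recommended above can be sketched minimally as follows. This is my own illustration, not code from the cited work: the article texts are paraphrased stand-ins rather than actual Civil Code wording, and the token-overlap scorer is a placeholder for a neural retrieval model. The point is the architecture: a swappable retrieval step (task-3 style) feeds a manually written rule module (task-4 style) whose reasoning can be inspected, instead of letting a statistical generator produce the legal conclusion.

```python
# Illustrative sketch: separate extraction (retrieval) from reasoning.

ARTICLES = {  # paraphrased stand-ins, not actual statutory text
    "art_A": "a minor must obtain the consent of a legal representative "
             "to perform a juristic act",
    "art_B": "a manifestation of intention induced by fraud may be rescinded",
}

def retrieve(query: str, articles: dict) -> str:
    """Placeholder retriever: return the article id with the largest token
    overlap with the query. A neural retriever would replace this scoring."""
    q = set(query.lower().replace("?", "").split())
    return max(articles, key=lambda k: len(q & set(articles[k].lower().split())))

def needs_consent(facts: dict) -> bool:
    """Manually written rule module for the retrieved article: a minor's
    juristic act requires the legal representative's consent."""
    return facts["is_minor"] and not facts["has_consent"]

query = "Does a minor need the consent of a legal representative for a juristic act?"
article = retrieve(query, ARTICLES)                               # extraction step
answer = needs_consent({"is_minor": True, "has_consent": False})  # reasoning step
print(article, answer)  # → art_A True
```

Because the rule module is hand-coded rather than generated, a wrong answer is traceable either to retrieval or to an identifiable rule, which is the checkability the recommendation is after.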



Burkhard Schäfer, University of Edinburgh, United Kingdom

The first overall impression is that the study confirms past experiences with the use of technology in the legal services industry. Indeed, the results could as well have come from the 1980s/90s, when legal technology first made headlines and created concerns similar to those we have now. Then and now, the “low hanging fruits” - repetitive, mundane, low-value tasks - seem to be the ones that lawyers are most willing to outsource to technology.

However, when looking at some of the answers in more detail, a more nuanced picture emerges - which may also point to future research that helps to unpack some of the answers further.

Even though the survey questions gave the respondents some paradigmatic examples of functions that legal technology may perform, such as document management and search, there was no introduction to specific tools, technologies or methods.

This leads to a methodological question about the study: can we be sure that the participants had a shared understanding of what AI is, and is there a danger of systematic divergences in the way some of the questions were interpreted, depending on whether the respondent was, e.g., a dedicated technologist within a larger law firm or a “traditional” lawyer working in a small firm?

Section 4.1.2, for instance, has a significant number of firms stating that they do not currently use AI tools - and some do not intend to use them in the future either.

If one uses a broad definition of AI, this is most certainly wrong. It is almost certain that the “no-AI” law firms have a spam filter, or use Google, at least sometimes, for research, and of course the information retrieval tools that Westlaw (or its local equivalent) provides. Document drafting was noted as a routine task - and everything from document wizards to voice-to-text to the ML-based Grammarly, or a simpler spell-checker, will most certainly already be in use.

That we may be dealing here with different definitions of AI for the purposes of the study is further reinforced in 4.3.1.2. The overwhelming majority of respondents claim they introduced AI less than 5 years ago - yet most of the examples I just gave are decades old. The reason that so many firms claim to have introduced AI only recently could be increased awareness of AI in law, or merely the way in which technology vendors now label their products. A 2019 study, e.g., showed that a significant percentage of startup companies that offer or use “AI” have little if anything to do with the technology. Equally, the Covid experience, and the increased and more conscious use of supporting tools, may have created the impression that these were “new” technologies when in reality they merely made existing solutions more visible. In any case, some care has to be taken when analysing this data, taking into account that the perception of AI use, actual AI use, and the very definition of AI may differ between respondents.

In particular, it is possible that:

– Some but not all respondents may have made a distinction between “generic” AI and “legal AI”, and only think of tools that are specific to the legal roles they perform. Such a distinction was more natural with GOFAI - e.g. Bench-Capon’s notion that “true” legal AI must have at least some isomorphic formal representation of legal rules. This distinction is more difficult to draw for machine learning-based tools, though here one could distinguish systems that are trained specifically on legal texts and data. For the now emerging “foundational” AIs such as ChatGPT, this distinction becomes even more fragile. While some systems may require fine-tuning of pre-trained systems for law-specific tasks, other applications may not require this. Section 4.3.1.4 may cover some aspects of this, but the distinction between “client facing” vs “internal” does not map perfectly onto this topic. Client-facing AI could be a chatbot that does a first interview - without any legal knowledge - while a purely internal use may ensure compliance with professional requirements, and therefore contain significant “legal” knowledge.

– Some but not all respondents may not know enough about AI to realise how ubiquitous it already is. In other words, once AI works in the background so that nobody notices it any longer, it stops being “proper AI” - AI is always in the future, so to speak.

– The different environments in which they work may give them different operational definitions of what they/their firm mean by AI. From my experience, some dedicated AI/knowledge engineering staff in larger firms often think of “proper AI” as the next thing they want to develop - they see themselves as separate from “IT support” and have to make the case to their employer that what they do is novel and goes beyond the “off the shelf solution”. Others take the opposite approach and emphasise the “AI nature” of what they do, even for rather mundane issues such as client billing. So even experts may use vastly different definitions of AI, not out of ignorance but because of the way their roles are defined.
These diverging definitions and conceptualisations can, in my experience, be an impediment to the efficient uptake of AI. One of the biggest drivers currently is a “fear of missing out” (FOMO): everybody talks about AI, our competitors do it, therefore we must do “something” too. This could explain why so many answered that they introduced AI very recently - possibly without even noticing how much they had already been using, or making sure it was the right tool for their needs. A shared vocabulary may help them to:

A) See beyond any vendor hype and resist acquiring unsuitable tools just because of the label;

B) Ease into adopting more advanced technologies when they realise how much they are already using;

C) Understand what their real needs are - the separation between “real” legal tech and “back office” tech is in turn based on specific, and contestable, theories of what legal work and legal knowledge are, and the less glamorous back-office AI can be more relevant to a firm than a “we predict court decisions” tool.

A surprising answer to 4.1.1.2, for me, was that “IP management” was one of the tasks where AI has little to offer. Generally, IP management is one of the success stories: trademark search used to take hours; now AI-based businesses like TrademarkNow do this search in seconds. Patent search too benefits greatly from advances in AI, e.g. visual search of graphics. Identifying infringing online content can use everything from highly sophisticated bespoke tools such as SnapDragon’s IP management to simply using Google image search.

As a second point, IP was of course the paradigmatic example of Lessig’s notion
that “Code is Law”, and IP management and enforcement uses “legal AI” extensively
as an inbuilt property of digital artefacts.

There are a number of possible explanations for this counterintuitive finding, some
supported by, and some in conflict with, other findings:

1. An ambiguity in the term “IP management”. Did the respondents interpret this
as “management of the client’s IP” – a task often taken on by law firms – or
as managing the firm’s own IP? If they understood it as the latter, then this could
simply indicate that law firms traditionally do not see themselves as genera-
tors of IP-protected material. But this too would be interesting. In an AI world,
the data that law firms hold will be increasingly valuable, and could also for
them become the “new oil”. This could involve using their own repository of
licenses, contracts or submissions made on behalf of their clients as input for
training AIs, or otherwise as a source of new AI-enabled income streams. More
widespread use of machine learning in particular could change this category
in the future, as lawyers increasingly realise the range and value of their own
IP in an AI environment.

2. AI is used extensively in IP, just not by law firms. Instead, new types of AI-
enabled “legal services providers” have come into existence; businesses like
TrademarkNow or SnapDragon now compete with lawyers for business.

2.1. This could also explain why “compliance tech” like DRM is excluded – it is
not a concern for law firms, even though such tools are part of IP management
and perform quasi-legal tasks. In this case lawyers may well be aware of
these technologies, but, when asked the question in the interview context, im-
mediately read it as “supporting me in my work” rather than as a more general
prediction about AI use.

3. Many established forms of “search” are not considered proper AI – they are
by now too routine even to be noticed, and “AI-based research” is reserved for
new tools that come explicitly with the AI label – as noted above, more an
impression generated by vendors.

4.3.1.3. Pilot programs:

Again under the caveat that people may have had very different ideas of what an AI
system is, this was an interesting answer. For those firms without a pilot especially,
how would they know they got value for money? And for those with a pilot, it would
be interesting to see what criteria for evaluation were used – from merely psycho-
logical ones (“the users say they like it”) to hard data that quantifies success. It is
also unclear whether these pilots evaluated success, or the lack of it, only internally,
or whether any attempts were made to systematically involve client feedback.

One obstacle I observe is that ambitious IT projects, including but not limited
to AI, are often “always deemed to have been a success” unless they cause cata-
strophic failures that are too obvious to hide. By contrast, under the EU AI Act
constant monitoring of performance, and a duty to report failures, is a key aspect
of creating trustworthy AI.

As a sector, we need an objective, and shared, understanding of how to evaluate
the success of a legal AI project – this should then make test phases the norm,
or rather lead to a “test phase in permanence”. This is also a challenge for the
legal AI research communities: (legal) information retrieval has developed clear
standards for evaluation that other forms of legal AI lack, partly for good reason,
as these may depend on complex notions of justice that are difficult to quantify.
Nonetheless, the worrying lack of systematic evaluation that the survey reports
indicates that there is a particularly strong need for standards and method training
in this field.
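To make the point concrete: a minimal evaluation harness, in the spirit of the information-retrieval standards mentioned above, could score a pilot's output against expert relevance judgments using precision, recall and F1. The document IDs and judgments below are invented for illustration, not taken from the survey.

```python
def evaluate_retrieval(relevant: set, retrieved: set) -> dict:
    """Standard IR metrics for one query: precision, recall, F1."""
    true_positives = len(relevant & retrieved)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical pilot data: documents an expert marked as relevant for a
# query, versus what the tool actually returned.
relevant = {"doc1", "doc2", "doc3", "doc4"}
retrieved = {"doc2", "doc3", "doc5"}
print(evaluate_retrieval(relevant, retrieved))
```

Aggregating such scores over a representative query set, and tracking them over time, is what turns a one-off pilot into the “test phase in permanence” suggested here.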

I found 4.3.1.6 and 4.3.1.7 difficult to analyse – how can 44 firms have an AI innova-
tion department, but only 13 have at least one dedicated AI person? Here too, some
contextual interpretation of the questions by the respondents seems to have taken
place: some may have counted committees or working groups on which anyone
can serve, while other respondents may have understood the question more nar-
rowly, as the use of dedicated groups of specialists.

I also found the correlation between size and AI adoption telling – size matters.

For me this indicates that the large firms can now be left to their own devices; the
greatest impact in helping the digital transformation of the justice systems will
come from targeting smaller firms and helping them with a program focussed on
their needs and resources. This includes training – but also a delivery of training
that responds to their situation (a large law firm can send staff to training more
easily than a one-person outfit, for obvious reasons). Smaller firms are also particu-
larly vulnerable – they face “technological lock-in” more often than large firms, and
are more affected by staff turnover when the one person who knew the system
leaves.

A comprehensive support package would therefore allow them to learn in their own
time, emphasise intelligibility and user-friendliness over capabilities, and look at
the entire environment to create sustainable solutions for small firms.

There were some interesting and unexpected replies regarding the regulatory en-
vironment for legal technology in the answers to 5.1.1.1 ff.

“Ethics” is not a predominant concern for firms, but “regulation” is – for a regulated
profession where ethical standards are often “enforceable”, that is surprising. Then
again, maybe some respondents did not include professional ethics under “ethics”
but thought of it as an aspect of “regulation and law” instead.

There is a similar problem with “explainability”. Only a minority have this as a con-
cern, but it plays an important role in three of the other fields that do concern them
– data protection, ethics and regulation (where the AI Act will make explainability
a requirement for all high-risk, that is also all legal, applications). There may there-
fore be a misconception on the side of firms regarding the role of explainability, or
the duties that come with the GDPR and the AIA.



5.1.1.2 and 5.1.1.3 fit in many ways with the earlier answers regarding routine tasks.
Together they indicate that the profession expects Richard Susskind’s vision from
1989 to become reality by 2029 – with most routine and paralegal tasks auto-
mated.

As past predictions have proved premature, it may be worth thinking about the
reasons why things may be different this time. One change from the 1990s is that
we have all got more used to carrying out complex transactions online, assisted by
technology. Smart-ish online banking tools did not automate routine transactions;
rather, they enabled non-specialists, the customers, to carry them out with confi-
dence themselves. This led to a general change in perception of what type of activ-
ity requires expensive, professional (and regulated) advice. This change of percep-
tion may now finally also change the way clients perceive the role of law firms. As
noted above under “pilot”, it was unclear from the answers to what extent law firms
take client perception, expectation and satisfaction into account when evaluating
their AI needs.

We should also reflect on what we mean by “routine task”. Are these tasks today
the same as in 1989, or do waves of technological adoption simply shift the mean-
ing, and perception, of what we consider “routine”? To some extent, the examples
given in the survey, like case management, document drafting etc., can be seen as
either routine or complex, depending on the context. In this view, the AI of today
automates routine operations, at which point we stop thinking of them as AI – AI is
now the promise to address the new routines that the older ones created, for which
new tools are needed, and so on (the routine of checking one’s spam folder only
exists since AI started filtering spam).

The answers to 5.1.4.1 and 5.1.4.2 follow the pattern of previous answers, with an
emphasis on text generation – as noted before, this could mean generic AI tools
(V2T, grammar checkers, ChatGPT) just as much as law-specific applications. The
same holds true for management and process tools, another high-demand technol-
ogy. Surprising for me was the comparatively low percentage of e-discovery tools,
one of the success stories of legal tech globally. The reason may be the applicable
procedural law, and also the cost structure – e-discovery is particularly popular in
the US, where a combination of procedural rules and high discovery costs pushes
it to the forefront. I don’t think we have a US/non-US breakdown for this question,
which would have been interesting to see. If true, it would give an indication of how
the legal environment, including the allocation of costs, drives or disincentivises
digital transformation.



An interesting and to a degree sobering division between small and large law firms
was visible in Tables 30 and 31. The latter are much keener on document creation
and checking, and on case management, while the former are more interested in
compliance. On one level this makes sense: large firms have in-house compliance
specialists, small firms don’t, which means they feel less secure, and any compliance
work they have to do eats directly into their billable hours. On the other hand, small
firms cannot take on the type of complex cases that create huge amounts of docu-
ments (and being smaller, they also produce fewer documents for internal process-
es). If this interpretation is correct, though, we get a largely static model of the legal
services landscape: small firms stay small and use AI for small-firm issues, and
large firms stay large and don’t use AI if they don’t have to. Put differently, small
firms do not see AI as an opportunity to compete with larger firms for bigger jobs
(jobs that, e.g., require either high personpower or AI for tasks like document draft-
ing). Large firms conversely seem happy to automate the work of paralegals, but
don’t plan to use AI to save costs where the risks are higher – compliance is prefer-
ably done by humans.

Giovanni Sileno, University of Amsterdam, The Netherlands

Beyond language models: human and symbolic processes

Amongst other results, the report gives a clear picture of where automation is
mostly occurring at the moment in legal firms (e.g. legal research, document re-
view) and where automation is deemed to be needed (e.g. document generation).
These observations confirm that a core part of the legal activity is a matter of text—
or at least, that it is perceived as such by people participating in it—and that this
core is considered to be to some extent reproducible, potentially becoming less
vulnerable to human errors. This general attitude may explain why language mod-
els are expected to provide much disruption with respect to current practices, and
why organizations are willing to get prepared. Yet, I would like to take the oppor-
tunity of this space to highlight two important points with respect to this strategy.

First, research on the consequences of the introduction of language models as our
interfaces to linguistic artifacts is still at an early stage, but, from a strategic point
of view, it is important to raise more awareness of what is at stake. If these artificial
devices become better and better at producing or selecting texts with high
precision for required tasks, humans will mostly converge towards roles dealing
with checking, refining and (post-)editing. This entails that instruments enabling ex-
plainability, as well as autonomous access to independent sources of information,
will become increasingly necessary, both as constraints and as safeguards (e.g.
against manipulation, intentional or not). The report shows that the first require-
ment is acknowledged, although considered to be of minor relevance. The second
requirement does not appear; it is plausibly taken for granted. Both judgments can
be related to the initial stage of adoption of these systems. Yet, in the absence of
adequate incentives, both explainability and the maintenance of autonomous
channels of investigation and support of trustworthy sources may not be devel-
oped in adequate forms, or may even be abandoned.

Moreover, we need to acknowledge that the current generation of legal experts
and practitioners is not representative of the generation that will be formed using
these technologies. Generative linguistic acts like writing (in contrast to replicating)
are what enable us humans to form and organize more complex conceptual struc-
tures. Individuals who have shaped their conceptual structures “in the wild”, and
have acquired the skills to do so, would keep a more correct dialogical power struc-
ture between human and machine. It is still unclear what may happen by changing
the conceptual formation paradigm; yet, in a sports metaphor, we generally expect
coaches to have been players themselves before. Regardless of the level of techno-
logical adoption a legal firm pursues, it is important to properly consider how it will
maintain and develop its core human expertise.

My second point is related to the first, although it takes an opposite perspective.
Having control is not only a matter of interfaces but, eventually, of resources. Lin-
guistic artefacts do not cover everything: activities and processes are eventually
grounded in the real world. Legal activities are superposed on other socio-eco-
nomic activities in both ex-ante (e.g. compliance) and ex-post (e.g. litigation) di-
mensions. A too narrow focus on automation at the document level misses the
larger picture of the functions that law firms and legal departments have in society.
If linguistic technologies increasingly become commodities (preferably with open-
source models that are locally fine-tuned, for reasons of security and resiliency), a
legal firm can gain a competitive advantage only by enhancing automation in other
areas.

Yes, legal experts are right in stating that the meaning of law depends on context,
but operations running in organizations are generally specified more concretely,
up to the extreme case of software-driven operations, which are, at least on paper,
generally deterministic. The open-textured nature of law clashes with the controlled
nature of operations, and indeed, public and private organizations face significant
challenges in coordinating their legal and IT departments. This issue becomes evi-
dent when the scale of organizational activities increases, and developers cannot
keep up an organic view of what has been done and why. If the law changes, how
can we be sure that the processes we run comply with the law? Today, AI adoption,
and the concurrent AI regulation, are only exacerbating this tension, because the
use of AI increases the computational component present in organizational activi-
ties. The rise and fall of the hype around smart contracts (which are neither smart
nor contracts) has overshadowed the fact that there is still a need for technologi-
cal solutions that act as intermediaries between human legal dispositions and
generic computational instructions. This requirement applies in principle to any
operationalization of institutional activities, because they are primarily symbolic in
nature. Returning to the initial point, the haste in adopting language models
should not make us oblivious to what they would be used for: namely, to better
deal with the complexity of the symbolic processes entailed by interpretations of
the law.
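A tiny “rules as code” sketch may illustrate the kind of intermediary meant here: a legal disposition (a hypothetical record-retention period) encoded as data that operational software consults, so that a change in the law becomes a change in one rule object rather than in scattered program logic. The rule categories and periods below are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RetentionRule:
    """A machine-checkable stand-in for a legal retention disposition."""
    category: str
    min_years: int

# Hypothetical dispositions; amending the "law" means editing this list only.
RULES = [RetentionRule("invoice", 7), RetentionRule("contract", 10)]

def may_delete(category: str, created: date, today: date) -> bool:
    """An operational process consults the rules before deleting a record."""
    for rule in RULES:
        if rule.category == category:
            if (today - created).days < rule.min_years * 365:
                return False
    return True

print(may_delete("invoice", date(2015, 1, 1), date(2023, 6, 1)))   # → True
print(may_delete("contract", date(2015, 1, 1), date(2023, 6, 1)))  # → False
```

Real rule encodings would also need effective dates, interpretation notes and provenance links back to the legal text – which is exactly where formalization research from the AI & law field applies.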

There is a clear opportunity for product and process innovation here, targeting
the core expertise that legal firms should strive to develop and maintain. This
innovation can plausibly take advantage of, and build upon, research developed
in the AI & law field over decades, even more so if facilitated by language-model
components. Indeed, prompting is in principle a much more accessible interface
for humans than formalizing, programming, or annotating, and, although verifica-
tion will still be needed downstream, this embedding may bring into the loop
social participants and situations generally left out for reasons of economic oppor-
tunity. Yet, such an advance requires adequate attention from the private sector,
both for early experimentation and, later, for participation in standard-setting initia-
tives. Is your firm participating in any research initiatives? If the answer is no, ask for
reconsideration: in terms of strategy, being an early follower is good, but being a
trend-setter is better.

Jaromir Savelka, Carnegie Mellon University, USA

Are the results conforming with your expectations, or do you find some of
them surprising?

The results from the report largely align with my expectations. The emphasis on
document generation, summarization, and case law analytics in the report reso-
nates with the day-to-day tasks I observed at the firm. Automating these tasks can
lead to significant efficiency gains, allowing legal professionals to focus on more
complex aspects of their work. However, a few points did stand out. I had the op-
portunity to collaborate with various departments and teams. The diversity in their
AI needs and challenges was evident. Hence, the report’s indication of a homoge-
neous response across different types of firms is surprising. I would have expected
more variation based on the firm’s size, specialization, and client base. The rela-
tively low percentage of respondents seeing AI as a tool for better collaboration
between legal professionals and clients is intriguing. Given the advancements in
AI-driven communication tools, there’s a significant opportunity here that seems
underexplored.



How do you assess the overall state of implementation of AI in the surveyed
law firms?

The survey results suggest that the legal industry is in an exciting phase of techno-
logical transformation. The legal sector appears to be transitioning from the early
stages of AI adoption to a more mature phase. While there’s evident enthusiasm,
the full potential of AI in legal practice is still being explored. I witnessed firsthand
the integration of AI tools for tasks like document analysis and predictive analytics,
but there was also a recognition that we were only scratching the surface. Larger
firms, with more resources at their disposal, seem to be leading the charge in AI
adoption. However, the survey’s indication that both the smallest firms and those
with 100+ employees have implemented AI suggests that the perceived value of AI
transcends firm size.

The data suggests that firms with a focus on litigation have distinct AI needs. This
aligns with my observations, where litigation teams often dealt with vast amounts
of data within discovery as well as with case documents and historical rulings. AI’s
potential to streamline and provide insights in this area is immense. The predictions
about AI becoming an essential part of legal workflows and the potential for auto-
mating paralegal tasks reflect a forward-thinking industry. It’s heartening to see the
legal sector’s openness to these changes. On the other hand, the legal profession
is bound by strict ethical and regulatory standards. The cautious approach to AI, as
indicated by the survey, resonates with the profession’s commitment to upholding
these standards.

Do you have any commentary with regard to the differences associated with
the location of the law firm, its size, or area of expertise?

The legal industry’s approach to AI, as with many other aspects, is influenced by
various factors, including the firm’s location, size, and specialization. The distinc-
tion between US-based firms and their global counterparts in AI preferences aligns
with my observations. The US legal market, with its unique regulatory landscape,
client expectations, and competitive dynamics, often shapes how technology is
perceived and integrated. For instance, regulatory compliance tools might be more
sought after in regions with more stringent regulations.

Larger firms often have the resources and infrastructure to experiment with a
broader range of AI applications. Their size allows them to invest in dedicated in-
novation departments, pilot programs, and collaborations with tech companies.
Conversely, smaller firms might prioritize AI tools that offer immediate efficiency
gains or address specific challenges. The survey’s indication that both the small-



est and largest firms have implemented AI suggests that the technology’s value is
recognized across the board, albeit for potentially different reasons.

The data’s emphasis on litigation-focused firms having distinct AI needs resonates
with my observations. Litigation often involves extensive data analysis, from docu-
ment discovery to historical case law review. AI’s potential in this area is evident,
and it’s no surprise that firms specializing in litigation are keen to harness these
benefits. On the other hand, firms specializing in areas like intellectual property,
mergers and acquisitions, or tax law might have different AI priorities, such as pat-
ent analysis, contract review, or tax optimization.

The universal recognition of challenges like legal liability, privacy, and AI accuracy,
irrespective of location, size, or specialization, is telling. It underscores the legal
industry’s commitment to upholding its core values.

What is your opinion concerning how law firms address the risks and benefits
of AI in the workplace?

The integration of AI into the legal sector is a double-edged sword, presenting
both transformative opportunities and inherent challenges. Law firms appear to be
keen on harnessing the efficiency gains offered by AI, especially in areas like docu-
ment generation and summarization. This aligns with the repetitive nature of many
legal tasks. However, the emphasis on developing internal policies and guidelines
indicates a recognition of the need to balance efficiency with ethical and profes-
sional standards.

The legal sector deals with highly sensitive information, and the emphasis on pri-
vacy and security concerns in the survey resonates with this fact. The anticipation
of ethical and legal debates surrounding AI, as indicated by the survey, is both
expected and necessary. These debates will shape the future of AI in law, ensuring
that the technology is used responsibly and ethically.

The legal landscape is dynamic, and so is the field of AI. The emphasis on con-
tinuous training and the development of internal guidelines suggests a proactive
approach to adapting to this changing landscape. The legal profession is inher-
ently risk-averse, and the cautious approach to AI reflects this. The potential legal
liabilities associated with AI decisions, especially if they impact client outcomes,
are significant. Law firms seem to be acutely aware of this and are taking steps to
mitigate these risks.



What is your opinion about their expressed preferences and needs concern-
ing the technology?

I observed the time-intensive nature of tasks like document review and legal re-
search. AI’s potential to automate these tasks can lead to significant time savings,
allowing attorneys to focus on more nuanced legal work. The interest in compliance
& risk management systems aligns with the complexities of the modern regulatory
landscape. Given the high stakes in legal decisions, AI tools that can assist in iden-
tifying potential compliance issues or regulatory changes are invaluable. The fact
that a significant portion of firms has an AI innovation department is a testament
to the industry’s forward-thinking approach. Such departments can spearhead the
exploration of cutting-edge AI solutions, ensuring that the firm remains at the fore-
front of legal tech advancements.

The relatively low emphasis on AI as a tool for enhancing collaboration between
legal professionals and clients is intriguing. Given the advancements in AI-driven
communication and collaboration tools, there’s a significant untapped potential
here. The concerns about legal liability, privacy, and AI accuracy are consistent with
the legal profession’s foundational principles. These concerns highlight a cautious
approach, ensuring that the benefits of AI are balanced with ethical and profes-
sional considerations.

What are your predictions concerning the applications of AI technology in
legal practice and the associated risks and potential benefits, as well as pos-
tulates related to what should be done (or avoided) in the coming years in
connection with these processes?

AI-driven platforms will revolutionize legal research, offering real-time updates on
case law, statutes, and regulations. These platforms will provide lawyers with in-
sights tailored to their specific cases, making research more efficient and accurate.
AI will be used to predict legal outcomes, helping lawyers strategize and advise cli-
ents more effectively. This will be particularly valuable in litigation, where predict-
ing court decisions can influence case strategy. AI will be able to review and analyze
contracts at scale, identifying potential risks, inconsistencies, and areas for nego-
tiation. Given the repetitive nature of contract review, this application has immense
potential. AI-driven chatbots and platforms will facilitate client-lawyer interactions,
providing preliminary legal advice, real-time case updates, and automating routine
client queries.
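As a sketch of the contract-review idea above: the simplest useful layer is not a language model at all but pattern-based clause flagging, which many review pipelines still use as a first pass. The risk categories and regular expressions below are illustrative inventions, not any vendor’s rule set.

```python
import re

# Illustrative risk patterns; a production tool would use far richer rules
# and, increasingly, language-model classification on top of them.
RISK_PATTERNS = {
    "unlimited_liability": re.compile(r"\bunlimited liability\b", re.I),
    "auto_renewal": re.compile(r"\bautomatic(ally)? renew", re.I),
    "unilateral_termination": re.compile(
        r"\bterminate .{0,40}\bat (its|their) sole discretion\b", re.I),
}

def flag_clauses(contract_text: str) -> dict:
    """Return the sentences that match each risk pattern."""
    # Naive sentence split; real tools use proper legal-text segmenters.
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    hits = {name: [] for name in RISK_PATTERNS}
    for sentence in sentences:
        for name, pattern in RISK_PATTERNS.items():
            if pattern.search(sentence):
                hits[name].append(sentence.strip())
    return hits

sample = ("This Agreement shall automatically renew for successive one-year "
          "terms. Either party may terminate this Agreement at its sole "
          "discretion.")
flags = flag_clauses(sample)
print(sorted(name for name, found in flags.items() if found))
# → ['auto_renewal', 'unilateral_termination']
```

The value of even this crude pass is triage: the flagged sentences tell a reviewer where to look first, which is also where scaled-up AI contract review earns its time savings.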

Automation of repetitive tasks will free up lawyers to focus on complex legal rea-
soning and client interactions. AI tools, with their ability to analyze vast amounts
of data, will reduce human error in tasks like document review and legal research.
Automation will reduce the time spent on routine tasks, potentially leading to cost
savings for clients.

Over-reliance on AI predictions in legal strategy could raise ethical issues, especial-
ly if AI tools are not transparent or if their predictions are based on biased data. AI
tools that handle sensitive client data must adhere to strict data protection stand-
ards to prevent breaches. While AI will automate certain tasks, there’s a concern
about job displacement, especially for paralegals and junior lawyers.

Law firms should invest in or collaborate with tech companies to develop AI tools
tailored to their specific needs. Existing platforms can be further refined based
on feedback from legal professionals. Continuous collaboration between the legal
industry and academia will be crucial. Research in natural language processing,
machine learning, and ethics will directly impact AI’s role in the legal sector. I fore-
see significant advancements in these areas that can be translated into practical AI
tools for law firms.

A dialogue between law firms, tech companies, regulators, and clients is essen-
tial. Platforms like legal tech conferences, workshops, and forums can facilitate
these discussions. Given the ethical and regulatory challenges associated with AI
in law, this dialogue will be crucial in shaping the future landscape. Industry-wide
standards for AI in legal practice should be developed. This includes ethical guide-
lines, data protection standards, and best practices for AI tool implementation. As
AI becomes more integrated into legal practice, these standards will become even
more crucial.

Please feel free to also address other topics related to your area of
expertise.

I witnessed the sheer volume of documents that legal professionals deal with daily.
Advanced NLP techniques can be employed to extract relevant information, iden-
tify patterns, and even predict legal outcomes based on historical data. The poten-
tial for automating due diligence, contract review, and other document-intensive
tasks is immense. Using historical case data, AI models can be trained to predict
litigation outcomes. This doesn’t mean replacing human judgment but augment-
ing it. Lawyers can leverage these insights to better advise clients and strategize.
AI’s decisions are only as good as the data it’s trained on. Biased data can lead to
biased outcomes. Given the high stakes in the legal field, it’s crucial to ensure that
AI tools are transparent, explainable, and free from biases. AI-driven chatbots and
virtual assistants can revolutionize client-lawyer interactions. These tools can pro-
vide preliminary legal advice, schedule appointments, and even automate routine
client queries, enhancing the client experience.
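The outcome-prediction idea mentioned above can be illustrated with a toy Naive Bayes classifier over bag-of-words case summaries. The four “cases” and their labels below are fabricated for the example; a real system would train on thousands of annotated decisions with much richer features, and its predictions would augment rather than replace lawyer judgment.

```python
import math
from collections import Counter

# Fabricated "historical" case summaries with outcomes, illustration only.
CASES = [
    ("breach of contract damages awarded plaintiff", "win"),
    ("plaintiff damages contract breach proven", "win"),
    ("claim dismissed insufficient evidence", "lose"),
    ("motion dismissed no standing", "lose"),
]

def train(cases):
    """Count word frequencies per outcome (multinomial Naive Bayes)."""
    word_counts = {label: Counter() for _, label in cases}
    label_counts = Counter()
    for text, label in cases:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Pick the outcome with the highest Laplace-smoothed log posterior."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best, best_score = None, -math.inf
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1)
                              / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

word_counts, label_counts = train(CASES)
print(predict("contract breach damages", word_counts, label_counts))  # → win
```

The same bias caveat raised earlier applies directly here: the classifier can only reproduce the regularities, and the skews, of whatever historical data it was trained on.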

AI in law doesn’t operate in a vacuum. Collaborations with experts in fields like
psychology, sociology, and ethics can provide valuable insights. For instance, un-
derstanding human behavior can help design better AI tools for lawyers, ensuring
they are user-friendly and align with legal professionals’ workflows. The legal land-
scape is ever-evolving, and so is AI. Continuous training of AI models is essential to
ensure they remain relevant and accurate. This also applies to legal professionals;
as AI tools become more integrated into legal practice, continuous training and
education will be crucial for lawyers to stay updated. The AI community thrives on
collaboration. Open-source initiatives, where AI tools and algorithms are shared
freely, can accelerate innovation. Especially for smaller law firms that might not
have the resources to develop AI tools in-house, open-source platforms can be
invaluable.

Vern R. Walker, Hofstra University, USA

Commentary for the Report:

I make these comments from the perspective of a former partner in what was then
a mid-sized law firm in the United States during the 1980s, combined with 30 years
of teaching and research as a member of a university law faculty, including 25 years
of research in AI and law. In general, I expect partners in law firms in the U.S. to
be generally conservative about adopting new technology—especially so concern-
ing AI, about which most partners would have very few informed intuitions. This
natural tendency is reinforced by the typical pricing structure for legal services,
billing by the hour for time spent on a client’s matter. There is little economic
incentive to have the billable hours of senior associates or junior partners reduced
by technology, unless this reduction is driven by client expectations, or by an in-
ability to hire qualified attorneys in sufficient numbers to complete the available
work. Moreover, if there is more work to do in a firm than the people can handle,
any decision to divert effort toward learning to use a new technology must meet
a rigorous cost-benefit analysis. Increased efficiency alone might not provide suf-
ficient incentive. Of course, as in any service domain, there are likely to be some
“early adopters,” especially if there is some competitive advantage to adopting
new technology (increasing the benefits to weigh against costs). This dynamic situ-
ation is reminiscent of the phase years ago when law firms gradually created web-
sites for their law offices, although website creation did not generally have a claim
to increase lawyer efficiency.



This background and perspective inform my comments on the project ‘AI in Le-
gal Business.’ My comments are in two parts: my reactions to the survey findings,
and my predictions about AI applications in legal practice in the future. Unless
otherwise noted, quotations are from the Executive Summary of the report on that
project.

Part I: Reactions to the Survey Findings

In my view, it is not surprising that almost half of survey respondents report that
they have not “already implemented AI-based tools and solutions” (100 firms,
49.3%). Of those, 76 firms (37.4%) “declared that AI-based tools are not implement-
ed in their firm, nor are they looking for such options.” Of the companies that have
implemented some AI-based tools, “[t]he vast majority (59.2%) of companies report
using AI-based tools for less than a year.” It is consistent with my expectation of
conservative interest on the part of firms that less than 25% of surveyed firms had
implemented any AI-based tools for over a year prior to the survey.

Also consistent with my experience was the dominant response among all firms
about “the most common mundane and repetitive tasks that your legal profession-
als handle on a regular basis that AI could enhance” (Figure 1). The two highest
answers from 197 firms were legal research (79.7%) and document review (72.1%).
This response is consistent with the actual adoption practice thus far. From those
firms that had already adopted AI-based tools (100 respondents on this question,
Figure 4), the two highest answers were that they had implemented AI tools in
document automation (a reported 39%) and in legal research (34%). It is probably
significant that so many of the respondents to this survey regarded legal research
as “mundane and repetitive.” This suggests that law offices in which most of the le-
gal research requires imagination and innovation by experienced attorneys would
be among the last to seriously consider adopting AI-based tools.

Also in line with my expectations were the answers about a firm’s sources of in-
formation about AI. The two highest reported sources (by a wide margin, see
Figure 3) were “networking” (68.8% of 199 respondents) and “AI events” (59.3%).
This suggests to me that the exploration and implementation of AI tools by legal-services competitors is the main driver for obtaining information, and indeed for developing motivation for adoption.

Puzzling was a set of answers to questions related to evaluating and implementing AI tools. In answer to the question, “What factors do you consider
most important when evaluating AI solutions for your firm” (Figure 2), the survey
respondents in general (196 respondents) gave the least support to “vendor reputation” (18.4%) and “vendor support and training” (26%). But among those firms
that reported having adopted at least some AI tools, only 13 answered that they
employed people in “positions specifically dedicated to AI.” So, what do AI adop-
tion and implementation look like in the law firms surveyed? With a sizable major-
ity thinking that “the top challenges facing lawyers in the age of AI” (Figure 5)
are “legal issues,” such as liability (80.2% of 202 respondents), “privacy & secu-
rity” (66.8%), and “AI accuracy & reliability” (63.9%), who are the technical experts
that the firms are turning to for quality assurance, if they are neither the vendors
nor in-house experts? Perhaps there was a source that slipped through the survey
questions, such as service contracts with legal technology companies other than
vendors. But in any case, the serious problem remains of how risk-averse law firms
proceed to implement and oversee their AI tools in practice, in the event that mar-
ket forces lead them to do so.

Part II: Predictions about AI Applications in Legal Practice in the Future

In the months since OpenAI’s ChatGPT captured the public’s imagination (November 2022) and began a revolution in generative AI, law firms (and their clients) have undoubtedly felt increased market pressure to at least “talk
the AI talk.” But this very rapid change has occurred (and will continue to occur) in
an area of technology that even those lawyers who are responsible for implement-
ing technology understand the least. Indeed, generative AI is an area currently too
little understood by nearly everyone. The current public awareness and fascination
is creating a powerful counterforce that pushes against the conservative nature of
the legal profession. Balancing the two forces in practice will produce a great deal
of anxiety within law firms.

I find it interesting that the survey creators, in formulating their question about
future AI adoption (about what firms believe will be desirable), told respondents
to assume that the AI tools they envision would be “highly accurate and safe.” It
remains to be seen which AI tools could ever meet such a high standard. But that
wording does accurately reflect, I think, how managing lawyers in law firms think
about AI. To overcome the economic and other factors that make them naturally
conservative, they will need strong evidence that any tools they adopt are “highly
accurate and safe.” In general, they will not receive such evidence, and the adop-
tion of AI applications in legal business will be a fraught endeavor for the foresee-
able future.

As I suggested in Part I, for major firms a tested strategy for adopting technology
has been “going as slow as your competition.” Moreover, we will see movement in
adoption primarily in firms where legal research and document review are considered “mundane and repetitive,” and clients receive invoices for legal services that
bill the work of many individuals. I predict that law offices in which those tasks re-
quire imagination and innovation by experienced attorneys will be among the last
to adopt AI-based tools.

This situation will increase the opportunity for the legal-tech industry to gain mar-
ket share in any number of areas of application. And I predict that the reputation
of the legal-tech provider will become a dominant factor in securing contracts with
legal firms, because law firms will not expect to hire or develop in-house the talent
sufficient to keep up with developments in generative and other AI.

Bernhard Waltl, Liquid Legal Institute, Technische Universität München, Germany

I. Preamble

The Liquid Legal Institute (LLI) recognizes the critical role of artificial intelligence
(AI) in the legal industry and supports academic research focused on this field. The
LLI is convinced that AI has the potential to revolutionize the legal industry, but
it must be implemented in a responsible and ethical manner. Academic research
focused on AI in legal business is essential to achieving this goal.

II. Selected highlights and findings

The report is structured according to the survey questionnaire analyzed, with sec-
tions on the collected sample, AI adoption and openness to change, and chal-
lenges facing lawyers and future predictions.

Within the AI adoption section, there is a subsection analyzing the group of entities that implemented AI-based tools or solutions. Each section of the report opens with descriptive statistics, followed by a statistical analysis called “Relational statistical analysis,” which looks for relationships
between the analyzed variables. The report’s scientific method and statistical anal-
ysis, which involved over 200 participants, are highly commendable and provide
valuable insights into the presence of AI in the legal industry. The report’s empha-
sis on caution and factual reflection when interpreting the findings demonstrates a
commitment to advancing knowledge and understanding in the field of AI in legal
business through rigorous and objective inquiry.

– AI-based tools or solutions implementation: Slightly over half of the companies have implemented AI-based tools and solutions, while 24 are currently exploring options in this area. 76 companies reported that AI-based tools are not implemented in their firm.

– AI technologies openness: Most companies surveyed are open to adopting AI technologies in their firm, with 139 companies expressing the highest levels of
openness. Only 26 companies reported the lowest levels, while 38 companies
reported the middle value. These results are expected as many companies are
already using or exploring AI tools.

– Fields of AI tools implementation: Companies that implemented AI-based tools most commonly used them for document automation and legal research,
but were less likely to use them for intellectual property management, compli-
ance and risk management, and dispute resolution or outcome predictions.

– Size of AI innovation department: 44 companies have an AI innovation department. 9 companies reported a department size of 0 and were excluded from
the statistics. The average size of an AI innovation department is 4.07 employ-
ees, while the median size is 2.5 employees. Most departments have 1-2 em-
ployees.

– Categories of top challenges: The most common challenges for lawyers in the
age of AI are legal issues (regulation and liability), privacy and security, and AI
accuracy and reliability. Over half of the respondents selected these options.
The least common challenge was related to changes in the labor market (dis-
placement or job role changes).

– Predictions for the impact of AI on the legal industry: The future impact of AI in
the legal industry is expected to include automation of paralegal tasks, wide-
spread adoption of AI tools in legal workflows, and usage of AI in various legal
practice areas. Over half of the respondents selected these options. The least
common choice was related to AI-related risks becoming a significant problem
for law firms’ insurance.

III. Overall assessment and five high-level recommendations

The statistical analysis presented in this report is highly relevant to the Liquid Legal
Institute’s focus on AI in legal business. While the exploratory nature of the analysis
means that caution must be taken in interpreting the results, the findings can still
provide valuable insights into the presence of AI in the legal industry. The themes
raised and analyzed in the report can help inform the LLI’s working groups, such
as New Methods and Digitization, as they explore innovative approaches to AI im-
plementation and the integration of AI into existing processes. The importance of
factual reflection and critical interpretation of the analysis is also in line with the
LLI’s commitment to promoting transparency and ethical implementation of AI in
the legal industry. Overall, the findings of this statistical analysis can contribute to
the LLI’s efforts to advance the digital transformation of law while ensuring respon-
sible and effective use of AI:

1. The field of AI in legal business is rapidly evolving, and it is essential to continue the work and effort to stay ahead of emerging trends and technologies.

2. An international, interdisciplinary effort is necessary to develop a comprehensive understanding of AI in legal business, taking into account diverse perspectives and experiences from around the world.

3. Collaboration and knowledge-sharing among legal professionals, AI experts, and academics is crucial for developing effective and ethical approaches to AI
implementation in the legal industry.

4. A community-driven effort can help ensure that the development and use of
AI in legal business aligns with the needs and values of the wider community,
promoting public trust and accountability.

5. The potential benefits of AI in legal business are significant, including increased efficiency, improved decision-making, and greater access to justice. A continued effort to advance AI in the legal industry can help unlock these benefits while addressing potential challenges and risks.

Minghui Xiong, Zhejiang University Guanghua Law School, China

Where is digital law going? Commentary on “Report for the project ‘AI in Legal Business’”

Guanghua Law School, Zhejiang University, Hangzhou, China

Email: xiongminghui@zju.edu.cn

Xiao Chi

Guanghua Law School, Zhejiang University, Hangzhou, China

Email: xiao.chi.21@ucl.ac.uk

I would like to thank Professors Michal Jackowski and Michal Araszkiewicz for their
trust and for allowing me to be the first to see their project report. The results of
this report provide vital references for the future development of AI in legal busi-
ness. As stated in their report, the purpose of their analysis was not to verify a spe-
cific hypothesis but to generate knowledge from the data about the presence of AI
in the legal business. My comments will also try to achieve the same purpose. More
specifically, rather than focusing on the methodology, I will provide brief comments
on the statistical results that I am interested in, along with actual situations in the
fields of Chinese law and artificial intelligence academia.

This project is of great significance since artificial intelligence has been integrated
into the lives and work of lawyers, regardless of their willingness to embrace AI. I
suspect that this may be one of the motivations of Jackowski & Araszkiewicz’s team
to investigate the application of artificial intelligence in legal business. The integra-
tion of artificial intelligence into the lives and work of legal professionals belongs
to the category of digital law, which is a commonly used term in Chinese legal
communities. I prefer to summarize it into two development directions: first, the
digitalization of the world of the rule of law; second, the rule of law in the digital
world. The former direction focuses on the application of AI in Law, such as formal
models of legal reasoning, computational models of evidential reasoning, and executable models of legislation, etc. Contributions to this direction come mainly from academic communities specialized in AI and Law, such as IAAIL and JURIX. The
latter direction focuses on new legal issues arising from the widespread use of AI,
such as digital human rights, digital rights, and privacy protection, etc. Contributions to this direction come mainly from jurists who study legal issues related to the application of AI technologies. These two directions attract attention from jurists
and experts in various fields. The former direction belongs to the application of artificial intelligence and is one of the focuses of legal informatics, the current name of the original “Artificial Intelligence and Law” entry on Wikipedia. Note that legal informatics is considered a branch of artificial intelligence or information science. Therefore, a substantial amount of research and effort has been
invested in this direction, not only by legal experts but also by experts from other
fields such as computer science, making it a relatively mature direction. Interest in the latter direction has been growing recently; in China, most jurists are keen to study it now.

This project focused on both the aforementioned directions. Note that Jackowski & Araszkiewicz’s team conducted the research through statistical methods. In this way,
they can obtain a comprehensive overview of the adoption of artificial intelligence
in law firms, the challenges faced by lawyers and firms, and the future predictions
regarding the impact of AI on the legal industry. Now I will comment on this report
from these two directions.

On the one hand, digitization issues in the world of the rule of law are mainly em-
bodied in the question of Section 4: “Can you identify the most common mundane
and repetitive tasks that your legal professionals handle on a regular basis that AI
could enhance?” This project categorized the digitalization issues of the rule of
law world into the following ten main categories: (a) document review, (b) contract
drafting, (c) contract proofreading, (d) legal research, (e) contract analysis, (f) due
diligence, (g) E-discovery, (h) IP management, (i) compliance monitoring, and (j)
case management. Their statistical analysis showed that lawyers generally believe
AI can improve the most common mundane and repetitive legal tasks, such as legal
research (79.7%), document review (72.1%), and contract drafting (55.8%). It is worth
noting that contract drafting, contract proofreading, and contract analysis accounted for 55.8%, 40.1%, and 41.1% respectively, indicating that smart contracts
are a big focus for law professionals. The question in Section 5 “What type of AI
system would you like to implement in your law firm, provided that such a system
is highly accurate and safe?” also points out certain digitization issues in the world
of the rule of law. According to the results of the report, law firms have paid more
attention to “document generator” (84.2%), “document verification tool” (71.4%),
“document summarization tool” (69.5%), and “case law analytics tools” (63.5%).
These AI applications are all selected by more than sixty percent of respondents.
The above statistical analyses play an important role in future corporate planning in
LegalTech. The results point out that more future research and effort can be devoted to areas such as legal research, document review, and smart contracts. Since legal research is already a relatively mature area, it is worth paying more attention to document review and smart contracts in the future. Besides the aforementioned AI applications, other options were also chosen comparably often, indicating
that there exists high interest in using various AI systems in law firms. This further
emphasizes the importance of the first direction, the digitalization of the world of
the rule of law. In fact, a number of legal technology companies have emerged in
China in the past decade, such as PKULaw, Beijing Thunisoft Co., Ltd., the Bestone
Information & Technology Co., Ltd., etc. These companies are focusing on projects
belonging to the digital category of the rule of law world, including smart court,
smart police and smart procuration. Overall, this first direction, the digitalization of the world of the rule of law, emphasizes the importance of making full use of artificial intelligence technology to improve the quality and efficiency of the rule of law.

On the other hand, the issues of the rule of law in the digital world are mainly re-
flected by the question “What do you perceive as the top challenges facing lawyers
in the age of AI?” in Section 5. This project categorized the issues into the following
seven categories: (a) ethical issues, (b) privacy & security, (c) AI accuracy & reliabil-
ity, (d) adaption to AI, (e) legal issue, (f) labor market and (g) lack of explainability.
According to this report, “legal issue” is the most relevant issue in the legalization of the digital world. Besides this issue, others also concern jurists to varying degrees, especially “privacy & security” and “AI accuracy & reliability”, both of which were chosen by over sixty percent of respondents. An example of the issue
“privacy & security” from a real-world application is the General Data Protection Regulation (GDPR), which came into force in the European Union in 2018. It
is one of the toughest privacy and security laws in the world. The statistical results
in this report indicate that ensuring the safety of implementing artificial intelligence
technology has become a challenge for contemporary legal professionals. Apart
from the main statistical results, there exist some other interesting findings in this
report. One of the findings is that compared to AI technologists, legal technolo-
gists seem to be less concerned about the interpretability of AI. Another finding
is that only 11.4% of respondents are concerned about labor market issues. These
findings can provide developers of legal artificial intelligence technology with valuable information on what they need to prioritize. In addition, the findings also show that concerns about lawbots replacing lawyers are, for now, unwarranted.
Overall, this second direction, the rule of law in the digital world, emphasizes the importance of the safety of artificial intelligence technology and its legal regulation, and the findings in this report provide useful references for it.

In general, this project demonstrates the charm of the two development directions
of digital law and is of great significance for the future development of AI in the
legal business. It not only offers valuable knowledge for legal scholars and scholars
from other fields to refer to, but also provides important guidance for the future de-
velopment of legal science and technology. The knowledge is comprehensive and
reliable since it is derived from data collected from over two hundred firms across
countries, making the project international and large-scale. Furthermore, the find-
ings of the survey report are consistent with our intuitions, which indicates that the
methods used in the project are scientific and reasonable. However, the respond-
ents in this project are mainly from European and American law firms. Therefore, I
suggest more law firms from regions other than Western countries be involved in
future research. This could lead to different discoveries.

John Zeleznikow, La Trobe University, Victoria University, Australia27

I am so fortunate to be asked to comment upon the study AI in legal business. Although the first research in the domain was conducted almost fifty years ago28
and I have been conducting relevant work for thirty-three years29, actual practical
applications have only been recently developed. And whilst there has been much
recent hype, there have been limited practical examples.

27 https://orcid.org/0000-0002-8786-2644
28 See McCarty, L.T. (1977). Reflections on Taxman: An experiment in artificial intelligence and legal reasoning. Harvard Law Review, 837-893.
29 See Vossos, G., Dillon, T., Zeleznikow, J., & Taylor, G. (1991). The use of object oriented principles to develop intelligent legal reasoning systems. Australian Computer Journal, 23(1), 2-10; and Zeleznikow, J., & Hunter, D. (1994). Building Intelligent Legal Information Systems: Representation and Reasoning in Law. No. 13. Kluwer Law and Taxation Publishers.
There are good reasons why the use of machine learning and data mining has been
limited, whereas the use of rule-based expert systems has flourished, especially for
issues of compliance. Newer forms of Artificial Intelligence have been extensively
used in medical domains30, because medical data is invariably of interval or ratio
type – for example height, weight, body temperature, heart rate and blood pres-
sure. Such data is generally easy to measure and accurate. The data means some-
thing. On the other hand, legal data is generally nominal in nature – it is almost
always free text and cannot easily be stored in databases, so that calculations can
be made. And whilst all data needs interpretation, legal data is not generic: it varies with the type of law (common law or civil law), the domain of law (commercial, criminal,
family, constitutional), the region in which the law is being conducted (USA except
Louisiana, Canada except Quebec, European Union except Ireland) and language
amongst other factors.

At La Trobe University in Melbourne, Australia, we are conducting a study on how practitioners in Australian Family Law use Artificial Intelligence31. Family Law has
been chosen because compared to other domains there are far more cases, and
these are generally of far lower value32. But the use of data mining Family Law cases
still has its problems33. For example, most family disputes are resolved via forms of
unreported negotiations rather than written judgements. Even written judgements
give the judge much discretion. Also, there are many nuances in these free-text
judgements. Plus, the law is constantly evolving.

The challenges in using Artificial Intelligence in legal business more broadly are far fewer. This is why I am delighted to be able to comment upon this large-scale project on Artificial Intelligence in
Legal Business. Those using Artificial Intelligence in legal professions will greatly
benefit from this work.

Phase one of the study consisted of a survey to which 203 law firms responded. This is a very large empirical sample, upon which researchers will be able to
explore many connections.

The report distinguished between US-based companies (62 companies, 30.5%) and
non-US-based companies (mainly from Europe, 141 companies, 69.5%). I would like

30 See for example Zeleznikow, J. (2023). The benefits and dangers of using machine learning to support making legal predictions. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, e1505.
31 Helen Yan, Louis de Koker and John Zeleznikow.
32 At least for property cases. It is impossible to put a value on child welfare cases.
33 See for example Stranieri, A., Zeleznikow, J., Gawler, M., & Lewis, B. (1999). A hybrid rule–neural approach for the automation of legal reasoning in the discretionary domain of family law in Australia. Artificial Intelligence and Law, 7(2-3), 153-183.

far more detail on the exact countries of the law firms surveyed. For example, whilst France, Spain, Canada, Australia and Germany practice law very differently from
each other, they are lumped together in the survey. The constant description of
USA vs others is not very helpful and may lead to inaccuracies. As the authors have
this data, it should be produced for the reader and analysed appropriately.

It is heartening to know that a little more than half of the companies have already
implemented AI-based tools and solutions (103 firms, 50.7%), while 24 of them
(11.8%) are currently exploring options in this area. But where are the adopters
based? This matters because, for example, China in many instances mandates the use of AI. Hence my desire to know more about the data. Just as the users of data
analytics in law need to know where the data comes from, we need to know where
the companies surveyed are based.

It is interesting to know that companies in the U.S. were typically significantly more
likely to use AI technology, as well as the smallest companies (1-10 employees) and
those with 100+ employees. Medium-sized (11-99 employees) companies imple-
mented the technology less frequently. Is this a recent phenomenon? I believe so,
as I have not been aware of such a great interest in AI by US legal firms34. My experience in Australia is that firms are interested in using AI, but rarely actually do so.

Of interest is that US firms wish to use AI because of reputational benefits. I would be very interested in knowing whether this is true in other countries. Does AI prove useful in educating clients and encouraging them to negotiate? Do potential litigants trust AI more than they trust lawyers? In the US, has the Trumpian distrust of government led to a lack of trust in lawyers and the system?

To me, the most interesting point is that academic publications were the least popular source for the smallest firms, but the most popular among medium-sized firms (11-99); academic conferences were the most popular for the biggest firms. Is this truer in the US, where there is a growing distrust of research and expertise? The objection to vaccination during the COVID-19 pandemic is merely one significant example of this trend.

I was surprised that respondents are least likely to report using AI tools for intel-
lectual property management, dispute outcome and risk predictions, compliance
and risk management, and dispute resolution. Certainly, early AI and Law research
in the 1970s and 1980s investigated these domains. And clients would see greater

34 Later in the report I see that the vast majority (59.2%) of companies report using AI-based tools for less than a year. Both 1-2 years and 2-5 years were selected by 18 companies (17.5%) each. Only 6 (5.8%) companies indicated that such solutions have been present in their company for more than five years.

reputational gains in these areas rather than in document automation and legal
research.

The fact that almost all selections of e-discovery were made by firms from the USA probably reflects the nature of the US legal system, where there is so much data to peruse and disputants often try to confuse their opponents by providing a plethora of data, e.g. in class-action cases.

Most companies (47.6%) say their AI tools are used both for internal organizational
tasks and for tasks related to client work. 37.9% of firms report using AI tools only for
internal organizational tasks of the company (where there is no reputational benefit, efficiency and effectiveness are uppermost, and bias and fairness matter less). The
fewest companies (14.6%) use AI tools only for client work.

It is surprising that whilst 57.8% of companies say they have used a pilot program or
trial period before fully implementing the AI solutions, a significant 42.2% say they
have not engaged in pilot programs. What this indicates is that 42.2% of firms are
prepared to use AI without previously testing it. These 42.2% of firms have been
convinced they need to use AI very quickly! This is very different to the scene even
5-10 years ago.

And only 37.9% of firms report using AI tools only for internal organizational tasks of
the company. What this means is that currently 62.1% of firms are prepared to use
AI tools for client work – a very significant recent trend. Unsurprisingly, the larger
the company, the higher the proportion of “more restrictive” responses demanding that “technology must be preapproved”. Smaller firms give their AI users more freedom. One can argue larger companies believe in more control, or are just more cautious.

Unsurprisingly, the most frequently indicated challenges were legal issues (legal
liability and regulation), privacy and security and AI accuracy and reliability. I am
surprised that users were not concerned about the lack of explainability. I assume
researchers and academics are more concerned with the issue of explainability
than are practitioners.

When asked about how AI can best help lawyers overcome challenges, the major
challenges were streamlining repetitive tasks and improving efficiency – tasks that
non-AI based software have performed for some time. Thus, it was not surprising
that the future impact of AI chosen by the largest proportion of respondents were:
AI-enabled tools becoming an essential part of legal workflows (53.2%), automated
paralegals’ tasks (53.2%), and widespread adoption of AI across various legal practice areas (49.8%). There was much less concern about law firms’ insurance (15.4%).

The most preferred AI systems law firms wanted to implement were: document
generator (84.2%), document summarization tool (69.5%), case law analytics tool
(63.5%) and compliance & risk management system (59.1%). The fewest compa-
nies indicated a negotiation support system (21.7%) and legal argument assistant
(27.1%). Clearly firms wanted AI tools that assist them to improve their performance
(especially document generators), not to make decisions for them.

There are significant differences depending on the size of the company and wheth-
er it is US-based or not, as well as the legal domain in which the law firm practices.
This strongly supports our conclusion that one cannot build generic AI tools
for the legal domain. Necessary tools will vary depending on the country of
origin of the firm (and on many occasions the region of the country), the size
of the firm and the legal domain in which the law firm practices.

Tomasz Zurek, University of Amsterdam, The Netherlands, and Maria Curie-Skłodowska University, Poland

The rapid development of Artificial Intelligence we have witnessed in recent years has significantly influenced our lives. The growth of interest in Artificial Intelligence can be observed even in the most conservative disciplines, like law. Although AI and Law, as a research area, has been developing since the 1970s, the recent explosion of interest in AI has also increased the popularity of this topic. The report of the project
‘AI in Legal Business’ shows that lawyers are also interested in Artificial Intelligence,
and they expect that AI will significantly influence their work. Below I present and
discuss some impressions and observations concerning the result of the research.

My first observation is that the legal business takes the potential influence of AI on our lives very seriously. I was genuinely surprised that over half of the analysed firms have already implemented some AI-based tool or solution, and another 12 per cent are exploring the topic. Moreover, a significant number of firms declared themselves open to AI technology. I think this is an important sign that technological development has broken into this very traditional discipline, and that lawyers realize it will significantly influence the shape of the legal business in the future.

After a closer look at the research results, we can observe that most of the tasks supported by already implemented AI tools relate to the creation and analysis of legal documents (document automation, legal research, contract analysis and, at least partially, information retrieval), usually written in natural language. This suggests that firms are looking for tools based on natural language processing. This leads to an interesting conclusion: although the analysed firms appreciate the development of AI, they assume that, as in current practice, the key element of legal work will still be text written in natural language, without any attempt to automate the law itself (i.e. "law as code"). This is also confirmed by the answers to the question "What type of AI system would you like to implement in your law firm, provided that such a system is highly accurate and safe?", where the most popular answers related to text analysis, while, for example, "Outcome prediction systems" or "Legal argument assistant" ranked much lower.

Another, related observation is that the analysed law firms see the potential area of usage of AI-based tools as supporting them in repetitive and mundane tasks. This presumably means that law firms do not anticipate tools that could support them in more challenging tasks (for example, producing legal advice, compliance analysis, and so on).

To conclude the above, I can point out that although the results of the research show great interest in AI within the legal business, most firms see the potential usage of AI as an extension of and support for their everyday tasks, rather than a groundbreaking change to the legal business as such.

Surprisingly, the problem of explainability, which is widely discussed both in the general AI and the AI and Law communities, was not regarded as crucial. Although explainability was mentioned as an important factor in evaluating AI tools by approximately 35% of firms, it ranked only fifth, significantly below security or cost. Moreover, the lack of explainability ranked sixth as an answer to the question "What do you perceive as the top challenges facing lawyers in the age of AI?", where only 15.3% of firms found it important. How can this be explained? Why is explainability seen as one of the key problems by researchers, but not by law firms? In my opinion, the difference is rooted in the needs and requirements of law firms. As we observed earlier, most law firms are looking for tools that can help them with supportive, repetitive and mundane tasks (e.g., legal information retrieval, document automation, etc.), but they are not looking for tools replacing (or even supporting) them in complex legal tasks requiring sophisticated reasoning mechanisms (e.g., the creation of legal arguments). This confirms that law firms see AI as a tool supporting them in their traditional activities, rather than something that could significantly change the whole legal business.

It is also worth noting that although many of the reviewed law firms have already implemented some AI tools in their practice, most of them did so quite recently (almost 60% in the past year) and are at an early stage of using AI technology; presumably they are exploring the possibilities rather than taking a leading role in incorporating AI tools into legal practice. This is confirmed by the relatively small number of employment positions specifically dedicated to AI. On the other hand, 25 firms have an R&D department, which may indicate that they are interested in the active development of AI tools and, probably, in exploring the possibilities of their usage. This suggests that there is a group of firms that takes seriously the potential influence of AI on the legal business and would like to take a leading role in the "AI revolution".

The above may lead us to more general observations and conclusions:

First of all, since companies are looking for tools that would help them with mundane and repetitive (and probably time- and resource-consuming) tasks, the implementation of such tools could significantly speed up the preparation of legal analyses, contracts, documents, court applications, etc., which in turn may increase the number of cases to be heard in the courts. The development of such tools will increase the time pressure on the preparation of documents, as well as influence competition among law firms. Moreover, this could significantly affect the ability of the courts to deal with such a large number of cases and result in very long queues. This could be a significant problem, symptoms of which are already observable. Is there a way to overcome it? This is an open issue, and it can undoubtedly be solved, at least partially, by introducing AI-based support tools in the courts, especially mechanisms that can automate simple and routine cases. However, this is very difficult to implement not only from a technical but also, and above all, from a social point of view. It is undoubtedly a topic that requires further research from at least two points of view: the explainability of such a system and the mechanisms of trust that people place in technical devices.

The second conclusion is related to the above. I wonder if and how the development of artificial intelligence will affect law as such. The report shows that the analysed companies assume that the legal business of the future will be more or less similar to the current one (but perhaps faster and less focused on routine tasks). I am curious to see whether this is what will happen. Perhaps the rapid development of artificial intelligence and its impact on society will be strong enough to significantly change the way law works. For example, the increasing role of autonomous devices will make it necessary to create systems that autonomously check compliance with the law. This may open the door to systems that can autonomously perform legal reasoning and produce conclusions, analyses, etc. I think that such mechanisms, in the long run, can significantly change the legal business.



In the light of the above, the answers to the questions "Predictions about the impact of AI on the legal industry" and "The biggest challenges facing lawyers in the age of AI" seem particularly interesting. Most of the answers to the first question are rather general ("AI tools a key role"). Two slightly more specific ones, "AI lawyers in demand" and "AI risks in the insurance field", do not indicate any groundbreaking changes in the legal system, and no relation to autonomous systems at all. Responses to the second question suggest that lawyers see problems in adapting the law to new circumstances, rather than expecting changes in the functioning of the legal system as a whole. However, this result requires more detailed research to better understand the answers to this question.

In conclusion, the report shows that most law firms are aware of the potential impact of artificial intelligence on the legal business and are trying to adapt to the new circumstances. However, most of them seem to be at an early stage of implementing AI in their business. Of particular interest is the future of the legal business as a whole. In my opinion, some firms (a minority) are trying to lead the new AI-based business by investing in research and development, while the rest are simply using the tools available on the market. How artificial intelligence will affect the legal business is still an open question.



6. SUMMARY AND CONCLUSIONS:

Based on a careful analysis of the data in our review, we have identified the following conclusions:

1. Law firms are facing declining performance and pressure to become more efficient, which generative AI technology can certainly help with.

2. About 38% of law firm tasks are repetitive tasks that can be replaced by AI.
These are primarily legal research, document review and contract generation.

3. 51% of law firms have already implemented AI, the majority in the US. Adoption is highest among the largest firms (100+ lawyers) and the smallest (1-10 lawyers).

4. The most widely deployed AI tools are those for automating document processing and legal research. However, no dominant technology has emerged; companies are still looking for the best solutions and are far from finding them. The market is still immature.

5. The generative AI revolution has completely changed this market. 60% of companies have been using AI for less than a year – a clear sign of this technology's impact on the development of innovation in the legal world.

6. Law firms vary in their approach to AI. Most are implementing the technology very cautiously, running pilots first and ensuring that the technology is preapproved. However, nearly half of law firms allow AI to be adopted from the bottom up, taking advantage of the fact that lawyers working with AI are many times more effective than those without it.

7. Law firms are maturing to allow AI to occupy an increasingly important role in the organization. 43% have created AI-innovation units, and 13% employ AI engineers or prompt engineers.

8. Lawyers are very open to implementing AI in the organization. 68.5% of companies marked 4 or 5 on a scale of 1-5, 12.8% of companies declared 1 or 2, and 18.7% of companies declared 3.
9. Lawyers are not worried that AI will cause massive job losses. Only 11.4% of
those asked are concerned about a significant change in the labor market.

10. Lawyers are convinced that AI will significantly change the way work is done. 53% believe that AI tools will become an essential part of workflows and that paralegals' tasks will be automated. 38% believe that lawyers who specialize in AI will have a better chance of finding work and advancing their careers.



11. Key challenges to AI adoption are legal issues (legal liability and regulation – 80.2%), privacy and security of the technology (66.8%), and accuracy and reliability of artificial intelligence (63.9%). More than half of respondents answered that it is necessary to develop internal policies and guidelines for the use of artificial intelligence (61.7%). 52% expect additional education and training, and 42% require improved security and privacy/data protection in the artificial intelligence sphere.

12. One-third of law firms intend to partner with technology companies and invest in AI R&D.

13. If the tools were accurate and secure, the average company would implement 5-6 AI-based tools. The most preferred AI systems were a document generator (84.2%), a document summarization tool (69.5%), a case law analytics tool (63.5%), and a compliance and risk management system (59.1%).

Following the conclusions drawn from our careful analysis, we wish to highlight the following key findings, which are crucial to understanding the broader context:

1. Generative AI will revolutionize the legal industry in the next 3 years.

2. About 40% of legal tasks will be carried out using AI in the next few years.

3. Law firms using AI are able to generate several hundred billion dollars in additional value, becoming key players in their market.

4. Knowledge of AI and the ability to do project work using AI will become one of
a lawyer’s most important assets in the job market.

The reactions to these findings, manifested in the commentaries provided by the researchers, confirm that the availability of such data is well received. The diversity and depth of the presented opinions also support the call for enhanced communication and collaboration between the legal business on the one hand, and scholarship focused on the intersection of AI and law on the other. The present report aimed to raise awareness of the opportunities, needs and challenges perceived by the legal business, and of the potential and knowledge accumulated in academic circles.

We plan to continue surveying law firms with regard to the state of AI tools implementation and inviting researchers to provide their insights and criticism, to enable an ongoing dialogue between, and inspiration for, the legal business, academia and other expert groups. In our commitment to keeping our hand on the pulse, we aspire to serve as a comprehensive compendium of knowledge, detailing the AI tools employed by legal professionals, the benchmarks they set for themselves, and the opportunities and threats they perceive. By doing so, we aim to offer an in-depth understanding that caters to the evolving needs and curiosities of the legal profession. We envision this report evolving into a periodic publication, providing a platform for such an exchange of data, opinions, and recommendations. The present report should therefore be regarded as the first, important step towards the development of such a permanent communication channel, fostering continual innovation and knowledge-sharing.

The expert commentaries on the results of the first edition of the survey have already provided us with numerous suggestions about how the following editions of the survey might be extended or improved. Taking these suggestions and the results of our own analyses into consideration, we can indicate the following directions in which we intend to develop the "AI in Law Firms Survey" project in the near future:

1. Employing more geographical diversity (in particular, including law firms operating in South America or Asia);

2. Applying more fine-grained distinctions in questions, enabling more precise answers (for instance about the geographical location or the internal business structure of the respondents, or about the types and exact purposes of technology implemented or intended for implementation);

3. Putting more emphasis on the different classes of risks connected with the AI
implementation (concerning legal or reputational responsibility, organizational
problems, questions concerning professional development);

4. Emphasising the concepts of AI explainability, transparency and trustworthiness, as they are understood in the ongoing discussion concerning legal and ethical requirements for AI;

5. Emphasising risks that may follow from AI biases or hallucinations and how they
can be handled in a law firm;

6. Investigating the impact on the legal labor market, business models (automation) and the law as such;

7. Attracting further expert opinions, also from different communities (for example, from cognitive science, professional legal associations, policymaking authorities or the legal tech business).



7. BIBLIOGRAPHY:

1. Above The Law, Generative AI in the Law: Where Could This All Be Headed?
Wolters Kluwer, 2023.

2. Araszkiewicz, M., Bench-Capon, T., Francesconi, E., Lauritsen, M., Rotolo, A., Thirty years of Artificial Intelligence and Law: overviews, Artificial Intelligence and Law, 30 (4), 593-610, 2022.

3. Arredondo, P., Driscoll, S., Schreiber, M., GPT-4 Passes the Bar Exam: What
That Means for Artificial Intelligence Tools in the Legal Profession. Stanford
Law School, 2023.

4. Baryse, D., People's Attitudes towards Technologies in Courts, Laws, 2022.

5. Beauchene, V., de Bellefonds, N., Duranton, S. and Mills, S., AI at Work: What
People Are Saying, Boston Consulting Group, 2023.

6. Brooks, C., Gherhes, C. and Vorley, T., Artificial intelligence in the legal sector: pressures and challenges of transformation, Cambridge Journal of Regions, Economy and Society, 2020, pp. 65-87.

7. Connell, W., Hamlin Black, M., Artificial Intelligence and Legal Education. The
Computer & Internet Lawyer, 2019. p. 40.

8. Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L.,
Zemmel, R., The economic potential of generative AI. The next productivity
frontier, McKinsey & Company, 2023.

9. Chui, M., Yee, L., Hall, B., Singla, A., Sukharevsky, A., The state of AI in 2023:
Generative AI’s breakout year. Quantum Black AI by McKinsey, 2023.

10. Diver, L., McBride, P., Medvedeva, M., Banerjee, A., D'hondt, E., Duarte, T., Dushi, D., Gori, G., Van den Hoven, E., Meessen, P., Hildebrandt, M., 'Typology of Legal Technologies', Cross-disciplinary Research in Computational Law (CRCL): Computational 'Law' on Edge, Cohubicol, 2022.

11. Daull, X., Bellot, P., Bruno, E., Martin, V., Murisasco, E., Complex QA and Language Models Hybrid Architectures, Survey, arXiv, 2023.

12. Governatori G., Bench-Capon T., Verheij B., Araszkiewicz M., Francesconi E.,
Grabmair M., Thirty years of Artificial Intelligence and Law: the first decade.
Artificial Intelligence and Law, 30 (4), 481-519, 2022.



13. Hope, T., Downey, D., Weld, D.S., Etzioni, O. and Horvitz, E., A computational inflection for scientific discovery, Communications of the ACM, 2023.

14. Hongdao, Q., Bibi, S., Khan, A., Ardito, L., Khaskheli, M. B., Legal Technologies
in Action: The Future of the Legal Market in Light of Disruptive Innovations,
Sustainability, 2019.

15. Jones, J. W. and others, Report on the State of the Legal Market, Mixed results
and growing uncertainty. Thomson Reuters Institute, 2023.

16. Lauritsen, M., Computational Intelligence and the Paradoxes of Legal Routine,
Medium, 1990.

17. Lauritsen, M., Technology report: Building legal practice systems with today’s
commercial authoring tools. Artificial Intelligence and Law, 1992, 1, pp.87-102.

18. Lauritsen, M., Toward a phenomenology of machine-assisted legal work, RAIL, 2018.

19. Legal Trends for Mid-Sized Law Firms, Clio, 2023.

20. McCarty, L.T., Reflections on Taxman: An experiment in artificial intelligence and legal reasoning. Harvard Law Review, 1977, pp. 837-893.

21. McBride, P., Casetext's CoCounsel through the lens of the Typology. Cohubicol, 2023.

22. McBride, P., Diver, L., ChatGPT and the future of law. Law Society of Scotland, 2023.

23. Nguyen, H., Fungwacharakorn, W., Nishino, F., Satoh, K., A Multi-Step Approach in Translating Natural Language into Logical Formula, JURIX, 2022, pp. 103-112.

24. Van Opijnen, M., Santos, C., On the concept of relevance in legal information retrieval. Artificial Intelligence and Law, 2017.

25. Peng, B., Galley, M., He, P., Cheng, H., Xie, Y., Hu, Y., Huang, Q., Liden, L., Yu, Z., Chen, W., Gao, J., 'Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback', arXiv, 2023.

26. Replogle, T. J., The Business of Law: Evolution of the Legal Services Market. Michigan Business & Entrepreneurial Law Review, 2017, vol. 6, no. 2, pp. 287-304.

27. Sartor, G., Araszkiewicz, M., Atkinson, K., Bex, F., van Engers, T., Francesconi, E., Prakken, H., Sileno, G., Schilder, F., Wyner, A., Bench-Capon, T., Thirty years of Artificial Intelligence and Law: the second decade, Artificial Intelligence and Law, 30 (4), 521-557, 2022.

28. Savelka, J., Grabmair, M., Ashley, K., A Law School Course in Applied Legal
Analytics and AI. Law in Context, 2020 vol. 37, no. 1, pp. 134–174.

29. Soukupová, J., AI-based Legal Technology: A Critical Assessment of the Current Use of Artificial Intelligence in Legal Practice. Masaryk University Journal of Law and Technology, 2021.

30. Stranieri, A., Zeleznikow, J., Gawler, M., Lewis, B., A hybrid rule-neural approach for the automation of legal reasoning in the discretionary domain of family law in Australia. Artificial Intelligence and Law, 1999.

31. Susskind, R., Tomorrow's Lawyers. Third Edition. Oxford University Press, 2023, pp. 97, 111-113, 138-141, 230-232.

32. Thompson, D., Creating New Pathways to Justice Using Simple Artificial Intelligence and Online Dispute Resolution. International Journal of Online Dispute Resolution, 2015, vol. 2, no. 1, pp. 4-53.

33. Veith, C., Bandlow, M., Harnisch, M., Wenzler, H., Hartung, M., Hartung, D., How Legal Technology Will Change the Business of Law, Boston Consulting Group, 2016.

34. Villata, S., Araszkiewicz, M., Ashley, K., Bench-Capon, T., Branting, L. K., Conrad, J., Wyner, A., Thirty Years of Artificial Intelligence and Law: the third decade, Artificial Intelligence and Law, 30 (4), 561-591, 2022.

35. Vossos, G., Dillon, T., Zeleznikow, J., & Taylor, G., The use of object oriented
principles to develop intelligent legal reasoning systems. Australian Computer
Journal, 1991, 23(1), 2-10

36. Will Generative AI deliver a generational transformation? UBS Evidence Lab, 2023.

37. Webb, M., The Impact of Artificial Intelligence on the Labor Market. SSRN,
2020.

38. "White Paper on Artificial Intelligence. A European approach to excellence and trust" – https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en

39. Zeleznikow, J., Hunter, D., Building Intelligent Legal Information Systems: Representation and Reasoning in Law. Kluwer Law and Taxation Publishers, 1994.



40. Zeleznikow, J. The benefits and dangers of using machine learning to support
making legal predictions. Wiley Interdisciplinary Reviews: Data Mining and
Knowledge Discovery, 2023.

