E-Book: First Global Report on AI in Legal Practice
1. Introduction
1.1. Foreword
1.2.3. Summary
1.1. FOREWORD
This report presents the results of the survey “AI in Law Firms 2023”, together with
expert commentary provided by leading researchers working in the field of AI and
Law, Legal Informatics and the Law of AI. Many of them are affiliated with organiza-
tions that have been delving into the nexus between artificial intelligence and law
for over three decades. Despite this long-standing focus, the recent surge in the
adoption of artificial intelligence over the past year prompted a decision to expand
the research scope to encompass empirical considerations. Our objective was to
understand the current tangible impact of AI on the realm of legal practice. To this
end, we surveyed over 200 law firms globally, representing nearly 100,000 legal pro-
fessionals. We sought to ascertain their perspectives on the rapid proliferation of
AI, exploring how they employ AI tools, their concerns, perceived limitations, and
the opportunities they identify. Subsequently, this report was reviewed by a select group of researchers who provided insights into our findings and formulated their own views on how the implementation of AI tools in legal practice will develop. This publication combines the empirical and statistical outcomes with a scientific exploration of AI’s evolution in the legal sector.
Law firms entered 2022 following two years of crisis related to the COVID-19 pandemic. Evidently, they learned to cope with its effects both internally, in their teams, and externally, in their relationships with clients, which were not only maintained but improved. The year 2022 brought good financial results for law firms.
We are at the end of 2023, and we can already say that it has brought many more challenges: slowing demand, less client optimism, rising costs, declining team productivity, and inflation, which is a global trend. Added to this are technological challenges.
According to the Thomson Reuters Institute’s 2023 “Report on the State of the Legal Market,” the last quarter of 2022 already showed a decline in demand for legal services, especially among the largest firms and especially in transactional practice (for example, AmLaw 100 firms reported a 9% decline in demand for transactional work in Q4 2022). The year 2023 is a continuation of this trend.
Another indicator pointing to some stagnation in the industry is profits per equity partner (PPEP), which, having reached a record high in 2021, declined in 2022 for the first time since 2009.
In the second half of 2022 and into 2023, we also see a decline in optimism among
law firm clients about increasing spending on in-house counsel. Spending is ex-
pected to decline in banking, finance and insurance, and, according to some in-
house counsel, also in litigation and mergers and acquisitions.
Mid-sized firms were the only market segment to see an increase in demand. The aforementioned report describes this as the result of clients seeking high quality at the better rates offered by smaller and more efficient firms.
In 2022, law firms grew and, especially in the first half of the year, competed hard for
talent. The result was the highest increase in direct spending on salaries since 2008
as well as additional costs associated with business development and the return of
teams to offices after the pandemic. After the 2022 fourth-quarter revenue decline described above and the deterioration in performance, law firms faced a need to cut costs in 2023. Hence the layoffs that firms began to experience in 2023, which primarily affect junior lawyers and associates and reach up to a dozen or so per cent of teams.
1.2.3. Summary
Among the challenges law firms now face are:
– a decrease in the efficiency of lawyers’ work, which, combined with the billable-hours model, results in a decrease in revenue for firms
– an adverse client reaction to continued price increases and the search for more efficient billing models.
Artificial intelligence has been with us for several decades. However, 2022 and especially 2023 were when, after the AI winter, we experienced an explosion in the popularity of generative AI.
ChatGPT, Bard, GitHub Copilot, Midjourney, AnyLawyer and other generative arti-
ficial intelligence tools that have attracted public attention in 2023 are the result of
significant investments in recent years that have helped advance machine learning
and deep learning. Over the past decade, artificial intelligence has penetrated our lives gradually. However, it was mainly visible in B2B relations and did not break through into the public consciousness, except for moments when we watched its breakthrough victories in chess or in Go (AlphaGo). It was not until 2022 that ChatGPT came into widespread use, reaching more than 100 million users and more than 1.6 billion page views in June 2023. In addition, plug-ins for ChatGPT, its competitors (such as Bard and Claude), and applications based on large language models, which use AI for efficiency improvements in almost every sphere of life, also came into widespread use. In the legal world, these include tools such as Harvey, Spellbook, Lexion and AnyLawyer.
Large language models contain elaborate artificial neural networks inspired by the
billions of neurons connected in the human brain and are part of what is called
deep learning. They can process enormous and diverse sets of unstructured data
and perform more than one type of task. Tools based on generative AI can work
with text, images, sounds, videos, and computer code. They can create, summa-
rize, edit, change, and classify materials. However, the fundamental change we are
seeing is the combination of usability with the ability to communicate with tools
in ordinary language. They interact with users by communicating in a human-like
manner. Consequently, in project work we now have a new stakeholder beyond the human team: AI and the tools built with it.
Following the report prepared in June 2023 by McKinsey & Company, “The economic potential of generative AI: The next productivity frontier”, we can cite a number of AI-related challenges facing businesses. Generative AI requires the training of large language models, and the computing power this demands can become a bottleneck in its development. The second challenge is the lack of regulation and the legal uncertainty in which these tools operate.
This fast-moving stream of investment has been accompanied by a huge development of tools. GPT-3 was released in 2020. In November 2022, GPT-3.5 and ChatGPT, which uses this language model, were released, including a fine-tuning process for machine-learning algorithms designed specifically for conversation modelling. Four months later, OpenAI released GPT-4 with much greater capabilities, including multimodal capabilities enabling it to process both text and images. In March 2023, Anthropic released an LLM called Claude, which, just two months later, expanded its text-processing capacity roughly tenfold. In May 2023, Google released PaLM 2, the engine behind the Bard chatbot that allows Google customers to collaborate with generative AI; Bard was made available in the European Union in July 2023.
Few studies to date address the issue of AI adoption in the legal services market. The aforementioned McKinsey & Company 2023 report predicts that the services most susceptible to the impact of AI are those related to sales, marketing, customer operations and software engineering. It is in these spheres that this technology can add value to the entire organization by revolutionizing internal knowledge management.
McKinsey & Company estimates that the impact on the legal industry will also be significant. It puts the potential impact of implementing generative AI at nearly $100 billion annually, representing about 15-20% of legal services spending in companies where AI would be implemented. In addition, according to the report’s authors, artificial intelligence has the potential to generate about $180-260 billion in additional value in the legal industry, with a particular focus on banking and relatively high potential for change in the insurance, telecommunications, real estate and energy industries.
A study was also conducted by Above the Law in 2023, which surveyed 275 lawyers
about their attitudes toward artificial intelligence. More than 80% of the lawyers
surveyed agreed that generative artificial intelligence would create “transforma-
tive efficiencies” in research and routine tasks. Lawyers also shared their views on
the possibility of AI replacing some legal professions. Some 71% said generative
AI could replace document review lawyers within the next decade, and 68% said it
could have a similar impact on law librarians. Some 41% said paralegals could be
replaced in the next 10 years.
From these two reports, one can conclude that AI adoption in the legal industry
also carries great potential. However, it has not yet been sufficiently explored. This
prompted the authors to conduct their own study and prepare the first report that
will comprehensively discuss the state of AI adoption in the legal industry, lawyers’
attitudes toward these changes, and predictions regarding potential changes.
The report was prepared by an interdisciplinary team and is not affiliated with any
organization. The e-book was prepared in cooperation with Liquid Legal Institute.
The editors are
3. Gijs van Dijck integrates legal, empirical, and computational analysis in order to
improve the description, application, understanding, and evaluation of the law.
He has taught courses on tort law, contract law, property law, empirical legal
research, and computational legal research. Gijs has published in top journals
including the Journal of Empirical Legal Studies and the Oxford Journal of Le-
gal Studies. He has been a speaker at various conferences, including ones at
Oxford, Harvard, Yale, Duke and Cornell. He was a visiting scholar at Stanford
University in 2011. Gijs is a Professor of Private Law and director of the Maastricht Law & Tech Lab.
4. Martin Ebers, President of the Robotics & AI Law Society (RAILS), Germany, and
Professor of IT Law at the University of Tartu, Estonia. Moreover, he is a per-
manent fellow at the law faculty of the Humboldt University of Berlin, and co-
director of the German Institute for Energy and Competition Law in the Pub-
lic Sector. In 2022, Martin was awarded a five-year grant from the Wallenberg Foundation (WASP-HS, the Wallenberg Program on Humanities and Society for AI and Autonomous Systems – Guest Professorship at Örebro University, Sweden) to conduct research on “Private Rule-making and European Governance of AI and Robotics”. His latest books include, among others, Algorithms and Law (Cambridge University Press, 2020), Contracting and Contract Law in the Age of Artificial Intelligence (Hart Publishing, 2022), and the Stichwortkommentar Legal Tech (Nomos Publishing, 2023).
12. Ugo Pagallo, a former lawyer and current professor of Jurisprudence at the Uni-
versity of Turin (Italy), is Vice-President of the Italian Association of Legal Infor-
matics. Author of thirteen monographs and a hundred essays in scholarly journals and book chapters, he has been a member of many international research projects, collaborating with institutions such as the European Commission, the World Health Organization, and the Japanese government. His main interests
are Artificial Intelligence (AI) & law, network theory, governance, human-robot
interaction, and information technology law.
13. Andrzej Porębski is a researcher at the Faculty of Law and Administration of the Jagiellonian University, holding MA degrees in IT and Econometrics, in Law, and in Sociology. A data analyst and statistician, he conducts a research project funded by the National Science Centre, Poland, entitled “The Understandability Requirement of Machine Learning Systems Used in the Application of Law”. He prepared the statistical part of this report.
14. Ken Satoh is a professor in the Principles of Informatics Research Division, National Institute of Informatics, and at Sokendai (The Graduate University for Advanced Studies), Japan. He has been working on the logical foundations of AI,
especially non-monotonic reasoning. He is currently investigating juris-infor-
matics (aiming at the amalgamation of informatics and law, as bio-informatics
amalgamates informatics and biology). He is a member of the steering committee.
16. Giovanni Sileno is an Assistant Professor at the Socially Intelligent Artificial Sys-
tems research group at the University of Amsterdam, and a member of the Civ-
ic AI Lab. With a background in electronic engineering, a PhD in AI & Law, and
postdoc studies in cognitive systems and data-sharing infrastructures, he has
been working in various fields related to AI and Computer Science research,
such as computational legal theory, agent-oriented programming, cognitive
modelling, computational policy design and operationalization.
19. Xiao Chi is a first-year Ph.D. student in the field of digital law at Zhejiang University. She obtained a Bachelor of Science degree in Mathematics from the University of Liverpool and a Master of Science degree in Scientific and Data Intensive Computing from University College London. During
her master’s program, she focused on research related to epistemic graphs
and completed a dissertation titled “A Filtering-based General Approach to
Learning Rational Constraints of Epistemic Graphs”. Xiao joined the ZJU Law &
AI Laboratory after enrolling in the doctoral program, and her doctoral super-
visor is Professor Minghui Xiong. She has participated in several seminars on
explainable artificial intelligence, where she discussed and wrote papers with
professors of logic and computer science. Xiao has also participated in several
conferences in the fields of Logic, AI, and Law. Her doctoral research focuses
on legal informatics, especially on persuasion systems and mediation systems.
20. Vern R. Walker is Professor Emeritus of Law at the Maurice A. Deane School of
Law at Hofstra University (New York). He has a Ph.D. (philosophy) from the Uni-
versity of Notre Dame, and a J.D. (law) from Yale Law School. He was a partner
in the Washington, D.C., law firm of Swidler & Berlin (working extensively with
expert witnesses and scientific evidence). At Hofstra Law, his courses included
scientific evidence, torts, and administrative law, and he founded and directed
the Research Laboratory for Law, Logic and Technology (LLT Lab). He has pub-
lished extensively on legal reasoning and factfinding, on the use of scientific
evidence in legal proceedings, and on the use of computational and artificial
intelligence approaches to legal analysis. He designs computer software for
representing legal knowledge, mining argumentation from legal documents,
and modeling legal reasoning. His most recent publication is the book, Beyond
Language: A Philosophical Journey (Wipf & Stock, 2021).
21. Bernhard Waltl, a computer scientist and expert at the intersection of law and
computer science, is a specialist in artificial intelligence and NLP. He works on
different topics in the field of legal operations, legal tech, and legal innovation.
He co-founded the Liquid Legal Institute in 2018. He is a firm believer that collaboration is the key to a successful, people-centric and digital future.
22. Adam Zadrożny, assistant professor at the National Centre for Nuclear Research,
Poland, lecturer of Natural Language Processing at Cognitive Studies Univer-
23. John Zeleznikow has conducted research and taught in Australian, US, French,
Dutch, Israeli, Belgian, German, UK, Estonian, Spanish and Polish universities
for fifty years. He is the author of 4 research monographs and 105 refereed jour-
nal papers. He has an H index of 38 with 4784 citations. Professor Zeleznikow
has also won over $A8.5 million in competitive research grants. These in-
clude ten Australian Research Council Grants, three European Union Grants
and Dutch, French, Scottish and Spanish research grants. He has successfully
supervised 20 PhD students and 6 postdoctoral fellows. Over the past thirty
years, Professor Zeleznikow has focused on how Artificial Intelligence can be
used to enhance legal decision-making. His research findings have been uti-
lised by legal and mediation firms, West Midlands Police (UK), CONSOB (Ital-
ian Stock Exchange Regulator), Victoria Legal Aid and Relationships Australia.
He has performed pioneering research on using machine learning and game
theory to support legal decision-making.
24. Tomasz Zurek is currently employed as a post-doc researcher at the Informatics Institute, University of Amsterdam. He is also an Associate Fellow of the T.M.C. Asser Institute in The Hague, and an assistant professor at the Institute of Computer Science at Maria Curie-Sklodowska University in Lublin, Poland (currently on leave). He holds an MA in management (1999) and a Ph.D. in computer science (2004). His dissertation concerned the utilisation of artificial intelligence
in banking. His current scientific interests focus on the representation of legal
knowledge and modelling of legal reasoning and argumentation, especially
the modelling of informal ways of reasoning. He is an author of over 50 peer-
reviewed papers, a member of program committees of main AI and Law confer-
ences, and a member of the International Association of Artificial Intelligence
and Law.
The survey covered 203 companies, representing a total of nearly 100,000 employ-
ees, including some 50,000 lawyers. We analyzed data collected in different parts
of the world to get a representative global picture. Our goal was to understand
lawyers’ views on AI and to assess the extent to which the technology is already
present in the legal industry.
In order to see more accurately how changes are shaped in each group, we divided
companies into five classes, taking into account the number of employees in each
class. The first group included companies with 1-10 employees, which accounted
for 22.7% of the study sample. The second group was companies with 11-39 em-
ployees, which accounted for 26.6% of the study sample. The third group was com-
panies with 40-99 employees, representing 20.2% of the study sample. The next
group was companies with 100-999 employees, which represented 13.3% of the
surveyed sample. Finally, the fifth group included companies with 1,000 employees
or more, representing 17.2% of the surveyed sample.
The results of our survey showed that the average percentage of lawyers who deal
with artificial intelligence in their work is 24.5%, with a median of 15.2%. These num-
bers are indicative of the degree to which lawyers are involved in using AI as part
of their work. It seems that this technology is becoming more and more present in
the legal industry, but there are still differences between companies with different
numbers of employees.
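The size-class grouping and the averages above can be sketched in a few lines of Python. The per-firm values below are illustrative placeholders (the underlying survey responses are not published), chosen only so that their mean and median match the reported 24.5% and 15.2%:

```python
from statistics import mean, median

# Size classes used in the report: (lower, upper) bounds on employee count
SIZE_CLASSES = [(1, 10), (11, 39), (40, 99), (100, 999), (1000, float("inf"))]

def size_class(employees: int) -> int:
    """Return the index of the report's size class for a firm's headcount."""
    for i, (lo, hi) in enumerate(SIZE_CLASSES):
        if lo <= employees <= hi:
            return i
    raise ValueError("headcount must be at least 1")

# Illustrative per-firm shares (%) of lawyers working with AI -- placeholder
# values, not actual survey data
shares = [5.0, 10.0, 15.2, 30.0, 62.3]

print(size_class(25))          # -> 1 (the 11-39 employee class)
print(round(mean(shares), 1))  # -> 24.5
print(median(shares))          # -> 15.2
```

The classes are half-open only at the top (1,000+), so a simple inclusive-bounds scan is enough to bin any firm.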
In addition, we divided the companies into five categories based on their main fields of activity. The first category was firms specializing in litigation, which represented 7.4% of the surveyed sample. The second was corporate firms, which represented 23.2% of the sample. The third was firms working with new technologies, which accounted for 9.4%. The fourth included firms that described themselves as generalists, handling all matters, accounting for 13.3%. The fifth category was firms with other business profiles, which together accounted for 46.8% of the surveyed sample.
We divided the study into two parts. The first part is a statistical report. It aims to find out how law firms use AI and what attitudes they hold toward generative artificial intelligence. The survey provides valuable information from both a business
and scientific perspective. The results obtained provide a better understanding of
the actual sentiment of lawyers toward artificial intelligence, as well as an assess-
ment of the degree of adoption of this technology in the legal industry. The survey
is also focused on challenges and predictions related to the use of AI in legal prac-
tice. In turn, this information can be used by companies and researchers to make
decisions on how to move forward with the implementation of AI in legal practice
and to identify areas for further research and development.
We began the survey by asking in what spheres artificial intelligence could change the work of lawyers. We started with a question about the proportion of mundane and repetitive tasks in firms that could be improved by using artificial intelligence. In question one, we wanted to know the percentage of such tasks compared to tasks requiring in-depth legal knowledge and strategic thinking. Then, in
the second question, we asked companies to identify the most common mundane
and repetitive tasks that their lawyers perform on a regular basis and that could
be streamlined with artificial intelligence. We offered a list of possible tasks, such
as document review, contract drafting, contract revision, legal research, contract
analysis, due diligence, e-discovery, intellectual property management, compli-
ance monitoring, case management and others.
In another question, we asked companies whether they had implemented any tools or solutions based on artificial intelligence. We offered three possible answers: yes; no; we are currently exploring options. If the answer was yes, we asked about the areas of practice where the company has implemented AI tools or solutions. We offered a list of possible areas, such as contract analysis, legal research, information retrieval, automation/document generation, e-discovery, intellectual property management, regulatory compliance and risk management, and dispute outcome prediction.
The next questions were about the practical aspects of using AI tools. We asked
what specific artificial intelligence tools the company uses, if any. We were also
interested in how many lawyers in the company use these AI tools or solutions, and
how long the company has been using them. In addition, we asked whether the
company conducted a pilot program or trial period before fully implementing AI
solutions, and whether the tools are used primarily for internal organizational tasks
or for tasks related to client work.
Next, we asked whether lawyers at the company are free to choose which technol-
ogy they want to use, or whether any technology must be approved in advance by
the company. We were also interested in how large the company’s AI innovation
department is, and whether the company employs AI specialists, such as legal en-
gineers or prompt engineers.
Finally, we asked companies where they look for new artificial intelligence ideas and
solutions. We offered a list of possible sources, such as scientific conferences, aca-
demic publications, law schools, bar events, legal technology/artificial intelligence
events, blogs, newsletters, Twitter, LinkedIn, networking with other professionals
and online communities. We also left space to enter other sources.
The second part of the report features seventeen commentaries from esteemed
scientists who have long studied the relationship between artificial intelligence and
law. These commentaries respond to the report’s findings, highlighting both sur-
prising elements and those that align with their prior intuitions. We reference these expert opinions and, in the final section, integrate them into conclusions that may be incorporated into future editions of this report.
The average percentage of mundane and repetitive tasks in the surveyed compa-
nies was 38.2%. This means that more than a third of the work carried out in law firms
is routine tasks. However, these responses vary depending on the main focus of the
business. In litigation firms, the percentage of repetitive activities replaceable by AI rises to an impressive 52%. This means that in firms specializing in litigation, mundane and repetitive tasks account for more than half of the work.
Responses also vary depending on the size and structure of the company. In me-
dium-sized companies (with 11 to 99 lawyers) and in companies where more than
50% of employees are lawyers, the share of mundane and repetitive tasks is lower.
In such companies, the average percentage share of these tasks is less than 38.2%.
This may suggest that in larger firms with more resources and capabilities, mun-
dane and repetitive tasks are better distributed and automated, allowing lawyers
to focus on tasks requiring specialized knowledge and skills.
The most frequently indicated mundane and repetitive tasks that could be stream-
lined through the use of artificial intelligence were legal research, document review
and contract drafting. As many as 79.7% of respondents selected legal research as
a task that could be streamlined using artificial intelligence. In addition, 72.1% of
respondents identified document review as a task that could be streamlined, and
55.8% of respondents identified contract drafting. More than 40% of respondents
see potential in implementing AI in due diligence, case management, compliance
monitoring, contract analysis, and contract proofreading.
These results indicate the potential of using artificial intelligence in legal work. Automating these mundane and repetitive tasks can bring many benefits, such as savings in time and costs.
More than half of the surveyed law firms are already implementing artificial intelli-
gence (AI). More specifically, 51% of firms have already implemented AI-based tools
and solutions, while 12% are currently exploring options in this area. Only 37% of
firms said they are neither implementing AI-based tools nor exploring such options.
The results varied by geography and by the size of the companies surveyed.
Analysis of the data showed that companies in the United States are more likely
to use AI tools than companies outside the US. The smallest and largest compa-
nies are the most likely to use the new technology. Mid-sized companies show less
tendency to implement the technology. While this is not surprising when it comes
to large companies (100+ lawyers), which are structured to have a separate unit re-
sponsible for innovation and resources for implementation, the high propensity of
small companies is surprising. This seems to be the result of a grassroots revolution entering the legal world through the firms that are the most flexible and are fighting for market share. As indicated earlier, medium-sized firms were the least likely to lose market share in 2022-2023 and had already achieved better efficiency by traditional methods than large firms; their lower propensity to invest in technology may stem from this. Still, failure to invest and follow this path of modernization may prove a risky decision in the long run.
The survey results are in line with expert predictions, given that many companies
are already using or exploring AI tools. To better understand the factors influenc-
ing openness to AI, a statistical analysis was conducted to clarify which company
characteristics have the greatest impact on this variable. The study found that
openness to AI correlates most closely with company size. The smallest companies
(1-10 employees) and those with more than 100 employees showed, on average,
the highest openness to AI implementation. In contrast, medium-sized companies
(11-99 employees) had the lowest openness. However, these differences were not
significant, averaging no more than one point.
Based on these results, it can be concluded that most law firms are open to im-
plementing AI technology. The implementation of this technology is particularly
popular in the United States and among very small and very large firms. In contrast,
medium-sized firms show less tendency to use AI. It is interesting to note that open-
ness to implementing AI was also declared by those companies that have neither
yet implemented it nor are currently looking for solutions. This could mean that
even these companies anticipate that AI adoption is inevitable.
We asked law firms what they consider when choosing tools that use artificial intelligence. The answers are not surprising. The key for lawyers is data security and pri-
vacy, protection of sensitive client information and the secrecy of the legal profes-
sion. The increase in cyber-attacks and privacy and data breaches has made data
protection all the more of a priority for law firms. Choosing AI tools that guarantee
a high level of security and ensure data confidentiality is an understandable deci-
sion for law firms. This is confirmed by the fact that as many as 79.6% of respondents
included this factor as a key consideration when choosing AI tools.
Another important factor is cost. The survey found that 75.5% of respondents
stressed the importance of cost when choosing AI tools. This is certainly a response
to the already-described economic challenges of the legal industry. AI tools that help control costs are therefore at an advantage.
The last factor considered by more than half of respondents is the ease of use of
AI tools. This option was marked by 61.7% of companies. Law firms want the tools
they use to be intuitive and easy to use. They require them to be accessible to all
team members, regardless of their technological expertise. All of this is aimed at
streamlining daily operations and increasing productivity. Therefore, AI tools that
offer a simple and intuitive user interface are preferred by law firms.
Responses that should also be considered relevant are ease of integration with ex-
isting systems (almost half of the responses), and explainability and customizability
(both considered relevant by more than 30% of respondents). However, the results
indicate that the market is quite immature. Companies do not see a significant role
for vendors. Only 26% consider vendor support important, and only 18% consider
vendor reputation important.
In conclusion, the choice of AI tools by law firms is mainly dictated by data security
and privacy, cost and ease of use. Law firms are aware of the need to protect client
data and choose tools that are able to provide a high level of security. At the same
time, cost control and ease of use are of great importance for work efficiency. The
conclusion of these statistics is that law firms are trying to find the right balance
between these factors to choose AI tools that meet their needs and ensure success
in today’s competitive legal environment.
We asked lawyers where, in an era of exploding generative AI and the need to upgrade their skills in this area, they get their knowledge. The question is also important for understanding how the legal industry is adapting to these new technological trends.
The most popular source of information about AI for lawyers is networking or con-
tact with other professionals in the industry. As many as 68.8% of surveyed lawyers
admitted that they gain information on AI through conversations and contacts with
colleagues. This is extremely important, as networking enables the exchange of ex-
perience and knowledge with other lawyers who are already working with AI tools.
This allows lawyers to learn more about the practical applications of AI in the legal
field and what benefits it can bring to their firms.
Another important source of information about AI for lawyers is AI-related events. Re-
search has shown that as many as 59.3% of lawyers use such events to learn more about
AI and its applications in the legal industry. Conferences, seminars and workshops are
ideal opportunities to gain knowledge from AI experts. At these events, lawyers have
the opportunity to listen to presentations, participate in panel discussions and ask ques-
tions, allowing them to better understand the topic and be able to apply AI to their work.
Surprisingly few lawyers use the academic world to gain knowledge about AI. Only a little over 30% use academic conferences and publications, and even fewer use bar events (29%) and law schools (14%). The Internet remains a marginal source of information, aside from LinkedIn, which accounts for about 33% of indications.
Analysis of the results showed that the most frequently indicated areas for imple-
menting AI-based tools are document automation, which was indicated by 39%
of respondents, and legal research, which was indicated by 34% of respondents.
Respondents are least likely to indicate the use of AI-based tools for managing
intellectual property, predicting litigation outcomes and risks, compliance and risk
management, and dispute resolution.
The statistical models also revealed some correlations between respondents' answers and the characteristics of their companies. Virtually all e-discovery choices were made by US-based companies. In addition, the largest compa-
nies with 1,000 or more employees were significantly more likely to choose contract
analysis, while they were least likely to choose information retrieval, compared to
other size groups. In contrast, the information retrieval option was typically chosen
by companies with 40 or fewer employees.
The smallest companies were also significantly more likely to choose legal research in the context of implementing AI tools.
The most important conclusion, however, is that there is still no dominant AI technology used by law firms. This indicates the immaturity of the market and the huge space that remains for technology companies to develop.
We asked companies that use AI tools how long they have been using them. The vast majority (59.2%) have been using such tools for less than a year. The results also showed that 18 companies (17.5%) have been using AI for 1-2 years, and another 18 (17.5%) for 2-5 years. A surprisingly small number, only 6 (5.8%), indicated that they have been using these solutions for more than five years.
Companies that use the tools longer are primarily larger companies with 100 or
more employees. The effect is even stronger for companies with 1,000 or more
employees. Smaller companies, especially those with 1 to 40 employees, were less
likely to use AI tools for a long time. Other company characteristics, such as indus-
try or geographic location, had no significant effect on the length of use of AI tools.
The answers to this question lead to several conclusions. First, they indicate that
the explosion of generative AI has influenced the spread of artificial intelligence. It
is the spread of this technology over the past year that has prompted law firms to
adopt AI in their operations. Secondly, they confirm that teams of a few dozen are
already successfully implementing AI-based tools. Finally, the results of this survey
suggest that larger firms have more financial capacity and resources to invest in
AI-based technologies over the long term, and in these firms, the use of AI tools
appears to be more established and widespread.
A survey of law firms reveals that these firms are approaching the implementation of artificial intelligence (AI) with caution and attention. According to the results, 58% of firms have chosen to pilot AI solutions before full adoption, which indicates their desire to test and understand the potential benefits and risks of the technology. Additionally, 50% of companies are monitoring the implementation of these tools very closely. Interestingly, however, half of the companies are leaving the implementation of artificial intelligence to a bottom-up revolution driven by individual lawyers.
Analyzing the results by firm size, it can be seen that the larger the firm, the higher the percentage of "more restrictive" responses. In small firms (fewer than 40 employees), 38.3% allowed lawyers to freely choose their technology, while in firms with 100 or more employees, such answers were not given even once. In the largest firms (1,000 employees or more), as many as 23 out of 25 (92%) responses were "technology must be pre-approved," while in the other size groups this response occurred in roughly 23% to 48% of cases. This suggests that larger companies tend to limit lawyers' freedom to choose technology.
The survey also found that 43% of companies reported having an AI innovation department. The median size of such a department was 2-3 people. Most firms have departments with 1-2 employees, indicating that these are not separate units but rather individual specialists working on technology within the firm. This
suggests that most law firms prefer a flexible approach to AI innovation, relying
on individual experts who are responsible for developing and implementing the
technology. However, they recognize the challenges and benefits of AI and are
investing in it.
Several conclusions can be drawn from the above results. First, law firms are taking
cautious steps in the AI implementation process, starting with a pilot or trial pe-
riod. Second, larger firms are more willing to impose restrictions on lawyers’ choice
of technology. Third, most companies prefer a flexible approach to AI innovation,
relying on individual specialists. Finally, the increasing employment of dedicated AI specialists indicates the growing importance of this technology in the legal sector.
The impact of artificial intelligence (AI) on the labor market is a research topic attracting the attention of many experts. Two reports published in June 2023 are worth citing here: "AI at Work" by the Boston Consulting Group and "The economic potential of generative AI: The next productivity frontier" by McKinsey & Company. Both analyze this impact on various aspects of lawyers' work.
According to the AI at Work report, employees are now more optimistic about the
impact of artificial intelligence on their work compared to how they viewed it five
years ago. In particular, generative AI seems to be seen as a tool that will save time
and promote innovation in various legal roles. However, the level of excitement
varies by seniority and country. Those at the top of the organization’s hierarchy are
more positive about the technology, while frontline employees express more con-
cern. In addition, employees are concerned that companies are not taking steps to implement AI responsibly. This includes issues related to the implementation of procedures and regulations in this regard, as well as insufficient attention to upgrading the skills of staff unprepared to work with AI. Two figures are worth quoting: 36% of employees say their current job will be replaced by AI, and 86% say they will require additional training and upskilling as AI changes their jobs.
The McKinsey & Company report, meanwhile, focuses on the potential impact of generative artificial intelligence on knowledge work, particularly on decision-making and collaboration activities that previously had the lowest automation potential. Previous generations of automation technologies were effective at automating data management tasks, such as data collection and processing. Generative artificial intelligence, however, with its ability to understand and use natural language, increases the potential to automate precisely these knowledge-work activities.
The McKinsey & Company report also shows that many job activities that involve communication, supervision, documentation and general human interaction have the potential to be automated using generative artificial intelligence.
The survey conducted by our team also looked at the impact of AI on the legal labor market. We did not focus on this issue, but it clearly resonated with several
questions. When we asked lawyers about the biggest challenges in the era of
artificial intelligence, we got a range of responses, among which were legal is-
sues (80.2%), privacy and security (66.8%), and AI accuracy and reliability (63.9%).
However, AI’s impact on the labor market, i.e., layoffs of workers or changes in
job roles, was selected as the least challenging by only 11.4% of respondents.
This suggests that lawyers are not concerned about AI’s significant impact on
the labor market.
V. What do you perceive as the top challenges facing lawyers in the age of AI?
Table 22. Frequency table for multiple response question about lawyers’ top chal-
lenges in the age of AI
Second, paralegal tasks such as data collection and processing will be automated.
According to 53.2% of respondents, lawyers will need to retrain and adapt to these
changes, which may affect their current roles.
The survey shows that lawyers are not afraid of the impact of artificial intelligence
on the job market. They see the need to retrain and adapt to new tools and develop
AI-related specialties. However, the labor market is not considered a major chal-
lenge in the context of the introduction of artificial intelligence. Lawyers see AI as
an opportunity to improve efficiency and as a tool that can streamline their work.
Responses to the question, “What are your predictions for the impact of artificial
intelligence on the legal industry in the next 1 and 3 years?” (n = 201)
We asked lawyers about the challenges to be faced in implementing AI. The most
frequently cited challenges were legal issues (legal liability and regulation - 80.2%
of respondents chose this answer), privacy and security (66.8%), and the accuracy
and reliability of artificial intelligence (63.9%). These responses were selected sig-
nificantly more often than issues such as ethical issues (29.7%), the need to adapt
to AI (22%), lack of explainability (15.3%) and the labor market issues already described (11.4%). Interestingly, answers were similar regardless of the size and location of the company surveyed, and the differences across statistical groups were small.
Lawyers are not afraid of new technologies. When asked about the opportunities
associated with new technologies, they see the great potential of artificial intel-
ligence as the answer to future challenges.
When asked how they foresee the impact of AI on lawyer work in the next 3 years,
more than half indicate that tools using artificial intelligence will become an impor-
tant part of lawyers’ workflows (53.2%). The same number of responses indicate that
paralegals’ tasks will be automated during this period (53.2%). Half of the lawyers are confident that there will be widespread adoption of artificial intelligence in various areas of legal practice (49.8%).
A significant portion of those responding also indicates that legal and ethical de-
bates about artificial intelligence and its role in the legal world (45.3%), the afore-
mentioned demand for lawyers familiar with AI (38.8%), and innovations related to
AI implementation (33.8%) await us within 3 years.
Second, lawyers feel they need additional education and training. Such a need for
both the legal and non-lawyer teams was seen by 52.2%. Small companies and the
largest companies indicated this most often.
More than 42% of those surveyed call for improved security and privacy/data protection in the AI-related sphere. The least frequently indicated options are investment in AI research and development (29.9%) and cooperation with technology partners (38.8%). This leads to the conclusion that most law firms expect to receive off-the-shelf solutions, although almost a third of firms are willing to invest in new technologies. Even as the least frequent choice, this is still a very sizable share. It should come as no surprise that this option was indicated least often by small firms.
Glossary:
Early and late scenarios: The early and late scenarios are the extreme scenarios of our job automation model. The “earliest” scenario adjusts all parameters to the extreme probabilities, resulting in faster development and implementation of automation, while the “latest” scenario adjusts all parameters in the opposite direction. Reality is likely to be somewhere in between these two scenarios.
Labor productivity: Labor productivity is the ratio of GDP to the total number of hours worked in the economy. Increases in labor productivity are due to increases in the amount of capital available to each worker, the education and experience of the workforce, and technological improvements.
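The glossary definition of labor productivity corresponds to the simple ratio:

```latex
\[
\text{Labor productivity} = \frac{\text{GDP}}{\text{Total hours worked in the economy}}
\]
```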
It has long been emphasized that enhanced communication between legal businesses interested in or already implementing computational solutions, including AI systems, and the relevant research communities is highly needed. In this connection, we have invited researchers specializing in the problems arising at the
intersection of computational technology on the one hand and legal theory and
practice on the other hand, in particular in Legal Informatics, Information Tech-
nology Law, and, specifically, the area of AI and Law, which focuses on the theo-
retical aspects of and practical development of intelligent tools supporting the
performance of legal tasks. The latter community has been active since the 1980s
and accumulated broad expertise in the application of manifold formal models and
computational technologies in legal contexts - to mention legal expert systems,
rule- and case-based models of legal reasoning, models representing and support-
ing legal argumentation in general, ontologies for law, negotiation support systems
as well as – more recently – Machine Learning-enhanced models for predictive jus-
tice, legal text analytics, summarization of legal documents or generation of legally
relevant text with Large Language Models. The activities of this research community
are promoted on a global level by the International Association for Artificial Intelligence and Law (iaail.org), under the auspices of which the International Conference on Artificial Intelligence and Law (ICAIL) is organized every two years (the last two editions were the 19th, in Braga, Portugal, in 2023, and the 18th, held entirely online due to the COVID-19 pandemic, in Sao Paulo, Brazil, in 2021). The conference is an in-cooperation event with ACM-SIGAI, which publishes the proceedings,
and with the AAAI (Association for the Advancement of Artificial Intelligence). An-
other important organization is the JURIX Foundation for Legal Knowledge Based
Systems (jurix.nl) incorporated in the Netherlands with which the series of JURIX
– the International Conference on Legal Knowledge and Information Systems, or-
ganized annually, with proceedings published by the IOS Press, is affiliated. The
last two editions occurred in Saarbrücken, Germany (the 35th edition, in 2022) and
Vilnius, Lithuania (the 34th, in 2021). While neither the IAAIL nor JURIX Foundation
coordinates research as such, the conferences organized by them enable the com-
munity to meet on a regular basis and exchange ideas and criticism. Journal-level
research of the community is publicized principally in the Artificial Intelligence and
Law journal, which recently marked its 30th anniversary, documented by a special issue comprising four extensive co-authored overview papers devoted to particular decades of the research’s evolution and, additionally, topical overviews (Governatori et al., 2022).
This part of the report is, therefore, designed to turn the attention of the legal busi-
ness to the opportunities which may result from the increased communication of
the sector with the abovementioned expert communities, taking into account their
differentiated, although to an extent overlapping, scopes of interest and expertise.
The invited experts were asked to prepare brief commentaries on the survey re-
sults. They were presented with an executive summary of the statistical analysis
of results and the full text of the statistical report. We did not impose any specific
structure on commentaries, but, in order to attain a degree of uniformity, we sug-
gested the following overall approach:
The first part may comprise the reaction to the findings of the survey. Are the results
conforming to your expectations, or do you find some of them surprising? How do
you assess the overall state of implementation of AI in the surveyed law firms? Do
you have any commentary with regard to the differences associated with the loca-
tion of the law firm, its size, or area of expertise? What is your opinion concerning
how law firms address the risks and benefits of AI in the workplace? What is your
opinion about their expressed preferences and needs concerning the technology?
The second part of the commentary could contain, for instance, your predictions concerning the applications of AI technology in legal practice and the associated
risks and potential benefits, as well as recommendations related to what should
be done (or avoided) in the coming years in connection with these processes. For
instance, you may indicate the existence of tools and projects addressing the ex-
pressed needs of the law firms, the expected progress in relevant scientific fields
(and associated implementation works), the need for initiation of dialogue between
certain stakeholders and platforms for such a dialogue, development of standards
and good practices etc. Please do feel free to also address different topics following
your area of expertise.
The majority of experts concur that the integration of AI in legal firms is apparently no longer in its infancy but rather is becoming entrenched in standard practice.
This survey result is often perceived as, to a degree, surprising. A substantial per-
centage, around 50.74% of legal entities, have embraced AI technologies in some
form, with another 11.82% actively exploring its possibilities, painting a picture of a
sector that’s proactively engaging with technological advancements.
Medvedeva’s observation adds a layer of nuance to this narrative. While the sur-
vey points to about 50% of law firms harnessing AI, this figure could be underrep-
resentative. It’s plausible that many legal firms employ AI-powered search tools
without categorizing them as ‘AI’ due to a deep integration or perhaps a misunder-
standing of the technology. This disconnect could also be attributed to the survey’s
framing or the respondent’s interpretation, suggesting that AI’s actual prevalence
might be even more widespread than indicated.
The implementation of AI in law firms brings forward both opportunities and chal-
lenges. There are significant concerns about experimenting with new AI solutions,
where Costantini highlights that jumping into new technologies without proper
testing can lead to unexpected problems and significant financial implications.
Furthermore, there is a surprising degree of autonomy given to individual lawyers
in some firms, posing potential risks to client data security and the overall quality
of work.
Ebers delves into the legal framework, discussing the European AI Act’s role in
regulating AI. The Act differentiates between high-risk and low-risk systems, im-
posing more stringent requirements on the former. Notably, the Act requires that
AI system providers inform users when they are interacting with an AI tool, but not
necessarily the end recipients of its outcomes. The debate on transparency obliga-
tions is crucial, especially when considering the trust between clients and lawyers.
Bias and transparency issues arise when dealing with AI, as pointed out by Ghosh.
AI systems, being trained on historical data, can inherit biases, and the lack of clar-
ity on their workings makes them hard to understand and trust completely. Espe-
cially for critical human decision-making tasks, these issues become paramount.
One interesting insight noted by Harašta is that many companies dive head-first
into implementing AI solutions without any trial period. This practice is particu-
larly prevalent in smaller firms, potentially due to the lack of internal processes.
This hasty implementation is ironic: many believe that AI inherently brings efficiency, but without understanding current processes it is hard to truly gauge improvements. Zeleznikow also observes that while 57.8% of companies claim to have
trialed AI solutions before full adoption, a notable 42.2% have skipped this prelimi-
nary step. This suggests that a large percentage of firms are readily adopting AI
without prior testing, indicating a rapid and confident embrace of the technology.
Libal emphasizes the concerns related to legal issues and the trust placed in AI
tools, suggesting that many may not recognize the true risks these tools pose, particularly in terms of liabilities and insurance. Quite to the contrary, Savelka notes the inherently risk-averse nature of the legal profession, which mirrors the cautious approach taken towards AI. However, the potential legal liabilities related to AI decisions, especially when they affect clients, cannot be overstated.
Walker finds it intriguing that while many law firms value the importance of AI’s
legal issues, there seems to be limited attention given to quality assurance, vendor
reputation, and training. This raises questions about how firms are ensuring the ef-
fectiveness and reliability of their AI tools.
In conclusion, while AI offers transformative potential for law firms, there is a pressing need to address concerns about its implementation, chief among them issues of transparency, bias, and quality assurance.
One of the most pronounced concerns for law firms is the potential liabilities as-
sociated with AI use, as highlighted by Costantini. A staggering 80.2% are preoc-
cupied with this, yet only a minor 15.3% appear attentive to the technical causes
behind these liabilities, such as the lack of explainability. This potential for legal
complications is further compounded when we consider the forthcoming regula-
tions, such as the European AI Act proposed in 2021. As Ebers points out, this act,
which adopts a risk-based approach, makes significant distinctions between high-
risk and low-risk AI systems. While the Act has provisions that ensure AI transpar-
ency for users, it seems to have a loophole. The end clients, the very individuals af-
fected by AI’s decisions, might remain in the dark about the machine’s involvement
in their cases.
Libal brings to light an intriguing observation: the seeming trust legal firms place
in AI tools. Over 60% of the firms surveyed expressed concerns about privacy, se-
curity, and accuracy. In contrast, explainability lagged behind at less than 16%. This
disparity suggests that firms may be more inclined to trust the outputs of AI with-
out delving deeply into the mechanics behind these outputs. Such an inclination
resonates with Zeleznikow’s observation that academics and researchers appear
to be more engrossed with the intricacies of explainability than their practitioner
counterparts.
Zurek takes this further by proposing that the difference in perception of explain-
ability between researchers and legal firms might arise from their inherent needs.
Legal firms predominantly seek AI’s assistance for more repetitive and mundane
tasks and not necessarily for intricate legal reasoning. This perspective frames AI
as an augmentation of traditional legal activities rather than a revolutionary force
set to redefine the profession.
Savelka raises ethical flags, cautioning about the dangers of blind trust in AI predic-
tions. Biased data and non-transparent AI tools can lead to skewed legal strategies,
posing ethical dilemmas. Additionally, the advent of AI also brings with it fears of
job displacement, especially for roles traditionally occupied by junior legal profes-
sionals.
However, as van Dijck elucidates, caution is urged: the line between repetitive and
mundane tasks may not be so clear-cut. Automation’s capability to handle tasks
previously relying on human judgment raises intricate questions. Harašta’s commentary provides a nuanced perspective, addressing the imminent impacts of AI on paralegal tasks: while there exists a robust appeal for automating repetitive work, its consequences for junior and paralegal roles remain to be seen.
The recurring themes outlined above do not, of course, exhaust the diversity and scope of opinions, observations and criticism provided by the researchers’ comments. We have noted, however, that these topics constitute a set of core issues
perceived by the majority of the commentators, and may, therefore, provide a con-
venient starting point for a detail-oriented lecture of the original commentaries,
which are presented below, in alphabetical order of the authors’ names.
Importantly, the references to the numbers of questions and to the tables are
made to the full statistical analysis, available at [link].
Analysis
The report is remarkable both for the number of firms involved and for the out-
comes.
A first remark can be made regarding ‘repetitive tasks’ (question 4.1.1.), which account for practically one-third of respondents’ workload. This is a well-known problem, not only in the legal sector, and it evidently constitutes a waste of intellectual resources and time.
It is significant that this figure is higher in law firms that practise litigation, because it means that disputes are not only a social cost but also an internal expense for legal practices.
In this sense, the impact of AI would have a dual benefit, because it directly de-
creases internal costs and indirectly lowers litigation costs, yet it does not seem so
simple.
Going into more detail (question 4.1.1.2), respondents believe that the impact of
AI would most benefit three areas: legal research, document review, and contract
drafting. Of these, however, only two are directly related to judicial activity, while the third essentially concerns consultancy. In fact, it seems to be a ‘conservative’ answer, because AI integrates database research tools that have been known for decades; it is only an improvement of existing tools, not a breakthrough.
It is useful to relate this figure to the answers given by law firms that already adopt-
ed AI (question 4.3.1.1.). In fact, the data are overlapping (document automation
39.0%, legal research 34.0%, information retrieval 24%). I assume that the difference
also depends on the technological solutions adopted and the economic assess-
ments that were made based on what was available on the market.
It is very interesting that AI has already been adopted (question 4.1.2) by practi-
cally half of the respondents (50.74%), albeit at different levels. If one considers this
figure together with those who are exploring this technology (11.82%), it emerges
that the use of AI in law firms is in fact an established reality. This emerges from the
question (4.1.3), where it can be seen that adding up the percentages of openness
to AI (30.54% + 37.93%) shows that most are inclined to adopt this technology.
The figure for openness to AI (question 4.1.3) could be explained by the fact that
smaller firms are quicker to adapt to change, while larger ones can organise them-
selves better. This suggests that AI could be a determining factor in the evolution
of the legal professions market, given the aggregation of small and medium-sized
firms into larger conglomerates.
The answer to question 4.3.1.4 could mean that, when it occurs, the adoption of AI
within a firm tends to be pervasive and is reflected throughout the business. The
lower percentage referring to clients could mean that the environment in which the
firm operates is not technologically advanced.
The answer to question 4.3.1.5. is also interesting. The fact that there is a large per-
centage of law firms that leave it up to the individual lawyer to self-organise is a risk
to the security of the client’s data and to the quality of the work performed. The fact
is that there seems to be awareness of this problem, as can be seen from question
5.1.3, to which respondents indicated guidelines (61.7%) and training programmes
(29.9%) as priorities. This means that there is awareness of the underlying issues
related to the introduction of this technology, but that in fact widespread and ho-
mogeneous knowledge can only be had in large practices that are able to structure
themselves in this way. This answer can be linked to question 4.3.1.6, concerning the
size of IT departments. In fact, few of the respondents have dedicated structures,
and in any case, the amount of personnel is generally very small (question 4.3.1.7).
The answer to the question on the future of the legal profession is interesting (ques-
tion 5.1.1) because it reveals a somewhat short-sighted attitude. First of all, one is
concerned about possible liabilities (80.2%) but does not pay attention to possible
technical causes (lack of explainability, 15.3%). But above all, among the problems
of AI, there is limited sensitivity to the impact on employment (labor market, only
15.8%). Respondents seem not to evaluate the impact on their own position. Yet,
in another question (question 5.1.2), ‘paralegal’ work is expected to be automated
soon (53.2%). In other words, there is a push towards process automation – and a
great excitement for its achievements – but no awareness of the indirect conse-
quences of it.
From the report I analysed, I believe that the advent of AI in law firms will have
a similar impact. Although there is a certain awareness of the importance of the
phenomenon, and also of the propensity to adopt these technologies, there is in-
evitably no clear perception of its actual scope.
From a certain point of view, the expected benefits from AI can also be achieved
without it. I refer to the management of ‘repetitive tasks’. I think that the optimisa-
tion of internal processes can be improved even without the use of AI - not all of
them, of course - and on the other hand, some activities, perceived as ‘bureaucratic’
or ‘time wasting’, are unavoidable within complex organisations and therefore dif-
ficult to eliminate. On the other hand, from personal experience, I cannot disregard
the fact that professional skills are accumulated through the exercise of activities
that are perceived as ‘repetitive’ only when one has reached a higher professional
level. Limiting too much - or even suppressing - certain tasks through automation
means interfering in the gradualness of the individual jurist’s professional growth.
This also poses the problem of training the jurists of the future and brings up the
problem of ‘automation complacency’.
A further assessment can be made with reference to the size of law firms. In fact,
the advent of AI is a further factor in the split between small law firms - possibly
specialized ‘boutique firms’ even in LegalTech issues - and large law firms. Medi-
um-sized firms cannot keep up with innovation because they are not agile enough
or sufficiently organised. I do not know whether this process actually benefits the
community.
On a further note, it is revealing that respondents are aware of the need for regulation, or at least guidance, on how to work using AI. However, this kind of regulation cannot be left to individual law firms; it should be the subject of consideration by the whole legal community, possibly in a worldwide discussion. From this point of view, the ABA’s initiative to set up a working group on the topic seems appropriate: https://www.americanbar.org/groups/leadership/office_of_the_president/artificial-intelligence/.
PART I
The survey findings indicate that approximately 10-20% of tasks within law firms
are considered repetitive. While this statistic might appear consistent with initial
expectations, repetitive tasks may not equate to mundane ones; the interplay
between human judgment and AI capabilities in handling these tasks warrants
further exploration.
State of AI Implementation
The report underscores a pivotal insight: nearly half of the respondents have
incorporated AI into their operations. However, the nature of this AI implementation
warrants closer examination.
The emphasis on data security and privacy in evaluating AI technologies aligns with
broader industry concerns. This cautious approach reflects a conscientious
recognition of AI’s potential implications for sensitive legal information. The legal
sector’s prioritization of data integrity is not only a reflection of its responsibility to
clients but also a sign of its commitment to upholding the ethical foundations of
the profession.
I would have been interested in more US / non-US comparisons. Finally, I find it
difficult to draw conclusions from the various regressions due to the limited
number of predictors.
The recent survey findings casting a spotlight on the significance of security and
privacy considerations in AI implementation within the legal domain underscore a
critical dimension of the technology’s integration. As law firms increasingly turn to
AI to enhance efficiency and decision-making, the heightened emphasis on safe-
guarding data integrity and privacy reflects the legal sector’s recognition of its
ethical responsibilities and the delicate nature of legal information.
AI’s transformative potential in the legal landscape extends beyond mere effi-
ciency gains; it can reshape how legal professionals approach research, document
analysis, and even predictive legal outcomes. However, the knowledge that vast
amounts of sensitive data underpin these advancements magnifies the importance
of data security. Legal practitioners are entrusted with their clients’ confidential
information and have the ethical duty to uphold privacy rights. As such, a proactive
approach to integrating robust security measures is non-negotiable.
In an era where data breaches and privacy violations attract headlines and
regulatory scrutiny, the legal community’s commitment to safeguarding sensitive
information appears to be both an ethical and a legal imperative. Integrating
AI in the legal sector necessitates a profound understanding of how the algorithms
operate and of their potential implications for privacy and data integrity.
AI tools help to increase efficiency in repetitive tasks. Nearly one third of a
law firm’s work consists of such tasks, which reveals a high potential for automation.
In contrast to a German study (How Legal Technology Will Change the Business
of Law, January 2016), which found that fewer law firms than expected had
implemented legal technology, this study found that about half of the respondents
had already implemented AI tools or plan to do so. Nevertheless, one third is not
planning to implement them at all.
It is not surprising that smaller firms, which work on more general cases and
have more repetitive tasks, are implementing AI tools. However, the finding
is surprising with regard to the cited German study, which saw small firms at the
highest risk of being displaced by legal technology, since they handle more general
cases and standardized tasks that are now being replaced by technology. The
study recommended that small firms specialize and implement AI tools to increase
their productivity without increasing costs.
In addition, it is not surprising that big firms have already implemented AI tools
and are very open to their use. Their clients require more transparency on fees and
seamless collaboration with their in-house staff. They also demand more work
for less money. Law firms whose business model is still based on high hourly
rates were thus forced to implement legal technology in order to reduce costs
for the client. Furthermore, those firms have more capacity, both financially and
in manpower, to develop, test and implement new technologies. As the cited
German study shows, they are more likely to invest in legal-tech start-ups or
to develop their own solutions.
The tasks with the highest potential for AI are legal research, document review
and contract drafting, case management and compliance monitoring. The study
shows that small firms are not replacing their document work with AI tools but
use such tools to enhance legal research. In contrast, big firms use AI tools mostly
for contract analysis, which makes sense considering that they mostly deal with
non-standard, complex cases.
As the German study also suggests, the difference between the US market and
other markets might be based on the characteristics of the different legal systems.
In common law, the specific rules of document disclosure in discovery lead to a
high volume of document review and thus to a correspondingly high demand for
AI tools in this area.
The use of AI tools shows that the roles in law firms will change, which corresponds
to the findings of the German study by the Bucerius Law School and Boston
Consulting Group from 2016. As repetitive tasks are automated, firms will need
less general supportive staff, fewer junior lawyers and fewer generalists. The work
of young lawyers in particular consists of 30-50% of tasks that can be automated1.
They are mostly trained on due diligence, document review, document generation
and legal research. However, experienced lawyers will remain important for dispute
outcome prediction, compliance and risk management, and IP management,
tasks for which, according to the findings of this study, firms rarely use AI tools.
The role of the lawyer shifts toward that of a project manager. In addition, the
profession will require more technical skills, since the lawyer has to understand
the tools used. The German study suggests that even the role of general supportive
staff will change, requiring less legal education and more technical and
management skills.
The current pyramid structure, with a high ratio of junior lawyers per partner, will
convert into a rocket structure with a low ratio of junior lawyers per partner. The
German study estimates that the ratio of junior lawyers to partners might decline
by up to three quarters. The law firm will, however, be supplemented by non-legal
staff such as project managers and legal technicians.
The automation of entry-level jobs forces law firms to find a way to train their young
lawyers so that they gain the required project management skills and technical
literacy. In addition, law schools must supplement their curricula in order to supply
the legal market with lawyers who have the knowledge and skills to succeed in their
new roles. They need to provide courses in project management and legal
technology. This should comprise classes in case management, database
management, statistics, analytics and digital communication2.
1 How Legal Technology Will Change the Business of Law, January 2016, available at: https://www.law-school.de/fileadmin/content/lawschool.de/de/units/abt_education/Studienseite/Studien/Legal_Tech_Report_2016.pdf.
2 How Legal Technology Will Change the Business of Law, January 2016, available at: https://www.law-school.de/fileadmin/content/lawschool.de/de/units/abt_education/Studienseite/Studien/Legal_Tech_Report_2016.pdf.
Liability for the use of AI tools and their regulation were overwhelmingly named as
the greatest challenges for law firms when considering implementing such tools.
A regulatory framework for the use of legal technology is an essential step to
create legal certainty and reduce the risks of deploying AI tools. The European
AI Act, proposed in April 2021 and currently entering the trilogue, might be a step
in the right direction.
The European AI Act regulates AI systems using a risk-based approach. The Act
imposes regulatory burdens only when the AI system is likely to pose high risks to
fundamental rights and safety. High-risk systems have to comply with high-quality
data requirements, documentation and traceability requirements, transparency
obligations, the need for human oversight, and accuracy and robustness
requirements. In contrast, low-risk systems face only limited transparency
obligations. The AI Act Draft qualifies as high-risk only those legal AI systems
which are used by judicial authorities. This scope is broadened by the Parliament’s
report to administrative authorities and to use on behalf of those authorities. AI
tools implemented by law firms will therefore qualify as high-risk systems under
the amendments suggested by the Parliament only if the firms use them on behalf
of public authorities. Most of the AI tools named in the survey are low-risk systems.
The provider of an AI system which is intended to interact with natural persons
shall ensure that it is designed and developed in such a way that natural persons
are informed that they are interacting with an AI system, unless this is obvious
from the circumstances and the context of use (Art. 52 AI Act Draft). The AI Act
thus only binds the provider of AI tools to flag the use of AI to the natural persons
using the systems. So, if a lawyer uses a document generator or an AI tool for legal
research, the provider is only obliged to inform the lawyer about the use of an AI
system. The client, who is actually affected by the outcome of the deployment of
the AI system, does not need to be informed. However, as the survey reveals, AI
systems which are directly used for client collaboration are and will remain rare.
Nevertheless, the EU should consider broadening the transparency obligation so
that natural persons who become subject to an AI-supported decision have to be
informed. Considering the mutual trust between client and lawyer, it should be
considered malpractice if the lawyer does not disclose the use of an AI system
where the outcome of the case is significantly influenced by its deployment. This
might not be true for AI tools for legal research. Also, systems that are used to
organize workflows or to manage cases or compliance merely affect the internal
organization of the firm and not the outcome for the client. But tools for document
analysis or a document generator contain the potential to significantly influence
the outcome of a case for the client.
Moreover, during the legislative procedure the European Council introduced new
rules regarding general purpose AI systems. A general purpose AI system is an
AI system that, irrespective of how it is placed on the market or put into service,
including as open source software, is intended by the provider to perform
generally applicable functions such as image and speech recognition, audio and
video generation, pattern detection, question answering, translation and others.
Those rules are a reaction to the recent hype around large language models like
GPT-3 and their wide range of possible fields of use. One famous incident
illustrates the risks of those systems when they are used in a legal context: an
American lawyer used ChatGPT for his case research. Unfortunately, the cases
ChatGPT cited were made up; they never existed. The system does not check the
factual correctness of its texts, and the user is not able to research which
documents it was trained on in order to assess its legal competence. The EU
Council suggests that those systems be treated like high-risk AI systems unless
the provider explicitly excludes their use in high-risk AI systems. Thus, the use of
those systems for the applications named in the survey would only be
accompanied by transparency and documentation requirements.
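As a rough illustration of the tiered logic described above, the draft Act's classification can be sketched as a small decision function. The function name and the obligation labels below are my own shorthand for the draft's categories, not terms taken from the regulation itself; this is a sketch of the commentary's reading of the draft, not a legal tool.

```python
# Illustrative sketch of the draft AI Act's risk tiers as described above.
# Tier names and obligation labels are shorthand, not regulatory terms.

HIGH_RISK_OBLIGATIONS = [
    "high-quality data",
    "documentation and traceability",
    "transparency",
    "human oversight",
    "accuracy and robustness",
]
LOW_RISK_OBLIGATIONS = ["limited transparency (disclose AI interaction)"]


def draft_act_obligations(judicial_use, public_authority_use,
                          general_purpose_unexcluded):
    """Map a legal AI tool to the draft's obligations: use by judicial or
    (per the Parliament's report) administrative authorities is high-risk,
    as are general purpose systems whose provider has not excluded
    high-risk use (per the Council's proposal); the rest is low-risk."""
    if judicial_use or public_authority_use or general_purpose_unexcluded:
        return HIGH_RISK_OBLIGATIONS
    return LOW_RISK_OBLIGATIONS


# A law firm's in-house document generator used only for private client
# work: low-risk under the draft, so only the transparency duty applies.
print(draft_act_obligations(False, False, False))
```

On this reading, most tools named in the survey fall through to the low-risk branch, which is why the commentary argues the client-facing transparency gap matters.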
Conclusion
As the survey shows, the work of a law firm has a lot of potential for automation by AI
tools. The survey also shows that most law firms have already recognized the
possibilities and implemented AI tools for the most repetitive tasks. This
technological change answers the clients’ demand for higher productivity at lower
cost. However, law firms must be aware that the role of the lawyer is changing and
demands more project management and technical skills. In addition, the tasks that
are being automated were, up to now, executed by junior lawyers and paralegals.
Thus, the structure of the entity is going to change, requiring more non-legal staff.
The education of law students and the training of young lawyers have to focus
more on acquiring management and technical skills. Another key factor for the
successful implementation of AI tools in law firms is a regulatory framework which
will provide legal certainty regarding liability issues. Unfortunately, for most of the
systems mentioned in the survey, the AI Act imposes merely the obligation to flag
the use of AI systems when interacting with humans. However, the sensitive
relationship between lawyer and client requires a regulatory framework for the use
of AI systems. There is still a great need for action by legislators.
The report represents a wide and very relevant analysis of the appeal and penetra-
tion of AI technologies in the world of law firms.
The results of the survey confirm expectations about law firms’ interest in
applications of AI technologies in the legal domain, especially among large
companies, which can invest more resources to train and qualify their personnel.
The state of play of AI technologies in the surveyed law firms of all sizes reveals a
good level of implementation of such tools, in particular in the fields of document
automation and legal research.
At the same time, less penetration of AI tools has been found in domains like
intellectual property management, dispute outcome and risk prediction,
compliance and risk management, as well as dispute resolution. These results
show that AI tools are especially used by law firms for mundane or repetitive
tasks, while more advanced applications of legal reasoning and inference, widely
discussed in the scientific literature (in terms of legal compliance or predictive
justice services), seem still under-represented.
Moreover, the survey reveals that the largest companies require employees to be
pre-authorized to use AI tools: this reveals how sensitive the use of AI is felt to be
by large companies.
The survey revealed that AI applications are considered important by law firms,
firstly to streamline repetitive tasks and improve efficiency, as well as to reduce
costs.
The interviewed companies seem not to be completely aware of the potential of AI
tools in the legal domain, at least as far as AI and Law research can offer, but they
properly underlined the “ethical and legal debates” that the use of such
technologies might raise.
For these reasons, the need is clearly underlined for a policy and related
guidelines able to provide a governance framework for the application of AI tools
in legal matters.
Recently, the European Commission addressed such challenges in the “White
Paper On Artificial Intelligence. A European approach to excellence and trust”.
The White Paper aims to promote AI technologies in the public and private
sectors. The document is conceived around four main pillars: creating excellence
and testing centres that can combine European, national and private investments;
fostering new public-private partnerships in AI, data and robotics; promoting the
adoption of AI by the public sector; and creating an Ecosystem of Trust, i.e. a
regulatory framework for AI (e.g. data protection, privacy, non-discrimination).
In this framework, law firms can play an important role by providing use cases and
feedback about the usage of AI tools in the legal professions, as well as by
identifying ethical issues which can hamper the application of AI technologies in
the legal domain.
http://cse.iitkgp.ac.in/~saptarshi
The report on “AI in Legal Business” provides a comprehensive glimpse into the
current state of AI integration within the legal sector. The study is divided into two
phases – the first phase involves a survey of 203 law firms and the second one en-
gages researchers worldwide to analyze the key findings on several aspects such
as AI adoption, openness to change, challenges of lawyers, and future predictions.
The sample characteristics section showcases the diversity of the legal industry in
terms of firm size, lawyer count, AI engagement, primary activities, and geographi-
cal location (US / non-US).
As per the report, the legal profession includes a significant portion of mundane
and repetitive tasks, averaging 38% of the workload [as per Table 5 in the Report].
This aligns with our understanding of legal work as described by legal
professionals in India as well. However, companies that specialize in “litigation”
have a higher average of these repetitive tasks, about 14% higher than other
subfields [Section 4.2.2 in the Report], which might be due to the nature of
litigation processes. The fact that nearly 80% of respondents believe AI could
enhance “legal research” [as per Figure 10 in the Report] indicates a growing
recognition of technology’s benefits in making repetitive tasks more efficient.
Interestingly, the location of the law firm does not seem to significantly affect the
distribution of repetitive tasks. However, there are notable differences when it
comes to the size and specialization of firms. Medium-sized companies, those with
11 to 99 employees, stand out by having fewer repetitive tasks on average.
Likewise, firms in which lawyers make up at least 50% of the staff have a lower
average of repetitive tasks [Section 4.2.2].
The smallest companies (1-10 employees) and the largest (100+ employees),
along with companies in the U.S., are more likely to have implemented AI. The
reduced likelihood of medium-sized companies (11-99 employees) adopting AI is
somewhat surprising. This suggests that the relationship between AI adoption and
firm size is not straightforward.
Also, more than half of the surveyed companies have already integrated AI tools
into their operations, and an additional 11.8% are actively exploring AI options.
This suggests that AI implementation is becoming a norm rather than an
exception in the legal sector. The fact that a significant number of companies
(68.5%) are adopting AI in their firms reflects the growing recognition of AI’s
potential [Table 9].
The overall trend of openness to AI indicates a promising path toward harnessing
technology for legal innovation.
Regarding the evaluation of AI solutions, it is not surprising that “data security and
privacy” tops the list, chosen by 79.6% of respondents, which reflects a responsible
approach to adopting new technologies and aligns with the emphasis on
protecting sensitive legal information. As expected, “cost” (75.5%) and “ease of
use” (61.7%) are also prominent factors, implying that firms are carefully
considering the financial implications of AI integration as well as looking for
solutions that can be seamlessly integrated into their workflows. Unexpectedly,
“vendor-related issues” like reputation and support are ranked lower, with only
18.4% and 26% of respondents selecting them, respectively [Table 10]. That US
firms value vendor reputation more, however, might mean they place particular
weight on trusting technology providers [Section 4.2.3].
“Networking” and “AI events” emerge as the top sources for new AI ideas and
solutions, with 68.8% and 59.3% of respondents selecting them, respectively. This
aligns with the understanding that the exchange of ideas and participation in in-
dustry-specific events are common avenues for staying updated on technological
advancements. However, there are notably low preferences for “newsletters” and
“online communities”, with only 4% and 3% of respondents choosing them; this
indicates that firms are prioritizing direct interactions and real-time engagement
with AI developments [Table 11].
Also, firms of different sizes look for AI ideas from different sources. Smaller
companies are more likely to rely on sources like social media (e.g., Twitter) and
blogs. Interestingly, larger firms are inclined to engage with AI events, indicating a
strategic focus on industry events for innovation. “Academic publications” are
favored by medium-sized firms, showcasing a commitment to scholarly resources
[as per Section 4.2.3].
As expected, larger companies (with 100+ employees, and even more so those
with 1000+) tend to have used AI tools for a longer duration compared to smaller
companies, particularly those with 1 to 40 employees [Section 4.3.2].
Many companies (59 out of 103) run pilot programs or trial periods before
implementing AI, which indicates that they are careful about handling the possible
risks that come with adopting it. They use AI for both “internal tasks” and
“client work” (47.6%), which makes sense. But it seems a bit surprising that fewer
companies use AI just for “client work” (14.6%), as it could help improve services for
clients [Tables 14, 15 in the Report].
The substantial majority (59.2%) of companies having used AI-based tools for less
than a year indicates a gradual yet steady transition. At the same time, half of the
companies exhibit a trend towards more cautious technology adoption, with a
notable requirement for pre-approval (49.5%) among their lawyers [Table 16].
Interestingly, smaller firms appear to offer more autonomy to their lawyers in
selecting technology. The presence of AI innovation departments is significant (in
44 out of 103 firms), suggesting a proactive approach to staying at the forefront of
AI advancements, particularly in larger firms. Even though it is a small number, the
presence of dedicated AI positions (in 13 out of 103) shows a commitment to
leveraging specialized expertise for effective AI integration [Tables 18, 19].
As per the report, AI-powered tools are already being used to automate tasks such
as “document automation”, “legal research”, “contract review”, etc. In the coming
years, AI is expected to bring further benefits:
(1) Increased efficiency: AI can automate many of the time-consuming and re-
petitive tasks that lawyers currently perform. This will free up lawyers to focus
on more complex and strategic work.
(2) Improved accuracy: AI can analyze large amounts of data more quickly and
accurately than humans. This can potentially lead to better decision-making
and more favorable outcomes for clients.
(3) Increased access to justice: AI tools can make legal services more affordable
and accessible to people who would not otherwise be able to afford them.
Access to a law practitioner is expensive in many countries around the world. If
an AI system can at least give preliminary guidance to the general public in
simpler legal problems, people would be able to access justice much more
easily than at present.
According to the report, the most preferred AI systems by the respondents were:
“document generators” (84.2%), “document summarization tools” (69.5%), “case
law analytics tools” (63.5%), and “compliance & risk management systems” (59.1%).
Among these, the respondents chose the “automated document generator”
(41.9%) as the most necessary AI tool. This strong preference shows how crucial
efficient document creation is in legal work. These document generators are
highly valued especially in medium and large firms (from 43.9% to 57.1%) and are
more favored by US firms (54.8% vs. 36.2% for non-US firms). On the other hand,
small and non-US firms emphasize “compliance & risk management systems”
[Table 26].
The least chosen AI tools are “negotiation support systems” (21.7%) and “legal
argument assistants” (27.1%). “Case management systems”, by contrast, stand out
as most appealing to firms with 100-999 employees as well as to litigation-focused
firms seeking to handle their cases better, while medium-small firms (11-39
employees) find them relatively less attractive [Figure 28].
However, there are also some risks associated with the use of these AI tools in
legal practice.
In particular, I would like to add some comments regarding the application of Large
Language Models (LLMs) in the legal domain. The integration of LLM-based AI
tools, like ChatGPT, Bard, etc., into the legal industry has gained huge momentum
since 2022. With applications ranging from the drafting of agreements and
contracts to legal research, document automation, and more, LLMs have the
ability to expedite a variety of legal practices. LLMs can rapidly generate
documents and gather information, relieving legal professionals of repetitive and
mundane tasks. Moreover, they can expedite research and case preparation with
their ability to comprehend complex legal texts quickly. However, there are some
important factors to keep in mind: in certain contexts, LLMs also bring up issues
and dangers. For instance, there may be inherent risks in applying LLMs to
judgment prediction. The outputs generated by LLMs, if used unwisely, could
potentially bias human decision-making. The difficulty arises from the fact that
the intrinsic biases of these models are often undisclosed or inadequately
understood. Hence, while LLMs offer invaluable support in tasks like drafting and
research, their application to tasks involving critical human decisions demands
heightened scrutiny.
Here are some additional recommendations for the coming years:
(1) Lawyers should be trained more on how to use AI tools and how to identify and
mitigate bias.
(2) There should be a dialogue between lawyers, technologists, and policymakers
to develop standards and best practices for the use of AI in legal practice.
(3) Common people should be educated about the potential benefits and risks of
AI in legal practice.
(4) AI could also be used to create more personalized legal advice and services.
INTRODUCTION
I will start this commentary with a personal confession: as a rule, I get over-excited
by scientific achievements while at the same time being sceptical about the
real-life impact they might have in the future. The development of artificial
intelligence and its implications for the legal industry is no exception. As a
researcher, I enjoy reading about recent advancements in legal information
retrieval, legal question answering or argument modelling. However, as a lecturer,
I often bore my students by emphasizing the flaws standing in the way of practical
application, frequently related to problematic scalability, unclear contribution to
legal practice or low acceptance of a given technology by professionals.
The confession brings me to the following statement: All who read, teach or re-
search legal tech and related changes in conducting legal practice have strong
opinions on the issue. These opinions are often based on anecdotal evidence from
consultations, research activities, and dialogues (not research interviews!) with
people of similar interests.
The authors of the report – Michał Jackowski and Michał Araszkiewicz – finally
allowed us to confront our opinions with meticulously collected data. Such an
opportunity is rare and must be used to build narratives interpreting the data
vis-à-vis one’s experience. I cannot overstate how important the data are and how
grateful we should be that someone collected them. Only because of the hard
work of the authors can we indulge ourselves in searching for meanings, both in
the current state of play and in future development.
NEW FRIENDS...
While reading the report, I was genuinely surprised by the results related to three
survey questions:
3 Qualitative analyses are not scarce. Unfortunately, the potential for generalization of their results is limited. See e.g., SOUKUPOVÁ, Jana. AI-based Legal Technology: A Critical Assessment of the Current Use of Artificial Intelligence in Legal Practice. Masaryk University Journal of Law and Technology, 2021, vol. 15, no. 2, pp. 279–300.
First, the data suggest artificial intelligence tools are more prevalent in legal prac-
tice than expected. Approximately half of the sample (103 firms) uses AI-based
tools or solutions, with document automation, legal research and information
retrieval being the leading fields of practice where law firms implemented these
tools. Additionally, companies often use their tools for internal organizational tasks
and tasks related to client work.
From my experience of consultations with law firms’ representatives, the urge
to ‘do AI’ is intense. However, representatives often add ‘just not now’ in the
same breath. They often state that ‘others’ (e.g., clients) push them towards being
‘more cutting-edge’ and ‘more innovative’. The pressure is often counterproductive,
as naturally conservative lawyers4 avoid ‘innovation for innovation’s own
sake’ and seek clear added value in implementing AI-based tools or solutions.
The data leads me to believe that some companies are either readier or braver
to embrace the change brought forth by AI. The readiness may be influenced by
general market readiness (especially in the USA), data availability issues (English
vs. under-represented languages), fiercer competition, and pressure to optimise
companies’ activities. Additionally, embracing the change may be motivated by
trying to appear ‘more cutting-edge’ to draw in a specific clientele. Data suggest
that larger companies were significantly more likely to have used AI tools for
longer than smaller companies, which unfortunately sheds little to no light on the
factors behind the decision to implement.
The second surprise concerns the resources or support lawyers require to adopt
and implement AI technologies successfully. Most answers related to internal
AI usage guidelines, training programs and security/privacy improvements. R&D
investment placed dead last. This result shows me that lawyers are not concerned
with the maturity of the existing technology but mainly with safe ways of using it.
To me, this shows awareness of the field and related issues. When talking with
lawyers (either attorneys or judges), they often cite reservations about AI
performance, which they deem either significantly inferior to that of humans
performing the same task or impossible to compare to human performance in a
meaningful way.
The survey shows more mature thinking about AI in the legal industry. We have
technology that may be used meaningfully (especially with the meteoric rise of
Large Language Models). However, we still have to figure out how to implement it
in a meaningful way.
4 See e.g., BROOKS, Chay, Cristian GHERHES and Tim VORLEY. Artificial Intelligence in the Legal Sector: Pressures and Challenges of Transformation. Cambridge Journal of Regions, Economy and Society, 2020, vol. 13, no. 1, pp. 135–152.
The third surprise relates to a question about the type (function) of AI system
lawyers would like to implement under the assumption that it is highly accurate and
safe. Leading answers include a document generator, document verification tool,
document summarization tool, and case law analytics tool. Such results again show
the state of discussion as relatively mature, as these issues are strongly related to
often-repeated lower-level tasks that every lawyer needs to address daily. In my
interpretation of the results, the more complex tools appeared less frequently. I
believe that automation must start from lower-level support tasks to ease the
cognitive load on individual lawyers. Tools allowing more pleasant reading of a
document by highlighting the most-cited parts, or summarizing a case so that
paralegals can assess whether it is worth reading, are essential. If reliable, these
lower-level tools can positively impact technology acceptance by building or
enhancing trust.6 However, when presenting this opinion to attorneys and judges,
I was often confronted with pushback calling for complex supportive technologies
over the automation or support of lower-level tasks. Fortunately for me (and
unfortunately for my audiences), the survey supports my deeply held belief.
Aside from the abovementioned surprises, I want to draw attention to some unsur-
prising results, which, in my opinion, signify issues either preventing AI from being
implemented to its full potential or downplaying some of its potential large-scale
impacts. These come in three areas:
5
For example, judges in the Czech Republic and Slovakia often complain that they are given access to
legal information retrieval tools or databases (because either the court or the Ministry of Justice pays for
licenses) but are left struggling to develop the needed skills without assistance.
6
I extend the argument that ‘knowledge enhances trust’, see BARYSE, Dovile. People’s Attitudes towards
Technologies in Courts. Laws, 2022, vol. 11, no. 5.
Firstly, the data suggest AI-based tools and solutions are often deployed with lit-
tle attention to how these tools benefit law firms. As mentioned above, from my
experience, many firms want to ‘do AI’. However, the desire is often not framed by a
sufficient understanding of firms’ processes. Such an approach does not allow the
technology to be used to its full potential.
The survey suggests that over 40% of companies fully implemented AI solutions
without engaging in a pilot program or trial period. Such a result is hardly surprising,
especially among smaller companies. Mid-size and smaller companies often lack
the staff needed to develop and manage internal processes (see the text above on
developing internal guidelines and staff training, which stems from the same
underlying condition). The initial investment in AI-based tools and solutions is often
significant. Meaningful integration into a firm’s workflow must be an absolute priority.
However, this aspect of legal tech is often overlooked, as both my personal
experience and the survey data suggest.
Such an approach stems from the idea that AI-based technology allows tasks to be
performed more efficiently, which 87% of respondents in the survey believe. I think,
however, that this perception is inaccurate. To become more efficient, one has to
know what is being done, with what investment (in time or money), and at what
current level of efficiency. It is unreasonable to expect an AI-based solution to
automatically enhance performance if performance was never measured before.
Efficiency comes from understanding processes, which small and medium-sized law
firms often struggle with.
Finally, the last unsurprising outcome of the survey is the little attention paid to
the impact of AI-based tools and solutions on the labor market. In the survey, only
11.4% of respondents perceived this as one of the top challenges facing lawyers in
the age of AI. This lack of attention is not surprising, as I encounter it daily when
discussing AI with attorneys, judges or students. As a rule, individuals believe that
most of their work has high added value and cannot be easily automated. In
response, I often point out that most of us could automate at least 20% of our
work-related activity.10 If we do this for five employees, we either free up time that
lets those five people take on a larger cumulative workload, or we free up 1.0 FTE
that is no longer needed to handle the same total workload. Automation will
inevitably lead to lawyers losing their jobs. Public sector employers especially (but
not solely) will be hard-pressed to cut their budgets. The impact on the labor
market will be significant. The survey supports my opinion that this issue is under-
represented in public discourse despite its glaring importance.
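The arithmetic behind this point can be sketched in a few lines. The figures below simply restate the 20%-of-five-employees example from the text; the function name is illustrative, not part of the survey.

```python
# Illustrative sketch of the freed-capacity arithmetic discussed above.
# The function and figures are hypothetical examples, not survey results.

def freed_fte(employees: int, automatable_share: float) -> float:
    """Full-time equivalents freed when each employee automates
    the given share of their work."""
    return employees * automatable_share

# Five employees, each automating 20% of their activity,
# free up one full-time position for the same total workload.
print(freed_fte(5, 0.20))  # 1.0
```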
WHAT NEXT?
The text above sums up the biggest challenges of developing and deploying AI-
based tools and solutions.
7
See the need to rethink how young lawyers acquire skills, in SUSSKIND, Richard. Tomorrow’s Lawyers.
Third Edition. Oxford: Oxford University Press, 2023, pp. 230–232.
8
With even the task of ‘retrieving relevant documents’ being multi-layered, see e.g., VAN OPIJNEN, Marc
and Cristiana SANTOS. On the concept of relevance in legal information retrieval. Artificial Intelligence
and Law, 2017, vol. 25, no. 1, pp. 65–87.
9
See WEBB, Michael. The Impact of Artificial Intelligence on the Labor Market. SSRN, 2020. Available at
https://dx.doi.org/10.2139/ssrn.3482150, p. 40.
10
See similar argument raised by SUSSKIND, Richard. Tomorrow’s Lawyers. Third Edition. Oxford: Oxford
University Press, 2023, p. 97.
Additionally, automation will have a significant impact on the labor market. The
narrative of freeing up a highly educated workforce is automatically assumed to be
beneficial. However, it will send ripples throughout the field of law, affecting the
market of legal service providers,11 providers of pro bono services,12 and the
universities providing specialized legal education.13 As a society, we must become
better at addressing
disruptive technological challenges with high impact on society. More attention
must be paid to the labor market impacts of large-scale AI deployment (both in law
and generally).
Michal Jackowski and Michal Araszkiewicz (along with their fellow researchers) have
done a great service by arranging this survey and building statistical models to
interpret the results, using a logistic regression approach.
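To illustrate what such a model looks like, here is a minimal sketch of fitting a logistic regression by gradient ascent in plain Python. The data points are invented for illustration (they are not the survey’s data), and the single predictor, log firm size, is an assumption about how such an analysis might be set up.

```python
import math

# Minimal logistic-regression sketch: predicting whether a firm uses AI (1/0)
# from one predictor (log10 of employee count). Toy data, not survey data.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Fit intercept b0 and slope b1 by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(b0 + b1 * x)
            b0 += lr * (y - p) / len(xs)
            b1 += lr * (y - p) * x / len(xs)
    return b0, b1

sizes = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # log10(employees), hypothetical firms
uses_ai = [0, 0, 1, 0, 1, 1]             # whether each firm reports using AI

b0, b1 = fit_logistic(sizes, uses_ai)
# A positive slope means the modeled probability of AI use rises with firm size.
print(b1 > 0)  # True
```

In practice a statistical package would also report standard errors and odds ratios, which is what makes the regression interpretable rather than merely predictive.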
The survey appears to have been well designed and executed, and provides a
useful benchmark of attitudes and practices early in the generative AI revolution
(the spring of 2023). The report is thoughtful and painstaking. It highlights a broad
range of opinions, views, and behaviors in the business and legal community.
Most who actively follow the legal tech scene have visibility into just a small number
of firms, so the great range of organization types and sizes is most welcome. Nearly
as many firms with 1-10 employees as those with 1,000 or more responded, covering
a mixture of US and non-US firms.
Over half of the respondents reported already using AI, and people are clearly not
hesitant to talk about their ideas and experiments.
11
See SUSSKIND, Richard. Tomorrow’s Lawyers. Third Edition. Oxford: Oxford University Press, 2023,
pp. 111–113, or HONGDAO, Qian, Sughra BIBI, Asif KHAN, Lorenzo ARDITO and Muhammad Bilawal
KHASKHELI. Legal Technologies in Action: The Future of the Legal Market in Light of Disruptive
Innovations. Sustainability, 2019, vol. 11, no. 4, and also REPLOGLE, Tyler J. The Business of Law:
Evolution of the Legal Services Market. Michigan Business & Entrepreneurial Law Review, 2017,
vol. 6, no. 2, pp. 287–304.
12
See SUSSKIND, Richard. Tomorrow’s Lawyers. Third Edition. Oxford: Oxford University Press, 2023,
pp. 138–141, or THOMPSON, Darin. Creating New Pathways to Justice Using Simple Artificial Intelligence
and Online Dispute Resolution. International Journal of Online Dispute Resolution, 2015, vol. 2, no. 1,
pp. 4–53.
13
See CONNELL, William and Megan HAMLIN BLACK. Artificial Intelligence and Legal Education. The
Computer & Internet Lawyer, 2019, vol. 36, no. 5, pp. 14–18, or SAVELKA, Jaromir, Matthias GRABMAIR
and Kevin ASHLEY. A Law School Course in Applied Legal Analytics and AI. Law in Context, 2020, vol. 37,
no. 1, pp. 134–174.
1. The most frequently indicated tasks that AI could enhance were legal research
(79.7% of respondents), document review (72.1%), and contract drafting (55.8%).
3. The most frequently indicated use of AI was streamlining repetitive tasks and
improving efficiency (87%). The second most common choice, chosen by more
than half, was human error reduction (56.5%). The rarest choice was enabling
better collaboration between legal professionals and clients (15%).
5. The most frequently indicated factors to consider were data security and pri-
vacy (79.6%), cost (75.5%), and ease of use (61.7%).
6. The most frequently indicated challenges were legal issues (legal liability and
regulation - 80.2%), privacy and security (66.8%), and AI accuracy and reliability
(63.9%), with different types of firms perceiving these top challenges similarly.
9. Only 13 entities said that they employ people in positions specifically dedi-
cated to AI, such as legal engineers or prompt engineers.
– Companies in the U.S. were typically more likely to use AI technology, as were
the smallest companies (1-10 employees) and those with 100+ employees.
– Vendor reputation was a much more important consideration for firms from
the USA.
– Ease of use is less important for bigger companies (100 employees and more)
than smaller ones.
– In the case of the largest firms (1,000 employees and more), as many as 92% of
the answers were “technology must be preapproved”, while in the other size
groups this answer was between ca. 23% and ca. 48% of the cases. The larger
the company, the higher the proportion of “more restrictive” responses.
– There are significant differences between respondents from the USA and
those from other countries regarding which AI tool is the most necessary. For
instance, automated document generation was deemed necessary by 54.8% of US
respondents vs 36.2% elsewhere.
Lawyers and paralegals are wordsmiths who spend most of their time reading,
writing, listening, and speaking: in other words, consuming and producing texts. So
it is not surprising that about 58% of the survey responses relate to tools for working
with documents. The use most frequently identified was automated document
generation (41.9%).
We lawyers tend to live in Word, Outlook, and similar tools. Texts are pervasive.
One early observer suggested that: “It should not be too surprising if law ends up
leading the parade in the work-as-text movement, and if legal technologists
increasingly find themselves understanding the law’s constitutive processes in
documentary terms. Text (inevitably open-textured) and technique, after all, define
the context within which the architects of legal technology must operate.”14
Quibbles
Despite the recent explosion of self-declared generative AI experts, the topics
explored in this survey are likely new to many of the respondents. That may help
explain a certain lack of imagination I detect in the answers.
14
LAURITSEN, Marc. Technology Report: Building Legal Practice Systems with Today’s Commercial
Authoring Tools. Artificial Intelligence and Law, 1992, vol. 1, pp. 87–102.
Asking questions like these – “What percentage of your firm’s workload comprises
mundane and repetitive tasks, as opposed to tasks that require in-depth legal
knowledge and strategic thinking?” and “Can you identify the most common
mundane and repetitive tasks that your legal professionals handle on a regular basis
that AI could enhance?” – reinforces a questionable narrative that even the latest
AI tools are mostly suited for ‘mundane’ work. What counts as ‘routine’ is, of course,
itself a tricky question.15
It seems that effectiveness was not among the possible answers to the question:
“What factors do you consider most important when evaluating AI solutions for
your firm?”
What’s to come
We’re still very much in an exploratory phase, the opening act of a rapidly evolving
drama about law practice. Our infatuations will likely remain superficial for a while.
Flourishing in this era will require new intimacy between humans and machines. I’ve
suggested thinking about that in terms of phenomenology.16 And I recently read
a brilliant discussion about the implications of AI for scientific reasoning that can
largely be applied to the work of lawyers.17
We will need to figure out how best to model legal workers’ inner cognitive worlds
and choreograph their interactions with the outer worlds of computational models.
That seems to require attention to the nuances of human experience when working
with machine intelligence.
Should be fun!
15
See e.g. Computational Intelligence and the Paradoxes of Legal Routine (1990)
16
Toward a phenomenology of machine-assisted legal work (2018)
17
HOPE, Tom, Doug DOWNEY, Daniel S. WELD, Oren ETZIONI and Eric HORVITZ. A Computational
Inflection for Scientific Discovery. Communications of the ACM, 2023, vol. 66, no. 8, pp. 62–73.
Available at https://dl.acm.org/doi/pdf/10.1145/3576896.
The survey examined the adoption of AI in law firms of diverse sizes, types, and lo-
cations, with nearly 30% located in the USA. Although the survey does not explicitly
define AI, it’s inferred that the term refers to tools rooted in computational statis-
tics, as this is usually the definition understood by legal professionals. These tools
range from processing documents and information retrieval to chatbots.
The survey initially investigated which legal tasks could be automated by AI, em-
phasizing repetitive and mundane tasks, which account for roughly 35% of the aver-
age firm’s workload.
The primary tasks in this category were legal research, document review, and con-
tract drafting. Although contract drafting was only ranked third in terms of rep-
etition and monotony, tools designed for document generation piqued the most
interest among law firms. Over 40% of firms in general showed interest, with the
figure rising to almost 55% in the USA. Conversely, there was a significantly higher
interest in compliance and risk assessment tools outside the USA (>12%) compared
to within the USA (<2%). These discrepancies suggest that factors beyond the na-
ture of the tasks influence the adoption of AI in law firms. The survey also confirmed
the industry’s emphasis on automating mundane and repetitive tasks, with a stag-
gering 87% of firms prioritizing this AI benefit.
Additionally, the survey reveals that repetitive tasks are more prevalent in large
firms.
The results indicated a pronounced interest in the use of AI tools, with over 60%
of the companies showing enthusiasm, particularly among small firms (with fewer
than 10 members) and large firms (with more than 100 members). Of the companies
already leveraging AI tools, a minority have been doing so for more than a year
(less than 25%). This trend aligns with the recent surge in popularity surrounding AI,
notably with platforms like ChatGPT and generative AI.
The primary factors influencing AI adoption are security and privacy, cost, and us-
ability.
The least interest was shown in tools enabling communication between professionals
and clients, with only around 15% of respondents expressing interest. This
observation is further corroborated by the
mere 7% of companies that have already implemented such solutions. While such
tasks are scarcely viewed as mundane or repetitive, it’s notable that certain client
communication tasks like client onboarding and meeting scheduling are techno-
logically well-established. Moreover, they align closely with the primary
concerns and challenges of AI as outlined in this survey. In addition, it’s worth not-
ing that these tools garnered more attention in the USA, suggesting cultural differ-
ences might play a role in this disparity.
From this perspective, legal professionals seem to prefer tools that might occa-
sionally produce minor errors—like those used in certain types of legal research or
document analysis—over tools that must be absolutely error-free and, as a result,
necessitate subsequent review (e.g., tools that generate legal documents). This ob-
servation appears contradictory, especially when considering that document gen-
eration tools are ranked highly in importance by legal professionals. The nuanced
priorities of these professionals present an intriguing paradox that might warrant
further exploration in the realm of AI tool adoption in the legal sector.
One should emphasize that it’s inherent to statistics-based AI tools to have a margin
of error. To eliminate these errors, one must resort to deterministic computational
methods or introduce a human review process to validate the AI’s outputs.
In the case of legal research, the broad scope of AI’s search capabilities can indeed
diminish the likelihood of human oversights, despite the inherent statistical errors
associated with such searches. This is because the vastness of the databases and
the efficient processing power of AI can uncover nuanced or obscure details that
might elude human researchers. In contrast, for document generation, while the
efficiency gains are evident, it remains less clear how AI could mitigate human er-
rors. The precision required for legal documents, combined with the complexities
and subtleties of language, poses challenges that may not be as pronounced in the
realm of legal research.
Moreover, the fact that potential risks posed by AI to legal insurance are not widely
regarded as a significant hurdle (with only around 15% of law firms identifying it as
a concern) further complicates the dialogue. It suggests that while firms are wary
of errors in the content of legal documents, they might not necessarily associate
those errors with increased liability or risks significant enough to impact insurance.
Differentiating between these two categories of tools might better guide legal pro-
fessionals in their AI integration strategies, ensuring that the technology’s applica-
tion aligns with the unique demands and standards of each specific task.
When we consider AI tools where small error margins might be permissible, we’re
primarily looking at applications where the sheer volume of data makes human pro-
cessing impractical or highly time-consuming. Legal research and certain types of
document analysis fit neatly into this category. For these tasks, the computational
might of AI, which can swiftly sift through and analyze vast datasets, becomes its
most salient strength. In these scenarios, while a human touch might yield more
nuanced results, the scale of the task makes the efficiency and broad scope of AI
invaluable. The trade-off between quality and quantity is strategic and deliberate.
Conversely, when it comes to tasks where precision trumps all else—such as legal
document generation—the stakes are considerably higher. In these contexts, even
a small margin of error can be unacceptable.
By applying this lens to some of the tools’ characteristics and challenges men-
tioned in the survey, we can expand subjectively on the results. For the first class of
tools, which might be used for broad data processing like general legal research,
explainability isn’t as critical. They are meant for tasks where slight imperfections
are tolerable, given the sheer volume of data processed. Here, the primary goal is
efficiency and coverage. Moreover, aspects like legal compliance, ethics, and pin-
point accuracy are not as crucial compared to the second class.
Tools in the second class are geared towards high-stakes tasks where precision is
paramount, such as legal document generation. Here, explainability is of utmost
importance. Users need to understand the AI’s decision-making process to ensure
it is consistent with legal and ethical standards. Other challenges, especially legal
compliance and accuracy, also come to the forefront given the tasks’ sensitivity.
When looking at the longevity and maturity of AI tools in the legal field, it’s plau-
sible to infer that tools from the first class, with their fewer challenges and well-
established technologies, have been in use for a more extended period among the
survey participants. However, the landscape is evolving with the advent of genera-
tive AI. Despite being a recent entrant, there’s a growing number of companies
offering legal document generation tools harnessing this technology. But given
that these tools fall into the second class, they inherently come with concerns about
legal compliance, ethics, and accuracy, as they’re still relatively unproven.
Similarly, case law analytic tools, with their noted demand by over 63.5% of law
firms, would also fit into the second class due to their precision-centric nature. The
rise of platforms like ChatGPT being used by legal professionals underscores the
need for rigorous regulation and accuracy. The emphasis law firms place on legal
challenges — constituting over 80% of their concerns and accounting for about
Looking at the broader picture, AI adoption might not be solely driven by the na-
ture of tasks or geographical considerations. The evolving job market and the skills
AI can potentially replicate play a crucial role in determining AI’s applicability. For
instance, the declining demand for legal secretary roles over the past two decades
provides a telling example. While other factors certainly play a role in such trends,
the increasing capabilities of AI tools cannot be overlooked.
University of São Paulo Law School and Director of the Lawgorithm Institute of Re-
search on Artificial Intelligence and Law
The emergence of Artificial Intelligence tools applied to legal practice has attracted
great attention in the last decade, with the creation of a market of legaltechs
and lawtechs in many countries. The services provided by these companies, the
majority of them startups with innovative solutions but scarce structure, have raised
questions in the chambers of lawyers and bar associations, concerned with the
threat to the legal profession and compliance with the requirement of expertise to
Soon it became clear, though, that such a threat was not effectively present. The
first reason is that many of the available systems and services do not deliver on the
high expectations created around the technology and its capabilities, partly due to
the present hype and some exaggeration in marketing by lawtechs, particularly
startups not yet equipped with enough structure to provide long-term services. The
second reason is the understanding that the present tools are mainly complementary
to lawyers’ activities and not substitutes for human practice.
Against this background there is still much enthusiasm for the technology within
the legal field, and a market of consultancy on legal solutions and technology in
legal operations has appeared in many countries, since law firms and the public
sector became aware of the risks of these investments and the limitations of the
solutions. However, although some companies have published reports on the
development of these markets, there is to date no methodologically well-grounded
empirical study, free of commercial interests, revealing the effective present state
of AI adoption by law firms. The present research is a first step toward filling this
gap, already bringing very interesting findings.
The first of them is that although there is a widespread belief among law firms that
the technology will be a great asset in the near future for the delivery of legal
services, the effective adoption of AI is still in its infancy. For instance, only
approximately half of the firms surveyed effectively deploy AI systems, and a
significant part only for internal affairs, not as part of a legal service delivered to
clients. We must also take into account that the boundaries between AI systems and
simple automation are not always clear to laypeople (including the surveyed lawyers
and partners). Just a few law firms have departments dedicated to implementing
such innovative tools, the number of employees involved is still low on average, and
almost none have employees fully dedicated to this activity. Besides, most
applications are focused on repetitive tasks, such as document automation, legal
research, information retrieval, case management and contract analysis (which is
also usually limited to gathering relevant information from contracts), and most have
been adopted recently (1-2 years).
The scenario shows that there is room for investment, but law firms are still cautious
in adopting these solutions, even though most of them trust in the relevance of
these tools for the near future. The reasons for that are also shown in the results of
the inquiry. There are concerns about data security and the confidentiality of the
The opportunities are promising and the inquiry also provides key information to
lawtechs and companies willing to invest in this field.
2. Some projections
The research was launched and information gathered mostly based on AI tools
using traditional models of machine learning. But we have quite recently experienced
the emergence of large language models and generative AI, which brought about
widespread use of generative AI systems such as ChatGPT, offered by OpenAI. The
emergence of these systems brings two main points:
First, the former belief that AI systems are only going to perform repetitive tasks
is now challenged, with a report by Goldman Sachs predicting the loss of 40% of
positions in legal offices. Whether this is an accurate prediction will depend on the
adaptation of law firms to the new tools that will be available, and it is likely that
law firms will accelerate investment and the evaluation of tools based on foundation
models, fine-tuned and specialized for legal applications.
Hence, it seems that future inquiries about the effective use of these tools in law
firms will show a more widespread adoption of systems implemented to perform
more complex tasks, both internally and in the delivery of services to clients. Such
predictions demand, though, further inquiry and empirical research to check their
effective implementation. It is also interesting to verify which tools are deployed in
courts.
eLaw – Center for Law and Digital Technologies & Department of Business Studies,
Leiden University
The rapid growth of interest in artificial intelligence (AI) has led to its integration
into various industries, including the legal sector. Historically, the legal field has
been conservative in adopting technology due to the high cost of error and liability
issues, but recent advancements in AI have gained significant attention. The sur-
vey indicates that approximately 50% of law firms utilize AI in their practice, with a
significant portion implementing AI within the firm within the past year. The recent
adoption suggests that it might be driven by the growing popularity of Large
Language Models (LLMs), such as ChatGPT and GPT-4, whether because law firms
are more open to using such systems, or because of their wider availability.
The survey was performed on firms of different sizes and from different locations,
as well as with different proportions of lawyers at the firm. The analysis of the re-
sponses suggests that by far the most common mundane tasks performed in any
of the firms are legal research and document review (Figure 1). Correspondingly,
the most common answer to how AI could help lawyers overcome challenges is
efficiency at repetitive tasks (Figure 6). However, when asked what they would like
to implement in their firms the most interest has been shown to be in document
generation, followed by document verification tools (Figure 9). This interest in docu-
ment generation further confirms that the interest in adopting new legal technol-
ogy might stem from the growing performance of LLMs, after all GPT-4 has been
claimed to be able to pass the bar.18
18
https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/
This inclination, while driven by the growing capabilities of such models, introduces
significant apprehensions. Generative AI models operate based on statistical rela-
tionships between words within training data and lack any actual understanding
of the world or, in fact, law. Given this architecture, they are also known to
‘hallucinate’, i.e., make information up. Even though the systems can be fine-tuned
on more specific (e.g., legal) data and combined with other models,21 and there have
been attempts to reduce such hallucinations,22 creating a completely accurate
system is doubtful, and the integration of AI systems that don’t have a foundation
in factual or legal principles could profoundly impact legal practice.
In response to the survey question “What factors do you consider most important
when evaluating AI solutions for your firm?”, the most common answers were data
security and privacy, cost, and ease of use. Strikingly, the choice of performance
was not among the answers, which is extremely concerning, but perhaps it wasn’t
included in the survey options. While it’s not surprising that keeping data private
is a big concern, it is at least as crucial to consider how well a system will perform
with the specific data that the law firm deals with. What type of data was it tested
on? Will it work as well on the firm’s data? Can the system handle changes in laws,
keep up with legal precedents, and understand shifts in how laws are interpreted?
What types of errors does it make? Are they the same as humans, since the system
is trained on human data? Or are these different mistakes that might be harder to
detect and mitigate? What are the potential costs of the system making mistakes,
like generating incorrect text,23 misunderstanding information, or missing impor-
tant details? Who is liable for those mistakes? These are all important questions to
consider when selecting an AI solution.
The survey findings hint at a potential oversight: only half of the respondents say
that the technology used by lawyers is required to be pre-approved before use,
19
https://casetext.com/
20
https://www.uncoverlegal.com/
21
Xavier Daull and others, ‘Complex QA and Language Models Hybrid Architectures, Survey’ (arXiv, 7 April
2023) http://arxiv.org/abs/2302.09051 accessed 15 Aug 2023.
22
Lewis and others (n 33); Baolin Peng and others, ‘Check Your Facts and Try Again: Improving Large Lan-
guage Models with External Knowledge and Automated Feedback’ (arXiv, 8 March 2023) http://arxiv.org/
abs/2302.12813 accessed 15 Aug 2023.
23
See, for instance, this report on the case about a lawyer using ChatGPT to prepare a court filing https://
www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
This is also why I am not sure if the question about what systems law firms would like
to implement, assuming that such a system is highly accurate and safe, is an entirely
fair one. It is interesting to see what the law firms would like to have in an ideal world
(or if they had a genie), to see what the most desirable technology to develop would
be. However, while the allure of the most common answer, document generation, is
significant, the current accuracy and safety of such systems leave room for concern.
While there are attempts to make them more accurate, achieving this in the near
future is unlikely, at least not using LLMs; there are other ways of generating
documents, from simply providing a template to filling it in with information retrieved
from existing documents, that may not require such unreliable systems.
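A deterministic, template-based approach of the kind mentioned here can be sketched with the standard library alone; the clause wording and field names below are invented for illustration.

```python
from string import Template

# Sketch of deterministic document generation: a fixed template filled with
# extracted fields, rather than free-form generation by an LLM.
# The clause wording and field names are hypothetical.
clause_template = Template(
    "This agreement is made between $party_a and $party_b, "
    "effective as of $effective_date."
)

# In practice these values would come from an information-retrieval step.
fields = {
    "party_a": "Acme LLP",
    "party_b": "Example Corp",
    "effective_date": "1 January 2024",
}

clause = clause_template.substitute(fields)
print(clause)
```

Because `substitute` either succeeds or raises a `KeyError` for a missing field, every output is fully auditable, in contrast to an LLM’s free text.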
Information about the types of AI tools already implemented in law firms is insight-
ful, yet it also raises some questions. Defining what falls under Artificial Intelligence
has always been a complicated task. From Figure 10, representing ‘The most neces-
sary AI tools’, it is clear that a wide range of systems are included, and they are like-
ly to use different types of underlying technologies, although most are probably
based on machine learning. While only half of the surveyed firms claimed to use AI,
this is quite unlikely to be true, considering that most law firms today use AI-powered
search systems. It is not clear whether this oversight is due to a misunderstanding of
how the search technology works, an integration so deep that it is no longer seen as
‘AI’, or simply the survey’s answer choices. It is worth noting that although
search may be a widely used AI system, it also has risks and can have an impact on
legal practice. For instance, decisions made in their design, the specific annota-
tions, ranking algorithms and even the presence of errors during data sourcing, col-
lection, or processing, can have a significant influence when used on a large scale.24
Future progress in scientific fields related to AI will likely yield improvements in vari-
ous AI technologies, including generative models. However, it is important to rec-
ognize that perfection will not be achieved, and over-reliance on imperfect systems
could have negative consequences, potentially influencing the nature of legal advice and fostering a passive and ineffective methodology in certain facets of legal practice.
24
L. Diver, P. McBride, M. Medvedeva, A. Banerjee, E. D’hondt, T. Duarte, D. Dushi, G. Gori, E. van den Hoven, P. Meessen, M. Hildebrandt, ‘Typology of Legal Technologies’ (COHUBICOL, 2022), available at https://publications.cohubicol.com/typology
AI’s incorporation into legal practice is inevitable, but its adoption should be pru-
dent. Legal professionals must recognize the scope and limitations of AI-enabled
technologies. The survey suggests that law firms have very few employees respon-
sible for AI innovation and adoption, and many law firms do not have any. This
implies that they are likely limited in their ability to assess the risks of adopting
specific technologies on which they might nevertheless rely in their practice. Proactive engagement is vital, both in discussions that include not only lawyers and software providers but also specialists in legal technology who understand the risks and ethical issues that implementing such technologies carries, and in the formulation of policy on AI’s integration. Neglecting this involvement could lead to a situation where flawed
AI, despite its inherent limitations, becomes ingrained within the very fabric of legal
institutions and legal practice.26
Michal Jackowski and Michal Araszkiewicz have provided a detailed picture of the extent to which law firms around the world have adopted AI tools. I have taken their statistics at face value, and I think their work shows many interesting things. Before deepening these ‘interesting things’, e.g., how the interplay between lawyers and philosophers may evolve, the gist of the figures provided by Jackowski and Araszkiewicz can be summed up in three points. Although clear differences exist among law firms due to their size – five size categories in the report – and region, e.g., US and EU, there are significant convergences and trends regarding openness, practices, and concerns.
First, there is indeed “a high openness among the surveyed entities to adopting AI technologies in their firm” and, moreover, “a little more than a half of the companies have already implemented AI-based tools and solutions (103 firms, 50.7%)”.
Second, the most popular AI-based tools or solutions are document automation
(39%) and legal research (34%). This outcome is confirmed by what law firms think
“about how AI can best help lawyers overcome these challenges.” As shown by Fig. 6,
“the most frequently indicated use of AI” certainly is not revolutionary. Rather, the
25
https://www.cohubicol.com/blog/casetext-cocounsel-openai-typology/
26
https://www.lawscot.org.uk/members/journal/issues/vol-68-issue-08/chatgpt-and-the-future-of-law/
Third, we have a picture of what law firms deem the most relevant challenges fac-
ing lawyers. The top three challenges are (i) legal issues, i.e., liability and regulation
(80.2%); (ii) privacy and security (66.8%); and, (iii) AI accuracy and reliability (63.9%).
Interestingly, some of the areas most debated in meetings and scholarly journals are relegated to the end of the list, e.g., lack of explainability (20.3%), after ethical issues (36%) and adaptation (34.4%).
It is apparent that the problems under scrutiny in the papers of scholars – on, say,
trustworthy AI and transparency of the algorithms – are different from the problems
of practitioners and lawyers (for a long while, I’ve been both). As a theoretician, over
the past two decades, I have focused on the impact of AI systems on the law and
problems of ‘adaptation’, in particular how technology reshapes, or transforms, pillars of the legal system; as a lawyer, my focus is often on more practical issues, such as preventing the troubles of liability and regulation brought about by the use of AI tools. These differences between theory and praxis, scholars and lawyers, are here to stay; however, they should not obscure the dynamics of the process under scrutiny, i.e., how law firms are increasingly adopting AI tools in their workflow.
We already knew about such trends from the disruption of ChatGPT in 2023 and the adoption of some models by big firms in the US and UK. Considering the hazards of generative AI, e.g., Large Language Models producing misleading information that can leave users less well informed, it is noteworthy that new developments of the technology also catch on in the legal business. Thanks to the statistical results of Jackowski and Araszkiewicz, we now have clearer ideas on the adoption of AI by law firms. All in all, it would be interesting – and even necessary – to repeat the experiment and update the results within, say, three years. My conjecture is that some results will be different. The differences would concern not only, of course, the number of companies that have implemented AI-based tools and solutions, but also some of the relevant challenges facing lawyers, such as adaptation and ethical issues. Here, the results may differ because of the growing impact of AI on human societies, including the functioning of courts and the administrative corpus of legal systems. Jackowski and Araszkiewicz offer the reference work against which to check this new frontier of legal business.
(2) How do you assess the overall state of implementation of AI in the surveyed law
firms?
(3) Do you have any commentary with regard to the differences associated with the
location of the law firm, its size, or area of expertise?
It is interesting that there are differences between US companies and non-US companies. If this were related to the difference between case-law-based countries and rule-based countries, then the same trend observed in US companies should also be found in UK companies, so I am curious about the trend among UK firms.
(4) What is your opinion concerning how law firms address the risks and benefits of
AI in the workplace?
I believe that legal compliance will be a major concern when we use AI technology, since the EU is now considering an “AI law” that would regulate AI tools to protect human rights.
(5) What is your opinion about their expressed preferences and needs concerning
the technology?
Harvey will empower more than 3,500 of A&O’s lawyers across 43 offices operating
in multiple languages with the ability to generate and access legal content with
unmatched efficiency, quality and intelligence.
https://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey
Another success story is the report that GPT-4 passed the multiple-choice portion of the exam and both components of the written portion, exceeding not only all prior LLMs’ scores, but also the average score of real-life bar exam takers, scoring in the 90th percentile.
https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/
In Japan, multiple-choice questions form part of the bar exam, and our research group creates data for COLIEE (Competition on Legal Information Extraction and Entailment): retrieving the relevant articles of the Japanese Civil Code given a legal question (Task 3) and checking the entailment of a legal question given the relevant articles (Task 4). We experimentally applied ChatGPT to Task 4 and found that its correct rate on the entailment task was 60 percent. Solving the entailment task requires logical reasoning, and this result suggests that ChatGPT is not very good at logical reasoning.
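A minimal sketch of how such a Task 4-style entailment evaluation can be scored. The dataset below is invented, and the model is a trivial stand-in for the actual ChatGPT call used in the experiment:

```python
# Toy evaluation harness for a COLIEE Task 4-style entailment test.
# The "model" below is a placeholder; in the experiment described above
# it would be a call to ChatGPT asking whether the cited Civil Code
# articles entail the legal question (answering yes or no).

def stub_entailment_model(question: str, articles: str) -> str:
    """Placeholder for an LLM call; returns 'Y' (entailed) or 'N'."""
    # Trivial heuristic for illustration only: claim entailment when
    # the question shares any token with the articles.
    return "Y" if set(question.lower().split()) & set(articles.lower().split()) else "N"

def accuracy(dataset, model) -> float:
    """Fraction of (question, articles, gold_label) triples the model labels correctly."""
    correct = sum(
        1 for question, articles, gold in dataset
        if model(question, articles) == gold
    )
    return correct / len(dataset)

# Hypothetical mini-dataset in the Task 4 shape (not real COLIEE data).
dataset = [
    ("A minor may rescind a contract.", "A minor may rescind a juridical act.", "Y"),
    ("Ownership transfers on delivery only.", "Ownership transfers by agreement.", "N"),
]

if __name__ == "__main__":
    print(f"correct rate: {accuracy(dataset, stub_entailment_model):.0%}")
```

The point of the harness is only the scoring loop: replacing the stub with a real model call and a real labelled dataset yields the kind of correct-rate figure reported above.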
Moreover, if appropriate training data is not provided, GPT may give a false answer, and checking that answer can take more time than producing one by hand. Problems of this kind have already occurred.
1. In May 2023, a lawyer who used ChatGPT submitted to the court fictitious precedents created by ChatGPT.
https://edition.cnn.com/2023/05/27/business/chat-gpt-avianca-mata-lawyers/index.html
2. In June 2023, two American authors filed a lawsuit against OpenAI in San Francisco federal court, alleging that OpenAI misused their work to “train” ChatGPT in violation of copyright law.
https://jp.reuters.com/article/ai-copyright-lawsuit-idCAKBN2YF17R
3. Brian Hood, Mayor of Hepburn Shire Council, was involved in a scandal related to foreign bribery within his former company, but was never found guilty. ChatGPT, however, says “In 2012, he pled guilty to one count of bribery and was sentenced to four years in prison”, which is clearly defamatory.
“Australian mayor readies world’s first defamation lawsuit over ChatGPT content”,
https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/
The first overall impression is that the study confirms past experiences with the use of technology in the legal services industry. Indeed, the answers could just as well have come from the 1980s and 1990s, when legal technology first made headlines and created concerns similar to those we have now. Then and now, the “low-hanging fruit” – repetitive, mundane, low-value tasks – seems to be what lawyers are most willing to outsource to technology.
However, when looking at some of the answers in more detail, a more nuanced pic-
ture emerges – which also may point to future research that helps to unpack some
of the answers more.
Even though the survey questions gave the respondents some paradigmatic examples of functions that legal technology may perform, such as document management and search, there was no introduction to specific tools or technologies, or to the means by which they operate.
This leads to a methodological question with the study: can we be sure that the
participants had a shared understanding of what AI is, and is there a danger that
there are systematic divergences in the way in which some of the questions were
interpreted depending on whether the respondent was e.g. a dedicated technolo-
gist within a larger law firm, or a “traditional” lawyer working in a small firm?
4.1.2, for instance, has a significant number of firms stating that they do not currently use AI tools – and some do not intend to use them in the future either.
If one uses a broad definition of AI, this is most certainly wrong. It is almost certain that the “no-AI” law firms have a spam filter, or use Google, at least sometimes, for research, and of course the IR tools that Westlaw (or its local equivalent) provides. Document drafting was noted as a routine task – and everything from document wizards to voice-to-text to the highly capable ML-based Grammarly, or a simpler spell-checker, will almost certainly already be in use.
That we may be dealing here with different definitions of AI for the purposes of the study is further reinforced by 4.3.1.2. The overwhelming majority of respondents claim to have introduced AI less than five years ago – yet most of the examples I just gave are decades old. That so many firms claim to have introduced AI only recently could be due to increased awareness of AI in law, or merely to the way in which technology vendors now label their products. A 2019 study, e.g., showed that a significant percentage of startup companies that offer or use “AI”
– Some but not all respondents may have made a distinction between “generic” AI and “legal” AI, and thought only of tools that are specific to the legal roles they perform. Such a distinction was more natural with GOFAI – e.g. Bench-Capon’s notion that “true” legal AI must have at least some isomorphic formal representation of legal rules. The distinction is more difficult to draw for machine-learning-based tools, though here one could distinguish systems that are trained specifically on legal texts and data. For the now emerging “foundational” AIs such as ChatGPT, the distinction becomes even more fragile. While some systems may require fine-tuning of pre-trained models to law-specific tasks, other applications may not. 4.3.1.4 may cover some aspects of this, but the distinction between “client-facing” and “internal” does not map perfectly onto this topic. Client-facing AI could be a chatbot that conducts a first interview – without any legal knowledge – while a purely internal use may ensure compliance with professional requirements, and therefore contain significant “legal” knowledge.
– Some but not all respondents may not know enough about AI to realise how ubiquitous it already is. Or, in other words, once AI works in the background so that nobody notices it any longer, it stops being “proper AI” – AI is always in the future, so to speak.
– The different environments in which they work may give them different oper-
ational definitions of what they/their firm means with AI. From my experience,
some dedicated AI/knowledge engineering staff in larger firms often think of
“proper AI” as the next thing they want to develop – they see themselves as
separate from “IT support” and have to make the case to their employer that
what they do is novel and goes beyond the “off the shelf solution”. Others
take the opposite approach an emphasise the “AI nature” of what they do,
even for rather mundane issues such as client billing etc. So even experts
may use vastly different definitions of AI, not out of ignorance but because of
the way their roles are defined.
These diverging definitions and conceptualisations can, in my experience, be an impediment to the efficient uptake of AI. One of the biggest drivers currently is a
A) See beyond any vendor hype and resist acquiring unsuitable tools just because
of the label
B) Ease them into adopting more advanced technologies when they realise how
much they are already using
C) Help them understand what their real needs are – the separation between “real” legal tech and “back office” tech is in turn based on specific, and contestable, theories of what legal work and legal knowledge are, and the less glamorous back-office AI can be more relevant to a firm than a “we predict court decisions” tool.
A surprising answer to 4.1.1.2, for me, was that “IP management” was one of the tasks where AI has little to offer. Generally, IP management is one of the success stories: trademark search used to take hours, and AI-based businesses like TrademarkNow now do this search in seconds. Patent search, too, benefits greatly from advances in AI, e.g. visual search of graphics. Identifying infringing online content can use everything from highly sophisticated bespoke tools such as SnapDragon’s IP management to simply using Google image search.
1. An ambiguity in the term “IP management”: did the respondents interpret this as “management of the client’s IP” – a task often taken on by law firms – or as managing the firm’s own IP? If they understood it as the latter, then this could simply indicate that law firms traditionally do not see themselves as generators of IP-protected material. But this too would be interesting. In an AI world, the data that law firms hold will be increasingly valuable, and could also become the “new oil” for them. This could involve using their own repository of licenses, contracts or submissions made on behalf of their clients as input for training AIs, or otherwise as a source of new AI-enabled income streams. More wide-
2. AI is used extensively in IP, just not by law firms. Instead, new types of AI-enabled “legal services providers” have come into existence, like TrademarkNow or SnapDragon, and now compete with lawyers for business.
2.1. This could also explain why “compliance tech” like DRM is excluded – it is not a concern for law firms, even though such tools are part of IP management and perform quasi-legal tasks. In this case lawyers may well be aware of these technologies, but when asked a question in the interview context they immediately read it as “supporting me in my work” rather than as a more general prediction about AI use.
3. Many established forms of “search” are not considered proper AI – they are by now too routine to even be noticed; “AI-based research” is reserved for new tools that come explicitly with the AI label – as noted above, more an impression generated by vendors.
Again under the caveat that people may have had very different ideas of what an AI system is, this was an interesting answer. For those firms without a pilot especially, how would they know they got value for money? And for those with a pilot, it would be interesting to see what criteria for evaluation were used – from the merely psychological “the users say they like it” to hard data that quantifies success. It is also unclear whether these pilots evaluated success, or the lack of it, only internally, or whether any attempts were made to systematically involve client feedback.
One obstacle I observe is that often ambitious IT projects, including but not limited
to AI, are “always deemed to have been a success” unless they cause catastrophic
failures that are too obvious to hide. Conversely, under the EU AI Act constant
monitoring of performance, and a duty to report failures, is a key aspect of creating
trustworthy AI.
I found 4.3.1.6 and 4.3.1.7 difficult to analyse – how can 44 firms have an AI innovation department, but only 13 have at least one dedicated AI person? Here too some contextual interpretation of the questions by the respondents seems to have taken place: some may have counted committees or working groups on which anyone can serve, while other respondents may have understood the question more narrowly, as referring to dedicated groups of specialists.
I also found the correlation between size and AI adoption telling – size matters. For me this indicates that the large firms can now be left to their own devices; the greatest impact in helping the digital transformation of the justice system will come from targeting smaller firms with a programme focussed on their needs and resources. This includes training – but also a delivery of training that responds to their situation (a large law firm can send staff to training more easily than a one-person outfit, for obvious reasons). Smaller firms are also particularly vulnerable – they face “technological lock-in” more often than large firms, and are more affected by staff turnover when the one person who knew the system leaves. A comprehensive support package would therefore allow them to learn at their own pace, emphasise intelligibility and user-friendliness over capabilities, and look at the entire environment to create sustainable solutions for small firms.
There were some interesting and unexpected replies regarding the regulatory en-
vironment for legal technology in the answers to 5.1.1.1ff
“Ethics” is not a predominant concern for firms, but “regulation” is – for a regulated
profession where ethical standards are often “enforceable”, that is surprising. Then
again, maybe some respondents did not include professional ethics under “ethics”
but thought of it as an aspect of “regulation and law” instead.
A similar problem arises with “explainability”. Only a minority have this as a concern, yet it plays an important role in three of the other fields that do concern them – data protection, ethics, and regulation (where the AI Act will make explainability a requirement for all high-risk, and that means also all legal, applications) – so there may be a misconception on the side of firms regarding the role of explainability, or the duties that come with the GDPR and the AIA.
As past predictions have proved premature, it may be worth thinking about the
reasons why things may be different this time. One change from the 1990s is that we
all got more used to carrying out complex transactions online, assisted by technol-
ogy. Smart-ish online banking tools did not automate routine transactions; rather, they enabled non-specialists, the customers, to carry them out with confidence themselves. This led to a general change in the perception of the types of activity for which expensive, professional (and regulated) advice is necessary. This change of percep-
tion may now finally also change the way clients perceive the role of law firms. As
noted above under “pilot”, it was unclear from the answers to what extent law firms
take client perception, expectation and satisfaction into account when evaluating
their AI needs.
We should also reflect on what we mean by “routine task”. Are routine tasks today the same as in 1989, or do waves of technological adoption simply shift the meaning, and perception, of what we consider “routine”? To some extent, the examples given in the survey, like case management and document drafting, can be seen as either routine or complex, depending on the context. In this view, the AI of today automates routine operations, at which point we stop thinking of them as AI – AI is then the promise to address the new routines that the older tools created, for which new tools are needed, and so on (the routine of checking one’s spam folder only exists because AI filters spam).
The answers to 5.1.4.1. and 5.1.4.2 follow the pattern of previous answers, with an
emphasis on text generation – as noted before, this could mean generic AI tools
(V2T, grammar checkers, ChatGPT) just as much as law-specific applications. The
same holds true for management and process tools, another high-demand technol-
ogy. Surprising for me was the comparatively low percentage of e-discovery tools, one of the success stories of legal tech globally. The reason may be the applicable procedural law, and also the costing structure – e-discovery is particularly popular in the US, where a combination of procedural rules and high discovery costs pushes it to the forefront. I don’t think we have a US/non-US breakdown for this question, which would have been interesting to see. If the conjecture is true, it would give an indication of how the legal environment, including the allocation of costs, drives or disincentivises digital transformation.
Amongst other results, the report gives a clear picture of where automation is
mostly occurring at the moment in legal firms (e.g. legal research, document re-
view) and where automation is deemed to be needed (e.g. document generation).
These observations confirm that a core part of the legal activity is a matter of text—
or at least, that it is perceived as such by people participating in it—and that this
core is considered to be to some extent reproducible, potentially becoming less
vulnerable to human errors. This general attitude may explain why language mod-
els are expected to provide much disruption with respect to current practices, and
why organizations are willing to get prepared. Yet I would like to use this space to highlight two important points with respect to this strategy.
Yes, legal experts are right in stating that the meaning of law depends on context,
but operations running in organizations are generally specified more concretely,
up to the extreme case of software-driven operations, which are, at least on paper,
generally deterministic. The open-textured nature of law clashes with the controlled
nature of operations, and indeed, public and private organizations face significant
challenges in coordinating their legal and IT departments. This issue becomes evi-
dent when the scale of organizational activities increases, and developers cannot
keep up an organic view of what has been done and why. If the law changes, how
can we be sure that the processes we run are complying with the law? Today, AI
adoption, and the concurrent AI regulation, are only exacerbating this tension, because the use of AI increases the computational component present in organizations.
There is a clear opportunity for product and process innovation here, targeting
what should be the core expertise that legal firms should strive for, to develop and
to maintain. This innovation can plausibly take advantage of, and build upon, research developed in the AI & Law field over decades, even more so if facilitated by language-model components. Indeed, prompting is in principle a much more accessible interface for humans than formalizing, programming, or annotating, and, although verification will still be needed downstream, this embedding may bring into the loop social participants and situations generally left out for reasons of economic opportunity. Yet such an advance requires adequate attention from the private sector, both for early experimentation and, later, for participation in standard-setting initiatives. Is your firm participating in any research initiatives? If the answer is no, it is worth reconsidering: in terms of strategy, being an early follower is good, but being a trend-setter is better.
Are the results conforming with your expectations, or do you find some of
them surprising?
The results from the report largely align with my expectations. The emphasis on
document generation, summarization, and case law analytics in the report reso-
nates with the day-to-day tasks I observed at the firm. Automating these tasks can
lead to significant efficiency gains, allowing legal professionals to focus on more
complex aspects of their work. However, a few points did stand out. I had the opportunity to collaborate with various departments and teams, and the diversity in their AI needs and challenges was evident. Hence, the report’s indication of a homogeneous response across different types of firms is surprising. I would have expected more variation based on the firm’s size, specialization, and client base. The rela-
tively low percentage of respondents seeing AI as a tool for better collaboration
between legal professionals and clients is intriguing. Given the advancements in
AI-driven communication tools, there’s a significant opportunity here that seems
underexplored.
The survey results suggest that the legal industry is in an exciting phase of techno-
logical transformation. The legal sector appears to be transitioning from the early
stages of AI adoption to a more mature phase. While there’s evident enthusiasm,
the full potential of AI in legal practice is still being explored. I witnessed firsthand
the integration of AI tools for tasks like document analysis and predictive analytics,
but there was also a recognition that we were only scratching the surface. Larger
firms, with more resources at their disposal, seem to be leading the charge in AI
adoption. However, the survey’s indication that both the smallest firms and those
with 100+ employees have implemented AI suggests that the perceived value of AI
transcends firm size.
The data suggests that firms with a focus on litigation have distinct AI needs. This
aligns with my observations, where litigation teams often dealt with vast amounts
of data within discovery as well as with case documents and historical rulings. AI’s
potential to streamline and provide insights in this area is immense. The predictions
about AI becoming an essential part of legal workflows and the potential for auto-
mating paralegal tasks reflect a forward-thinking industry. It’s heartening to see the
legal sector’s openness to these changes. On the other hand, the legal profession
is bound by strict ethical and regulatory standards. The cautious approach to AI, as
indicated by the survey, resonates with the profession’s commitment to upholding
these standards.
Do you have any commentary with regard to the differences associated with
the location of the law firm, its size, or area of expertise?
The legal industry’s approach to AI, as with many other aspects, is influenced by
various factors, including the firm’s location, size, and specialization. The distinc-
tion between US-based firms and their global counterparts in AI preferences aligns
with my observations. The US legal market, with its unique regulatory landscape,
client expectations, and competitive dynamics, often shapes how technology is
perceived and integrated. For instance, regulatory compliance tools might be more
sought after in regions with more stringent regulations.
Larger firms often have the resources and infrastructure to experiment with a
broader range of AI applications. Their size allows them to invest in dedicated in-
novation departments, pilot programs, and collaborations with tech companies.
Conversely, smaller firms might prioritize AI tools that offer immediate efficiency
gains or address specific challenges. The survey’s indication that both the small-
The universal recognition of challenges like legal liability, privacy, and AI accuracy,
irrespective of location, size, or specialization, is telling. It underscores the legal
industry’s commitment to upholding its core values.
What is your opinion concerning how law firms address the risks and benefits
of AI in the workplace?
The legal sector deals with highly sensitive information, and the emphasis on pri-
vacy and security concerns in the survey resonates with this fact. The anticipation
of ethical and legal debates surrounding AI, as indicated by the survey, is both
expected and necessary. These debates will shape the future of AI in law, ensuring
that the technology is used responsibly and ethically.
The legal landscape is dynamic, and so is the field of AI. The emphasis on con-
tinuous training and the development of internal guidelines suggests a proactive
approach to adapting to this changing landscape. The legal profession is inher-
ently risk-averse, and the cautious approach to AI reflects this. The potential legal
liabilities associated with AI decisions, especially if they impact client outcomes,
are significant. Law firms seem to be acutely aware of this and are taking steps to
mitigate these risks.
I observed the time-intensive nature of tasks like document review and legal re-
search. AI’s potential to automate these tasks can lead to significant time savings,
allowing attorneys to focus on more nuanced legal work. The interest in compliance
& risk management systems aligns with the complexities of the modern regulatory
landscape. Given the high stakes in legal decisions, AI tools that can assist in iden-
tifying potential compliance issues or regulatory changes are invaluable. The fact
that a significant portion of firms has an AI innovation department is a testament
to the industry’s forward-thinking approach. Such departments can spearhead the
exploration of cutting-edge AI solutions, ensuring that the firm remains at the fore-
front of legal tech advancements.
Automation of repetitive tasks will free up lawyers to focus on complex legal rea-
soning and client interactions. AI tools, with their ability to analyze vast amounts
Law firms should invest in or collaborate with tech companies to develop AI tools
tailored to their specific needs. Existing platforms can be further refined based
on feedback from legal professionals. Continuous collaboration between the legal
industry and academia will be crucial. Research in natural language processing,
machine learning, and ethics will directly impact AI’s role in the legal sector. I fore-
see significant advancements in these areas that can be translated into practical AI
tools for law firms.
A dialogue between law firms, tech companies, regulators, and clients is essential. Platforms like legal tech conferences, workshops, and forums can facilitate
these discussions. Given the ethical and regulatory challenges associated with AI
in law, this dialogue will be crucial in shaping the future landscape. Industry-wide
standards for AI in legal practice should be developed. This includes ethical guide-
lines, data protection standards, and best practices for AI tool implementation. As
AI becomes more integrated into legal practice, these standards will become even
more crucial.
Please do feel free to also address different topics following your area of
expertise.
I witnessed the sheer volume of documents that legal professionals deal with daily.
Advanced NLP techniques can be employed to extract relevant information, iden-
tify patterns, and even predict legal outcomes based on historical data. The poten-
tial for automating due diligence, contract review, and other document-intensive
tasks is immense. Using historical case data, AI models can be trained to predict
litigation outcomes. This doesn’t mean replacing human judgment but augment-
ing it. Lawyers can leverage these insights to better advise clients and strategize.
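As a deliberately toy illustration of this idea, and only that: the model below is a trivial word-overlap scorer over invented case summaries, standing in for the engineered features and properly trained models a real outcome-prediction system would use.

```python
from collections import Counter

# Toy outcome "predictor" trained on (case_summary, outcome) pairs.
# Invented data; a real system would use rich case features, a trained
# model, and bias and explainability checks before any use in practice.

def train(cases):
    """Count word frequencies per outcome label."""
    counts = {}
    for summary, outcome in cases:
        counts.setdefault(outcome, Counter()).update(summary.lower().split())
    return counts

def predict(counts, summary):
    """Pick the outcome whose historical vocabulary best overlaps the summary."""
    words = summary.lower().split()
    def score(outcome):
        total = sum(counts[outcome].values())
        return sum(counts[outcome][w] / total for w in words)
    return max(counts, key=score)

# Hypothetical historical case data, not drawn from any real corpus.
history = [
    ("breach of contract damages awarded", "claimant wins"),
    ("contract damages for late delivery", "claimant wins"),
    ("claim dismissed for lack of standing", "claim dismissed"),
    ("claim dismissed as time barred", "claim dismissed"),
]

model = train(history)
print(predict(model, "damages claimed for breach of delivery terms"))
```

Even at this scale the point about data quality is visible: the prediction is entirely a function of which words appeared in which past outcomes, so biased or unrepresentative histories directly produce biased predictions.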
AI’s decisions are only as good as the data it’s trained on. Biased data can lead to
biased outcomes. Given the high stakes in the legal field, it’s crucial to ensure that
AI tools are transparent, explainable, and free from biases. AI-driven chatbots and virtual assistants can revolutionize client-lawyer interactions. These tools can provide immediate responses to routine client queries.
I make these comments from the perspective of a former partner in what was then
a mid-sized law firm in the United States during the 1980s, combined with 30 years
of teaching and research as a member of a university law faculty, including 25 years
of research in AI and law. In general, I expect partners in law firms in the U.S. to
be generally conservative about adopting new technology—especially so concern-
ing AI, about which most partners would have very few informed intuitions. This
natural tendency is reinforced by the typical pricing structure for legal services in
terms of billing by hour of time spent on a client’s matter. There is little economic
incentive to have the billable hours of senior associates or junior partners reduced
by technology, unless this reduction is driven by client expectations, or by an in-
ability to hire qualified attorneys in sufficient numbers to complete the available
work. Moreover, if there is more work to do in a firm than the people can handle,
any decision to divert effort toward learning to use a new technology must meet
a rigorous cost-benefit analysis. Increased efficiency alone might not provide suf-
ficient incentive. Of course, as in any service domain, there are likely to be some
“early adopters,” especially if there is some competitive advantage to adopting
new technology (increasing the benefits to weigh against costs). This dynamic situ-
ation is reminiscent of the phase years ago when law firms gradually created web-
sites for their law offices, although website creation did not generally have a claim
to increase lawyer efficiency.
In my view, it is not surprising that almost half of survey respondents report that
they have not “already implemented AI-based tools and solutions” (100 firms,
49.3%). Of those, 76 firms (37.4%) “declared that AI-based tools are not implement-
ed in their firm, nor are they looking for such options.” Of the companies that have
implemented some AI-based tools, “[t]he vast majority (59.2%) of companies report
using AI-based tools for less than a year.” It is consistent with my expectation of
conservative interest on the part of firms that less than 25% of surveyed firms had
implemented any AI-based tools for over a year prior to the survey.
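The internal consistency of these figures is easy to verify. A minimal sketch, using the counts reported in the survey (203 firms total, 103 adopters); the percentage arithmetic is recomputed here rather than taken from the report:

```python
# Cross-check of the adoption figures quoted above.
total_firms = 203           # full survey sample
no_ai = 100                 # have not implemented AI-based tools
no_ai_not_looking = 76      # not implemented and not looking for options
adopters = 103              # have implemented AI-based tools

def pct(part, whole):
    """Percentage rounded to one decimal place, as in the report."""
    return round(100 * part / whole, 1)

print(pct(no_ai, total_firms))              # 49.3
print(pct(no_ai_not_looking, total_firms))  # 37.4
print(pct(adopters, total_firms))           # 50.7

# Among adopters, 59.2% (about 61 firms) have used the tools for under a
# year, so fewer than 25% of all 203 surveyed firms had used them longer.
over_a_year = adopters - round(0.592 * adopters)
print(pct(over_a_year, total_firms))        # 20.7
```

The last line confirms the observation above: roughly 42 of 203 firms (about 20.7%, indeed under 25%) had used AI-based tools for over a year before the survey.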
Also consistent with my experience was the dominant response among all firms
about “the most common mundane and repetitive tasks that your legal profession-
als handle on a regular basis that AI could enhance” (Figure 1). The two highest
answers from 197 firms were legal research (79.7%) and document review (72.1%).
This response is consistent with the actual adoption practice thus far. From those
firms that had already adopted AI-based tools (100 respondents on this question,
Figure 4), the two highest answers were that they had implemented AI tools in
document automation (a reported 39%) and in legal research (34%). It is probably
significant that so many of the respondents to this survey regarded legal research
as “mundane and repetitive.” This suggests that law offices in which most of the le-
gal research requires imagination and innovation by experienced attorneys would
be among the last to seriously consider adopting AI-based tools.
Also in line with my expectations were the answers about a firm’s sources of in-
formation about AI. The two highest reported sources (by a wide margin, see
Figure 3) were “networking” (68.8% of 199 respondents) and “AI events” (59.3%).
This suggests to me that the exploration and implementation of AI tools by legal-services competitors is the main driver for obtaining information, and indeed for developing the motivation to adopt.
In the months during which OpenAI’s ChatGPT has captured the public’s imagination (since November 2022) and a revolution in generative AI has begun, law firms
(and their clients) have undoubtedly felt increased market pressure to at least “talk
the AI talk.” But this very rapid change has occurred (and will continue to occur) in
an area of technology that even those lawyers who are responsible for implement-
ing technology understand the least. Indeed, generative AI is an area currently too
little understood by nearly everyone. The current public awareness and fascination
is creating a powerful counterforce that pushes against the conservative nature of
the legal profession. Balancing the two forces in practice will produce a great deal
of anxiety within law firms.
I find it interesting that the survey creators, in formulating their question about
future AI adoption (about what firms believe will be desirable), told respondents
to assume that the AI tools they envision would be “highly accurate and safe.” It
remains to be seen which AI tools could ever meet such a high standard. But that
wording does accurately reflect, I think, how managing lawyers in law firms think
about AI. To overcome the economic and other factors that make them naturally
conservative, they will need strong evidence that any tools they adopt are “highly
accurate and safe.” In general, they will not receive such evidence, and the adop-
tion of AI applications in legal business will be a fraught endeavor for the foresee-
able future.
As I suggested in Part I, for major firms a tested strategy for adopting technology
has been “going as slow as your competition.” Moreover, we will see movement in
adoption primarily in firms where legal research and document review are considered mundane and repetitive.
This situation will increase the opportunity for the legal-tech industry to gain market share in any number of areas of application. And I predict that the reputation
of the legal-tech provider will become a dominant factor in securing contracts with
legal firms, because law firms will not expect to hire or develop in-house the talent
sufficient to keep up with developments in generative and other AI.
I. Preamble
The Liquid Legal Institute (LLI) recognizes the critical role of artificial intelligence
(AI) in the legal industry and supports academic research focused on this field. The
LLI is convinced that AI has the potential to revolutionize the legal industry, but
it must be implemented in a responsible and ethical manner. Academic research
focused on AI in legal business is essential to achieving this goal.
The report is structured according to the survey questionnaire analyzed, with sec-
tions on the collected sample, AI adoption and openness to change, and chal-
lenges facing lawyers and future predictions.
– Categories of top challenges: The most common challenges for lawyers in the
age of AI are legal issues (regulation and liability), privacy and security, and AI
accuracy and reliability. Over half of the respondents selected these options.
The least common challenge was related to changes in the labor market (dis-
placement or job role changes).
– Predictions for the impact of AI on the legal industry: The future impact of AI in
the legal industry is expected to include automation of paralegal tasks, wide-
spread adoption of AI tools in legal workflows, and usage of AI in various legal
practice areas. Over half of the respondents selected these options. The least
common choice was related to AI-related risks becoming a significant problem
for law firms’ insurance.
The statistical analysis presented in this report is highly relevant to the Liquid Legal
Institute’s focus on AI in legal business. While the exploratory nature of the analysis
means that caution must be taken in interpreting the results, the findings can still
provide valuable insights into the presence of AI in the legal industry. The themes
raised and analyzed in the report can help inform the LLI’s working groups, such
as New Methods and Digitization, as they explore innovative approaches to AI im-
plementation and the integration of AI into existing processes. The importance of
factual reflection and critical interpretation of the analysis is also in line with the
LLI’s commitment to promoting transparency and ethical implementation of AI in the legal industry.
4. A community-driven effort can help ensure that the development and use of
AI in legal business aligns with the needs and values of the wider community,
promoting public trust and accountability.
Where is digital law going? Commentary on “Report for the project ‘AI in Legal
Business’”
Email: xiongminghui@zju.edu.cn
Xiao Chi
Email: xiao.chi.21@ucl.ac.uk
I would like to thank Professors Michal Jackowski and Michal Araszkiewicz for their
trust and for allowing me to be the first to see their project report. The results of
this report provide vital references for the future development of AI in legal busi-
ness. As stated in their report, the purpose of their analysis was not to verify a spe-
cific hypothesis but to generate knowledge from the data about the presence of AI
in the legal business. My comments will also try to achieve the same purpose. More
specifically, rather than focusing on the methodology, I will provide brief comments on its findings.
This project is of great significance since artificial intelligence has been integrated
into the lives and work of lawyers, regardless of their willingness to embrace AI. I
suspect that this may be one of the motivations of Jackowski & Araszkiewicz’s team
to investigate the application of artificial intelligence in legal business. The integra-
tion of artificial intelligence into the lives and work of legal professionals belongs
to the category of digital law, which is a commonly used term in Chinese legal
communities. I prefer to summarize it into two development directions: first, the
digitalization of the world of the rule of law; second, the rule of law in the digital
world. The former direction focuses on the application of AI in Law, such as formal
models of legal reasoning, computational models of evidential reasoning, and executable models of legislation. This direction draws contributions mainly from academic communities specializing in AI and Law, such as IAAIL and JURIX. The
latter direction focuses on new legal issues arising from the widespread use of AI, such as digital human rights, digital rights, and privacy protection. This direction draws contributions mainly from jurists who study legal issues related to the application of AI technologies. These two directions attract attention from jurists
and experts in various fields. The former direction belongs to the application of artificial intelligence, and is one of the focuses of legal informatics, which is the current name of the original Artificial Intelligence and Law entry on Wikipedia. Note that legal informatics is considered a branch of artificial intelligence or information science. Therefore, a substantial amount of research and effort has been
invested in this direction, not only by legal experts but also by experts from other
fields such as computer science, making it a relatively mature direction. For the lat-
ter direction, there has been a growing interest in it recently. In China, most jurists
are keen to study it now.
This project focused on both the aforementioned directions. Note that Jackowski & Araszkiewicz’s team conducted the research through statistical methods. In this way,
they can obtain a comprehensive overview of the adoption of artificial intelligence
in law firms, the challenges faced by lawyers and firms, and the future predictions
regarding the impact of AI on the legal industry. Now I will comment on this report
from these two directions.
On the one hand, digitization issues in the world of the rule of law are mainly em-
bodied in the question of Section 4: “Can you identify the most common mundane
and repetitive tasks that your legal professionals handle on a regular basis that AI
could enhance?” This project categorized the digitalization issues of the rule of
law world into the following ten main categories: (a) document review, (b) contract
On the other hand, the issues of the rule of law in the digital world are mainly re-
flected by the question “What do you perceive as the top challenges facing lawyers
in the age of AI?” in Section 5. This project categorized the issues into the following
seven categories: (a) ethical issues, (b) privacy & security, (c) AI accuracy & reliability, (d) adaptation to AI, (e) legal issue, (f) labor market and (g) lack of explainability.
According to this report, “legal issue” is the most relevant issue of the legalization
of the digital world. Besides this issue, others are also concerned more or less by
jurists, especially “privacy & security” and “AI accuracy & reliability”, since both of
them are chosen by over sixty percent of respondents. An example of the issue “privacy & security” from a real-world application is the General Data Protection Regulation (GDPR), which was adopted by the European Union in 2016 and has applied since 2018.
In general, this project demonstrates the charm of the two development directions
of digital law and is of great significance for the future development of AI in the
legal business. It not only offers valuable knowledge for legal scholars and scholars
from other fields to refer to, but also provides important guidance for the future de-
velopment of legal science and technology. The knowledge is comprehensive and
reliable since it is derived from data collected from over two hundred firms across
countries, making the project international and large-scale. Furthermore, the find-
ings of the survey report are consistent with our intuitions, which indicates that the
methods used in the project are scientific and reasonable. However, the respond-
ents in this project are mainly from European and American law firms. Therefore, I
suggest more law firms from regions other than Western countries be involved in
future research. This could lead to different discoveries.
27. https://orcid.org/0000-0002-8786-2644
28. See McCarty, L.T., 1977. Reflections on Taxman: An Experiment in Artificial Intelligence and Legal Reasoning. Harvard Law Review, 837-893.
29. See Vossos, G., Dillon, T., Zeleznikow, J., & Taylor, G. (1991). The use of object oriented principles to develop intelligent legal reasoning systems. Australian Computer Journal, 23(1), 2-10; and Zeleznikow, J. and Hunter, D., Building Intelligent Legal Information Systems: Representation and Reasoning in Law. No. 13. Kluwer Law and Taxation Publishers, 1994.
The challenges in using Artificial Intelligence are far fewer. This is why I am delight-
ed to be able to comment upon this large-scale project on Artificial Intelligence in
Legal Business. Those using Artificial Intelligence in legal professions will greatly
benefit from this work.
Phase one of the study consisted of a survey to which 203 law firms responded. This is a very large empirical sample, upon which researchers will be able to
explore many connections.
The report distinguished between US-based companies (62 companies, 30.5%) and
non-US-based companies (mainly from Europe, 141 companies, 69.5%). I would like to know more about where these companies are based.
30. See for example Zeleznikow, J. “The benefits and dangers of using machine learning to support making legal predictions.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery (2023): e1505.
31. Helen Yan, Louis de Koker and John Zeleznikow
32. At least for property cases. It is impossible to put a value on child welfare cases.
33. See for example Stranieri, A., Zeleznikow, J., Gawler, M., & Lewis, B. (1999). A hybrid rule–neural approach for the automation of legal reasoning in the discretionary domain of family law in Australia. Artificial Intelligence and Law, 7(2-3), 153-183.
It is heartening to know that a little more than half of the companies have already
implemented AI-based tools and solutions (103 firms, 50.7%), while 24 firms (11.8%) are currently exploring options in this area. But where are the adopters
based? This matters, because for example China in many instances mandates the
use of AI. Hence my desire to know more about the data. Just as the users of data
analytics in law need to know where the data comes from, we need to know where
the companies surveyed are based.
It is interesting that companies in the U.S. were significantly more likely to use AI technology, as were the smallest companies (1-10 employees) and those with 100+ employees. Medium-sized (11-99 employees) companies implemented the technology less frequently. Is this a recent phenomenon? I believe so,
as I have not been aware of such a great interest in AI by US legal firms 34. My experi-
ence in Australia is that firms are interested in using AI, but rarely actually do so.
To me, the most interesting point is that Academic publications were the least pop-
ular for the smallest firms, but the most popular among medium-sized firms (11-99);
academic conferences were the most popular for the biggest firms. Is this even truer in the US, where there is growing distrust of research and expertise? The objection to vaccination during the COVID-19 pandemic is merely one significant example of this trend.
I was surprised that respondents are least likely to report using AI tools for intel-
lectual property management, dispute outcome and risk predictions, compliance
and risk management, and dispute resolution. Certainly, early AI and Law research
in the 1970s and 1980s investigated these domains. And clients would see great benefit from such tools.
34. Later in the report I see that the vast majority (59.2%) of companies report using AI-based tools for less than a year. Both 1-2 years and 2-5 years were selected by 18 companies (17.5%) each. Only 6 (5.8%) companies indicated that such solutions have been present in their company for more than five years.
The fact that almost all selections of e-discovery were made by firms from the USA
is probably true because of the nature of the US legal system, where there is so
much data to peruse and disputants often try to confuse their opponents by pro-
viding a plethora of data e.g. in class action cases.
Most companies (47.6%) say their AI tools are used both for internal organizational
tasks and for tasks related to client work. 37.9% of firms report using AI tools only for
internal organizational tasks of the company (where there is no reputational benefit, efficiency and effectiveness are uppermost, and bias and fairness matter less). The
fewest companies (14.6%) use AI tools only for client work.
It is surprising that whilst 57.8% of companies say they have used a pilot program or
trial period before fully implementing the AI solutions, a significant 42.2% say they
have not engaged in pilot programs. What this indicates is that 42.2% of firms are
prepared to use AI without previously testing it. These 42.2% of firms have been
convinced they need to use AI very quickly! This is very different to the scene even
5-10 years ago.
And only 37.9% of firms report using AI tools only for internal organizational tasks of
the company. What this means is that currently 62.1% of firms are prepared to use
AI tools for client work – a very significant recent trend. Unsurprisingly, the larger the company, the higher the proportion of “more restrictive” responses, demanding that “technology must be preapproved”. Smaller firms give their AI users more freedom. One can argue that larger companies believe in more control, or are just more
cautious.
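The usage-scope percentages quoted above can be cross-checked in the same way. A small sketch, taking the three reported shares as given; the sum and the complement calculation are mine:

```python
# The three usage-scope options are mutually exclusive, so their shares
# should sum to 100% (up to rounding), and "any client work" should be
# the complement of "internal tasks only".
both = 47.6            # internal organizational tasks and client work
internal_only = 37.9   # internal organizational tasks only
client_only = 14.6     # client work only

# Shares sum to 100% within rounding error.
assert abs((both + internal_only + client_only) - 100.0) <= 0.2

any_client_work = round(100.0 - internal_only, 1)
print(any_client_work)  # 62.1
```

The result matches the 62.1% figure cited for firms prepared to use AI tools for client work.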
Unsurprisingly, the most frequently indicated challenges were legal issues (legal
liability and regulation), privacy and security and AI accuracy and reliability. I am
surprised that users were not concerned about the lack of explainability. I assume
researchers and academics are more concerned with the issue of explainability
than are practitioners.
When asked about how AI can best help lawyers overcome challenges, the major
challenges were streamlining repetitive tasks and improving efficiency – tasks that
non-AI based software have performed for some time. Thus, it was not surprising
that the future impacts of AI chosen by the largest proportion of respondents were: AI-enabled tools becoming an essential part of legal workflows (53.2%) and automation of paralegal tasks.
The most preferred AI systems law firms wanted to implement were: document
generator (84.2%), document summarization tool (69.5%), case law analytics tool
(63.5%) and compliance & risk management system (59.1%). The fewest compa-
nies indicated a negotiation support system (21.7%) and legal argument assistant
(27.1%). Clearly firms wanted AI tools that assist them to improve their performance
(especially document generators), not to make decisions for them.
There are significant differences depending on the size of the company and wheth-
er it is US-based or not, as well as the legal domain in which the law firm practices.
This strongly supports our conclusion that one cannot build generic AI tools
for the legal domain. Necessary tools will vary depending on the country of
origin of the firm (and on many occasions the region of the country), the size
of the firm and the legal domain in which the law firm practices.
– Tomasz Zurek, University of Amsterdam, The Netherlands, and Maria Curie Sklodowska University, Poland
My first observation is that the legal business takes the potential influence of AI on
our lives very seriously. I was really surprised that over half the analysed firms have already implemented some AI-based tool or solution, and another 12 per cent are exploring the topic. Moreover, a significant number of the firms declared themselves open to AI technology. I think this is an important sign that technological development has broken into this very traditional discipline, and lawyers realize that it will significantly influence the shape of legal business in the future.
After a closer look at the research results, we can observe that most of the tasks
supported by already implemented AI tools are tasks related to the creation and
analysis of legal documents (document automation, legal research, contract analy-
sis, and, at least partially, information retrieval), usually written in natural language.
Another interesting and related observation is that the analysed legal firms see the potential area of usage of AI-based tools as supporting them in repetitive and mundane tasks. This presumably means that legal firms do not anticipate tools that could support them in more challenging tasks (for example, producing legal advice, compliance analysis, and so on).
To sum up the above, I can point out that although the results of the research
show that there is a great interest in AI within the legal business, most of the firms
see the potential usage of AI as an extension and support in their everyday tasks,
rather than the groundbreaking change of the legal business as such.
It is also worth noticing that although many reviewed legal firms have already implemented some AI tools in their practice, most of them did so quite recently (almost 60% have used them for less than a year).
First of all, since companies are looking for tools that would help them in mundane
and repetitive (and probably time and resource-consuming) tasks, the implemen-
tation of such tools could significantly speed up the processes of preparing legal
analyses, contracts, documents, court applications, etc., which in turn may increase
the number of cases to be heard in the courts. The development of such tools will
increase the time pressure concerning the preparation of documents, as well as
will influence the competition amongst law firms. Moreover, this could significantly
affect the ability of the courts to deal with such a large number of cases and result
in very long queues. This could be a significant problem, symptoms of which are
already observable. Is there a way to overcome this problem? This is an open issue, and it could undoubtedly be addressed, at least partially, by introducing AI-based tools in the courts, especially mechanisms that can automate some simple and routine cases. However, this is very difficult to implement not only from a
technical but also, and above all, a social point of view. This is undoubtedly a topic
that requires further research from at least two points of view: the explainability of
such a system and the mechanisms of trust that people place in technical devices.
The second conclusion is related to the above. I wonder if and how the develop-
ment of artificial intelligence will affect law as such. The report shows that the ana-
lysed companies assume that the legal business in the future will be more or less
similar to the current one (but perhaps faster and less focused on routine tasks).
I’m curious to see if this is what will happen. Perhaps the rapid development of
artificial intelligence and its impact on society will be strong enough to significantly
change the way law works. For example, the increasing role of autonomous devices will force the creation of systems that autonomously check compliance with the law. This may open the door for systems that can autonomously provide legal reasoning, produce conclusions, analyses, etc. I think that such mechanisms, in the long run, can significantly change the legal business.
In conclusion, the report shows that most law firms are aware of the potential impact of artificial intelligence on the legal business and are trying to adapt to the
new circumstances. However, most of them seem to be at an early stage of imple-
menting AI in their business. Of particular interest is the future of the legal business
as a whole. In my opinion, some (a minority) of firms are trying to lead the new AI-
based business by investing in research and development, while the rest are simply
using the tools available on the market. How artificial intelligence will affect the
legal business is still an open question.
Based on a careful analysis of the data in our review, we have identified the follow-
ing conclusions:
1. Law firms are facing declining performance and pressure to become more ef-
ficient, which generative AI technology can certainly help with.
2. About 38% of law firm tasks are repetitive tasks that can be replaced by AI.
These are primarily legal research, document review and contract generation.
3. 51% of law firms have already implemented AI, the majority of them in the US. Most are the largest firms (100+ lawyers) and the smallest (1-10 lawyers).
4. The largest number of AI tools deployed are tools for automating document processing and legal research. However, no dominant technology has emerged here - companies are still looking for the best solutions and are far from finding them. The market is still immature.
5. The generative AI revolution has completely changed this market. 60% of com-
panies have been using AI for less than a year - an obvious impact of this tech-
nology on the development of innovation in the legal world.
6. Law firms vary in their approach to AI - most are implementing the technology very cautiously, running pilots first and ensuring that the technology is preapproved. However, nearly half of law firms are allowing AI to be used from the bottom up, taking advantage of the fact that lawyers with AI are many times more effective than lawyers without it.
10. Lawyers are convinced that AI will significantly change the way work is done.
53% believe that AI tools will become an essential part of workflows and parale-
gals’ tasks will be automated. 38% believe that lawyers who specialize in AI will
have a better chance of finding work and advancing their careers.
12. 1/3 of law firms intend to partner with technology companies and invest in AI
R&D.
13. If the tools were accurate and secure, the average company would implement
5-6 AI-based tools. The most preferred AI systems were a document generator
(84.2%), a document summarization tool (69.5%), a jurisprudence analysis tool
(63.5%), and a compliance and risk management system (59.1%).
Following the conclusions drawn from our careful analysis, we wish to highlight
the subsequent key findings that are crucial to our understanding of the broader
context:
2. About 40% of legal tasks will be carried out using AI in the next few years.
3. Law firms using AI are able to generate several hundred billion dollars in ad-
ditional value, becoming key players in their market.
4. Knowledge of AI and the ability to do project work using AI will become one of
a lawyer’s most important assets in the job market.
We plan to continue surveying the law firms with regard to the state of AI tools im-
plementation and inviting researchers to provide their insights and criticism, to en-
able an ongoing dialogue and inspiration between legal business and academia, as
well as other expert groups. In our commitment to keeping our hand on the pulse,
we aspire to serve as a comprehensive compendium of knowledge, detailing the evolving presence of AI in legal practice.
The expert commentaries on the results of the first edition of the survey have al-
ready provided us with numerous suggestions about how the following editions
of the survey might be extended or improved. Taking these suggestions, and the
results of our own analyses into consideration, we may indicate the following direc-
tions we intend to develop the “AI in Law Firms Survey” project in the near future:
1. Employing more geographical diversity (in particular, inclusion of the law firms
operating in Southern America or Asia);
3. Putting more emphasis on the different classes of risks connected with the AI
implementation (concerning legal or reputational responsibility, organizational
problems, questions concerning professional development);
5. Emphasising risks that may follow from AI biases or hallucinations and how they
can be handled in a law firm;
6. Investigating the impact on the legal labor market, business models (automati-
zation) and the law as such;
7. Intending to attract further expert opinions, also from different communities
(for example from cognitive science, professional legal associations, policy-
making authorities or legal tech business).
1. Above The Law, Generative AI in the Law: Where Could This All Be Headed?
Wolters Kluwer, 2023.
2. Araszkiewicz M., Bench-Capon T., Francesconi E., Lauritsen M., Rotolo A., Thirty years of Artificial Intelligence and Law: overviews. Artificial Intelligence and Law, 30 (4), 593-610, 2022.
3. Arredondo, P., Driscoll, S., Schreiber, M., GPT-4 Passes the Bar Exam: What
That Means for Artificial Intelligence Tools in the Legal Profession. Stanford
Law School, 2023.
5. Beauchene, V., de Bellefonds, N., Duranton, S. and Mills, S., AI at Work: What
People Are Saying, Boston Consulting Group, 2023.
6. Brooks, C., Gherhes, C. and Vorley, T., Artificial intelligence in the legal sector: pressures and challenges of transformation. Cambridge Journal of Regions, Economy and Society, 2020, pp. 65–87.
7. Connell, W., Hamlin Black, M., Artificial Intelligence and Legal Education. The
Computer & Internet Lawyer, 2019. p. 40.
8. Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L.,
Zemmel, R., The economic potential of generative AI. The next productivity
frontier, McKinsey & Company, 2023.
9. Chui, M., Yee, L., Hall, B., Singla, A., Sukharevsky, A., The state of AI in 2023:
Generative AI’s breakout year. Quantum Black AI by McKinsey, 2023.
10. Diver, L., McBride, P., Medvedeva, M., Banerjee, A., D’hondt, E., Duarte, T., Du-
shi, D., Gori, G., Van den Hoven, E., Meessen, P., Hildebrandt, M., ‘Typology of
Legal Technologies’, Cross-disciplinary Research in Computational Law (CRCL):
Computational ‘Law’ on Edge, Cohubicol, 2022.
11. Daull, X., Bellot, P., Bruno, E., Martin, V., Murisasco, E., Complex QA and Lan-
guage Models Hybrid Architectures, Survey. arXiv, 2023.
12. Governatori G., Bench-Capon T., Verheij B., Araszkiewicz M., Francesconi E.,
Grabmair M., Thirty years of Artificial Intelligence and Law: the first decade.
Artificial Intelligence and Law, 30 (4), 481-519, 2022.
14. Hongdao, Q., Bibi, S., Khan, A., Ardito, L., Khaskheli, M. B., Legal Technologies
in Action: The Future of the Legal Market in Light of Disruptive Innovations,
Sustainability, 2019.
15. Jones, J. W. and others, Report on the State of the Legal Market, Mixed results
and growing uncertainty. Thomson Reuters Institute, 2023.
16. Lauritsen, M., Computational Intelligence and the Paradoxes of Legal Routine,
Medium, 1990.
17. Lauritsen, M., Technology report: Building legal practice systems with today’s
commercial authoring tools. Artificial Intelligence and Law, 1992, 1, pp.87-102.
21. McBride, P., Casetext’s CoCounsel through the lens of the Typology. Cohubi-
col, 2023.
22. McBride, P., Diver, L., ChatGPT and the future of law. Law society of Scotland,
2023.
23. Nguyen, H., Fungwacharakorn, W., Nishino, F., Satoh, K., “A Multi-Step Ap-
proach in Translating Natural Language into Logical Formula, JURIX, 2022. pp.
103-112
24. Opijnen, V., Santos M. and C., On the concept of relevance in legal information
retrieval. Artificial Intelligence and Law, 2017.
25. Peng, B., Galley, M., He, P., Cheng H., Xie, Y.,Hu, Y.,Huang, Q.,Liden, L.,Yu, Z.,
Chen, W., Gao, J., ‘Check Your Facts and Try Again: Improving Large Language
Models with External Knowledge and Automated Feedback’, arXiv, 2023.
26. Replogle, T. J., The Business of Law: Evolution of the Legal Services Market.
Michigan Business & Entrepreneurial Law Review, 2017. vol. 6, no. 2, pp. 287–
304.
27. Sartor G., Araszkiewicz M., Atkinson K., Bex F., van Engers T., Francesconi E.,
Prakken H., Sileno G., Schilder F., Wyner A., Bench-Capon T. Thirty years of Ar-
28. Savelka, J., Grabmair, M., Ashley, K., A Law School Course in Applied Legal
Analytics and AI. Law in Context, 2020 vol. 37, no. 1, pp. 134–174.
29. Soukupová, J.. AI-based Legal Technology: A Critical Assessment of the Cur-
rent Use of Artificial Intelligence in Legal Practice. Masaryk University Journal
of Law and Technology, 2021.
30. Stranieri, A., Zeleznikow, J., Gawler, M., Lewis, B. A hybrid rule–neural approach
for the automation of legal reasoning in the discretionary domain of family law
in Australia. Artificial intelligence and law, 1999.
31. Susskind, R., Tomorrow’s Lawyers. Third Edition. Oxford University Press, 2023.
p. 97, 111–113, 138–141, 230–232 ,
32. Thompson, D., Creating New pathways to Justice Using Simple Artificial Intel-
ligence and Online Dispute Resolution. International Journal of Online Dispute
Resolution, 2015, vol. 2, no. 1, pp. 4–53.
33. Veith, C., Bandlow, M., Harnisch, M., Wenzler, H., Hartung, M., Hartung, D., How
Legal Technology Will Change the Business of Law, Boston Consulting Group,
2016
34. Villata S., Araszkiewicz M., Ashley K., Bench-Capon T., Branting L. K., Conrad
J., Wyner A. Thirty Years of Artificial Intelligence and Law : the third decade,
Artificial Intelligence and Law. 2022, 30(4), 561-591,
35. Vossos, G., Dillon, T., Zeleznikow, J., & Taylor, G., The use of object oriented
principles to develop intelligent legal reasoning systems. Australian Computer
Journal, 1991, 23(1), 2-10
37. Webb, M., The Impact of Artificial Intelligence on the Labor Market. SSRN,
2020.
39. Zeleznikow, J., Hunter, D. Building Intelligent Legal Information Systems: Repre-
sentation and Reasoning in law. Kluwer Law and Taxation Publishers, 1994.