
CHAPTER 1

BASIC CONCEPTS OF AI
FOR LEGAL SCHOLARS

Rembrandt Devillé*, Nico Sergeyssels** and Catherine Middag***

1. INTRODUCTION

1. Artificial Intelligence has become a hot topic due to major advances in the field. However, many people participate in the debate without having the
necessary understanding of the subject. In this chapter, we will explain some
basic concepts of AI that may be useful for legal scholars and practitioners. It
will provide readers with the necessary background to fully understand the
impact of AI on law.
First, we provide a clear definition of AI and discuss the Turing Test, an early and controversial attempt to measure machine intelligence (part 2). We then focus on how AI works. We consider two main AI approaches, namely
knowledge-based and data-based learning. The latter is gaining importance every
day, mainly due to the massive production of data by the Internet of Things (IoT).
Machine learning (ML) can be considered the core of the data-based approach.
One very popular ML method is the artificial neural network (ANN), which is
described as well. We briefly discuss how it works and focus on its evolution into
deep learning (DL). This evolution results from the increased data production
and computing power. While DL has been a quantum leap for AI, it also has some
drawbacks. These will be covered as well (part 3). AI has several sub-disciplines,
many of which rely on ML. We briefly discuss search algorithms, computer
vision, natural language processing (NLP), speech processing and agents (part
4). Having touched upon the foundations of AI, we subsequently focus on the
wide range of areas and fields in which AI is already used. We discuss the current

* Researcher Knowledge Center AI, Erasmus University College Brussels.


** Researcher Knowledge Center AI, Erasmus University College Brussels.
*** Lecturer Artificial Intelligence and head of the Knowledge Center AI, Erasmus University
College Brussels.


state of the art and expected evolutions in transportation, robotics, healthcare, education, public safety and security, art and entertainment, and law. We also
look at the more distant future of AI (part 5). We conclude this chapter with
some considerations regarding the ethical and safety aspects of AI (part 6).

2. DEFINING AND MEASURING AI

2.1. DEFINITION: WHAT IS AI?

2. Nowadays, AI is ubiquitous in the news. This results in a huge number of articles and (academic) papers on this technology. These texts are usually either
overly optimistic or pessimistic1 regarding the possibilities as well as the dangers
and challenges of AI. Most of the time, those claims are a consequence of the fact
that people are not properly informed about AI. This lack of knowledge already
starts with the name as there is no clear, unambiguous and commonly used
definition of the term artificial intelligence.

3. According to Wikipedia, ‘AI is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals’.2 This leaves
us with two questions: what is intelligence and how can you demonstrate
intelligence? The Merriam-Webster Dictionary defines intelligence as ‘the ability
to learn or understand or to deal with new or trying situations’.3 According to
David Wechsler, the designer of well-known intelligence tests, intelligence is ‘the
global capacity of a person to act purposefully, to think rationally, and to deal
effectively with his environment’.4 We can thus define AI as the ability/capacity
of a machine to act purposefully, think rationally and deal effectively with its
environment, like humans are ideally supposed to do. However, since human
thinking and behaviour do not always imply rationality, Russell and others make
a clear distinction: machines can either think like humans or think rationally,
and can either act like humans or act rationally. All these approaches are considered
part of AI.5 Note that intelligence, and thus AI as well, is evolving over time as
we are building on knowledge gathered by previous generations.

1 L. Steels, ‘Artificiële Intelligentie: Naar een vierde industriële revolutie’, Koninklijke Vlaamse
Academie van België voor Wetenschappen en Kunsten, KVAB standpunten, 2017, vol. 53,
44 p.
2 Wikipedia, ‘Artificial Intelligence’, http://en.wikipedia.org/wiki/Artificial_Intelligence.
3 Merriam-Webster Dictionary, ‘Intelligence’, https://www.merriam-webster.com/dictionary/
intelligence.
4 D. Wechsler, The measurement and appraisal of adult intelligence, 4th edn (Baltimore:
Williams & Wilkins Co, 1958), 297 p.
5 S. Russell and P. Norvig, Artificial intelligence: a modern approach, 2nd edn (New Jersey:
Prentice Hall, 2002), 1080 p.


2.2. THE TURING TEST AND THE LOEBNER PRIZE

4. Taking into account the aforementioned definition of the concept of AI, the question remains which behaviour, thinking and actions can be qualified
as ‘human-like’. In 1950, Alan Turing designed a test to determine whether
a machine could imitate human intelligence. This is now known as the Turing
test.6 Consider the following setup: person C interviews persons A and B, who are of opposite genders. All three persons are located in different rooms and communicate by means of a screen and keyboard. Person A lies about his/her gender, person B tells the truth. Person C has to guess the gender of persons A and B. In the imitation game, a machine takes over A’s role. According to Turing, the computer is able to think as a human if it is able to deceive as often as person A in the original game.

5. As can be expected, this test resulted in a lot of criticism from various scientific perspectives. One major critique relates to the question of what is actually tested with this set-up. Turing’s experiment seems to equate verbal
communication and the ability to lie with intelligence, which is a rather
controversial statement.7

6. Another criticism is that the proof of imitation of intelligence does not necessarily imply the proof of intelligence. A well-known thought-experiment
used as an argument for this point is the so-called Chinese room. It was designed
by the philosopher John Searle.8 Imagine a person without any knowledge of
the Chinese language locked in a room with boxes of Chinese symbols and an
instruction book in English explaining how to manipulate the symbols. People
outside the room then send Chinese symbols as input. The person inside the
room can easily convert this input to a correct Chinese output by using the
instruction book. For outsiders, it seems as if the person in the room perfectly
understands Chinese even though this is not true. He merely mimics understanding by using
the instruction book. Searle concludes that ‘The point of the argument is this: if
the man in the room does not understand Chinese on the basis of implementing
the appropriate program for understanding Chinese then neither does any other
digital computer solely on that basis because no computer has anything the man
does not have’.9

6 A.M. Turing, ‘Computing machinery and intelligence’, Mind, 1950, vol. 59, no. 236, p. 433.
7 A. Ayesh, ‘Turing Test Revisited: A Framework for an Alternative’, arXiv preprint, 2019,
arXiv:1906.11068.
8 J. Searle, ‘Minds, Brains and Programs’, Behavioral and Brain Science, 1980, vol. 3, pp. 417–
424.
9 J. Searle, ‘Chinese room argument’, Scholarpedia, 2009, vol. 4, no. 8, p. 3100.


7. Despite the controversial nature of the Turing test, it has been invaluable to
the development of AI. One of the main legacies of the test is the Loebner Prize.10
This annual competition uses the Turing test, albeit in modified form, to test the
capabilities of chatbots in several areas. The most human-like chatbot wins the
Prize. The ultimate goal is to find a chatbot that passes the Turing test and is thus
capable of making the judge believe that it is a human. However, this goal has not
been achieved so far. Although chatbots are increasingly capable of having
a human-like conversation, research by Jacquet and others shows that they lack
the capability of answering complex questions and elaborating on their previous
answers.11 Following Turing’s arguments, we can thus conclude that machines
cannot think yet, although they are becoming increasingly good at learning how
to imitate humans.

3. BASIC PRINCIPLES OF AI
3.1. KNOWLEDGE-BASED VERSUS DATA-BASED LEARNING

8. An important question is how AI actually learns to imitate humans. First of all, computers are designed to process information. AI systems, therefore,
approach intelligence as a way of information processing.12 In order to do so, a
computer needs two important components: an internal way of representing this
information and a way to transform this information into the desired output.
This latter component contains the intelligence as a computer can only transform
information into output if it has learned how to do so. Basically, two different
approaches exist for this learning step, namely knowledge-based learning and
data-based learning.
Knowledge-based approaches initially dominated AI research because data
was still scarce. Typically, an expert in the field tried to pour his knowledge
into a model (e.g. a set of rules, patterns or logical statements). This model was
subsequently implemented as a series of instructions – and thus as an algorithm
– in the machine to obtain its goal. Over time, data-driven methods increasingly emerged.
Systems were presented with many examples of inputs and the corresponding
outputs. The system itself had to find or recognise patterns in order to provide
correct answers. This process of deducing patterns and learning from examples/
experience is called machine learning. It is considered to be the motor of AI.
Nowadays both learning approaches are increasingly combined, for example in

10 H. Loebner, ‘How to hold a Turing test contest’, in R. Epstein, G. Roberts and G. Beber (eds),
Parsing the Turing test (Dordrecht: Springer, 2009), pp. 173–179.
11 B. Jacquet and J. Baratgin, ‘Mind-Reading Chatbots: We Are Not There Yet’, International
Conference on Human Interaction and Emerging Technologies, 2020, pp. 266–271.
12 Steels, ‘Artificiële Intelligentie: Naar een vierde industriële revolutie’, 44 p.


agents.13 Whereas a data-based method is used to understand the input, acting upon it is handled by a knowledge-based approach. Both methods have their strengths and weaknesses. Data-based techniques are generally better at making optimal use of the immense amount of data that is now being produced and at performing pattern recognition. Knowledge-based techniques, by contrast, are better at reasoning rationally towards a solution and at explaining in language what knowledge they contain or how they arrive at the solution.
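
By way of illustration, the following minimal sketch in Python places the two approaches side by side: a hand-coded (knowledge-based) spam rule with arbitrarily chosen thresholds next to the same decision learned from a handful of labelled examples. The tiny dataset, the thresholds and the use of the scikit-learn library are assumptions made purely for illustration.

```python
# Knowledge-based versus data-based learning, in miniature.
# All thresholds and data below are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Knowledge-based: an expert pours a rule into code.
def expert_spam_rule(num_links: int, num_exclamations: int) -> bool:
    return num_links > 3 and num_exclamations > 5   # hand-chosen thresholds

# Data-based: the same decision is learned from labelled examples.
X = [[0, 1], [1, 0], [5, 7], [6, 9]]   # features: number of links, exclamation marks
y = [0, 0, 1, 1]                       # labels: 0 = not spam, 1 = spam
model = DecisionTreeClassifier().fit(X, y)

print(expert_spam_rule(6, 9))          # rule-based answer: True
print(model.predict([[6, 9]]))         # learned answer, e.g. [1]
```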

3.2. INTERNET OF THINGS AND BIG DATA

9. During the last decade, smart devices have become increasingly popular. They consist of a machine (e.g. a watch, refrigerator or car) connected to a smaller (mostly electronic) apparatus that is used to detect changes in or around the machine (e.g. movement, heat, light, noise or pressure). This apparatus, also known as a sensor, sends its findings or data to other electronic devices, which can then act upon them. When such a machine is able to connect with the user or with other machines through a network such as the internet, it is called a smart device. Typically, smart devices store their data or
information in a so-called cloud. This is a cluster of interconnected computers
with the purpose of providing massive data storage and processing power for
many smaller devices with less memory and weaker computing power.14

10. Nowadays, large groups of smart devices are in some way interconnected,
often through the cloud. This eventually leads to the Internet of Things in
which these devices exchange information with each other without requiring
a human to supervise each action.15 Examples are manifold. Think of a smart
watch monitoring the user’s health and presenting this data on the user’s
smartphone. Another example is a smart home in which climate, lights or music
are controlled by the user through an agent or smart device that knows all his/
her preferences.16 One can also think of smart cities where all traffic lights
are interconnected to optimise traffic flows.17 The IoT market is growing very

13 M.J. Wooldridge and N.R. Jennings, ‘Intelligent agents: Theory and practice’, The knowledge
engineering review, 1995, vol. 10, no. 2, pp. 115–152; L. Padgham and M. Winikoff, Developing
intelligent agent systems: A practical guide, 1st edn (Chichester: John Wiley & Sons, 2004), 240
p. Also see part 4.5.
14 See also: Z. Mahmood, Connectivity Frameworks for Smart Devices: The Internet of Things
from a Distributed Computing Perspective (Derby: Springer, 2016), 356 p.
15 L. Atzori et al., ‘The internet of things: A survey’, Computer networks, 2010, vol. 54, no. 15,
pp. 2787–2805.
16 D. Pavithra and R. Balakrishnan, ‘IoT based monitoring and control system for home
automation’, 2015 global conference on communication technologies, 2015, pp. 169–173.
17 P. Rizwan et al., ‘Real-time smart traffic management system for smart cities by using Internet
of Things and big data’, 2016 international conference on emerging technological trends, 2016,
pp. 1–7.


rapidly. A recent study cites staggeringly high numbers regarding the growth of
IoT. It also stresses the need to be mindful of privacy and security issues in order
to protect the data generated by IoT from being used for malign purposes.18 The
rise of IoT implies that much data is being generated, giving AI the opportunity to learn from so-called big data: everything that has to be learned can be found in many examples (numerous input-output pairs, called samples), each described by many characteristics (called features).

3.3. MACHINE LEARNING

11. An important subfield of AI that benefits from big data is machine learning. As already mentioned, ML aims to develop systems that learn models
from data. According to Mitchell, a computer program is said to learn from
data for a certain task if its performance with respect to that task improves after
gaining experience in solving the specific task.19 For an image classification task,
for instance, the model may be designed to distinguish pictures of dogs from
pictures of cats. The task of learning (‘training’) the model involves identifying
parameters, structures and hidden concepts in the presented data (‘training
data set’). In the case of the previous example, the model would be presented
with numerous images of cats and dogs aiming to learn to distinguish between
those two categories. The learned model can subsequently be used to make a
prediction for any new image of a cat or a dog. Applications of ML are manifold
ranging from sales prediction and fraud detection to disease prediction.
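
The train-then-predict workflow just described can be sketched in a few lines of Python with the scikit-learn library, here using its built-in digits dataset as a stand-in for the cats-and-dogs example (both the dataset and the choice of classifier are assumptions made purely for illustration).

```python
# A minimal sketch of supervised learning: train a model on labelled
# examples, then use it to predict labels for new, unseen examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)                  # images as pixel vectors + labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)            # a simple classifier
model.fit(X_train, y_train)                          # 'training': learn from the examples

print("accuracy on unseen images:", model.score(X_test, y_test))
print("prediction for one new image:", model.predict(X_test[:1]))
```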

12. Two subcategories of ML can be distinguished depending on how the learning happens: supervised and unsupervised learning. In supervised learning,
the algorithm learns which output to associate with a certain input (‘labelled
data’). One can compare it to a child that learns to name animals through a
parent showing and naming the specific animal on a picture. In unsupervised
learning, the algorithm has only been given input without any output. Its main
aim is to discover patterns and structures on its own (‘unlabelled data’). Semi-
supervised learning is a type of learning that combines both subcategories. For
some part of the data, input and output are given, while for other parts only
input is available.
In addition to supervised and unsupervised learning, a third subcategory
of machine learning models is typically distinguished, namely reinforcement
learning. In reinforcement learning, the intelligent agent needs to learn an
optimal set of actions – a control policy – to achieve a certain goal. The algorithm

18 H. Espinoza et al., ‘Estimating the impact of the Internet of Things on productivity in Europe’, Heliyon, 2020, vol. 6, no. 5, e03935.
19 T. Mitchell, Machine learning (New York: McGraw-Hill, 1997), 414 p.


is not told how to do this but only receives feedback in the form of rewards.
These rewards can be positive or negative experiences bringing the agent closer
to or further away from the (desired) goal. This can be compared to how children
learn to ride a bike. Parents typically do not explicitly tell their children how to
learn to cycle. Instead, children themselves explore which actions positively or
negatively contribute to learning how to ride the bike. They improve their biking
skills by learning from these experiences.
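
The reward-driven learning loop described above can be sketched with tabular Q-learning on a toy five-state corridor in which the agent must discover that walking right leads to the goal. All numbers (rewards, learning rate, number of episodes) are illustrative assumptions.

```python
# A minimal sketch of reinforcement learning: the agent is never told what
# to do, it only receives rewards and gradually learns a policy from them.
import random

n_states, actions = 5, [-1, +1]          # states 0..4, actions: move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != n_states - 1:             # until the goal state is reached
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else -0.01     # the only feedback
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
        s = s_next

# Typically prints [1, 1, 1, 1]: in every state the learned policy is 'go right'.
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```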

3.4. ARTIFICIAL NEURAL NETWORKS AND DEEP LEARNING

13. An important class of supervised learners is formed by artificial neural networks (ANNs), which tend to perform very well in case of a large number of
samples. ANNs are loosely inspired by the human neural system. The human
nervous system consists of about 86 billion neurons.20 These neurons receive
electrical input signals from neighbouring cells and, depending on the input,
produce an output. This output is then propagated to further neurons.
ANNs consist of a large number of artificial neurons that mimic the functioning of biological neurons by reacting to their inputs, and thus by letting their output depend on the input. A weighted sum of the inputs, shifted by a threshold, is passed through a so-called activation function, resulting in the output of the neuron. A typical and simple architecture of an ANN is given in Figure 1. This ANN consists of three layers: the input layer, the hidden layer and the output layer. The input layer consists of M inputs, called X1, X2, …, XM in the example. Each input can be connected to each of the Q hidden neurons, in which the inputs are transformed with linear combinations and an activation function, leading to values Z1, Z2, …, ZQ. Each of the hidden neurons can then be connected to each of the output neurons, in which the outputs of the hidden neurons are again transformed to obtain the final output values called Y1, …, YP.
Let us illustrate this with an example. Imagine an ANN designed to forecast
tomorrow’s weather. Possible inputs could be characteristics of the weather
today: hourly statistics of temperature, air pressure or humidity. These are the
X1, X2, …, XM in Figure 1. Tomorrow’s weather could be defined by tomorrow’s
minimum and maximum temperature, humidity, hours of sunshine or
probability of rain. These are the resulting values Y1, …, YP. The hidden layer
could then transform today’s hourly statistics into a useful intermediate result
such as global statistics of today’s temperature or air pressure, which are then
the resulting Z1, Z2, …, ZQ. These mean values can then be combined to derive

20 J. Randerson, ‘How many neurons make a human brain? Billions fewer than we thought’, The
Guardian, 28 February 2012, https://www.theguardian.com/science/blog/2012/feb/28/how-
many-neurons-human-brain.


a forecast for tomorrow. Note that the hidden layer is not user-defined: the user
cannot choose which intermediate results or steps are taken. Hence the name
hidden layer. One does not know exactly what happens in that layer. Only input,
output, and the number of neurons of the hidden layer are determined by the
user. The other aspects are determined by the training algorithm. The training
itself is done by giving the network many examples (e.g. which weather pattern
from a certain day corresponds to which weather pattern for the day after?). The
algorithm looks at every example and fine-tunes all linear combinations of the ANN until it succeeds in reproducing the examples as well as possible. At that point, the linear combinations are fixed, and the neural network is ready to be used for new predictions.
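
The forward computation through such a network can be sketched in a few lines of NumPy. The layer sizes (three inputs, four hidden neurons, two outputs) and the random weights are illustrative assumptions; in a trained network the weights would be the linear combinations found by the training algorithm.

```python
# A minimal sketch of the forward pass through the network of Figure 1.
import numpy as np

rng = np.random.default_rng(0)
M, Q, P = 3, 4, 2                                      # inputs, hidden neurons, outputs

W1, b1 = rng.normal(size=(Q, M)), rng.normal(size=Q)   # input  -> hidden weights, thresholds
W2, b2 = rng.normal(size=(P, Q)), rng.normal(size=P)   # hidden -> output weights, thresholds

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))                    # an activation function

x = np.array([21.0, 1013.0, 0.65])                     # X1..XM: e.g. today's weather features
z = sigmoid(W1 @ x + b1)                               # Z1..ZQ: hidden-layer values
y = W2 @ z + b2                                        # Y1..YP: the network's outputs

print(z, y)
```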

Figure 1. Schematic of an artificial neural network with one hidden layer. Every circle
denotes a computing node (neuron), resulting in the value marked within the node.21

ANNs are used for a wide range of applications such as image recognition and
speech recognition. One could, for example, build an ANN to make a distinction
(‘classify’) between male and female voices. For this, thousands of examples of
female and male voices would have to be recorded and presented to the ANN.
A well-trained ANN will be able to distinguish correctly between all male
and female voices it has seen during training. It will also be able to correctly

21 C. Middag, ‘Automatic analysis of pathological speech’, Doctoral dissertation UGent, 2012, 180 p., https://biblio.ugent.be/publication/3007443.


classify new voices, which is of course the whole goal of training: learning to correctly distinguish between new male and female voices after training, without any human guidance.

14. ANNs have existed since the 1940s22 and have been improved ever since. Starting from a two-layered network that was not able to learn basic logical functions, robust training algorithms with a solid mathematical basis have been introduced. The number of layers has also been increased, and several configurations were introduced, including loops to make memory functions available. The biggest improvement, however, occurred only one decade ago. The increase in computing power and the rise of big data resulted in massive datasets to train huge networks on. It suddenly became possible to train ANNs with more than just a few layers. Such networks are referred to as deep neural networks (DNNs). This deep learning technology led to major breakthroughs in several fields within AI.23 Results in image and speech recognition based on deep learning were remarkably better than those of any method used before. The results are impressive.
Artificial intelligence based on deep learning now outclasses humans for some
specific tasks in image recognition,24 which has significant consequences for our
society. Suddenly self-driving cars, AI healthcare diagnosis tools and so many
other futuristic applications come within reach.25

15. Although deep learning is reshaping the future of AI, many experts raise
concerns about this technology. First of all, due to the high number of parameters
to be trained in a DNN, the tuning of the parameters can become very complex.
Finding the optimal values for all the linear combinations in the neural network
can be compared to searching for the lowest point on a very bumpy surface in an
extremely high-dimensional space. Often, only a local minimum will be found.
As a result, training a DNN on the same data multiple times, thereby starting
the search each time on a different random spot, often leads to finding another
local minimum and thus to different settings for the network. Consequently, the
behaviour of these slightly different networks can affect the overall performance.
A second weakness of DNNs is their brittleness. Slightly different input
can lead to another linear combination being the strongest. This can result in
a totally different output. Blurring a picture of a cat or slightly changing the
background and angle of the picture can (already) lead to a failure of the system

22 R.S. Lee, ‘AI Fundamentals’, Artificial Intelligence in Daily Life, 2020, pp. 19–37.
23 M.R. Minar and J. Naher, ‘Recent advances in deep learning: an overview’, arXiv preprint,
2018, arXiv:1807.08169.
24 A. Krizhevsky et al., ‘Imagenet classification with deep convolutional neural networks’,
Advances in neural information processing systems, 2012, pp. 1097–1105.
25 C. Badue et al., ‘Self-driving cars: A survey’, Expert Systems with Applications, 2020, vol.
165, p. 113816; K. H. Yu et al., ‘Artificial intelligence in healthcare’, Nature biomedical
engineering, 2018, vol. 2, no. 10, pp. 719–731. We will briefly discuss this further in part 5.1.


to recognise the cat in the picture.26 This of course makes it very easy to ‘fool’
a DNN. Changing only a few pixels in a picture of a lion, for instance, can
suddenly make the DNN recognise it as a library.27 This fragility of DNNs can
also lead to dangerous situations,28 which have to be prevented in all cases.
Another concern relates to the fact that DNNs are so-called ‘black box’
models. Despite their high accuracy, it is very difficult to explain why a certain
input leads to the predicted output. It can even be impossible to understand why
the output was calculated in that very specific way. This lack of interpretability
and explainability sometimes makes it ethically impossible to use these
methods. The only information that can be retrieved is a mathematical formula
consisting of non-linear combinations of the different inputs, which cannot
be converted into an explanation a human would understand. One can easily
imagine situations in which this explainability in human language of decisions
made by AI is crucial. Take the example of an AI system used in dermatology.
When a certain mole is identified as malignant, doctors would like to know why
it is identified that way (e.g. size, colour, irregularity, etc.). However, a DNN
only provides an answer in the form of benign/malignant. Another example occurs when clients of a financial institution are refused a loan based on a DNN. They would of course want to know the reason(s) for this refusal. In these and many
other cases, one needs so-called ‘explainable’ AI,29 which does not only provide
an output (a prediction) but also an explanation. This is a sort of ‘followed
path’ through the flow chart of the model to show how it came to the specific
prediction.
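
As a contrast with a black-box DNN, the sketch below trains a small decision tree, an inherently interpretable model, on scikit-learn's built-in breast cancer dataset (both choices are assumptions made purely for illustration) and prints the kind of 'followed path' through the model that a doctor could inspect.

```python
# An interpretable model next to the black box: the tree's decision rules
# can be printed and questioned, which a DNN does not allow.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The 'followed path': every branch is a human-readable rule on a feature.
print(export_text(tree, feature_names=list(data.feature_names)))
print("prediction for the first patient:", data.target_names[tree.predict(data.data[:1])[0]])
```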

3.5. DATA BIAS AND MODEL BIAS

16. Another concern when using DNNs or, more generally, data-based learning algorithms relates to the data used to train the model. One
simple rule is that the model cannot learn what it has never seen. Imagine
training a classifier for tomatoes. Pictures of all kinds of red tomatoes are used
to train this model. If this model has learned to identify tomatoes as being red
objects, it will not be able to classify yellow cherry tomatoes as actual tomatoes.
One should thus handle training data with care to prevent ‘data bias’. This is a
deviation from the true variety in the nature of the observed objects. There are numerous
examples of problems associated with data bias, some of which having rather

26 M. Mitchell, Artificial intelligence: A guide for thinking humans, 1st edn (Pelican: Penguin UK,
2019), 336 p.
27 D. Heaven, ‘Why deep-learning AIs are so easy to fool’, Nature, 2019, vol. 574, pp. 163–166.
28 See in this regard part 5.1.1.
29 A.B. Arrieta et al., ‘Explainable Artificial Intelligence (XAI): Concepts, taxonomies,
opportunities and challenges toward responsible AI’, Information Fusion, 2020, vol. 58,
pp. 82–115.


dramatic consequences. Imagine that an algorithm-based passport photo-checker is mainly trained on pictures of white people. As a consequence, pictures of Asian people may be rejected because their eyes appear to be closed.30 Google’s image recognition algorithms accidentally categorised two black people in a picture as gorillas.31 The algorithm that Amazon used when scanning cover letters also appeared to discriminate against women.32 To remedy this problem, Google has created a contest to develop better algorithms without racial data bias.33
Data-based learning algorithms can also be subject to model bias. Model
bias describes how well a model matches the data used for training. If an
inappropriate, often too simple, model structure has been chosen (e.g. a linear model for a quadratic curve), the model will not match the data set and the model bias will be high. A low bias indicates a good match between
the training data and the model. A possible solution for this problem is to opt
for complex models to decrease the model bias. However, one has to be very
careful as more complex models tend to memorise every detail of the training
data, thereby losing sight of the bigger picture. As a consequence, complex
models may not be able to generalise towards new data even if this new data is
only slightly different from the training data. The latter problem is referred to as
‘overfitting’. The balance between these two problems is also known as the bias-
variance trade-off problem.34
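
The trade-off can be illustrated with a short NumPy sketch that fits polynomials of increasing complexity to noisy quadratic data; the dataset sizes, noise level and degrees are illustrative assumptions.

```python
# Bias-variance trade-off in miniature: an overly simple model underfits,
# an overly complex one memorises the noise and generalises poorly.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):                               # noisy observations of a quadratic truth
    x = rng.uniform(-1, 1, n)
    return x, x**2 + rng.normal(scale=0.1, size=n)

x_train, y_train = sample(15)
x_test, y_test = sample(500)                 # 'new' data the model has never seen

for degree in (1, 2, 9):                     # too simple, about right, too complex
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_mse, 4), round(test_mse, 4))
# Typically, degree 1 has high errors everywhere (high model bias), degree 9 has
# the lowest training error but a worse test error (overfitting), and degree 2
# balances the two.
```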

4. AI SUB-DISCIPLINES

17. Before we delve deeper into its numerous applications, we briefly touch
upon a few sub-disciplines of AI to clarify a number of concepts. As ML has
already been addressed above, we will not discuss it again in the following
paragraphs.

30 S. Cheng, ‘An algorithm rejected an Asian man’s passport photo for having “closed eyes”’,
Quartz, 7 December 2016, https://qz.com/857122/an-algorithm-rejected-an-asian-mans-
passport-photo-for-having-closed-eyes/.
31 J. Vincent, ‘Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling
tech’, The Verge, 12 January 2018, https://www.theverge.com/2018/1/12/16882408/google-
racist-gorillas-photo-recognition-algorithm-ai.
32 J. Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters,
10 October 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/
amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0
8G.
33 T. Doshi, ‘Introducing the Inclusive Images Competition’, Google AI blog, 6 September 2018,
https://ai.googleblog.com/2018/09/introducing-inclusive-images-competition.html. Also see
chapter 6 for more information.
34 P. Mehta et al., ‘A high-bias, low-variance introduction to machine learning for physicists’,
Physics reports, 2019, vol. 810, pp. 1–124.


4.1. SEARCH ALGORITHMS

18. Search algorithms were among the first AI applications to break through.
As the word suggests, search algorithms look for the optimal path to reach
a goal, thereby starting from an initial state. These mostly knowledge-based
algorithms are used to, for example, solve games (chess, checkers, Go, …), find
optimal routes or schedule classes/industrial pipelines. They are also of major
importance considering that they are often used to find the best configuration of
parameter settings for many ML algorithms.
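
A minimal sketch of such a search is breadth-first search for the shortest route between two stations in a small, made-up route map (an illustrative assumption; practical planners typically use informed variants such as A*).

```python
# Breadth-first search: explore paths from the initial state outward until
# the goal is reached; the first path found is the shortest one.
from collections import deque

routes = {                                   # a tiny, made-up map of connections
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_path(start, goal):
    queue = deque([[start]])                 # paths still to be explored
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in routes[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(shortest_path("A", "E"))               # e.g. ['A', 'B', 'D', 'E']
```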

4.2. COMPUTER VISION

19. Computer vision can be seen as a machine learning problem. The data
consists of images or movies. This data is subsequently analysed by using machine
learning techniques to understand what can be seen on the image/movie. With
the introduction of deep learning, a lot of progress has been made. The progress
made since 2015 is even so extensive that computer vision (already) performs as well as humans for some tasks. The top-5 error rate35 for naming objects in images (with a thousand classes) decreased from 28.2% in 2010 to 6.7% in 2014.36 Automatic facial recognition systems are now reliably used in China to
provide access to buildings or to transfer money.37 Moreover, research shows that
computer vision can detect skin cancer more accurately than dermatologists.38
Similarly, it outperforms medical doctors at detecting breast cancer.39
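
A few lines of code nowadays suffice to apply such a model. The sketch below assumes the PyTorch and torchvision libraries (version 0.13 or later), an internet connection to download the pretrained weights and a local image file named cat.jpg; it asks a pretrained network for its top-5 guesses, the same top-5 setting used in the error rates cited above.

```python
# Image classification with a pretrained network (illustrative sketch).
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT            # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                    # the matching preprocessing

img = Image.open("cat.jpg")                          # hypothetical local image
batch = preprocess(img).unsqueeze(0)
with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top5 = probs.topk(5)
labels = weights.meta["categories"]
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(labels[idx], float(p))                     # the five most likely classes
```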

4.3. NATURAL LANGUAGE

20. Although DNNs have resulted in major leaps in the capabilities of AI,
understanding human language remains a great challenge. The field of Natural
Language Processing (NLP), which focuses on processing and understanding
written text, is still one of the most challenging sub-disciplines of AI. In addition

35 Top 5 means that the correct object has to be in the top 5 of suggested possibilities of the
learner.
36 O. Russakovsky et al., ‘Imagenet large scale visual recognition challenge’, International
journal of computer vision, 2015, vol. 115, no. 3, pp. 211–252.
37 W. Knight, ‘Paying with your face: 10 Breakthrough Technologies’, MIT Technology Review,
22 February 2017, https://mittr-frontend-prod.herokuapp.com/s/603494/10-breakthrough-
technologies-2017-paying-with-your-face/.
38 T.J. Brinker et al., ‘Deep learning outperformed 136 of 157 dermatologists in a head-to-head
dermoscopic melanoma image classification task’, European Journal of Cancer, 2019, vol. 113,
pp. 47–54.
39 P. Galey, ‘AI Is Now Officially Better at Diagnosing Breast Cancer Than Human’, Science Alert,
3 January 2020, https://www.sciencealert.com/ai-is-now-officially-better-at-diagnosing-
breast-cancer-than-human-experts.


to the actual understanding of written language, AI is also used to generate language. The field of Natural Language Generation (NLG) is relied upon to construct answers for chatbots as well as for summarising or paraphrasing. Although parsing texts, identifying the parts of a sentence and detecting spam are already quite advanced,40 sentiment analysis and translating texts remain harder to realise for NLP. Interpreting complex speech and having a true conversation are still far from solved problems.41 Moreover, NLP still struggles with typically human aspects of language such as emotions, styles, ambiguities or synonyms.42
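
One of the more tractable NLP tasks mentioned above, spam detection, can be sketched with a bag-of-words representation and a Naive Bayes classifier; the tiny hand-written dataset and the use of scikit-learn are assumptions made purely for illustration.

```python
# Spam detection in miniature: texts are turned into word counts
# (a 'bag of words') and a classifier is trained on labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "cheap pills free offer",
         "meeting moved to monday", "please review the attached contract"]
labels = [1, 1, 0, 0]                                 # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)
print(model.predict(["free prize offer"]))            # likely [1]
print(model.predict(["contract review on monday"]))   # likely [0]
```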

4.4. SPEECH

21. Whereas NLP deals with the processing of written text, automatic speech
recognisers (ASRs) are used to understand spoken language. Similarly to NLP/
NLG, speech recognition has a counterpart, namely speech synthesis. It aims
to produce speech that is as intelligible and natural as possible. Once again,
the use of deep learning has led to a major advance in the field. Recognition
errors dropped from 23% to 8% for the English Google ASR.43 Similar trends
are visible in Microsoft’s ASR system.44 As is the case for NLP, however,
speech recognition is still far from achieving a human level. It is also very sensitive to many external factors, which are mainly consequences of the training material and data bias.45 As an ASR learns from data, its results ultimately depend on the quality and variation within the training dataset that is used. An ASR trained on clean speech of native English-speaking adults, for instance, will have difficulties understanding children (who have a higher pitch and weaker articulation skills) or dialects (cf. unseen is unknown). Such systems
will also perform poorly in noisy conditions.
Speech synthesis started in the 1970s with so-called formant-based synthesisers producing the typical robot-like mechanical voices,46 and the clarity and naturalness of the produced speech have long been an issue. A first improvement was the introduction of waveform concatenation-based methods, concatenating little

40 C. Rădulescu et al., ‘Identification of spam comments using natural language processing techniques’, 2014 IEEE 10th International Conference on Intelligent Computer Communication
and Processing, 2014, pp. 29–35.
41 Jacquet and Baratgin, ‘Mind-Reading Chatbots: We Are Not There Yet’, pp. 266–271.
42 Z. Kaddari et al., ‘Natural Language Processing: Challenges and Future Directions’,
International Conference on Artificial Intelligence & Industrial Applications, 2020, pp. 236–246.
43 J. Novet, ‘Google says its speech recognition technology now has only an 8%-word error
rate’, VentureBeat, 28 May 2015, https://venturebeat.com/2015/05/28/google-says-its-speech-
recognition-technology-now-has-only-an-8-word-error-rate/.
44 W. Xiong et al., ‘The Microsoft 2017 conversational speech recognition system’, 2018 IEEE
international conference on acoustics, speech, and signal processing, 2018, pp. 5934–5938.
45 Also see supra part 3.5.
46 Y. Ning et al., ‘A review of deep learning-based speech synthesis’, Applied Sciences, 2019, vol.
9, no. 19, p. 4050.


pieces of voice together to produce the smoothest possible sentence. Nowadays,


mainly statistical parametric speech synthesis methods are used. These methods
use some form of data-based learner to predict how the text should sound
(e.g. pitch, volume, emphasis, etc.). With the introduction of deep learning,
the intelligibility and naturalness of the synthesised speech have improved
significantly.38 Yet, some major challenges (still) remain unsolved. Expressive
language or putting the emphasis on the correct word/syllable, for instance,
remain difficult. This requires that the text is correctly interpreted,
which brings us back to the challenges related to NLP.

4.5. AGENTS

22. An emerging field in which knowledge-based techniques have become very important is the domain of autonomous agents. These applications
interact with their users – being either humans or other computer programs –
and perform certain tasks for them. In order to do so, they have to understand
the meaning and the intent of the user’s questions/commands. Therefore, they
need a language production/understanding component. They should be able
to look up things on the web themselves, give answers in natural language and
perform the desired actions such as checking the weather, making reservations
or buying products.47 Famous examples of such agents are SIRI,48 Google
Assistant,49 Microsoft Cortana50 and Alexa.51 These agents have already been
compared in the literature.52

5. THE CURRENT AND FUTURE USE OF AI APPLICATIONS

5.1. AN OVERVIEW OF SOME AI APPLICATIONS IN THE PRESENT AND NEAR FUTURE

23. As already mentioned, AI is not new. It has been evolving since the
1940s with ups and downs (cf. AI ‘winters’ and ‘summers’).53 Due to recent

47 Steels, ‘Artificiële Intelligentie: Naar een vierde industriële revolutie’, 44 p.


48 Apple, ‘Siri’, https://www.apple.com/siri/.
49 Google, ‘Google Assistant’, https://assistant.google.com/.
50 Microsoft, ‘Cortana’, https://www.microsoft.com/en-us/cortana.
51 Amazon, ‘Alexa User Guide: Learn What Alexa Can Do’, https://www.amazon.com/
b?ie=UTF8&node=17934671011.
52 G. López et al., ‘Alexa vs. Siri vs. Cortana vs. Google Assistant: a comparison of speech-based
natural user interfaces’, International Conference on Applied Human Factors and Ergonomics,
2017, pp. 241–250.
53 M. Haenlein and A. Kaplan, ‘A brief history of artificial intelligence: On the past, present, and
future of artificial intelligence’, California management review, 2019, vol. 61, no. 4, pp. 5–14.


breakthroughs in computing power, data availability and deep learning, however, it is again attracting a lot of attention. The expectations of AI in the
future are thus very high. Nevertheless, the media often portray scenarios on
the use and capacities of AI systems that may be unrealistic. We will, therefore,
focus on more realistic expectations in this part, thereby relying on scientific
research. At Stanford University, the 100 Year Study Panel54 gathers experts in
different subfields of AI from both academia and industry. The aim of the Panel
is to examine the evolution of AI during a century. Every few years, it meets to
review recent changes and trends. Based on this analysis, predictions of future
developments in the field are made. In their most recent report dating from 2016,
the experts foresee a significant rise of AI applications across a wide range of
industries such as transportation, robots, healthcare and education by 2030.55
Without being exhaustive, we discuss some important areas in which AI will
arguably be extensively used during the coming decade.

5.1.1. Transportation

24. Experts agree that future transportation will be electric and autonomous.56
Computer vision used in self-driving cars is less prone to errors than human
vision thanks to deep learning. The same goes for a vehicle’s responses to traffic
risks. Self-driving cars will bring commuters to their workplace allowing them
to relax or prepare their working day. This will lead to less stress and safer traffic.
Young people, the elderly and the disabled will also benefit from increased mobility.
According to Stone and others, autonomous transportation will not be limited
to cars but will also involve trucks or drones.57 As powerful as current computer
vision algorithms may be, they still have vulnerabilities that can potentially
be very dangerous. These should definitely be addressed before the mass
introduction of self-driving cars. Research has shown that simple adversarially
crafted stickers can cause errors in road sign and image recognition models.58
Sometimes, a few pixels can make the difference between a correct and an
incorrect recognition.59 Current research focuses on the defence against these
and other so-called adversarial attacks.

54 Stanford University, ‘One Hundred Year Study on Artificial Intelligence (AI100)’, https://
ai100.stanford.edu/.
55 P. Stone et al., ‘Artificial Intelligence and Life in 2030. One hundred year study on artificial
intelligence: Report of the 2015–2016 Study Panel’, Stanford University, September 2016, 52 p.
56 See: Steels, ‘Artificiële Intelligentie: Naar een vierde industriële revolutie’, 44 p.; Stone et al.,
’Artificial Intelligence and Life in 2030. One hundred year study on artificial intelligence:
Report of the 2015–2016 Study Panel’, 52 p.
57 Stone et al., ’Artificial Intelligence and Life in 2030. One hundred year study on artificial
intelligence: Report of the 2015–2016 Study Panel’, 52 p.
58 K. Eykholt et al., ‘Robust physical-world attacks on deep learning visual classification’,
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018,
pp. 1625–1634.
59 T. Maliamanis and G.A. Papakostas, ‘Adversarial computer vision: a current snapshot’,
Twelfth International Conference on Machine Vision, 2020, vol. 11433, p. 1143328.


Another important application domain of AI within transportation consists of transportation planning and the regulation of traffic dynamics. With the rise of IoT, traffic can easily be monitored and thus planned in an optimal way to reduce traffic jams and travel time, using, among others, search and planning algorithms.

5.1.2. Robots

25. Robots are examples of so-called cyber-physical systems (CPS): deeply intertwined combinations of a controlling software component and mechanical
or electronic parts.60 Self-driving vehicles are a special subclass of robots. The
control, monitoring, data transfer and data exchange can take place locally, on
an embedded computer or over the internet. Other examples of cyber-physical
systems are monitored sensor networks or IoT devices. Robots can be seen
as a more local CPS application concentrated in one machine and designed
specifically to replace humans in one or more situations.
There are several views on the breakthrough of robots in our daily life.
Stone and others suggest that home robotics will find their way in households
because of advances in AI and mechanics.61 Steels by contrast is somewhat more
reserved as some mechanical issues including battery life, safety and production
are still unresolved.62 Therefore, robots are more likely to be found in industrial
production environments for assembly and quality control. Robots are very good
at accurately and reliably locating, moving and positioning objects. As such, they
are often used to automate repetitive and sometimes dangerous jobs. Examples
are welding robots, spray-painting robots, CNC machines or packaging robots for
chocolates.63 However, manipulation tasks such as folding clothes or emptying
the dishwasher remain extremely difficult.
While many robots in industrial applications are deployed in the workplace,
another smaller type of robot is gradually on the rise: the collaborative robot or
‘cobot’. A cobot is specially designed to work with humans. It is often smaller
than the average robot and equipped with sufficient safety measures to ensure
that cooperation with humans takes place in a responsible way. However, a major
problem with cobots is the level of human-computer interaction.64

60 E.A. Lee, ‘Cyber physical systems: Design challenges’, 2008 11th IEEE International
Symposium on Object and Component-Oriented Real-Time Distributed Computing, 2008,
pp. 363–369.
61 Stone et al., ’Artificial Intelligence and Life in 2030. One hundred year study on artificial
intelligence: Report of the 2015–2016 Study Panel’, 52 p.
62 Steels, ‘Artificiële Intelligentie: Naar een vierde industriële revolutie’, 44 p.
63 H. Van Brussel et al., ‘Naar een inclusieve Robotsamenleving: Robotisering, automatisering
en werkgelegenheid’, KVAB Press, 2016, 47 p. Also see chapter 11.
64 See for more information J.E. Michaelis et al., ‘Collaborative or Simply Uncaged?
Understanding Human-Cobot Interactions in Automation’, Proceedings of the 2020 CHI
Conference on Human Factors in Computing Systems, 2020, pp. 1–12.


5.1.3. Healthcare

26. Countless AI-based applications are currently being developed or investigated for use in healthcare. Those applications draw on all possible subfields of AI. It has already been mentioned that recent breakthroughs even resulted in computer vision being better than doctors at recognising some types of cancer. This technology can be relied upon by doctors to diagnose diseases. Genes are being analysed by AI systems to unravel hidden patterns between human constitution and the risk of disease65 as well as to develop personalised medicines. These medicines are further refined taking into account
a patient’s individual needs.66 Voices are also being analysed to detect articulation
problems,67 depression, drug intake or dementia.68 All these applications have
promising results that can be used in daily practice in the near future.
Moreover, medical insights are gathered automatically from assimilating
a large number of medical databases with NLP techniques.69 Chatbots are
being built to help patients find the right care or hospital (e.g. in times of the
COVID-19 pandemic70). Electronic health records are completed and analysed
automatically to ensure a better follow-up of patients.71 AI-powered robots are
also used to assist doctors during precision surgery.72 In addition to working
with humans, robots can also become part of the human. Powered exoskeletons
can be seen as wearable robots designed to restore locomotion of parts of the
human body.73 Researchers are also working towards intelligent prosthetics
controlled by neural impulses, which are understood by AI.74

65 J. Li et al., ‘Decoding the genomics of abdominal aortic aneurysm’, Cell, 2018, vol. 174, no. 6,
pp. 1361–1372.
66 E.C. Nice, ‘The status of proteomics as we enter the 2020s: Towards personalised/precision
medicine’, Analytical Biochemistry, 2020, p. 113840.
67 C. Middag et al., ‘Robust automatic intelligibility assessment techniques evaluated on
speakers treated for head and neck cancer’, Computer speech & language, 2014, vol. 28, no. 2,
pp. 467–482.
68 B. Desplanques and K. Demuynck, ‘Cross-lingual speech emotion recognition through factor
analysis’, 19th Annual Conference of the International-Speech-Communication-Association,
2018, pp. 3648–3652.
69 C. Luque et al., ‘An advanced review on text mining in medicine’, Wiley Interdisciplinary
Reviews: Data Mining and Knowledge Discovery, 2019, vol. 9, no. 3, e1302.
70 A.S. Miner et al., ‘Chatbots in the fight against the COVID-19 pandemic’, npj Digital
Medicine, 2020, vol. 3, no. 1, pp. 1–4.
71 P. Yadav et al., ‘Mining Electronic Health Records (EHRs): A Survey’, ACM Computing
Surveys, 2018, vol. 50, no. 6, pp. 1–40.
72 M.G. Fujie and B. Zhang, ‘State-of-the-art of intelligent minimally invasive surgical robots’,
Frontiers of Medicine, 2020, pp. 1–13.
73 A.S. Gorgey, ‘Robotic exoskeletons: The current pros and cons’, World journal of orthopedics,
2018, p. 112.
74 J.S. Hussain et al., ‘Recognition of new gestures using myo armband for myoelectric
prosthetic applications’, International Journal of Electrical & Computer Engineering, 2020,
vol. 9, no. 9, p. 2088.


27. Although the development and use of AI in healthcare is taking important steps and is indeed looking very promising, a human doctor remains indispensable. To begin with, the doctor must always ensure that no ‘wrong’ decisions are made by the AI system. Moreover, acting in an empathetic way remains very difficult for AI systems; this human element, which is crucial in a doctor-patient relationship, cannot yet be ensured by AI. Therefore, we
wish to emphasise that AI is there to serve humans and not to replace them. Or,
to cite professor Langlotz: ‘AI will not replace radiologists, but radiologists who
use AI will replace radiologists who do not’.75

5.1.4. Education

28. The need for personalised education has been steadily growing for decades.
AI is a powerful tool to satisfy this need. Researchers have long focused on the development of interactive intelligent tutors. These systems are
capable of interacting with the student at his/her own pace and level, thereby
giving personal feedback and personalising the exercises. Nowadays, the internet
is full of Massive Online Open Courses (MOOCs) that provide direct and free
education to very large groups of people, including those who normally do not
have access to high-quality education. As a result, people from all over the world
can now follow courses offered by the world’s top universities. Participants are
encouraged to join group discussions and receive automated (AI-powered) feedback
on their assignments.
On a smaller scale, speech recognition-based tutors teach children how
to read76 or help those with dyslexia by improving their reading skills.77 Both
ASRs and NLP are increasingly being used to teach students a second language
or a specific way of articulating.78 AI can also be used to automatically subtitle a
lecturer’s speech and translate his/her presentation. This way, lectures will in the
future become available even for those with another mother tongue.

5.1.5. Public Safety and Security

29. AI has already been relied upon for some time in the safety and security
domain. Surveillance cameras can recognise license plates, even in suboptimal

75 As reported in S. Reardon, ‘Rise of Robot Radiologists’, Nature, 19 December 2019, https://www.nature.com/articles/d41586–019–03847-z.
76 J. Duchateau et al., ‘Developing a reading tutor: Design and evaluation of dedicated speech
recognition and synthesis modules’, Speech Communication, 2009, vol. 51, no. 10, pp. 985–
994.
77 T. Athanaselis et al., ‘Making assistive reading tools user friendly: a new platform for Greek
dyslexic students empowered by automatic speech recognition’, Multimedia tools and
applications, 2014, vol. 68, no. 3, pp. 681–699.
78 C.S.C. Dalim et al., ‘Using augmented reality with speech input for non-native children’s
language learning’, International Journal of Human-Computer Studies, 2020, vol. 134,
pp. 44–64; H.T. Bunnell et al., ‘STAR: articulation training for young children’, Sixth
International Conference on Spoken Language Processing, 16–20 October 2000, pp. 1–4.


circumstances.79 Facial recognition is also used in many places. China deploys this
technology in stores, schools and airports.80 Computer vision is strong enough to detect anomalies in human behaviour and can potentially identify criminal acts. In addition to computer vision, AI has proven to be very good and efficient at fraud detection and at combating white-collar crime in general.81 Another
application of AI in security relates to defence. While computer vision methods
can be used to save human lives, for example by deploying drones and robots to
explore the terrain, they can also easily be abused for warfare purposes.82

5.1.6. Arts and Entertainment

30. AI can also be relied upon to support or create art and entertainment.
Applications in this area are manifold. Think of all the video games that use AI
to enhance interaction with the player. AI can be used to create story lines or to generate game levels as well. Moreover, AI may be able to learn a certain painting style, create new paintings or learn to transfer styles from one painting to another.83 AI can even learn how to create music in a specific style. Think of the computer programme Experiments in Musical Intelligence
(EMI). It can analyse existing music and create new compositions in the style of
the original music without any human intervention.84
All these learners are based on recent developments in deep learning and
succeed quite well in mimicking a human style. Nevertheless, they only do so for
a short time (music) or on a particular surface (painting). Learners seem to lack
the knowledge of the broader picture. In a recent study, AI-created paintings
were evaluated significantly lower than those created by humans.85 This indicates
that the gap between human and computer art is far from bridged. Another
example is the movie Sunspring.86 The script – including stage directions – was
completely generated by NLP. With a score of 5.7 on the Internet Movie Database

79 W. Weihong and T. Jiaoyang, ‘Research on License Plate Recognition Algorithms Based on Deep Learning in Complex Environment’, IEEE Access, 2020, vol. 8, pp. 91661–91675.
80 S. Shead, ‘Chinese residents worry about rise of facial recognition’, BBC News, 5 December
2019, https://www.bbc.com/news/technology-50674909.
81 Stone et al., ’Artificial Intelligence and Life in 2030. One hundred year study on artificial
intelligence: Report of the 2015–2016 Study Panel’, 52 p.
82 T. Walsh, ‘Autonomous Weapons: An Open Letter from AI & Robotics Researchers’, Future of
Life, 28 July 2015, https://futureoflife.org/open-letter-autonomous-weapons/. Also see chapter
7 of this book.
83 L.A. Gatys et al., ‘Image style transfer using convolutional neural networks’, Proceedings of
the IEEE conference on computer vision and pattern recognition, 2016, pp. 2414–2423.
84 Also see chapter 9.
85 M. Ragot et al., ‘AI-generated vs. Human Artworks. A Perception Bias Towards Artificial
Intelligence?’, Extended Abstracts of the 2020 CHI Conference on Human Factors in
Computing Systems, 2020, pp. 1–10.
86 A. Newitz, ‘Movie written by algorithm turns out to be hilarious and intense’, Arstechnica,
9 June 2016, https://arstechnica.com/gaming/2016/06/an-ai-wrote-this-movie-and-its-strangely-moving/.


(IMDb),87 it is certainly not the worst movie ever made. However, there is still a
major difference between movies created by AI and those created by humans.

5.1.7. Law

31. The first applications of AI in the domain of law date back to 1956, the year
in which logic was introduced as a tool for drafting and interpreting legal
documents.88 Scholars started to use the phrase ‘AI and law’ in the 1970s.89 This
was also when Taxman was developed to determine whether a given re-organisation
of companies was exempt from income tax.90 Since then, many applications of AI
in the legal sphere have been developed, eventually leading to the foundation of
the International Conference on AI and Law (ICAIL), first held in Boston in
May 1987.91 The International Association for AI and Law (IAAIL) was established
in 1991, followed by the
launch of the Journal of Artificial Intelligence and Law in 1992. In Europe,
the first conference of the ‘Stichting Juridische Kennissystemen’ (Foundation
for Legal Knowledge Systems) was held in 1988.92 A major overview of AI and
law can be found in the work of Bench-Capon and others. They summarise
the highlights of 25 years of the ICAIL.93 The many contributions in this book
illustrate that AI has a major impact on almost every legal domain ranging from
intellectual property and human rights to liability, labour, consumer protection,
legal procedure and data protection.

5.2. AI IN THE MORE DISTANT FUTURE

32. This chapter shows that AI is on the rise. We, therefore, expect great things
from this technology in the near and distant future. Yet, it remains difficult to
predict how AI will evolve and what limitations it may have. Some argue that AI
will soon reach the point of ‘singularity’.94 This is the point at which AI systems

87 IMDB, ‘Sunspring’, 9 June 2016, https://www.imdb.com/title/tt5794766/?ref_=fn_al_tt_1.
88 L.E. Allen, ‘Symbolic logic: A razor-edged tool for drafting and interpreting legal documents’,
Yale L.J., 1957, vol. 66, no. 6, pp. 833–879.
89 B.G. Buchanan and T.E. Headrick, ‘Some Speculation about Artificial Intelligence and Legal
Reasoning’, Stanford Law Review, 1970, vol. 23, no. 1, pp. 40–62 as reported in F. Coenen and
T. Bench-Capon, ‘A Brief History of AI and Law’, 12 December 2017, AI & Law Workshop,
https://cgi.csc.liv.ac.uk/~frans/KDD/Seminars/historyOfAIandLaw_2017-12-12.pdf.
90 L.T. McCarty, ‘Reflections on TAXMAN: An experiment in artificial intelligence and legal
reasoning’, Harvard Law Review, 1977, vol. 90, no. 5, pp. 837–893.
91 See: https://dl.acm.org/conference/icail.
92 Coenen and Bench-Capon, ‘A Brief History of AI and Law’.
93 T. Bench-Capon et al., ‘A history of AI and law in 50 papers: 25 years of the international
conference on AI and law’, Artificial Intelligence and Law, 2012, vol. 20, no. 3, pp. 215–319.
94 R. Kurzweil, The singularity is near: When humans transcend biology, 1st edn (New York:
Viking Penguin, 2005), 652 p.


will outsmart humans. Others predict that by the end of the 21st century a
so-called superintelligence will emerge. Bostrom defines this as ‘any intellect that
greatly exceeds the cognitive performance of humans in virtually all domains
of interest’.95 To date, however, there are no indications that artificial general
intelligence (also referred to as strong AI) is possible. General AI refers to a
system that is intelligent in all domains just like humans. All currently developed
AI applications focus on just one domain. They are examples of what is called
narrow or weak AI. Such systems can perform a specific task very well, in some
cases even better than humans (e.g. facial recognition). All-round AI applications,
by contrast, perform very poorly and do not come close to human-level performance.
Let us refer to the Chinese room experiment again.96 The fact that DNNs mimic
human intelligence very well does not prove they are intelligent. So-called strong
AI that is able to think on its own is still science-fiction. Currently, only weak
AI exists. Put differently, ‘Unlike in the movies, there is no race of superhuman
robots on the horizon or probably even possible’.97

6. CONCLUSION

33. Although there is still much room for improvement in almost every
sub-discipline, AI has already proven to be a powerful, strengthening, and
complementary ally of mankind. This will probably not be different in the
future. This chapter started by acknowledging that artificial intelligence is
surrounded by considerable hype nowadays. The many claims, statements, media
coverage and opinions regarding (the capacities of) AI are not necessarily
scientifically accurate.98 This underlines the importance of proper education on
AI and of raising awareness about it. More people need to be informed about the
capacities as well as the limitations and challenges of AI.

34. While AI is becoming increasingly important in our society, it is essential
to address ethical and privacy issues as well. Although AI may become better
than humans at some specific tasks, it will (still) make mistakes a
human would never make. AI systems may also unintentionally discriminate or
even harm certain categories and groups of the population. A thorough debate
should be held regarding the responsibilities for such behaviour and the ethical

95 N. Bostrom, Superintelligence: Paths, dangers, strategies (Oxford: Oxford University Press, 2014), 328 p.
96 See the discussion supra part 2.2.
97 See the many experts in: Stone et al., ‘Artificial Intelligence and Life in 2030. One hundred
year study on artificial intelligence: Report of the 2015–2016 Study Panel’, 52 p.
98 R. Brooks, ‘The Seven Deadly Sins of AI Predictions’, MIT Technology Review, 6 October
2017, https://www.technologyreview.com/2017/10/06/241837/the-seven-deadly-sins-of-ai-predictions/.


consequences. Moreover, guidelines and codes of conduct should be adopted.
This should not only be done ex post once a system is built but should be included
right from the start as part of the system design (cf. the ‘by design’ approach).
