BASIC CONCEPTS OF AI
FOR LEGAL SCHOLARS
1. INTRODUCTION
Intersentia 1
1 L. Steels, ‘Artificiële Intelligentie: Naar een vierde industriële revolutie’, Koninklijke Vlaamse
Academie van België voor Wetenschappen en Kunsten, KVAB standpunten, 2017, vol. 53,
44 p.
2 Wikipedia, ‘Artificial Intelligence’, http://en.wikipedia.org/wiki/Artificial_Intelligence.
3 Merriam-Webster Dictionary, ‘Intelligence’, https://www.merriam-webster.com/dictionary/
intelligence.
4 D. Wechsler, The measurement and appraisal of adult intelligence, 4th edn (Baltimore:
Williams & Wilkins Co, 1958), 297 p.
5 S. Russell and P. Norvig, Artificial intelligence: a modern approach, 2nd edn (New Jersey:
Prentice Hall, 2002), 1080 p.
6 A.M. Turing, ‘Computing Machinery and Intelligence’, Mind, 1950, vol. 59, no. 236,
pp. 433–460.
7 A. Ayesh, ‘Turing Test Revisited: A Framework for an Alternative’, arXiv preprint, 2019,
arXiv:1906.11068.
8 J. Searle, ‘Minds, Brains and Programs’, Behavioral and Brain Sciences, 1980, vol. 3, pp. 417–
424.
9 J. Searle, ‘Chinese room argument’, Scholarpedia, 2009, vol. 4, no. 8, p. 3100.
7. Despite its controversial nature, the Turing test has been invaluable to
the development of AI. One of its main legacies is the Loebner Prize.10
This annual competition uses the Turing test, albeit in modified form, to test the
capabilities of chatbots in several areas. The most human-like chatbot wins the
Prize. The ultimate goal is to find a chatbot that passes the Turing test and is thus
capable of making the judge believe that it is human. This goal, however, has not
been achieved so far. Although chatbots are increasingly capable of holding
a human-like conversation, research by Jacquet and others shows that they lack
the ability to answer complex questions and to elaborate on their previous
answers.11 Following Turing’s arguments, we can thus conclude that machines
cannot think yet, although they are becoming increasingly good at learning how
to imitate humans.
3. BASIC PRINCIPLES OF AI
3.1. KNOWLEDGE-BASED VERSUS DATA-BASED LEARNING
10 H. Loebner, ‘How to hold a Turing test contest’, in R. Epstein, G. Roberts and G. Beber (eds),
Parsing the Turing test (Dordrecht: Springer, 2009), pp. 173–179.
11 B. Jacquet and J. Baratgin, ‘Mind-Reading Chatbots: We Are Not There Yet’, International
Conference on Human Interaction and Emerging Technologies, 2020, pp. 266–271.
12 Steels, ‘Artificiële Intelligentie: Naar een vierde industriële revolutie’, 44 p.
9. During the last decade, smart devices have become increasingly
popular. A smart device consists of a machine (e.g. a watch, refrigerator or car)
equipped with a smaller (mostly electronic) apparatus that detects
changes in or around the machine (e.g. movement, heat, light, noise or pressure).
This apparatus, also known as a sensor, sends its findings or data to other
electronic devices, which can then act on them. When such a machine is able to
connect with the user or with other machines through a network such as the
internet, it is called a smart device. Typically, smart devices store their data or
information in a so-called cloud: a cluster of interconnected computers
that provides massive data storage and processing power to
many smaller devices with less memory and weaker computing power.14
10. Nowadays, large groups of smart devices are in some way interconnected,
often through the cloud. This eventually leads to the Internet of Things in
which these devices exchange information with each other without requiring
a human to supervise each action.15 Examples are manifold. Think of a smart
watch monitoring the user’s health and presenting this data on the user’s
smartphone. Another example is a smart home in which climate, lights or music
are controlled by the user through an agent or smart device that knows all his/
her preferences.16 One can also think of smart cities where all traffic lights
are interconnected to optimise traffic flows.17 The IoT market is growing very
13 M.J. Wooldridge and N.R. Jennings, ‘Intelligent agents: Theory and practice’, The knowledge
engineering review, 1995, vol. 10, no. 2, pp. 115–152; L. Padgham and M. Winikoff, Developing
intelligent agent systems: A practical guide, 1st edn (Chichester: John Wiley & Sons, 2004), 240
p. Also see part 4.5.
14 See also: Z. Mahmood, Connectivity Frameworks for Smart Devices: The Internet of Things
from a Distributed Computing Perspective (Derby: Springer, 2016), 356 p.
15 L. Atzori et al., ‘The internet of things: A survey’, Computer networks, 2010, vol. 54, no. 15,
pp. 2787–2805.
16 D. Pavithra and R. Balakrishnan, ‘IoT based monitoring and control system for home
automation’, 2015 global conference on communication technologies, 2015, pp. 169–173.
17 P. Rizwan et al., ‘Real-time smart traffic management system for smart cities by using Internet
of Things and big data’, 2016 international conference on emerging technological trends, 2016,
pp. 1–7.
rapidly. A recent study cites staggering growth figures for IoT and stresses the
need to be mindful of privacy and security issues in order
to protect the data generated by IoT from being used for malign purposes.18 The
rise of IoT implies that vast amounts of data are being generated, giving AI the
opportunity to learn from so-called big data. Whatever has to be learned is now
available in numerous examples (input–output pairs, called samples), each
described by a large number of characteristics (called features).
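The vocabulary of samples and features can be made concrete with a toy dataset (all names and values below are invented for illustration):

```python
# A toy dataset illustrating the vocabulary: each row of X is one sample
# (one input), each column is one feature, and y holds the corresponding
# outputs. Feature names and values are invented for illustration.
X = [  # features: temperature (°C), humidity (%), wind speed (km/h)
    [21.0, 60.0, 10.0],
    [15.0, 85.0, 25.0],
    [28.0, 40.0, 5.0],
]
y = ["dry", "rain", "dry"]   # the output paired with each sample

n_samples, n_features = len(X), len(X[0])
print(n_samples, n_features)   # prints: 3 3
```

Each (row, output) pair is one sample; a learner generalises from many such pairs.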
is not told how to do this but only receives feedback in the form of rewards.
These rewards can be positive or negative experiences, bringing the agent closer
to or further away from the (desired) goal. This can be compared to how children
learn to ride a bike. Parents typically do not explicitly tell their children how
to cycle. Instead, children themselves explore which actions contribute positively
or negatively to learning how to ride the bike, and improve their skills by
learning from these experiences.
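This kind of reward-driven learning can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms, on a toy "corridor" task (the environment and all parameter values are invented for illustration):

```python
import random

random.seed(0)

# Tabular Q-learning on a toy "corridor" of 5 states. Reaching state 4
# yields reward +1; every other step yields 0. Actions: 0 = left, 1 = right.
# The agent is never told the solution; it learns purely from rewards.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # one value per (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.2           # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # explore occasionally, otherwise pick the action valued highest so far
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # the reward signal is the only feedback the agent ever receives
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# greedy policy after training: which action looks best in each state?
policy = [q.index(max(q)) for q in Q[:GOAL]]
print(policy)
```

After enough episodes the greedy policy points right (action 1) in every state: the agent has learned to head for the reward without ever being told how.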
20 J. Randerson, ‘How many neurons make a human brain? Billions fewer than we thought’, The
Guardian, 28 February 2012, https://www.theguardian.com/science/blog/2012/feb/28/how-
many-neurons-human-brain.
a forecast for tomorrow. Note that the hidden layer is not user-defined: the user
cannot choose which intermediate results or steps are taken. Hence the name
hidden layer: one does not know exactly what happens in that layer. Only the
input, the output and the number of neurons in the hidden layer are determined
by the user; the other aspects are determined by the training algorithm. The
training itself is done by giving the network many examples (e.g. which weather
pattern on a certain day corresponds to which weather pattern the day after?).
The algorithm looks at every example and fine-tunes all linear combinations of
the ANN until it succeeds at reproducing the examples as well as possible. At
that point, the linear combinations are fixed, and the neural network is ready to
be used for new predictions.
Figure 1. Schematic of an artificial neural network with one hidden layer. Every circle
denotes a computing node (neuron), resulting in the value marked within the node.21
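This training procedure can be sketched in miniature. The toy network below has one hidden layer and is trained with gradient descent on the four examples of the XOR function; the architecture, learning rate and number of epochs are arbitrary illustrative choices:

```python
import math
import random

random.seed(1)
sig = lambda v: 1.0 / (1.0 + math.exp(-v))   # sigmoid activation

# A minimal one-hidden-layer network (2 inputs, H hidden neurons, 1 output)
# trained by gradient descent on the four XOR examples. The user fixes only
# the architecture; the training algorithm sets all the weights.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]
H = 3

W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 1.0

def forward(x1, x2):
    h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(H)]
    return h, sig(sum(W2[j] * h[j] for j in range(H)) + b2)

for epoch in range(10000):
    for (x1, x2), t in zip(X, y):
        h, o = forward(x1, x2)
        d_o = (o - t) * o * (1 - o)                 # output error signal
        for j in range(H):
            d_h = d_o * W2[j] * h[j] * (1 - h[j])   # hidden error signal
            W2[j] -= lr * d_o * h[j]
            W1[j][0] -= lr * d_h * x1
            W1[j][1] -= lr * d_h * x2
            b1[j] -= lr * d_h
        b2 -= lr * d_o

preds = [round(forward(x1, x2)[1]) for x1, x2 in X]
print(preds)
```

With this setup the network usually recovers all four XOR outputs, although gradient descent can occasionally land in a poor local minimum and fail, a weakness discussed later in this chapter.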
ANNs are used for a wide range of applications, such as image recognition and
speech recognition. One could, for example, build an ANN to distinguish
(‘classify’) between male and female voices. For this, thousands of examples of
female and male voices would have to be recorded and presented to the ANN.
A well-trained ANN will be able to distinguish correctly between the male
and female voices it has seen during training. It will also be able to correctly
classify new voices, which is of course the whole point of training: learning
to correctly distinguish new male from female voices after training, without any
human guidance.
14. ANNs have existed since the 1940s22 and have been improved ever since.
Starting from a two-layered network that was not able to learn basic logical
functions, robust training algorithms with a solid mathematical basis have been
introduced. The number of layers has also been increased, and several
configurations were introduced, including loops that make memory functions
available. The biggest improvement, however, occurred only a decade ago. The
increase in computing power and the rise of big data resulted in massive datasets
on which huge networks could be trained. It suddenly became possible to train
ANNs with more than just a few layers; such networks are referred to as deep
neural networks (DNNs). This deep learning technology led to major
breakthroughs in several fields within AI.23 Results in image and speech
recognition based on deep learning were remarkably better than those of any
method used before. The results are impressive: artificial intelligence based on
deep learning now outclasses humans at some specific tasks in image
recognition,24 which has significant consequences for our society. Suddenly,
self-driving cars, AI healthcare diagnosis tools and many other futuristic
applications come within reach.25
15. Although deep learning is reshaping the future of AI, many experts raise
concerns about this technology. First of all, due to the high number of parameters
in a DNN, tuning them can become very complex. Finding the optimal values
for all the linear combinations in the neural network can be compared to
searching for the lowest point on a very bumpy surface in an extremely
high-dimensional space. Often, only a local minimum will be found. As a result,
training a DNN on the same data multiple times, starting the search each time
from a different random spot, often leads to another local minimum and thus to
different settings for the network. Consequently, these slightly different networks
can behave differently, which affects the overall performance.
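A one-dimensional sketch of this phenomenon, using a toy polynomial as the "bumpy surface" (a real network loss has millions of dimensions, but the mechanics are the same):

```python
# Gradient descent on a one-dimensional non-convex "loss surface"
# f(w) = w**4 - 3*w**2 + w, which has two local minima. Starting the
# search from two different spots lands in two different minima, just as
# retraining a DNN from different random initialisations can.
def grad(w):
    return 4 * w**3 - 6 * w + 1   # derivative of f

def descend(w, lr=0.01, steps=5000):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_a = descend(-2.0)   # converges to the minimum near w ≈ -1.30
w_b = descend(+2.0)   # converges to the other minimum near w ≈ +1.13
print(round(w_a, 2), round(w_b, 2))
```

Both runs minimise the same function on the same "data", yet end up with clearly different parameter settings.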
A second weakness of DNNs is their brittleness. A slightly different input
can lead to another linear combination being the strongest, which can result in
a totally different output. Blurring a picture of a cat, or slightly changing its
background or angle, can already lead to a failure of the system
22 R.S. Lee, ‘AI Fundamentals’, Artificial Intelligence in Daily Life, 2020, pp. 19–37.
23 M.R. Minar and J. Naher, ‘Recent advances in deep learning: an overview’, arXiv preprint,
2018, arXiv:1807.08169.
24 A. Krizhevsky et al., ‘Imagenet classification with deep convolutional neural networks’,
Advances in neural information processing systems, 2012, pp. 1097–1105.
25 C. Badue et al., ‘Self-driving cars: A survey’, Expert Systems with Applications, 2020, vol.
165, p. 113816; K. H. Yu et al., ‘Artificial intelligence in healthcare’, Nature biomedical
engineering, 2018, vol. 2, no. 10, pp. 719–731. We will briefly discuss this further in part 5.1.
to recognise the cat in the picture.26 This of course makes it very easy to ‘fool’
a DNN. Changing only a few pixels in a picture of a lion, for instance, can
suddenly make the DNN recognise it as a library.27 This fragility of DNNs can
also lead to dangerous situations,28 which have to be prevented in all cases.
Another concern relates to the fact that DNNs are so-called ‘black box’
models. Despite their high accuracy, it is very difficult to explain why a certain
input leads to the predicted output; it can even be impossible to understand why
the output was calculated in that very specific way. This lack of interpretability
and explainability sometimes makes it ethically impossible to use these
methods. The only information that can be retrieved is a mathematical formula
consisting of non-linear combinations of the different inputs, which cannot
be converted into an explanation a human would understand. One can easily
imagine situations in which such an explanation in human language of decisions
made by AI is crucial. Take the example of an AI system used in dermatology.
When a certain mole is identified as malignant, doctors would like to know why
it is identified that way (e.g. size, colour, irregularity, etc.), but a DNN
only provides an answer in the form of benign/malignant. Another example occurs
when clients of a financial institution are refused a loan based on a DNN. They
would of course want to know the reason(s) for this refusal. In these and many
other cases, one needs so-called ‘explainable’ AI,29 which does not only provide
an output (a prediction) but also an explanation: a sort of ‘followed path’
through the flow chart of the model, showing how it came to the specific
prediction.
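As a contrast to a black-box model, the dermatology example can be sketched with a transparent rule-based classifier that reports its "followed path"; all features and thresholds below are invented purely for illustration:

```python
# A transparent, rule-based sketch of the dermatology example. Unlike a
# DNN, this model can report the exact path that led to its decision.
# The features and thresholds are invented purely for illustration.
def classify_mole(diameter_mm, irregular_border, multi_coloured):
    path = []
    if diameter_mm > 6:
        path.append(f"diameter {diameter_mm} mm > 6 mm")
        if irregular_border:
            path.append("border is irregular")
            return "malignant", path
        path.append("border is regular")
        if multi_coloured:
            path.append("multiple colours present")
            return "malignant", path
        path.append("single colour")
        return "benign", path
    path.append(f"diameter {diameter_mm} mm <= 6 mm")
    return "benign", path

verdict, path = classify_mole(8, True, False)
print(verdict)             # prints: malignant
print(" -> ".join(path))   # the 'followed path' that explains the verdict
```

A DNN would return only the verdict; explainable AI aims to supply something like the second line as well.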
16. Another concern when using DNNs or, more generally, data-based
learning algorithms relates to the data used to train the model. One
simple rule is that the model cannot learn what it has never seen. Imagine
training a classifier for tomatoes, using pictures of all kinds of red tomatoes.
If this model has learned to identify tomatoes as red objects, it will not be
able to classify yellow cherry tomatoes as actual tomatoes.
One should thus handle training data with care to prevent ‘data bias’: a
deviation from the true variety of the observed objects. There are numerous
examples of problems associated with data bias, some of which have rather
26 M. Mitchell, Artificial intelligence: A guide for thinking humans, 1st edn (Pelican: Penguin UK,
2019), 336 p.
27 D. Heaven, ‘Why deep-learning AIs are so easy to fool’, Nature, 2019, vol. 574, pp. 163–166.
28 See in this regard part 5.1.1.
29 A.B. Arrieta et al., ‘Explainable Artificial Intelligence (XAI): Concepts, taxonomies,
opportunities and challenges toward responsible AI’, Information Fusion, 2020, vol. 58,
pp. 82–115.
4. AI SUB-DISCIPLINES
17. Before we delve deeper into its numerous applications, we briefly touch
upon a few sub-disciplines of AI to clarify a number of concepts. As ML has
already been addressed above, we will not discuss it again in the following
paragraphs.
30 S. Cheng, ‘An algorithm rejected an Asian man’s passport photo for having “closed eyes”’,
Quartz, 7 December 2016, https://qz.com/857122/an-algorithm-rejected-an-asian-mans-
passport-photo-for-having-closed-eyes/.
31 J. Vincent, ‘Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling
tech’, The Verge, 12 January 2018, https://www.theverge.com/2018/1/12/16882408/google-
racist-gorillas-photo-recognition-algorithm-ai.
32 J. Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters,
10 October 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/
amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0
8G.
33 T. Doshi, ‘Introducing the Inclusive Images Competition’, Google AI blog, 6 September 2018,
https://ai.googleblog.com/2018/09/introducing-inclusive-images-competition.html. Also see
chapter 6 for more information.
34 P. Mehta et al., ‘A high-bias, low-variance introduction to machine learning for physicists’,
Physics reports, 2019, vol. 810, pp. 1–124.
18. Search algorithms were among the first AI applications to break through.
As the word suggests, search algorithms look for the optimal path to reach
a goal, thereby starting from an initial state. These mostly knowledge-based
algorithms are used to, for example, solve games (chess, checkers, Go, …), find
optimal routes or schedule classes and industrial pipelines. They are also of major
importance because they are often used to find the best configuration of
parameter settings for many ML algorithms.
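A minimal sketch of such a search algorithm: breadth-first search finding the optimal (shortest) route from an initial state to a goal in a small invented road network:

```python
from collections import deque

# A minimal search algorithm: breadth-first search for the shortest route
# from an initial state to a goal state in a toy road network. The graph
# is invented for illustration.
roads = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}

def shortest_route(start, goal):
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()       # expand the oldest (shortest) partial path
        if path[-1] == goal:
            return path
        for nxt in roads[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route("A", "E"))  # → ['A', 'B', 'D', 'E']
```

Game-playing and scheduling systems use far more sophisticated variants (with heuristics and pruning), but the underlying idea of systematically exploring states is the same.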
19. Computer vision can be seen as a machine learning problem in which the
data consists of images or videos. This data is analysed using machine learning
techniques to understand what can be seen in the image or video. With the
introduction of deep learning, a lot of progress has been made. The progress
made since 2015 is so extensive that computer vision already performs as well as
humans on some tasks. The top-5 error rate35 in naming objects in images (with
a thousand classes) decreased from 28.2% in 2010 to 6.7% in 2014.36 Automatic
facial recognition systems are now reliably used in China to provide access to
buildings or to transfer money.37 Moreover, research shows that computer vision
can detect skin cancer more accurately than dermatologists38 and that it
outperforms medical doctors at detecting breast cancer.39
20. Although DNNs have resulted in major leaps in the capabilities of AI,
understanding human language remains a great challenge. The field of Natural
Language Processing (NLP), which focuses on processing and understanding
written text, is still one of the most challenging sub-disciplines of AI. In addition
35 Top-5 means that the correct object has to be among the learner’s five highest-ranked
suggestions.
36 O. Russakovsky et al., ‘Imagenet large scale visual recognition challenge’, International
journal of computer vision, 2015, vol. 115, no. 3, pp. 211–252.
37 W. Knight, ‘Paying with your face: 10 Breakthrough Technologies’, MIT Technology Review,
22 February 2017, https://mittr-frontend-prod.herokuapp.com/s/603494/10-breakthrough-
technologies-2017-paying-with-your-face/.
38 T.J. Brinker et al., ‘Deep learning outperformed 136 of 157 dermatologists in a head-to-head
dermoscopic melanoma image classification task’, European Journal of Cancer, 2019, vol. 113,
pp. 47–54.
39 P. Galey, ‘AI Is Now Officially Better at Diagnosing Breast Cancer Than Human’, Science Alert,
3 January 2020, https://www.sciencealert.com/ai-is-now-officially-better-at-diagnosing-
breast-cancer-than-human-experts.
4.4. SPEECH
21. Whereas NLP deals with the processing of written text, automatic speech
recognisers (ASRs) are used to understand spoken language. Similarly to NLP/
NLG, speech recognition has a counterpart, namely speech synthesis, which aims
to produce speech that is as intelligible and natural as possible. Once again,
the use of deep learning has led to a major advance in the field: recognition
errors dropped from 23% to 8% for the English Google ASR.43 Similar trends
are visible in Microsoft’s ASR system.44 As is the case for NLP, however,
speech recognition is still far from achieving a human level. It is also
very sensitive to many external factors, which are mainly consequences of the
training material and data bias.45 As an ASR learns from data, its results
ultimately depend on the quality of and variation within the training dataset.
An ASR trained on clean speech of native English-speaking adults, for
instance, will have difficulties understanding children (who have a higher pitch
and weaker articulation skills) or dialects (cf. unseen is unknown). Such systems
will also perform poorly in noisy conditions.
Starting in the 1970s with so-called formant-based synthesisers producing
the typical robot-like mechanical voices,46 the clarity and naturalness of the
produced speech have long been an issue. A first improvement was the
introduction of waveform concatenation-based methods, concatenating little
4.5. AGENTS
23. As already mentioned, AI is not new. It has been evolving since the
1940s with ups and downs (cf. AI ‘winters’ and ‘summers’).53 Due to recent
5.1.1. Transportation
24. Experts agree that future transportation will be electric and autonomous.56
Thanks to deep learning, the computer vision used in self-driving cars is less
prone to errors than human vision. The same goes for a vehicle’s responses to
traffic risks. Self-driving cars will bring commuters to their workplace, allowing
them to relax or prepare their working day. This will lead to less stress and safer
traffic. Young people, the elderly and the disabled will also benefit from increased
mobility. According to Stone and others, autonomous transportation will not be
limited to cars but will also involve trucks and drones.57 As powerful as current
computer vision algorithms may be, they still have vulnerabilities that can
potentially be very dangerous. These should definitely be addressed before the
mass introduction of self-driving cars. Research has shown that simple
adversarially crafted stickers can cause errors in road sign and image recognition
models.58 Sometimes, a few pixels make the difference between a correct and an
incorrect recognition.59 Current research focuses on the defence against these
and other so-called adversarial attacks.
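The idea behind such adversarial attacks can be sketched with a toy "classifier": a fixed logistic model whose decision flips after a tiny, gradient-guided nudge of its input. All weights, inputs and the step size are invented for illustration:

```python
import math

# A toy adversarial attack: a fixed logistic "image classifier" with two
# input features, and an FGSM-style perturbation (epsilon times the sign
# of the gradient) that flips its decision. Weights, inputs and epsilon
# are invented for illustration.
w, b = [1.0, -2.0], 0.1

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # probability of the class "stop sign"

x = [1.2, 0.4]      # original input: confidently classified as "stop sign"
eps = 0.35
# the gradient of the class score w.r.t. the input is just w; nudge each
# feature one tiny step against it
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(predict(x) > 0.5, predict(x_adv) > 0.5)   # prints: True False
```

A perturbation of at most 0.35 per feature, imperceptible in a real image, is enough to push the input across the decision boundary.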
54 Stanford University, ‘One Hundred Year Study on Artificial Intelligence (AI100)’, https://
ai100.stanford.edu/.
55 P. Stone et al., ‘Artificial Intelligence and Life in 2030. One hundred year study on artificial
intelligence: Report of the 2015–2016 Study Panel’, Stanford University, September 2016, 52 p.
56 See: Steels, ‘Artificiële Intelligentie: Naar een vierde industriële revolutie’, 44 p.; Stone et al.,
’Artificial Intelligence and Life in 2030. One hundred year study on artificial intelligence:
Report of the 2015–2016 Study Panel’, 52 p.
57 Stone et al., ’Artificial Intelligence and Life in 2030. One hundred year study on artificial
intelligence: Report of the 2015–2016 Study Panel’, 52 p.
58 K. Eykholt et al., ‘Robust physical-world attacks on deep learning visual classification’,
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018,
pp. 1625–1634.
59 T. Maliamanis and G.A. Papakostas, ‘Adversarial computer vision: a current snapshot’,
Twelfth International Conference on Machine Vision, 2020, vol. 11433, p. 1143328.
5.1.2. Robots
60 E.A. Lee, ‘Cyber physical systems: Design challenges’, 2008 11th IEEE International
Symposium on Object and Component-Oriented Real-Time Distributed Computing, 2008,
pp. 363–369.
61 Stone et al., ’Artificial Intelligence and Life in 2030. One hundred year study on artificial
intelligence: Report of the 2015–2016 Study Panel’, 52 p.
62 Steels, ‘Artificiële Intelligentie: Naar een vierde industriële revolutie’, 44 p.
63 H. Van Brussel et al., ‘Naar een inclusieve Robotsamenleving: Robotisering, automatisering
en werkgelegenheid’, KVAB Press, 2016, 47 p. Also see chapter 11.
64 See for more information J.E. Michaelis et al., ‘Collaborative or Simply Uncaged?
Understanding Human-Cobot Interactions in Automation’, Proceedings of the 2020 CHI
Conference on Human Factors in Computing Systems, 2020, pp. 1–12.
5.1.3. Healthcare
65 J. Li et al., ‘Decoding the genomics of abdominal aortic aneurysm’, Cell, 2018, vol. 174, no. 6,
pp. 1361–1372.
66 E.C. Nice, ‘The status of proteomics as we enter the 2020s: Towards personalised/precision
medicine’, Analytical Biochemistry, 2020, p. 113840.
67 C. Middag et al., ‘Robust automatic intelligibility assessment techniques evaluated on
speakers treated for head and neck cancer’, Computer speech & language, 2014, vol. 28, no. 2,
pp. 467–482.
68 B. Desplanques and K. Demuynck, ‘Cross-lingual speech emotion recognition through factor
analysis’, 19th Annual Conference of the International-Speech-Communication-Association,
2018, pp. 3648–3652.
69 C. Luque et al., ‘An advanced review on text mining in medicine’, Wiley Interdisciplinary
Reviews: Data Mining and Knowledge Discovery, 2019, vol. 9, no. 3, e1302.
70 A.S. Miner et al., ‘Chatbots in the fight against the COVID-19 pandemic’, npj Digital
Medicine, 2020, vol. 3, no. 1, pp. 1–4.
71 P. Yadav et al., ‘Mining Electronic Health Records (EHRs): A Survey’, ACM Computing
Surveys, 2018, vol. 50, no. 6, pp. 1–40.
72 M.G. Fujie and B. Zhang, ‘State-of-the-art of intelligent minimally invasive surgical robots’,
Frontiers of Medicine, 2020, pp. 1–13.
73 A.S. Gorgey, ‘Robotic exoskeletons: The current pros and cons’, World journal of orthopedics,
2018, p. 112.
74 J.S. Hussain et al., ‘Recognition of new gestures using myo armband for myoelectric
prosthetic applications’, International Journal of Electrical & Computer Engineering, 2020,
vol. 9, no. 9, p. 2088.
5.1.4. Education
28. The need for personalised education has been growing steadily for decades,
and AI is a powerful tool to satisfy this need. Researchers have long focused on
the development of interactive intelligent tutors: systems capable of interacting
with students at their own pace and level, giving personal feedback and
personalising the exercises. Nowadays, the internet is full of Massive Open
Online Courses (MOOCs) that provide direct and free education to very large
groups of people, including those who normally do not have access to
high-quality education. As a result, people from all over the world can now
follow courses offered by the world’s top universities. Participants are
encouraged to join group discussions and receive automated (AI-powered)
feedback on their assignments.
On a smaller scale, speech recognition-based tutors teach children how
to read76 or help those with dyslexia to improve their reading skills.77 Both
ASRs and NLP are increasingly being used to teach students a second language
or a specific way of articulating.78 AI can also be used to automatically subtitle a
lecturer’s speech and translate his/her presentation. This way, lectures will in the
future become accessible even to those with another mother tongue.
29. AI has been relied upon in the safety and security domain for quite some
time. Surveillance cameras can recognise license plates, even in suboptimal
circumstances.79 Facial recognition is also used in many places; China deploys this
technology in stores, schools and airports.80 Computer vision is strong enough
to detect anomalies in human behaviour and can potentially identify criminal
acts. In addition to computer vision, AI has proven to be very efficient
at fraud detection and at combating white-collar crime in general.81 Another
application of AI in security relates to defence. While computer vision methods
can be used to save human lives, for example by deploying drones and robots to
explore the terrain, they can also easily be abused for warfare purposes.82
30. AI can also be relied upon to support or create art and entertainment.
Applications in this area are manifold. Think of all the video games that use AI
to enhance interaction with the player. AI can be used to create the story lines
or the game level generation as well. Moreover, AI may be able to learn a certain
painting style, create new paintings or learn to transfer styles from one painting
to another.83 AI can even learn how to create music in a specific style. Think of
the development of computer programme Experiments in Musical Intelligence
(EMI). It can analyse existing music and create new compositions in the style of
the original music without any human intervention.84
All these learners are based on recent developments in deep learning and
succeed quite well in mimicking a human style. Nevertheless, they only do so for
a short time (music) or on a particular surface (painting); learners seem to lack
knowledge of the broader picture. In a recent study, AI-created paintings
were rated significantly lower than those created by humans.85 This indicates
that the gap between human and computer art is far from bridged. Another
example is the movie Sunspring,86 the script of which – including stage
directions – was completely generated by NLP. With a score of 5.7 on the Internet Movie Database
(IMDb),87 it is certainly not the worst movie ever made. However, there is still a
major difference between movies created by AI and those created by humans.
5.1.7. Law
31. The first applications of AI in the domain of law date back to 1956,
the year in which logic was introduced as a tool for drafting and
interpreting legal documents.88 Scholars started to use the phrase ‘AI and law’ in
the 1970s.89 This was also the moment that Taxman was developed to determine
whether or not a given reorganisation of companies was exempt from income
tax.90 Since then, many applications of AI in the legal sphere have been developed,
eventually leading to the foundation of the International Conference on AI and
Law (ICAIL), first held in Boston in May 1987.91 The International
Association for AI and Law (IAAIL) was established in 1991, followed by the
launch of the Journal of Artificial Intelligence and Law in 1992. In Europe,
the first conference of the ‘Stichting Juridische Kennissystemen’ (Foundation
for Legal Knowledge Systems) was held in 1988.92 A major overview of AI and
law can be found in the work of Bench-Capon and others, who summarise
the highlights of 25 years of the ICAIL.93 The many contributions in this book
illustrate that AI has a major impact on almost every legal domain, ranging from
intellectual property and human rights to liability, labour, consumer protection,
legal procedure and data protection.
32. This chapter shows that AI is on the rise. We therefore expect great things
from this technology in the near and distant future. Yet it remains difficult to
predict how AI will evolve and what limitations it may have. Some argue that AI
will soon reach the point of ‘singularity’,94 the point at which AI systems
will outsmart humans. Others predict that by the end of the 21st century a
so-called superintelligence will emerge; Bostrom defines this as ‘any intellect that
greatly exceeds the cognitive performance of humans in virtually all domains
of interest’.95 To date, however, there are no indications that artificial general
intelligence (also referred to as strong AI) is possible. General AI refers to a
system that is intelligent in all domains, just like humans. All currently developed
AI applications focus on just one domain: they are examples of what is called
narrow or weak AI. Such systems can perform a specific task very well, in some
cases even better than humans (e.g. facial recognition), but all-round AI
applications perform very poorly and do not even come close to the human
performance level. Let us refer to the Chinese room experiment again.96 The fact
that DNNs mimic human intelligence very well does not prove they are
intelligent. So-called strong AI that is able to think on its own is still science
fiction; currently, only weak AI exists. Put differently, ‘Unlike in the movies,
there is no race of superhuman robots on the horizon or probably even
possible’.97
6. CONCLUSION
33. Although there is still much room for improvement in almost every
sub-discipline, AI has already proven to be a powerful, strengthening and
complementary ally of mankind, and this will probably not change in the
future. This chapter started by acknowledging that artificial intelligence is a
hype nowadays. The many claims, statements, media reports and opinions
regarding (the capacities of) AI are not necessarily scientifically true.98 This
shows the importance of proper education and of creating awareness of AI.
More people need to be informed about the capacities as well as the limitations
and challenges of AI.