Robin Feldman, Ehrik Aldana & Kara Stein, Artificial Intelligence in the Health Care Space: How We Can Trust What We Cannot Know, 30 STAN. L. & POL'Y REV. 399 (2019), available at https://repository.uchastings.edu/faculty_scholarship/1753.
ARTIFICIAL INTELLIGENCE IN THE HEALTH CARE SPACE: HOW WE CAN TRUST WHAT WE CANNOT KNOW
Robin C. Feldman*, Ehrik Aldana** & Kara Stein***
* Arthur J. Goldberg Distinguished Professor of Law and Director of the Center for
Innovation, University of California Hastings College of the Law.
** Research Fellow, Center for Innovation, University of California Hastings College
of the Law.
*** Senior Research Fellow, Center for Innovation, University of California Hastings
College of the Law; former Commissioner of the Securities and Exchange Commission.
INTRODUCTION

Artificial Intelligence (AI) is moving rapidly into the health care field. Personalized medicine, faster and more accurate diagnostics, and accessible health apps boost access to quality medical care for millions. In a similar vein, data from doctor visits, clinical treatments, and wearable biometric monitors are collected and fed back into ever-learning and ever-improving AI systems. As AI advances, it promises to revolutionize and transform our approach to medical treatment. As any cancer specialist will attest, however, transformation may be for the good, or it may not.1 Such is the case with AI. On the one hand, AI may have the ability to revolutionize society's discovery of disease treatments, as well as enhance our ability to rapidly deliver those treatments in a manner tailored to an individual's needs. On the other hand, the black box nature of AI produces a shiver of discomfort for many people. How can we trust our health, let alone our very lives, to decisions whose pathways are unknown and impenetrable?
The black box nature of artificial intelligence raises concern whenever such
technology is in play. For example, suppose an autonomous car makes a
decision that leads to injury or death. If we do not understand the pathways that
led to the choices made, we may be reluctant to trust the decision. Similarly, if a court uses an algorithm to determine whether a defendant should receive bail, without the factors and analysis being transparently available to the public, we may be reluctant to trust that decision as well. Of course, when a
human driver makes a choice that leads to injury or death, we may not fully
understand the decision pathway either. It would be a stretch to say that any
reconstruction of an event could accurately dissect the mental pathways. After
all, human memory is frail, and humans are remarkably able to remember
events in a way that casts them in the most favorable light. Nevertheless, our
legal system is grounded in the concept of open deliberation, and the notion
that one cannot even try to unravel the reason for a decision creates discomfort.
More important, although the notion may be somewhat misguided, individuals
may be more likely to trust those who are similar to them. And nothing seems
more different from a human being than an algorithm.
As challenging as these questions may be, they are not insurmountable. We
suggest that the health care field provides the perfect ground for finding our
way through these challenges. How can that be? Why would we suggest that a
1. See Robin Feldman, Cultural Property and Human Cells, 21 INT'L J. CULTURAL PROP. 243, 248 (2014) (explaining that tumors can operate in a systems approach; if treatments cut off one approach to the tumor's growth, the tumor may develop work-arounds, which can be more dangerous and damaging than the original pathway).
circumstance in which we are putting our lives on the line is the perfect place to learn to trust AI? The answer is quite simple. Receiving health care treatment always has been, and always will be, one of the moments in life when individuals must put their faith in that which they cannot fully understand.

Consider the black box nature of medicine itself. Although there is much we understand about the way in which a drug or a medical treatment works, there is still much that we do not. In modern society, however, most people have little difficulty trusting their lives to often incomprehensible treatments. Such trust is all the more important given evidence that confidence in medical treatment affects treatment outcome. In other words, those who believe their medicine will work stand a better chance of being healed.2
Trust is vital to developing and adopting health care AI systems, especially for health (medicine you trust works better) and AI (the more adoption, the better it becomes). The "black box" mentality we use to conceptualize AI reduces trust and stalls the development and adoption of potentially life-saving treatments discovered or powered by AI. However, just because we can't completely understand something doesn't mean we shouldn't trust it. Medicine is a prime example of this: the original "black box." Despite the challenges, medicine has overcome the "black box" problem with the help of policy and regulatory bodies. This Article suggests that the pathways we use to place our trust in medicine provide useful models for learning to trust AI. The question isn't whether we know everything about how a particular drug might work or how an AI reaches its decision; the question is whether there are rules, systems, and expertise in place that give us confidence. As we stand on the brink of the AI revolution, our challenge is to create the architecture that will give all of society confidence in AI decision-making. And of course, society must ensure that such confidence is deserved, that we can trust the integrity of the information being used by AI and the reliability of AI decisions.
In recent years, we have seen that AI systems in a variety of fields are able to match, and in some cases exceed, the ability of humans to perform specific tasks. Widely known for defeating humanity's best in games like chess,3 Jeopardy!,4 and Go,5 AI systems now are expanding into more practical and
2. Johanna Birkhäuer et al., Trust in the Health Care Professional and Health Outcome: A Meta-analysis, PLOS ONE 9 (Feb. 7, 2017), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5295692.
3. IBM, DEEP BLUE - OVERVIEW, IBM100, https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue.
4. John Markoff, Computer Wins on 'Jeopardy!': Trivial, It's Not, N.Y. TIMES (Feb. 16, 2011), https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html.
professional endeavors such as driving,6 law,7 labor hiring and management,8 furniture assembly,9 and even investment advice.10
In the health care space, AI also is making waves, and is involved in drug discovery, diagnostics, surgery, patient information management, and psychological treatment. For example, automated medical-image software systems are beginning to arrive at expert-level diagnostic accuracy,11 impacting medical specialties such as radiology, ophthalmology, dermatology, and pathology.12 AI also has been deployed to discover drugs to treat particular
5. Artificial Intelligence: Google's AlphaGo Beats Go Master Lee Se-dol, BBC NEWS
(Mar. 12, 2016), https://www.bbc.com/news/technology-35785875.
6. BRANDON SCHOETTLE & MICHAEL SIVAK, A PRELIMINARY ANALYSIS OF REAL-
WORLD CRASHES INVOLVING SELF-DRIVING VEHICLES, UNIVERSITY OF MICHIGAN
TRANSPORTATION RESEARCH INSTITUTE No. UMTRI-2015-34, 15 (2015), http://umich.edu/
~umtriswt/PDF/UMTRI-2015-34.pdf (showing crash rates between self-driving and human-
driven vehicles are within the margin of error of one another, suggesting "[the investigators]
currently cannot rule out, with a reasonable level of confidence, the possibility that the actual
[crash] rates for self-driving vehicles are lower than for conventional vehicles.").
7. LAWGEEX, COMPARING THE PERFORMANCE OF ARTIFICIAL INTELLIGENCE TO HUMAN LAWYERS IN THE REVIEW OF STANDARD BUSINESS CONTRACTS 2 (2018) (comparing the contract analysis abilities of the legal AI platform LawGeex against human lawyers. In the study, human lawyers achieved an eighty-five percent average accuracy rate, while the AI achieved ninety-five percent accuracy. Moreover, the AI completed the task in only twenty-six seconds, while the humans took an average of ninety-two minutes).
8. See Sean Captain, This AI Factory Boss Tells Robots & Humans How to Work Together, FAST COMPANY (Aug. 7, 2017), www.fastcompany.com/3067414/robo-foremen-could-direct-human-and-robot-factory-workers-alike (describing a "Boss AI" project Siemens is working on in which jobs are assigned to human workers and robotic workers based on the worker's skill and the job requirements); Don Nicastro, 5 Things to Consider When Using AI for Hiring, CMS WIRE (Nov. 8, 2018), https://www.cmswire.com/digital-workplace/5-things-to-consider-when-using-ai-for-hiring (explaining that nearly all Fortune 500 companies use automation to support the hiring process and citing a 2018 LinkedIn report that seventy-six percent of respondents feel AI's impact on recruiting will be at least somewhat significant).
9. Francisco Suárez-Ruiz et al., Can Robots Assemble an IKEA Chair?, 3 SCI. ROBOTICS 2 (2018).
10. See Swapna Malekar, Ethics of Using AI in the Financial/Banking Industry, DATA DRIVEN INVESTOR (Sept. 16, 2018), https://medium.com/datadriveninvestor/ethics-of-using-ai-in-the-financial-banking-industry-fa93203f6f25 (describing Royal Bank of Canada's experiments with using personal, social, commercial, and financial customer data to provide personalized recommendations to end users).
11. See Geert Litjens et al., A Survey on Deep Learning in Medical Image Analysis, 42 MED. IMAGE ANAL. 60, 68-69 (2017), available at https://arxiv.org/pdf/1702.05747.pdf (reviewing over 300 research contributions to medical image analysis, finding that "[e]specially CNNs [convolutional neural networks] pretrained on natural images have shown surprisingly strong results, challenging the accuracy of human experts in some tasks."). See also ULTROMICS, http://www.ultromics.com/technology (last visited May 14, 2019) (describing their AI diagnostics system for heart disease).
12. See Kun-Hsing Yu et al., Artificial Intelligence in Healthcare, 2 NATURE BIOMEDICAL ENGINEERING 719, 722-25 (2018); see also Huiying Liang et al., Evaluation and Accurate Diagnoses of Pediatric Diseases Using Artificial Intelligence, 25 NATURE MED. 433, 433 (2019), https://www.nature.com/articles/s41591-018-0335-9 (Chinese AI system consistently outperformed humans in pediatric diagnoses); see also Azad Shademan
et al., Supervised Autonomous Robotic Soft Tissue Surgery, 8 SCI. TRANSLATIONAL MED. 337, 342 (2016) (detailing a 2016 study in which an autonomous robotic system performing an in vivo intestinal procedure showed, in a laboratory setting, better suturing quality than human surgeons).
13. See Evan N. Feinberg, AI for Drug Discovery in Two Stories, MEDIUM (Mar. 14, 2018), https://medium.com/@pandelab/ai-for-drug-discovery-in-two-stories-49d7b1ff19f3.
14. See id.
15. See Federal Trade Commission, Hearings on Emerging Competition, Innovation,
and Market Structure Questions Around Algorithms, Artificial Intelligence, and Predictive
Analytics (2018) (statement of Robin Feldman, Professor of Law, University of California
Hastings Law), available at https://www.ftc.gov/news-events/audio-video/video/ftc-hearing-
7-nov-14-session-2-emerging-competition-innovation-market.
16. See Miguel Hueso et al., Progress in the Development and Challenges for the Use of Artificial Kidneys and Wearable Dialysis Devices, 5 KIDNEY DISEASES 3, 4 (2018), available at https://www.karger.com/Article/FullText/492932 (discussing the potential for novel wearable dialysis devices with contributions from AI); see also Erin Brodwin, I Spent 2 Weeks Texting a Bot About My Anxiety-and Found It to Be Surprisingly Helpful, BUS. INSIDER (Jan. 30, 2018), https://www.businessinsider.com/therapy-chatbot-depression-app-what-its-like-woebot-2018-1; Daniel Kraft, 12 Innovations that Will Revolutionize the Future of Medicine, NAT'L GEOGRAPHIC (January 2019), https://www.nationalgeographic.com/magazine/2019/01/12-innovations-technology-revolutionize-future-medicine (discussing smart contact lenses with biosensors).
17. IHS INC., THE COMPLEXITIES OF PHYSICIAN SUPPLY AND DEMAND: PROJECTIONS FROM 2013 TO 2025, at 28 (2015), available at https://www.aamc.org/download/426248/data/thecomplexitiesofphysiciansupplyanddemandprojectionsfrom2o13to2.pdf.
18. INTEL CORP., Overcoming Barriers in AI Adoption in Healthcare, https://newsroom.intel.com/news-releases/u-s-healthcare-leaders-expect-widespread-adoption-artificial-intelligence-2023/ (last visited 2018).
believe their medicine and treatment will work stand a better chance of being
healed.
Finally, because clinical trials and enrollment rely on the trusting participation of patients, trust is a necessary component of innovation and research in the health care space. Patients play a vital role in medical innovation. They are the ones who participate in clinical trials that allow doctors and scientists to experiment and develop new treatments. If patients do not trust their providers in particular,27 or their health care in general, they will be unlikely to participate in relevant studies of new treatments and technologies.

The fact that lack of trust may stifle participation in clinical trials is especially relevant to health care AI systems, which require diverse and large amounts of data (from people) in order to optimize outcomes.28 The more that people use AI, the more data can be fed back into AI systems to iterate and improve them. Without sufficient and representative amounts of training data, current AI systems cannot function effectively, in some cases affecting certain populations disproportionately.29
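To make the representativeness point concrete, consider a brief sketch of our own devising (the data, groups, and numbers below are hypothetical, not drawn from any actual system). A simple model trained mostly on one population can look accurate overall while underperforming for the group it rarely saw in training:

    import numpy as np

    rng = np.random.default_rng(2)

    def make_group(n, threshold):
        # Patients whose true outcome depends on a group-specific threshold.
        x = rng.normal(size=(n, 1))
        y = (x[:, 0] > threshold).astype(float)
        return x, y

    # Training data: 95% from group A, 5% from group B (different physiology).
    xa, ya = make_group(950, threshold=0.0)
    xb, yb = make_group(50, threshold=1.0)
    X = np.vstack([xa, xb])
    y = np.concatenate([ya, yb])

    # A simple logistic regression, fit by gradient descent.
    w, b = 0.0, 0.0
    for _ in range(3000):
        p = 1.0 / (1.0 + np.exp(-(X[:, 0] * w + b)))
        grad = p - y
        w -= 0.1 * (grad * X[:, 0]).mean()
        b -= 0.1 * grad.mean()

    def accuracy(x, labels):
        preds = 1.0 / (1.0 + np.exp(-(x[:, 0] * w + b))) > 0.5
        return (preds == labels).mean()

    print(f"group A accuracy: {accuracy(xa, ya):.0%}")  # high: well represented
    print(f"group B accuracy: {accuracy(xb, yb):.0%}")  # lower: rarely seen in training

The particular numbers will vary, but the structural problem does not: the model's decision boundary is dominated by the well-represented group.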
Finally, some types of AI systems can only provide maximum benefit if participation is widespread or even universal.30 Consider autonomous cars. One of the greatest impediments to successful utilization of autonomous cars, and one of the greatest dangers on the road, is the fact that human drivers are puzzlingly irrational.31 Imagine a world in which a majority of cars are autonomous and networked together, while a few cars are driven by irrational humans. To put it simply, those humans are likely to gum up the works.
In the health care context, imagine a system in which AI bots perform certain tiny, delicate functions in a complex operation requiring different surgical specialties. If some of those doctors decline to participate, the AI system must work partially with (1) human doctors who will work with an AI surgical device and (2) human doctors who will work only with manual devices (analogous to the divide between humans who will ride in a driverless car and those who wish to drive on their own), and the AI cannot operate to its highest potential. In these circumstances and many others, trust will be essential to securing the full benefits. To begin that process, one must understand why the public might not trust an AI health care system.
A primary concern with the use of health care AI systems (and AI systems generally) is their potential "black box" nature.32 AI systems are often labeled black boxes by both the media33 and academics34 because, while their inputs and outputs are visible, the internal process of getting from the input to the output remains opaque. People can describe the degree of accuracy of an AI system for its given purpose, but given the current state of the field, they cannot explain or recreate the system's reasoning for its decision. The degree to which a human observer can intrinsically understand the cause of a decision is described by the machine learning community as an AI system's "interpretability" or "explainability."35
technology/personaltech/google-says-its-not-the-driverless-cars-fault-its-other-drivers.html?_r=0.
32. See INTEL CORP., supra note 18.
33. See, e.g., Jeff Larson, Julia Angwin & Terry Parris Jr., Breaking the Black Box: How Machines Learn to Be Racist, PROPUBLICA (Oct. 19, 2016), https://www.propublica.org/article/breaking-the-black-box-how-machines-learn-to-be-racist; Will Knight, The Dark Secret at the Heart of AI, MIT TECH. REV. (Apr. 11, 2017), https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai.
34. See, e.g., Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 HARV. J.L. & TECH. 889, 891 (2018), https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf.
35. Richard Tomsett et al., Interpretable to Whom? A Role-Based Model for Interpretable Machine Learning Systems, 2018 ICML Workshop on Human Interpretability in Machine Learning (2018), at 9. For a discussion of the discourse surrounding the definition of "interpretability" in the context of AI/machine learning, see Zachary C. Lipton, The Mythos of Model Interpretability, 2016 ICML Workshop on Human Interpretability in Machine Learning (2016).
36. The description of AI and deep learning in this paragraph is adapted from Feldman, supra note 30, at 202-03, and FEDERAL TRADE COMMISSION HEARINGS (statement of Professor Robin Feldman), supra note 15.
37. See, e.g., Cliff Kuang, Can A.I. Be Taught to Explain Itself?, N.Y. TIMES MAG. (Nov. 21, 2017), https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html; Ian Sample, Computer Says No: Why Making AIs Fair, Accountable, and Transparent Is Crucial, GUARDIAN (Nov. 5, 2017, 7:00 AM), https://perma.cc/5H25-AQC7. See also Zachary C. Lipton, The Doctor Just Won't Accept That! (Dec. 7, 2017) (unpublished submission, presented at Interpretable ML Symposium (NIPS 2017)), https://arxiv.org/abs/1711.08037.
38. Anand Rao & Chris Curran, AI Is Coming. Is Your Business Ready? (Sept. 26, 2017), http://usblogs.pwc.com/emerging-technology/artificial-intelligence-is-your-business-ready.
39. See Kuang, supra note 37. See also Feldman, supra note 30, at 206-07.
40. See Muhammad Aurangzeb Ahmad et al., Interpretable Machine Learning in Healthcare, 19 IEEE INTELLIGENT INFORMATICS BULL. 1, 1 (2018), http://www.comp.hkbu.edu.hk/~cib/2018/Aug/article1/iib_vol19no1_article1.pdf.
41. See Rich Caruana et al., Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission 1721, 1721 (2015) (unpublished submission, 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining), https://scinapse.io/papers/1996796871.
42. Id.
43. See The Artificial Intelligence Channel, The Great AI Debate - NIPS 2017 - Yann LeCun, YOUTUBE (Dec. 9, 2017), https://www.youtube.com/watch?v=93Xv8vJ2acI (discussing how interpretability is necessary for machine learning).
44. See Jenna Burrell, How the Machine 'Thinks': Understanding Opacity in Machine Learning Algorithms, BIG DATA & SOC'Y, Jan.-June 2016, at 1, 10, https://journals.sagepub.com/doi/abs/10.1177/2053951715622512, cited in NUFFIELD COUNCIL ON BIOETHICS, BIOETHICS BRIEFING NOTE: ARTIFICIAL INTELLIGENCE (AI) IN HEALTHCARE AND RESEARCH (2019), http://nuffieldbioethics.org/wp-content/uploads/Artificial-Intelligence-AI-in-healthcare-and-research.pdf.
until it reaches a conclusion. These are the so-called "layers" of deep learning.45
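A schematic rendering of those layers may help (our own illustration; the layer sizes and data are arbitrary). Each layer applies a simple transformation to the previous layer's output, step by step, until a final score, the "conclusion," emerges:

    import numpy as np

    rng = np.random.default_rng(1)

    def layer(inputs, weights):
        # One layer: a weighted combination of inputs passed through a
        # simple nonlinearity (here, ReLU).
        return np.maximum(0.0, inputs @ weights)

    x = rng.normal(size=16)                      # e.g., 16 features from a record
    h1 = layer(x, rng.normal(size=(16, 32)))     # layer 1
    h2 = layer(h1, rng.normal(size=(32, 32)))    # layer 2
    conclusion = h2 @ rng.normal(size=32)        # final layer: a single score
    print(conclusion)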
Cognizant of AI's black box problem, researchers have begun trying to remedy the issue. For example, in 2016, the Defense Advanced Research Projects Agency (DARPA) launched the Explainable Artificial Intelligence (XAI) Initiative. The Initiative hopes to translate decisions made by machine learning systems into something accessible to human understanding.46 Although these are important steps for AI systems, researchers currently do not know when, or if, the field might succeed in achieving widespread interpretability to a degree that adequately satisfies stakeholders such as patients in the health care context, or even to a degree that experts can understand. The tension over whether deep learning AI systems can eventually be explained, or whether they are truly too complex to be knowable, however, need not be resolved at this point. In either case, we need to construct a path forward, so that the development of potentially life-saving technologies does not stall unnecessarily. Such a path forward should accommodate both realities.
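To give a flavor of what such translation efforts involve, consider one simple post-hoc probing technique, sketched here as our own illustration (it is not DARPA's actual method, and the model below is a hypothetical stand-in): an opaque model can be probed from the outside by perturbing each input and watching how much the output moves.

    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in for an opaque model: we can call it, but not inspect it.
    hidden_w = rng.normal(size=4)
    def black_box(X):
        return (np.tanh(X @ hidden_w) > 0).astype(float)

    X = rng.normal(size=(500, 4))
    y = black_box(X)  # reference predictions

    # Shuffle one feature at a time and measure how often predictions change;
    # bigger changes suggest the model relies more heavily on that feature.
    for j in range(4):
        X_perturbed = X.copy()
        X_perturbed[:, j] = rng.permutation(X_perturbed[:, j])
        disagreement = (black_box(X_perturbed) != y).mean()
        print(f"feature {j}: predictions change {disagreement:.0%} of the time")

Probes of this kind yield a rough map of what the system attends to, which is useful, but they fall well short of a human-intelligible rationale for any particular decision.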
Moreover, even if it becomes technologically feasible to open the black box so that people can better peer in, there may be societal and legal constraints. For example, private sector stakeholders are likely to worry that detailed explanations about the inner workings of a proprietary machine learning system could undermine the intellectual property interests they hold in the technology.47 As one of the authors has suggested, "a company's first instinct is unlikely to encompass throwing open the doors to its technology, particularly if competitors are peering into the open doorway."48 The same may hold true for post-hoc examination and explanation of an AI system's decision pathway. If the explanations of decisions are too thorough, it could become possible to reverse-engineer the product. In fact, protection of intellectual property has always been the major argument against those advocating for "opening the black box."49
For this reason, one of the authors has suggested that intellectual property protection for AI inventions should follow the pathway of data protection for
45. See supra text accompanying notes 36-39 (briefly explaining deep learning and
neural nets).
46. See Sara Castellanos & Steven Norton, Inside DARPA's Push to Make Artificial Intelligence Explain Itself, WALL ST. J. (Aug. 10, 2017), https://blogs.wsj.com/cio/2017/08/10/inside-darpas-push-to-make-artificial-intelligence-explain-itself; David Gunning, Explainable Artificial Intelligence (XAI), DARPA.MIL (last visited 2019), https://www.darpa.mil/program/explainable-artificial-intelligence.
47. For discussion of the rush to patent AI, see Tom Simonite, Despite Pledging Openness, Companies Rush to Patent AI Tech, WIRED (July 31, 2018, 7:00 AM), https://www.wired.com/story/despite-pledging-openness-companies-rush-to-patent-ai-tech.
48. Feldman, supra note 30, at 206.
49. See Sandra Wachter, Brent Mittelstadt & Luciano Floridi, Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, 7 INT'L DATA PRIVACY L. 76, 85-86 (2017).
new pharmaceuticals. For example, in exchange for sharing clinical trial data on safety and efficacy with government regulators and allowing generic competitors, branded drugs receive four or five years of intellectual property protection before a generic competitor can file for approval using that data. Similarly, in the AI field, innovators could receive a short period of protection, four to five years, in exchange for allowing safety and efficacy data to be used by both the government and competitors.50 No such change in the law, however, is currently on the horizon.
As society waits for technological and legal innovation to solve these challenges, AI need not necessarily sit idle. Scientific innovation marches forward at its own pace, regardless of whether law and society are ready for it. And as counter-intuitive as it may sound, the health care system may provide the ideal pathway for thinking through a framework that establishes greater trust in AI, despite the lack of clarity in AI processes.
50. See Robin C. Feldman & Nick Thieme, Competition at the Dawn of Artificial Intelligence, in THE EFFECTS OF DIGITALIZATION, GLOBALIZATION AND NATIONALISM ON COMPETITION (forthcoming Edward Elgar Publishing 2019), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3218559.
researchers aim to understand the cause behind a drug's effects based on the drug's chemical properties and how those properties physiologically interact with living organisms.51 Put another way, researchers aim to find a drug's mechanism of action. For example, the mechanism of action for selective serotonin reuptake inhibitors (SSRIs), commonly used to treat depression and one of the most prescribed therapeutic treatments in medicine, is well known. SSRIs inhibit the reuptake of serotonin, which increases the level of serotonin in the brain and improves a person's mood.52
At the same time, there are many common drugs that do not have a known mechanism of action and yet are regularly prescribed by medical providers and used by patients.53 Take, for example, the widely prescribed drug compound acetaminophen, more commonly recognized by its brand name Tylenol. If someone has a headache or fever, there would be little hesitation in going to a pharmacy or grocery store to pick up a bottle, even though consumers and researchers don't know how the drug works.54 Of course, we all grew up with acetaminophen, and it seems deeply familiar and trustworthy to us. But someone had to start using it sometime. Moreover, there are numerous drugs today whose mechanisms of action are unknown, including the muscle relaxant metaxalone, the diabetes-related drug metformin, and the cough suppressant guaifenesin. Government regulatory bodies determine whether many drugs and treatments are safe and effective, but an answer to how the drug works is not a necessary condition for that determination.55 Thus, no one knows how these drugs really work, yet doctors are quite comfortable prescribing them, and patients are quite comfortable taking them.
So how has society reached this level of trust around drugs whose mechanism of action is unknown? At the moment, we convince patients of a drug's safety and efficacy through rigorous testing procedures. These mechanisms of trust are further mediated by expert regulatory bodies (the FDA in the United States) and by doctors, whose focus on patient care and patient bonding helps bridge the understanding gap and establish trust. Of course, one could ask which came first: Did we develop regulatory systems and expert mediators because they were the only ways to engender trust, or did we trust black box medical treatments because regulatory systems and expert mediators exist? Mimicking these mechanisms of trust, however, does not require answering that question. The fact that this avenue exists, however it may have developed, makes it easier to travel down the same road again.
We should also note at this point that the existence of trust, without clarity, in the health care system may not be perfectly parallel in other arenas in which AI may be used. For much of history, medicine operated on the model that "the doctor knows best," and it is only in more recent memory that the field has evolved so that patients play a more active role in their health care. The same cannot necessarily be said of all areas in which AI may operate.
Although the costs and benefits of regulatory and expert mediation for AI systems may not be exactly commensurate with those involving the FDA and doctors, there is much that is similar. For example, the FDA must monitor and respond to concerns that emerge long after a drug has been approved. The FDA also must continually monitor whether pharmaceutical manufacturing facilities meet proper safety regulations.56 That being said, AI systems evolve far more rapidly and constantly than pharmaceuticals, shifting according to the AI's learning and experience. Moreover, the field itself is evolving at light speed. The foundation for all of modern neural networks emerged only five years ago.57 One might compare this dynamic to a drug manufacturer constantly adjusting a drug's formula while it was simultaneously undergoing testing.
Despite these distinctions, at least in the health care field, clarity (or even the possibility of clarity) is not absolutely fundamental in order for stakeholders to trust in the effectiveness of medical treatment. Health care has proven that it can overcome both scenarios described at the beginning of this section: that scientists may never be able to fully understand AI decision pathways and that, even if they do, experts may not be able to explain them to the public. These scenarios also suggest a framework for developing trust in AI health care systems, one that could eventually expand to AI systems in general.
As a starting point, one might ask what features allow societal trust in health care to thrive despite the lack of interpretability, and how we might apply similar features in the context of health care AI systems. Although trust in health care exists along many different dimensions, a few aspects stand out that could help in establishing regulatory pathways or an architecture of trust in health care AI systems. This section will look at the following elements of trust: (1) provider competence, (2) patient interest, and (3) information integrity. Before turning to those elements, however, the authors would like to point out the importance of linguistic framing. Language such as "black box" or even "artificial intelligence" itself can evoke frightening images of technology run amok. In a similar vein, we use frightening terms such as "dark pools" to describe alternative trading venues for trading stocks and other investment contracts. Perhaps society would be better served by less evocative language. This is really about computer software that can help doctors be better doctors. These computers are diagnostic tools, albeit highly sophisticated and dynamic ones. Our desire to romanticize may actually be helping create barriers to trust as well.
Provider competence. In medicine, patients widely trust and expect that their health care providers have a high degree of competence.58 Given that most patients may not be able to directly assess the competence of their providers,59 trust must be established through other mechanisms. For example, the medical profession is highly selective. People who want to become doctors must have outstanding academic achievement, take and pass difficult entrance examinations, and demonstrate interpersonal competence. Medical training itself is highly rigorous. Even after training, providers are expected by the profession to maintain high licensing standards and are subjected to regular certification testing throughout their careers in order to maintain the ability to practice. In light of these mechanisms, doctors have been extraordinarily successful, compared to other professions, in establishing a deep level of trust in their abilities and competence.
Just as it often is difficult for patients to determine the actual competence of a physician, the actual competence of a health care AI system will be difficult for patients to measure as well. Although we will discuss in a moment the potential for creating pathways similar to those that have allowed patients to develop trust in their doctors, there may be a simpler and easier route to take right off the bat. Specifically, can society count on or utilize the trust that already exists in physician decision making? In other words, will physicians themselves be able to sprinkle the fairy dust necessary to establish a patient's comfort with an AI system?
58. See Mechanic, supra note 23, at 664; Hall et al., supra note 20, at 621-22.
59. Hall et al., supra note 20, at 62.
recent survey of public attitudes toward medicine, more than two-thirds of the public (sixty-nine percent) rated the honesty and ethical standards of physicians as "very high" or "high."60 A key component of this trust in medicine is the belief that medical professionals will put the best interests of the patient first.61 A potential threat to this trust is the idea that medical providers would prioritize their own financial interest over the interests of patients. For example, physicians' financial relationships with pharmaceutical companies might bring into question the intent, judgment, and effectiveness of a prescribed drug treatment.
In the U.S., various policy mechanisms have been put into place to ensure that a patient's interests are protected from perverse incentives and outside financial interests, thus reinforcing trust in the medical provider. One of the oldest and most well-known examples of such a mechanism is the federal Anti-Kickback Statute (AKS).62 The AKS prohibits remuneration, broadly defined as anything of value, including direct payment, excessive compensation for consulting services, and expensive hotel stays and dinners,63 that would incentivize medical providers to recommend products or services which are paid for or covered by federal health care programs (e.g., Medicare, Medicaid). In addition to criminal penalties, the Office of the Inspector General for the Department of Health and Human Services can pursue additional civil penalties. Such kickbacks can lead to negative patient outcomes, including overutilization, unfair competition, excessive treatment costs, and reduced agency in treatment plans for patients. By eliminating opportunities for corrupt decision making on the part of medical providers, the AKS limits these potential negative patient outcomes and helps reinforce patients' trust in their medical professionals.
Another more recent example of a policy that promotes trust is the Physician Payments Sunshine Act,64 enacted by Congress in 2010 as part of the Affordable Care Act.65 The policy requires that manufacturers of drugs, medical devices, and other medical supplies covered by federal health care programs collect and track financial relationships with physicians and report these data to the Centers for Medicare and Medicaid Services (CMS). As part of the Sunshine Act, CMS created the Open Payments data platform,66 which
60. Robert J. Blendon et al., Public Trust in Physicians - U.S. Medicine in International Perspective, THE NEW ENG. J. MED. (Oct. 23, 2014), pnhp.org/news/improving-trust-in-the-profession.
61. See Mechanic, supra note 23, at 666.
62. 42 U.S.C. § 1320a-7b (2018).
63. A Roadmap for Physicians: Fraud & Abuse Laws, OFF. OF INSPECTOR GEN., U.S. DEP'T OF HEALTH & HUM. SERVS., https://oig.hhs.gov/compliance/physician-education/01laws.asp (last visited Apr. 14, 2019).
64. 42 C.F.R. §§ 403.900-403.914 (2019).
65. The Patient Protection and Affordable Care Act, Pub. L. No. 111-148, 124 Stat. 119 (2010) (enacted).
66. See CTRS. FOR MEDICARE & MEDICAID SERVS., OPEN PAYMENTS DATA, https://openpaymentsdata.cms.gov (last visited Apr. 14, 2019).
67. Alison Hwong & Lisa Lehmann, Putting the Patient at the Center of the Physician Payment Sunshine Act, HEALTH AFF. BLOG (June 13, 2012), https://www.healthaffairs.org/do/10.1377/hblog20120613.020227/full.
68. Alison R. Hwong et al., The Effects of Public Disclosure of Industry Payments to Physicians on Patient Trust: A Randomized Experiment, 32 J. GEN. INTERNAL MED. 1186, 1188 (2017).
examples discussed here are the barest of starting points. The integrity of data, what we are calling information integrity, is a challenge that all fields, public and private, AI-related and not, will have to master in the Digital Age.

Nevertheless, the two examples worth contemplating in the current health care field are: (1) electronic source data in clinical investigations and (2) data integrity in current good manufacturing practice (CGMP) for pharmaceuticals. Both of these involve the FDA in some role.69
As computerization allows more and more clinical data (such as electronic lab reports, digital media from devices, and electronic diaries completed by study subjects) to be captured electronically, the FDA has taken a role in ensuring the integrity of clinical investigation data by publishing guidance for the industry on electronic source data in clinical investigations.70 This guidance included concrete expectations for clinicians handling electronic data, including the creation of data element identifiers to facilitate examination of the audit trail of data sources and changes (a sketch of such an audit trail follows the list below), as well as an outline of the responsibilities of clinical investigators to review and retain electronic data throughout a clinical trial. Although the guidance is not binding, the document is useful because it sets forward good industry standards and practices. In the pharmaceutical context, regulation is more concrete as a result of the Food, Drug, and Cosmetic Act (FD&C Act),71 which requires that drugs meet baseline standards of safety and quality. Examples of these more concrete requirements in the FD&C Act include:
in the FD&C Act include:
* requiring that "backup data are exact and complete" and "secure from
alteration, inadvertent erasures, or loss" and that "output from the
computer . . . be checked for accuracy".72
73
* requiring that data be "stored to prevent deterioration or loss."
* requiring that production and control records be "reviewed" 74 and that
laboratory records be "reviewed for accuracy, completeness, and
compliance with established standards." 75
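The audit-trail expectation in the electronic source data guidance can be pictured with a brief sketch (our own illustration, not an FDA artifact; the identifiers and field names below are hypothetical). Each record carries a data element identifier, corrections are appended rather than overwritten, and a hash chain makes after-the-fact alteration detectable:

    import hashlib, json, time

    class AuditTrail:
        def __init__(self):
            self.entries = []

        def record(self, element_id, value, author):
            # Append a new entry chained to the previous one by its hash.
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            body = {"element_id": element_id, "value": value,
                    "author": author, "time": time.time(), "prev": prev_hash}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            self.entries.append({**body, "hash": digest})

        def verify(self):
            # Recompute the hash chain; any edit to a past entry breaks it.
            prev = "0" * 64
            for e in self.entries:
                body = {k: e[k] for k in
                        ("element_id", "value", "author", "time", "prev")}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["prev"] != prev or e["hash"] != expected:
                    return False
                prev = e["hash"]
            return True

    trail = AuditTrail()
    trail.record("LAB-GLU-001", 98, "clinician_a")   # original lab value
    trail.record("LAB-GLU-001", 102, "clinician_b")  # correction, not an overwrite
    assert trail.verify()

The design choice matters: because every change is an addition rather than a replacement, reviewers can reconstruct exactly who recorded what, and when, which is the practical meaning of an audit trail.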
The FDA also has provided industry guidance in this area to ensure that drug
companies take concrete steps to protect the integrity of data.76 The FDA's
69. U.S. FOOD AND DRUG ADMIN., GUIDANCE FOR INDUSTRY: ELECTRONIC SOURCE DATA IN CLINICAL INVESTIGATIONS (2013), https://www.fda.gov/downloads/drugs/guidances/ucm328691.pdf; U.S. FOOD AND DRUG ADMIN., FACTS ABOUT CURRENT GOOD MANUFACTURING PRACTICES, https://www.fda.gov/drugs/developmentapprovalprocess/manufacturing/ucm169105.htm (last updated June 25, 2018).
70. Data Integrity and Compliance with Drug CGMP: Questions and Answers, 83 Fed. Reg. 64,132 (Dec. 13, 2018).
71. Federal Food, Drug, and Cosmetic Act, 21 U.S.C. §§ 301-399 (2018).
72. 21 C.F.R. § 211.68.
73. 21 C.F.R. § 212.110(b).
74. 21 C.F.R. §§ 211.22, 211.192.
75. 21 C.F.R. § 211.194(a).
76. Data Integrity and Compliance with Drug CGMP: Questions and Answers, 83 Fed. Reg. 64,132.
77. Timnit Gebru et al., Datasheets for Datasets, PROC. OF THE 5TH WORKSHOP ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY IN MACHINE LEARNING (2018), https://arxiv.org/pdf/1803.09010.pdf.
78. JASON, Artificial Intelligence for Health and Health Care, THE MITRE CORPORATION 43 (2017), https://www.healthit.gov/sites/default/files/jsr-17-task-002_aiforhealthandhealthcare12122017.pdf.
CONCLUSION