
A GENDERED PERSPECTIVE ON ARTIFICIAL INTELLIGENCE

Smriti Parsheera
National Institute of Public Finance and Policy, New Delhi

Electronic copy available at: https://ssrn.com/abstract=3374955

ABSTRACT

Availability of vast amounts of data and corresponding advances in machine learning have brought about a new phase in the development of artificial intelligence (AI). While recognizing the field's tremendous potential, we must also understand and question the process of knowledge-making in AI. Focusing on the role of gender in AI, this paper discusses the imbalanced power structures in AI processes and the consequences of that imbalance. We propose a three-stage pathway towards bridging this gap. The first is to develop a set of publicly developed standards on AI, which should embed the concept of "fairness by design". The second is to invest in research and development in formulating technological tools that can help translate the ethical principles into actual practice. The third, and perhaps most challenging, is to strive towards reducing gendered distortions in the underlying datasets to reduce biases and stereotypes in future AI projects.

Keywords – Artificial intelligence, gender, ethics, fairness

1. INTRODUCTION

The term artificial intelligence (AI) was coined in a Dartmouth summer research proposal in 1955 that described itself as a "2 month, 10 man study of artificial intelligence". John McCarthy, Marvin Minsky and their fellow drafters explained it as a "proposal to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves" (McCarthy et al, 1955[1]). They highlighted these as problems that needed a carefully selected group of scientists to work on them, and there seemed to be no doubt about the gender of those researchers.

Sixty years hence, AI is seen as one of the most promising fields of computer science. Its latest boom is fueled by the availability of vast amounts of data and corresponding advances in machine learning and neural technology. Self-driving vehicles, cancer detection technologies, image recognition tools, language translation and virtual assistants are some of the many AI applications that we encounter in everyday conversations. The field has, however, gone through its share of "AI winters", characterized by cutbacks in funding when research outcomes failed to keep up with the claimed progress.

Today, we are in a phase of AI boom. As per most accounts, AI based systems will play a much greater role in the coming decades, redefining business models, job markets and overall human development. In all the euphoria surrounding AI and its future, not enough was being said about the underlying processes that drive research in this field. This has begun to change in the last few years as countries begin to adopt national or regional AI strategies, many of which incorporate an inclusion and ethics dimension in them (Dutton, 2018[2]).

Like most human creations, AI artifacts tend to reflect the goals, knowledge and experience of their creators. They also draw from the strengths and weaknesses of the data that is used to train them. It is therefore natural to expect the limitations and biases of the creators and their datasets to be reflected in their results. This leads us to ask some basic questions. First, what is regarded as AI, who designs it and to what end? Second, what is the basis for determining the elements of intelligence that are found worth replicating in machines? Finally, to what extent do these decisions reflect the diverse experience and needs of human society?

These are complex questions, and the answers will necessarily vary based on the respondent's standpoint -- education, gender, race, class, religion, nationality and the intersectionality of these factors. Despite recent attempts to "diversify" AI research, and more generally research in the fields of science, technology, engineering and mathematics (STEM), the discipline has retained a male-oriented focus. It is telling that when the Institute of Electrical and Electronics Engineers (IEEE) instituted a Hall of Fame to acknowledge the leading contributors to AI, not one of the ten persons on the list was a woman (Wang, 2010[3]).

A research environment that fails to account for the worldview of one entire gender group is clearly lacking in many respects. In making this claim, we are cognizant of the fact that just as there is no universal "human knowledge", it is also not possible to classify "men's knowledge" and "women's knowledge" into distinct buckets. There exist a multiplicity of viewpoints within these groups. A more inclusive, and indeed more fruitful, research agenda should ultimately be able to overcome these binaries. Recognizing the existence of a gendered perspective on AI is, however, the starting point for this conversation. While this paper uses the role of gender in AI


research as its lens of enquiry, the issues that it poses and the solutions that it suggests are also relevant to broader pursuits of inclusiveness in AI-based systems.

2. DEFINING AI AND ITS "INTELLIGENCE"

John McCarthy, one of the founders of this field, described AI as "the science and engineering of making intelligent machines" where intelligence refers to "the computational part of the ability to achieve goals in the world" (McCarthy, 2007[4]). Another suggestion is to look at intelligence as a "quality that enables an entity to function appropriately and with foresight in its environment" (Nilsson, 2010[5]). Both these definitions, forwarded by practitioners of AI, refer to intelligence in rather broad terms, as qualities which can be possessed by humans, animals and machines, albeit at different levels.

Russell and Norvig (2010)[6] present a classification of the available definitions of AI along two lines -- (i) based on the function expected to be performed (thought processes/reasoning of the machine versus the outcome/behaviour that it exhibits); or (ii) the metrics used for assessing the success of AI (human performance versus an ideal standard of "rationality"). The Turing test, developed by British mathematician and cryptographer Alan Turing in 1950, reflects a combination of the behavioral element and human-like performance in the above classification. If, upon the exchange of a series of questions with a person and a machine, a human interrogator is unable to distinguish between the two, the Turing test would regard the machine to be an intelligent, thinking entity (Copeland, 2018[7]).

Despite its continued relevance over the years, the Turing test has also come under attack for its attempt to define the intelligence of machines by replicating human behaviour. Russell and Norvig (2010)[6] point to this as a limitation by saying, "Aeronautical engineering texts do not define the goal of their field as making machines that fly so exactly like pigeons that they can fool even other pigeons".

AI's claims of building intelligence in machines have also faced strong philosophical criticisms. These criticisms stem from arguments about the lack of a mind, of consciousness and intentionality in machines, features which some philosophers regard as essential for establishing true intelligence. John Searle illustrated this through his famous Chinese room thought experiment. As per this, a person who does not know any Chinese can follow a set of rules on how to correlate Chinese symbols and produce a response to questions that may convince an outsider that the person is acting intelligently. Producing meaningful replies in Chinese would, however, not mean that the person has any actual understanding of the language.

In making the claim that similar behavior by a computer programme cannot be equated with intelligence, Searle draws a distinction between "weak AI", where the computer serves as a tool to study the mind, and "strong AI", where the computer itself can be said to possess a mind. He focuses his criticisms on the latter by arguing that in order to constitute strong AI a machine would need to satisfy the tests of consciousness and intentionality or causal powers that are possessed by the human brain (Searle, 1980[8]). Similar debates on the "intelligence" of AI have also emerged from other fields like psychology, economics, biology, neuroscience, engineering and linguistics (Russell and Norvig, 2010[6]).

Feminist epistemologist Alison Adam notes that these popular criticisms of AI are lacking in two major respects. First, they gauge the success or failure of AI based on philosophical tests of ideal intelligence, which for Adam is less relevant than understanding how AI is actually being put to use. For her, the success of AI lies in its widespread adoption in everyday life. Second, she notes that the traditional critiques of AI completely ignore how AI systems reinforce existing power structures. AI research has failed to represent the knowledge of certain social groups, such as women (Adam, 2005[9]). This has worked to the disadvantage of society as well as the field itself.

3. GENDER OF AI DEVELOPERS AND THEIR ARTIFACTS

While the contours of what constitutes intelligence in AI have remained contested, a more operational understanding of AI has also emerged. As per some researchers, AI can simply be defined as "what AI researchers do" (Grosz et al, 2016[10]). This approach clearly gives the practitioners in this field immense power, not just in defining their own agenda but also the contours of the discipline that they represent. It therefore becomes pertinent to ask who these researchers are and what it is that they do.

3.1 Early choices in AI research

Interestingly, even though we have seen significant advances in AI applications in recent years, the fundamental elements of what constitutes AI research have not changed very significantly. In 1955, the Dartmouth College proposal identified the following as some of the components of the AI problems that needed further research: programming a computer to use a language (natural language processing); self-improvement by machines (machine learning); and neuron nets (neural networks and deep learning) (McCarthy et al, 1955[1]). The text in parenthesis reflects the currently in vogue terminology for these processes. While these areas of research still remain relevant, newer sub-areas like computer vision and robotics have also been added along the way (Grosz, 2016[10]).

This leads us to ask – on what basis did AI researchers decide that certain elements of intelligence (versus others) were worth replicating in machines? In 1950, Alan Turing admitted that he did not know the right answer. He



proposed that it would be prudent to try both the approaches that were being suggested at that time. The first would be to choose an abstract activity like playing chess and teach machines to do it. The second would be to equip machines with sense organs and teach them the right answers, like teaching a child (Turing, 1950[11]).

Adam, 1998[12] uses the fascination with chess in early AI works to demonstrate how the interests and worldview of AI researchers influenced their conception of what amounted to intelligent behavior. She refers to the following quote from Robert Wilensky, an AI researcher, to illustrate the point:

"They were interested in intelligence, and they needed somewhere to start. So they looked around at who the smartest people were, and they were themselves, of course. They were all essentially mathematicians by training, and mathematicians do two things - they prove theorems and play chess. And they said, hey, if it proves a theorem or plays chess, it must be smart."

Chess and theorem proving, both being activities predominantly associated with men, therefore became natural choices for early AI researchers (Adam, 1998[12]). The choice of chess as a metric for proving machine intelligence is particularly interesting given that the game still continues to suffer from a significant gender problem, resulting in the under-inclusion and under-performance of women (Maass et al, 2007[13]). Yet, it would be hard to claim that the early choices of AI researchers stemmed from any malice against women or their role in society. Instead, these decisions reflected the researchers' own experiences, interests and social conditioning.

The "context" of AI researchers, which includes their gender, has therefore defined the directions in which the field has progressed. It is possible to imagine that if the group contemplating early ideas for testing machine intelligence had included some women, an entirely different set of ideas may have emerged.

3.2 Different dimensions of gender bias

It has been over seven decades since AI first emerged as a discipline and yet the gender imbalance in AI, and more broadly in the fields of STEM, still remains significant. As per data released by the UNESCO Institute for Statistics, women constitute less than 29 percent of scientific researchers globally (UNESCO, 2017[14]). Further, there are many inter-regional differences, with many developing countries showing a lower percentage of women in science. For instance, in India's case the figure of women in science was only about 14.3 percent (UNESCO, 2017[14]).

A study involving computer science PhD graduates in India found that 32 percent of the graduating PhD students in 2016 were women (Parkhi and Shroff, 2016[15]). This figure is closer to the world average of women in science, although it is also worth noting that a majority of the PhD graduates opted for teaching jobs and only a small number went on to join research labs. Therefore the percentage of women from this pool who might have gone on to engage in applied research is likely to be much smaller.

The under-representation of women in AI research has the corresponding effect of under-representation of their ideas in setting AI agendas. This imbalance also manifests itself in other forms that go beyond issues of direct representation. Firstly, the few women who do manage to enter this field have reported systematic discrimination in terms of salaries, promotions and incidents of sexual harassment (Vassallo et al, 2015[16]). This contributes to the leaky pipeline problem in STEM. Secondly, the AI industry is also replete with examples of gender based stereotypes being reflected in the identities of AI artifacts, their functions and outputs. To some extent this can be attributed to the lack of diverse perspectives in the designing and testing of these artifacts.

For instance, virtual assistants like Apple's Siri, Amazon's Alexa, Google's assistant and Microsoft's Cortana commonly come with female sounding voices (although in some cases, like Apple's Siri, users were later given the option to change the default voice). This is also the case with most GPS assistants. Several factors may contribute to this. On one hand, it could be a conscious business decision, based on physical and psychological reasons for preferring a woman's voice for such machines. On the other, it may be a case of unconscious reiteration of society's existing gender stereotypes -- a woman's voice being regarded as more suitable for roles that demand obedience (Glenn, 2017[17]). Similarly, the names and body shapes given to robots and other AI solutions have also been known to reflect the prevalent socio-cultural norms and gender identities (Bowick, 2009[18]).

Another dimension of the gender problem in AI comes from the perceptions and stereotypes of the real world, the data that emerges from there and its use in training algorithms. This can be illustrated with a few examples. When translation services, like the one offered by Google, translate text from gender neutral languages like Turkish and Finnish to a gendered one like English, the algorithm tends to attribute a gender to the subject. This classification may be based on the profession being described – engineers, doctors and soldiers are generally described as "he" while teachers, nurses and secretaries would be "she". It could also relate to the activities or emotions in question – happiness and hard work are associated with "he" while terms like lazy and unhappy go with "she" (Morse, 2017[19]).
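The translation behaviour described above can be reduced to a toy sketch. The mini "corpus", the counts and the majority-vote rule below are illustrative assumptions rather than Google Translate's actual pipeline, but they show the mechanism at work: a system that picks the pronoun most frequently seen alongside a profession in its training text will reproduce whatever skew that text contains.

```python
from collections import Counter

# Toy "training corpus": pronoun-profession pairings whose skew stands
# in for the biases of the real-world text that models are trained on.
corpus = [
    ("he", "doctor"), ("he", "doctor"), ("she", "doctor"),
    ("she", "nurse"), ("she", "nurse"), ("he", "nurse"),
    ("he", "engineer"), ("he", "engineer"), ("he", "engineer"),
]

# Count how often each pronoun co-occurs with each profession.
counts = {}
for pronoun, profession in corpus:
    counts.setdefault(profession, Counter())[pronoun] += 1

def pronoun_for(profession):
    """A gender-neutral source sentence (e.g. Turkish 'o bir doktor')
    gives no evidence either way, so the statistically 'safest' guess
    is simply the majority pronoun seen in the training data."""
    return counts[profession].most_common(1)[0][0]

for job in ("doctor", "nurse", "engineer"):
    print(job, "->", pronoun_for(job))  # majority pronoun wins
```

The decision rule itself is indifferent to gender; the skew comes entirely from the data, which is one reason why correcting the underlying datasets, rather than only the algorithms, matters.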

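A simple way to probe a system for such behaviour, in the spirit of the "gender swap" and "stress test" techniques discussed in section 4 (Park et al, 2018[30]; Economist, 2018[31]), is counterfactual substitution: flip the gendered words in an input and check whether the output changes for reasons that have nothing to do with merit. The word list and the stand-in model below are hypothetical; only the swap-and-compare logic is the point.

```python
# Counterfactual gender-swap probe: flag inputs whose outcome changes
# when only the gendered words are flipped.

SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def swap_gender(text):
    """Replace each gendered token with its counterpart."""
    return " ".join(SWAPS.get(word, word) for word in text.split())

def audit(model, inputs):
    """Return the inputs whose classification flips under a gender swap."""
    return [x for x in inputs if model(x) != model(swap_gender(x))]

# A deliberately biased stand-in "model" that shortlists only
# male-coded applications (purely for illustration).
def biased_model(application):
    return "shortlist" if "he" in application.split() else "reject"

print(audit(biased_model, ["he led the team", "she wrote the code"]))
```

A model that ignores gender passes the audit (an empty list); any flagged input is direct evidence that gender alone changed the decision.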


Bolukbasi et al, 2016[20] explain that this problem can be attributed to the blind adoption of "word embedding" techniques. Word embedding enables the mapping of the affinity or relationship between different words, where a public resource like Google News serves as the training dataset. The researchers illustrate how this could influence the search results for a person looking for a computer science researcher in a particular university because the words "computer science" are more commonly associated with men -- "between two pages that differ only in the names Mary and John, the word embedding would influence the search engine to rank John's web page higher than Mary" (Bolukbasi et al, 2016[20]). Similar findings of gender bias have also been made in the case of visual recognition tasks like captioning of images (Zhao et al, 2017[21]) and the display of image search results based on occupations (Kay, 2015[22]).

These examples demonstrate that AI applications can often end up strengthening and reinforcing society's existing biases. For instance, Zhao et al, 2017[21] found that where training images for the activity of cooking contained 33% more females, the trained model for captioning images amplified the disparity to 68%. This seems to run contrary to Donna Haraway's vision of a cyborg universe where technology would offer a tool to break away from the dualities of human-machine and male-female identities (Haraway, 1991[23]). This is an inspiring idea and one that we still have an opportunity to realize. Concepts of equity, fairness and non-discrimination have been well entrenched in the human rights discourse for the past several decades. Yet, conscious and unconscious human biases often prevent these values from translating into actual outcomes. How then can we re-envision AI research in ways that could move us closer to this ideal?

4. RE-ENVISIONING AI FROM A GENDERED PERSPECTIVE

Improving the representation of women in AI research, both as researchers and as beneficiaries of the research, is seen as a first step towards a gendered re-envisioning of AI. This has led to initiatives like specialized programmes for women, funding support, mentorship initiatives, increased intake in educational institutions and promoting equal opportunities in the job market. However, even if such initiatives were to succeed, it is questionable whether merely increasing the number of women can bring the desired level of diversity in AI knowledge-making.

In her work on objectivity and diversity, Sandra Harding notes that although increasing the physical presence of excluded groups is an important first step, the real issue goes beyond that of participation. It involves questioning whose agendas should be pursued by science (Harding, 2015[24]). A research agenda that is primarily funded through private resources will logically rely on market mechanisms to decide on the kind of problems that need to be solved and their optimum solutions. In the long run, this could very well lead to the development of breakthrough technologies, the benefits of which may ultimately trickle down to the marginalized sections of society. However, there is a distinction between retrofitting newer objectives into available technologies versus a ground-up approach of identifying specific problems and developing solutions for them.

The latter approach would require a more meaningful engagement by businesses, governments and the public in identifying AI research agendas and supplying resources to pursue them. These resources could be in the form of financial support, ethical frameworks, as well as making available open data resources that can feed into the design of AI solutions. For instance, the development of AI applications that are useful for addressing the health concerns of rural women in a developing country like India may not be an obvious interest area for many AI researchers. This may stem both from the lack of funding for sustained research in such areas and also the lack of access to the data that is necessary for enabling this research. Similarly, the ways in which algorithmic credit scoring will work out in the Indian setting may be very different from what happens in other parts of the world. Agenda setting for future AI research must therefore be rooted in the social and cultural backdrop and institutional context of each society.

Having said that, there is also a case for evolving a robust set of ethical standards for AI research and the tools for translating those principles into tangible outcomes. Questions of bias and ethics have already found a place in many national AI strategies. For instance, the United Kingdom has noted that although it cannot match countries like the United States and China in terms of AI spending, it intends to play a greater role in AI's ethical development (House of Lords, 2018[25]). In India, a discussion paper issued by the Government think tank NITI Aayog (NITI Aayog, 2018[26]) as well as an AI Task Force set up by the Indian Government have spoken about the need for ethical standards, including auditing of AI to check that it is not contaminated by human biases (AI Task Force, 2018[27]). Both these documents are, however, conspicuously silent on the gender dimensions of AI education and research in the country. Most large technology companies also have internal ethics policies to govern their research initiatives. Moving from these siloed structures to a collectively designed set of global minimum standards for AI development should be the next goal. These principles can then be applied based on each region's own context.

This proposal comes with the worry that absent strict enforcement, producers would tend to interpret any ethical guidelines in a flexible manner. This could result in the under-production of "fairness" in the system. The opacity of AI algorithms and the possibility of diverse interpretations of what constitutes fairness in any given situation only



compound the problem. But trying to solve this issue through heavy-handed regulation and strict ex-ante controls would present its own set of challenges. Such interventions may come at the cost of stifling efficiency and innovation. This also presumes a certain level of state capacity to effectuate the regulation, which is often not available in reality. How then can we strike a balance between these positions to make sure that AI research evolves in a socially and ethically responsible direction? We propose a three-step approach towards this goal.

The first step would be to embed the concept of "fairness by design" in AI frameworks (Abbasi et al, 2018[28]). This draws from the concept of "privacy by design" that has evolved in the context of data protection debates (Cavoukian, 2011[29]). Fairness by design should compel developers to ensure that the very conception and design of AI systems is done in a manner that prioritizes fairness. Abbasi et al, 2018[28] propose that the components of such a framework would include:
(i) creating cross-disciplinary teams of data scientists and social scientists;
(ii) identifying and addressing the biases brought in by human annotators;
(iii) building fairness measures into the assessment metrics of the program;
(iv) ensuring that there is a critical mass of training samples so as to meet fairness measures; and
(v) adopting debiasing techniques.

A fair amount of research has been done on building solutions for gender biases in natural language processing. For instance, Bolukbasi et al, 2016[20] debias word embeddings to remove the negative gender associations picked up from the training dataset. Another strategy is to use gender swap techniques to remove any correlation between gender and the classification decision made by an algorithm (Park et al, 2018[30]). A variation to this would be to conduct "stress tests" where certain parts of the data (such as the gender of some candidates in a selection process) can be randomly altered to check whether the randomization has an effect on the final outcome that is generated, i.e. the number of women being shortlisted (Economist, 2018[31]).

While encouraging further research of this nature, a lot more needs to be done in terms of mainstreaming these solutions and making them readily available to smaller developers. Google's "What-If" tool offers a useful example. It is an open source tool that allows users to analyze machine learning models against different parameters of fairness. For instance, the data can be sorted to make it "group unaware" or to ensure "demographic parity" (Weinberger, 2018[32]). Given the many positive externalities to be gained from the creation and opening up of such fairness enhancing tools, the second step of the re-envisioning AI project would be for governments and other agencies to invest in more research and development on this front.

Finally, we must remember that the datasets being used for training machine learning algorithms are created in the real world, i.e. outside the AI ecosystem. Therefore, while building reactive use-case based solutions (NITI Aayog, 2018[26]) may solve some of our immediate needs, the larger agenda must be to correct the training dataset itself. To take an example, the outcomes of natural language processing can be made more inclusive if the persons generating the underlying text (writers, researchers, policymakers, journalists, publishers and other creators of digital content) work towards the feminization (using words like she and her) and neutralization (chairperson instead of chairman) of the language that they use (Sczesny et al, 2016[33]). Here again, there is a role for the State to use awareness, education and, if required, other policy tools to promote the use of gender-fair language. Similar solutions need to be considered for other fields of AI research, accompanied by the identification of the persons and processes needed to effectuate the desired changes.

5. CONCLUSION

From its very inception, the field of AI has largely remained the domain of men. This paper illustrates how the gender of its founders and subsequent researchers has played a role in determining the course of AI research. While efforts are now being made to fill this gap, including by promoting more women in STEM, the gender problem of AI is not just about the representation of women. It is also about understanding whose agendas are being pursued in AI research and what is the process through which that knowledge is being created.

Research has shown that AI's reliance on real-world data, which is fraught with gender stereotypes and biases, can result in solutions that end up reinforcing or even exacerbating existing biases. While fairness and non-discrimination are well recognized principles in the human rights discourse, these principles often fail to translate into practice, often on account of conscious and unconscious biases. The challenge therefore is to find ways to bundle the technological progress of AI with the objectives of pursuing greater fairness in society -- for machines to eliminate rather than reinforce human biases.

We propose a three-step process towards this end. First, we need to develop a set of publicly developed AI ethics that embed the concept of "fairness by design". To travel the distance from formulating ethical principles to their actual implementation is another challenge. We find that "fairness" as a concept is prone to diverse interpretations, which can result in its under-production in the system.

The second step would therefore be to invest in research and development in formulating technological tools to



implement AI ethics. This would, for instance, include [8] Searle, 1980: John Searle, Minds, brains, and programs,
further work on developing debiasing and fairness testing Behavioral and Brain Sciences, 3 (3), 417-457.
techniques. Open dissemination of such solutions to make
them readily available for adoption by the AI community, [9] Adam 2005: Alison Adam, Gender, Ethics and
will generate positive externalities for the system as a Information Technology, Palgrave Macmillan, 2005.
whole. This will require cooperation among a range of
stakeholders, including governments, corporations, [10] Grosz, 2016: Barbara Grosz et al, Artificial
universities and researchers working in the fields of intelligence and life in 2030, One hundred year study on
computer science, social science and data science. artificial intelligence, September 2016.

Finally, we need to think about deeper solutions for [11] Turing, 1950: Alan Turing, Computing Machinery and
cleaning up the gender biases and stereotypes in the Intelligence, Mind, 49, 433-460.
underlying datasets that serve as fodder for training AI
algorithms. For instance, feminization and neutralization of [12] Adam, 1998: Alison Adam, Artificial Knowledge -
language have been suggested as solutions to enhance fairer outcomes in natural language processing. Similar solutions need to be considered for other fields of AI research, along with an identification of the persons and processes that are necessary to effectuate the desired changes.

REFERENCES

[1] McCarthy et al, 1955: John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955, available at https://www.cs.swarthmore.edu/~meeden/cs63/f11/AIproposal.pdf

[2] Dutton, 2018: Tim Dutton, Artificial Intelligence Strategies, Medium, 28 Jun 2018, available at https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd

[3] Wang, 2010: Fei-Yue Wang, IEEE Intelligent Systems, Volume 26, Issue 4, July-Aug 2011, available at https://ieeexplore.ieee.org/document/5968105/

[4] McCarthy, 2007: John McCarthy, What is artificial intelligence?, 11 December 2007, available at http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html

[5] Nilsson, 2010: Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements, Cambridge University Press, 2010.

[6] Russell and Norvig, 2010: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd Ed, Prentice Hall Series in Artificial Intelligence, 2010.

[7] Copeland, 2018: BJ Copeland, Artificial Intelligence, Encyclopædia Britannica, available at https://www.britannica.com/technology/artificial-intelligence

[12] Adam, 1998: Alison Adam, Artificial Knowing: Gender and the Thinking Machine, Routledge, 1998.

[13] Maass et al, 2007: Anne Maass, Claudio D’Ettole and Mara Cadinu, Checkmate? The role of gender stereotypes in the ultimate intellectual sport, European Journal of Social Psychology, Volume 38, Issue 2, March/April 2008, 231-245.

[14] UNESCO, 2017: UNESCO Institute for Statistics, Fact Sheet No. 43, March 2017, available at http://uis.unesco.org/sites/default/files/documents/fs43-women-in-science-2017-en.pdf

[15] Parkhi and Shroff, 2016: Sachin Parkhi and Gautam Shroff, ACM Survey on PhD Production in India for Computer Science and Information Technology, 2015-16, available at http://india.acm.org/PhDProductionReport2015_16.pdf

[16] Glenn, 2017: Marie Glenn, Few good men: Why is the growing population of AI voices predominantly female?, IBM Blog, 2 Mar 2017, available at https://www.ibm.com/blogs/insights-on-business/ibmix/good-men-growing-population-ai-voices-predominantly-female/

[17] Bowick, 2009: Micol Marchetti-Bowick, Is Your Roomba Male or Female? The Role of Gender Stereotypes and Cultural Norms in Robot Design, Intersect, Volume 2, Number 1, 2009.

[18] Vassallo et al, 2015: Trae Vassallo et al, Elephant in the Valley Survey 2015, available at https://www.elephantinthevalley.com/

[19] Morse, 2017: Jack Morse, Google Translate might have a gender problem, Mashable, 1 Dec 2017, available at https://mashable.com/2017/11/30/google-translate-sexism/

[20] Bolukbasi, 2016: Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama and Adam Kalai, Man is to Computer Programmer as Woman is to Homemaker?

Electronic copy available at: https://ssrn.com/abstract=3374955


Debiasing Word Embeddings, 2016, available at https://arxiv.org/pdf/1607.06520.pdf

[21] Zhao, 2017: Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez and Kai-Wei Chang, Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, 29 Jul 2017, available at https://arxiv.org/abs/1707.09457

[22] Kay, 2015: Matthew Kay, Cynthia Matuszek and Sean A. Munson, Unequal Representation and Gender Stereotypes in Image Search Results for Occupations, ACM CHI Conference on Human Factors in Computing Systems, Apr 2015, available at https://www.researchgate.net/publication/271196763_Unequal_Representation_and_Gender_Stereotypes_in_Image_Search_Results_for_Occupations

[23] Haraway, 1991: Donna Haraway, A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century, in Simians, Cyborgs and Women: The Reinvention of Nature, Routledge, 1991.

[24] Harding, 2015: Sandra Harding, Objectivity and Diversity: Another Logic of Scientific Research, University of Chicago Press, 2015.

[25] House of Lords, 2018: House of Lords, Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, 16 Apr 2018, available at https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf

[26] NITI Aayog, 2018: NITI Aayog, Discussion Paper - National Strategy for Artificial Intelligence, Jun 2018, available at http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf

[27] AI Task Force, 2018: Artificial Intelligence Task Force, Constituted by the Ministry of Commerce and Industry, Government of India, available at https://www.aitf.org.in/

[28] Abbasi, 2018: Ahmed Abbasi, Jingjing Li, Gari Clifford and Herman Taylor, Make “Fairness by Design” Part of Machine Learning, Harvard Business Review, 1 Aug 2018, available at https://hbr.org/2018/08/make-fairness-by-design-part-of-machine-learning

[29] Cavoukian, 2011: Ann Cavoukian, Privacy by Design - The 7 Foundational Principles, 2011, available at https://www.ipc.on.ca/wp-content/uploads/Resources/7foundationalprinciples.pdf

[30] Park et al, 2018: Ji Ho Park, Jamin Shin and Pascale Fung, Reducing Gender Bias in Abusive Language Detection, August 2018, available at https://arxiv.org/abs/1808.07231

[31] Economist, 2018: For artificial intelligence to thrive, it must explain itself, The Economist, 15 Feb 2018, available at https://www.economist.com/science-and-technology/2018/02/15/for-artificial-intelligence-to-thrive-it-must-explain-itself

[32] Weinberger, 2018: David Weinberger, Playing with AI Fairness, People+AI Research Initiative, available at https://pair-code.github.io/what-if-tool/ai-fairness.html

[33] Sczesny et al, 2016: Sabine Sczesny, Magda Formanowicz and Franziska Moser, Can Gender-Fair Language Reduce Gender Stereotyping and Discrimination?, Frontiers in Psychology, 2016; 7: 25, available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4735429/
