Robotics 2

Ethics and Risks of Developing AI:

If the effects of AI technology are more likely to be negative than positive, then it would be the moral
responsibility of workers in the field to redirect their research.

Many new technologies have had unintended negative side effects: nuclear fission brought Chernobyl
and the threat of global destruction; the internal combustion engine brought air pollution, global
warming, and the paving-over of paradise.

All scientists and engineers face ethical considerations of how they should act on the job, what projects
should or should not be done, and how they should be handled.

AI, however, seems to pose some fresh problems beyond that of, say, building bridges that don’t fall
down:

• People might lose their jobs to automation.

• People might have too much (or too little) leisure time.

• People might lose their sense of being unique.

• AI systems might be used toward undesirable ends.

• The use of AI systems might result in a loss of accountability.

• The success of AI might mean the end of the human race.

People might lose their jobs to automation


The modern industrial economy has become dependent on computers in general, and select AI
programs in particular.

For example, much of the economy, especially in the United States, depends on the availability of
consumer credit. Credit card applications, charge approvals, and fraud detection are now done by AI
programs.

One could say that thousands of workers have been displaced by these AI programs, but in fact if you
took away the AI programs these jobs would not exist, because human labor would add an unacceptable
cost to the transactions.

So far, automation through information technology in general and AI in particular has created more jobs
than it has eliminated, and has created more interesting, higher-paying jobs.

One challenge that has been posed is the creation of human-level AI that could pass the employment test rather than the Turing Test—a robot that could learn to do any one of a range of jobs. We may end up in a future where unemployment is high, but where even the unemployed serve as managers of their own cadre of robot workers.

People might have too much (or too little) leisure time.
Arthur C. Clarke (1968b) wrote that people in 2001 might be “faced with a future of utter boredom,
where the main problem in life is deciding which of several hundred TV channels to select.”

People working in knowledge-intensive industries have found themselves part of an integrated computerized system that operates 24 hours a day.

AI increases the pace of technological innovation and thus contributes to this overall trend, but AI also
holds the promise of allowing us to take some time off and let our automated agents handle things for a
while.

Tim Ferriss (2007) recommends using automation and outsourcing to achieve a four-hour work week.

People might lose their sense of being unique


AI research makes possible the idea that humans are automata—an idea that results in a loss of autonomy or even of humanity.

Humanity has survived other setbacks to our sense of uniqueness.

De Revolutionibus Orbium Coelestium (Copernicus, 1543) moved the Earth away from the center of the
solar system, and Descent of Man (Darwin, 1871) put Homo sapiens at the same level as other species.

AI, if widely successful, may be at least as threatening to the moral assumptions of 21st-century society
as Darwin’s theory of evolution was to those of the 19th century.

AI systems might be used toward undesirable ends


Advanced technologies have often been used by the powerful to suppress their rivals. As G. H. Hardy put it, “a science is said to be useful if its development tends to accentuate the existing inequalities in the distribution of wealth, or more directly promotes the destruction of human life.”

No one would have moral objections to a soldier wanting to wear a helmet when being attacked by large, angry, axe-wielding enemies, and a teleoperated robot is like a very safe form of armor. On the other hand, robotic weapons pose additional risks.

To the extent that human decision making is taken out of the firing loop, robots may end up making
decisions that lead to the killing of innocent civilians.

AI raises the problem of balancing privacy against security, and individual rights against the interests of the community.

The use of AI systems might result in a loss of accountability


Legal liability becomes an important issue.

When a physician relies on the judgment of a medical expert system for a diagnosis, who is at fault if the
diagnosis is wrong?
Similar issues are beginning to arise regarding the use of intelligent agents on the Internet.

Some progress has been made in incorporating constraints into intelligent agents so that they cannot,
for example, damage the files of other users.

To our knowledge, no program has been granted legal status as an individual for the purposes of
financial transactions; at present, it seems unreasonable to do so.

Programs are also not considered to be “drivers” for the purposes of enforcing traffic regulations on real
highways.

As with human reproductive technology, the law has yet to catch up with the new developments.

The success of AI might mean the end of the human race


Almost any technology has the potential to cause harm in the wrong hands, but with AI and robotics, we
have the new problem that the wrong hands might belong to the technology itself.

The question is whether an AI system poses a bigger risk than traditional software. We will look at three
sources of risk.

First, the AI system’s state estimation may be incorrect, causing it to do the wrong thing. For example,
an autonomous car might incorrectly estimate the position of a car in the adjacent lane, leading to an
accident that might kill the occupants.
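
As a purely illustrative sketch (the numbers, threshold, and function names here are invented, not from the text), the following shows how a biased gap estimate can make a sound decision rule approve an unsafe lane change:

```python
import math

def collision_probability(est_gap_m, sigma_m):
    """P(true gap <= 0) if the belief about the gap is Gaussian(est_gap_m, sigma_m)."""
    return 0.5 * (1.0 + math.erf((0.0 - est_gap_m) / (sigma_m * math.sqrt(2.0))))

def safe_to_change_lanes(est_gap_m, sigma_m, risk_threshold=0.01):
    """Act only if the believed collision risk is below the threshold."""
    return collision_probability(est_gap_m, sigma_m) < risk_threshold

true_gap = 1.0         # metres of real clearance to the car in the next lane
biased_estimate = 3.0  # faulty state estimation overstates the clearance
print(safe_to_change_lanes(biased_estimate, sigma_m=0.5))  # True: the agent acts
print(collision_probability(true_gap, sigma_m=0.5))        # ~0.023: the real risk
```

The decision rule itself is reasonable; the harm comes entirely from the incorrect state estimate fed into it.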

Second, specifying the right utility function for an AI system to maximize is not so easy. For example, we
might propose a utility function designed to minimize human suffering, expressed as an additive reward
function over time.
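
To make the difficulty concrete, here is a toy sketch (the states, numbers, and the suffering field are all invented for illustration): an additive reward that only penalizes suffering ranks a trajectory that eliminates the sufferers above one that actually helps them.

```python
def utility(trajectory, reward):
    """Additive utility: U(s0..sT) = sum of per-state rewards."""
    return sum(reward(state) for state in trajectory)

# Naive reward for "minimize human suffering": r(s) = -total_suffering(s).
naive_reward = lambda state: -state["suffering"]

# A caring policy reduces suffering; a degenerate one eliminates the sufferers.
caring     = [{"suffering": 10, "humans": 100},
              {"suffering": 5,  "humans": 100},
              {"suffering": 2,  "humans": 100}]
degenerate = [{"suffering": 10, "humans": 100},
              {"suffering": 0,  "humans": 0},
              {"suffering": 0,  "humans": 0}]

print(utility(caring, naive_reward))      # -17
print(utility(degenerate, naive_reward))  # -10: the mis-specified reward wins
```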

Third, the AI system’s learning function may cause it to evolve into a system with unintended behavior.

Some of the threats are either unlikely or differ little from threats posed by “unintelligent” technologies.
One threat in particular is worthy of further consideration: that ultraintelligent machines might lead to a
future that is very different from today—we may not like it, and at that point we may not have a choice.
Such considerations lead inevitably to the conclusion that we must weigh carefully, and soon, the
possible consequences of AI research.

If robots become conscious, then to treat them as mere “machines” (e.g., to take them apart) might be
immoral. Science fiction writers have addressed the issue of robot rights.

The movie A.I. (Spielberg, 2001) was based on a story by Brian Aldiss about an intelligent robot who was
programmed to believe that he was human and fails to understand his eventual abandonment by his
owner–mother. The story (and the movie) argues for the need for a civil rights movement for robots.

Agent Components:
Here we consider where the state of the art stands for each of the components of an intelligent agent.

Interaction with the environment through sensors and actuators:


Early AI systems were built in such a way that humans had to supply the inputs and interpret the outputs,
while robotic systems focused on low-level tasks in which high-level reasoning and planning were largely
absent.

This was due in part to the great expense and engineering effort required to get real robots to work at
all.

Keeping track of the state of the world:


This is one of the core capabilities required for an intelligent agent. It requires both perception and
updating of internal representations.

It is possible that a new focus on probabilistic rather than logical representation coupled with aggressive
machine learning (rather than hand-encoding of knowledge) will allow for progress.

Projecting, evaluating, and selecting future courses of action:


The basic knowledge representation requirements here are the same as for keeping track of the world;
the primary difficulty is coping with courses of action—such as having a conversation or a cup of tea—
that consist eventually of thousands or millions of primitive steps for a real agent.
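
A rough sketch of how hierarchy tames this (the action names and refinement table are hypothetical): the agent deliberates over a few high-level actions and expands them into primitive steps only when execution demands it.

```python
# Hypothetical refinement table: each high-level action expands into sub-actions.
REFINEMENTS = {
    "have_tea":   ["boil_water", "brew", "drink"],
    "boil_water": ["fill_kettle", "switch_on", "wait"],
    "brew":       ["add_leaves", "pour", "steep"],
}

def expand(action):
    """Recursively refine a high-level action into primitive steps."""
    if action not in REFINEMENTS:          # primitive action: no refinement
        return [action]
    steps = []
    for sub in REFINEMENTS[action]:
        steps.extend(expand(sub))
    return steps

print(expand("have_tea"))
# ['fill_kettle', 'switch_on', 'wait', 'add_leaves', 'pour', 'steep', 'drink']
```

Planning happens over the three actions at the top level; the seven primitive steps appear only at execution time, and a real "cup of tea" would bottom out in vastly more motor primitives.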

Utility as an expression of preferences:


In principle, basing rational decisions on the maximization of expected utility is completely general and
avoids many of the problems of purely goal-based approaches, such as conflicting goals and uncertain
attainment.

One reason may be that preferences over states are really compiled from preferences over state
histories, which are described by reward functions (see Chapter 17). Even if the reward function is
simple, the corresponding utility function may be very complex.
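
As a sketch of the standard formulation being referred to (assuming the discounted MDP setting of Chapter 17), the utility of a state under a policy π compiles the reward function over the state histories that follow:

```latex
% Utility over state histories (discounted MDP sketch): preferences over
% states compile the reward R accumulated along the histories that follow.
U^{\pi}(s) = \mathbb{E}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, R(s_t) \;\middle|\; s_0 = s,\ \pi \right]
```

Even when the per-state reward R is simple, the expectation over all possible histories is what makes the compiled utility function complex.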

Learning:
Learning in an agent can be formulated as inductive learning (supervised, unsupervised, or
reinforcement-based) of the functions that constitute the various components of the agent.

Very powerful logical and statistical techniques have been developed that can cope with quite large
problems, reaching or exceeding human capabilities in many tasks—as long as we are dealing with a
predefined vocabulary of features and concepts.

The vast majority of machine learning research today assumes a factored representation, learning a function h : ℝⁿ → ℝ for regression and h : ℝⁿ → {0, 1} for classification.
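
A minimal sketch of such factored learning (synthetic data and a hand-rolled logistic regression; no particular library's API is implied): each example is a fixed-length feature vector, and the learner fits h : ℝⁿ → {0, 1}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Factored representation: each example is a fixed-length feature vector in R^n.
n, m = 3, 200
X = rng.normal(size=(m, n))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # hidden target concept

# Logistic regression h : R^n -> {0, 1} fit by gradient descent on log loss.
w = np.zeros(n)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted P(y = 1 | x)
    w -= 0.5 * X.T @ (p - y) / m         # gradient step

h = lambda x: int(1.0 / (1.0 + np.exp(-(x @ w))) > 0.5)
print(h(np.array([1.0, -1.0, 0.0])))     # 1: matches the hidden concept
```

Note how everything here presupposes the fixed vocabulary of n features; nothing in this setup can invent a new structured concept, which is exactly the limitation the text goes on to describe.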

Learning researchers will need to adapt their very successful techniques for factored representations to
structured representations, particularly hierarchical representations.

So far, machine learning algorithms are limited in the amount of organized knowledge they can extract from these sources.

Agent Architecture:
Both timely, reflexive action and knowledge-based deliberation are important for an agent.

A complete agent must be able to do both, using a hybrid architecture.

One important property of hybrid architectures is that the boundaries between different decision
components are not fixed.

Agents also need ways to control their own deliberations. They must be able to cease deliberating when
action is demanded, and they must be able to use the time available for deliberation to execute the
most profitable computations.

For example, a taxi-driving agent that sees an accident ahead must decide in a split second either to
brake or to take evasive action. It should also spend that split second thinking about the most important
questions, such as whether the lanes to the left and right are clear and whether there is a large truck
close behind, rather than worrying about wear and tear on the tires or where to pick up the next
passenger.

A technique for controlling deliberation is decision-theoretic metareasoning.

This method applies the theory of information value (Chapter 16) to the selection of individual
computations. The value of a computation depends on both its cost (in terms of delaying action) and its
benefits (in terms of improved decision quality).
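
A toy sketch of the idea (the field names and numbers are invented): run a candidate computation only when its expected gain in decision quality outweighs the cost of the delay, so high urgency prunes low-value deliberation such as planning for the next passenger.

```python
def net_value_of_computation(expected_gain, compute_seconds, urgency_per_s):
    """Value of a computation = expected improvement in decision quality
    minus the cost of the delay it causes (decision-theoretic metareasoning)."""
    return expected_gain - compute_seconds * urgency_per_s

def deliberate(candidates, urgency):
    """Run only computations whose expected benefit exceeds their time cost."""
    return [c for c in candidates
            if net_value_of_computation(c["gain"], c["seconds"], urgency) > 0]

computations = [
    {"name": "check adjacent lanes", "gain": 5.0, "seconds": 0.1},
    {"name": "plan next passenger",  "gain": 0.2, "seconds": 2.0},
]
# At high urgency (the braking decision), only the lane check is worth running.
print([c["name"] for c in deliberate(computations, urgency=10.0)])
```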

Metareasoning is one specific example of a reflective architecture—that is, an architecture that enables
deliberation about the computational entities and actions occurring within the architecture itself. A
theoretical foundation for reflective architectures can be built by defining a joint state space composed
from the environment state and the computational state of the agent itself. Decision-making and
learning algorithms can be designed that operate over this joint state space and thereby serve to
implement and improve the agent’s computational activities.

Are We Going in the Right Direction?


For this we have to consider again what exactly the goal of AI is. We want to build agents, but with what
specification in mind? Here are four possibilities:

Perfect rationality.
A perfectly rational agent acts at every instant in such a way as to maximize its expected utility, given
the information it has acquired from the environment. We have seen that the calculations necessary to
achieve perfect rationality in most environments are too time consuming, so perfect rationality is not a
realistic goal.
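
In standard notation, a sketch of what perfect rationality demands at every instant:

```latex
% Perfect rationality (sketch): at each instant choose the action that
% maximizes expected utility given the percept history so far.
a^{*} = \operatorname*{argmax}_{a \in A} \; \mathbb{E}\big[\, U \mid a,\ \text{percept history} \,\big]
```

It is computing this argmax on demand, in every environment, that is too time consuming.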

Calculative rationality.
This is the notion of rationality that we have used implicitly in designing logical and decision-theoretic
agents, and most of theoretical AI research has focused on this property. A calculatively rational agent
eventually returns what would have been the rational choice at the beginning of its deliberation. This is
an interesting property for a system to exhibit, but in most environments, the right answer at the wrong
time is of no value.
Bounded rationality.
Bounded rationality works primarily by satisficing—that is, deliberating only long enough to come up with an answer that is “good enough.”

It appears to be a useful model of human behavior in many cases.

It is not a formal specification for intelligent agents, however, because the definition of “good enough”
is not given by the theory.

Furthermore, satisficing seems to be just one of a large range of methods used to cope with bounded
resources.

Bounded optimality (BO).


A bounded optimal agent behaves as well as possible given its computational resources. That is, the expected utility of the agent program for a bounded optimal agent is at least as high as the expected utility of any other agent program running on the same machine.
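
A sketch of one common formalization (after Russell and Subramanian's treatment; the notation is illustrative rather than the text's own):

```latex
% Bounded optimality (sketch): l* is the best program among the set L_M of
% agent programs that can run on machine M, judged by expected utility in
% environment class E.
l^{*} = \operatorname*{argmax}_{l \in L_M} \; \mathbb{E}\big[\, U(\mathrm{Agent}(l, M),\, E) \,\big]
```

The maximization is over programs, not actions, and is restricted to programs the machine can actually run, which is why an optimum always exists.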

Of these four possibilities, bounded optimality seems to offer the best hope for a strong theoretical
foundation for AI.

It has the advantage of being possible to achieve: there is always at least one best program—something
that perfect rationality lacks. Bounded optimal agents are actually useful in the real world, whereas
calculatively rational agents usually are not, and satisficing agents might or might not be, depending on
how ambitious they are.

The concept of bounded optimality is proposed as a formal task for AI research that is both well defined and feasible.

Bounded optimality specifies optimal programs rather than optimal actions.

Actions are, after all, generated by programs, and it is over programs that designers have control.

What If AI Does Succeed?


Confusion can be evoked by asking AI researchers, “What if you succeed?”

Those who strive to develop AI have a responsibility to see that the impact of their work is a positive
one.

The scope of the impact will depend on the degree of success of AI.

AI has made possible new applications such as speech recognition systems, inventory control systems,
surveillance systems, robots, and search engines.

Computerized communication networks, such as cell phones and the Internet, have had this kind of pervasive effect on society, but AI has not.

AI has been at work behind the scenes—for example, in automatically approving or denying credit card transactions for every purchase made on the Web—but it has not been visible to the average consumer.

We can imagine that truly useful personal assistants for the office or the home would have a large
positive impact on people’s lives, although they might cause some economic dislocation in the short
term.

Automated assistants for driving could prevent accidents, saving tens of thousands of lives per year.

A technological capability at this level might also be applied to the development of autonomous
weapons, which many view as undesirable.

Some of the biggest societal problems we face today—such as the harnessing of genomic information
for treating disease, the efficient management of energy resources, and the verification of treaties
concerning nuclear weapons—are being addressed with the help of AI technologies.

AI seems to fit in with other revolutionary technologies (printing, plumbing, air travel, telephony) whose
negative repercussions are outweighed by their positive aspects.

In conclusion, we see that AI has made great progress in its short history, but the final sentence of Alan Turing’s (1950) essay “Computing Machinery and Intelligence” is still valid today:

We can see only a short distance ahead, but we can see that much remains to be done.
