AI Lecture

The document discusses various issues related to artificial intelligence (AI), including OpenAI's financial challenges, energy efficiency concerns, and the implications of AI bias and job displacement. It highlights ongoing litigation, particularly the New York Times case against Microsoft/OpenAI regarding copyright infringement, and emphasizes the importance of ethical considerations in AI development. The document also touches on the potential for new job roles in prompt engineering and the slowing rate of AI model improvements.


MIST 2090: Introduction to MIS
Andrew Miller, PhD
Agenda
• Return to OpenAI
• Some well-trodden issues in the discourse
 – Energy
 – Bias
 – Jobs
 – Liability
 – Scalability/Slowdown
 – Prompt Engineering
• Active litigation in this area
• Ethical scenarios

OpenAI (review from Tech Entrepreneurship session)
• In 2024, OpenAI is projected to lose as much as $5b (a rough reconciliation of these figures is sketched below)
 – Workforce costs about $1.5b/year (on average, employees are paid roughly $1m/year in compensation)
 – Algorithm training costs could be as high as $3b
 – Microsoft Azure servers to power ChatGPT cost ~$4b, despite preferential pricing
 – OpenAI receives about $4b in revenue from customers
• The firm needs at least $5b (and perhaps closer to $10b) in new capital to survive another year.
• Thrive Capital (VC) is leading a $6.5–7 billion round of funding for the company
 – Participation from other Silicon Valley investors as well as Apple and Nvidia
 – "Equity" is a little wishy-washy here because of OpenAI's governance structure
• OpenAI is also trying to raise $5b in debt from banks
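A rough back-of-the-envelope reconciliation of the figures on this slide (all values are the approximate press estimates quoted above, in billions of dollars, not audited numbers):

```python
# Back-of-the-envelope reconciliation of the 2024 figures quoted above
# (approximate press estimates, in billions of USD; not audited numbers).

workforce_cost = 1.5    # ~$1.5b/year in compensation
training_cost = 3.0     # up to ~$3b for algorithm training
compute_cost = 4.0      # ~$4b for Microsoft Azure capacity, even after discounts
revenue = 4.0           # ~$4b in revenue from customers

total_cost = workforce_cost + training_cost + compute_cost
net_loss = total_cost - revenue

print(f"Estimated 2024 costs: ${total_cost:.1f}b")   # ~$8.5b
print(f"Estimated 2024 loss:  ${net_loss:.1f}b")     # ~$4.5b, i.e. "as much as $5b"
```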


OpenAI (update)
On October 2nd, OpenAI closed a $6.6b round of funding at a $157b valuation
• Thrive Capital led the round, pledging $1b
• All funds are contingent on OpenAI becoming a "for-profit" company within two years
• The funding values OpenAI at 40x revenue, an unprecedented multiple (the arithmetic is sketched below)
• Funds will go toward training bigger and better models (the CEO of a rival AI firm has been quoted as saying that it costs about $1b to train a current model, and $100b models aren't far behind)
 – But also, on November 6th, OpenAI spent an undisclosed amount (>$15.5m) to purchase the chat.com domain
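The 40x multiple follows directly from the deck's own round numbers; a quick check:

```python
# The "40x revenue" multiple follows from the slide's own round figures.
valuation = 157e9   # $157b post-money valuation (October 2024 round)
revenue = 4e9       # ~$4b in revenue, quoted earlier in the deck

print(f"Valuation / revenue ≈ {valuation / revenue:.0f}x")  # ≈ 39x, i.e. roughly 40x
```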

Energy Efficiency
• AI models are energy-expensive for both training and querying (the rough arithmetic behind these comparisons is sketched after this slide)
 – Training AI models is particularly expensive: GPT-3 consumed about as much power as 130 households use annually (and newer models are more expensive)
 – It is very difficult to quantify "true" energy costs, especially for FAANG companies
• Search engines are including AI results at the top of search pages (AI results cost ~10x the energy of a normal search)
• Google's carbon emissions have increased 48% in the last five years; Microsoft's around 30%
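The rough arithmetic behind the two comparisons above, using commonly cited public estimates (the specific figures are assumptions drawn from press reporting, not numbers given in this deck):

```python
# Rough arithmetic behind the two comparisons on this slide, using
# commonly cited public estimates (assumptions, not figures from the deck).

gpt3_training_mwh = 1_287          # widely cited estimate for GPT-3 training energy
household_mwh_per_year = 10.5      # approximate average U.S. household consumption

print(f"GPT-3 training ≈ {gpt3_training_mwh / household_mwh_per_year:.0f} "
      "households' annual electricity use")   # ≈ 123, i.e. "about 130 households"

ai_answer_wh = 3.0        # commonly cited estimate per LLM-generated answer
plain_search_wh = 0.3     # commonly cited estimate per traditional web search

print(f"AI answer ≈ {ai_answer_wh / plain_search_wh:.0f}x the energy "
      "of a normal search")                   # ≈ 10x
```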

Energy Efficiency
• New data center construction for AI can threaten power grid reliability and increase energy prices for households.
 – Amazon is trying to build a data center in a joint proposal with an existing nuclear power plant in Pennsylvania
 – Amazon also took an ownership stake in nuclear reactor company X-energy
 – Microsoft and Google are also interested in nuclear deals
   – Microsoft and Constellation have a joint agreement to restart Three Mile Island
   – Google signed the "world's first corporate agreement" to purchase nuclear energy from Kairos Power
 – FERC (Federal Energy Regulatory Commission) is prohibiting Amazon's construction on national security grounds [Nov. 4th, 2024]

Bias and Fairness
• AI bias refers to the occurrence of biased results due to human biases that skew the original training data or the AI algorithm, leading to distorted outputs and potentially harmful outcomes (see the sketch after this slide)
• AI ethics discourse primarily revolves around hiring, healthcare, facial recognition, criminal justice, etc., but bias can be an issue in many arenas
• In addition to the fairness implications, using biased artificial intelligence can result in reputational damage and legal liability
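In practice, biased outputs often surface as different selection rates across groups. Below is a minimal, hypothetical sketch of one common check, the "four-fifths rule" used in U.S. employment contexts; the counts are invented purely for illustration and are not from any real system.

```python
# Minimal sketch of a disparate-impact check on a model's hiring recommendations.
# The counts below are hypothetical, purely for illustration.

decisions = {
    # group: (number recommended by the model, number of applicants)
    "group_a": (45, 100),
    "group_b": (27, 100),
}

rates = {g: hired / applicants for g, (hired, applicants) in decisions.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    # The "four-fifths rule": a selection rate under 80% of the highest
    # group's rate is a conventional red flag for disparate impact.
    flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```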

Job Displacement
• Early predictions about the role of AI in employment suggested a bifurcation of jobs: workers who tell AI what to do and workers who are told what to do by AI.
 – Unlike earlier waves of industrial automation and outsourcing, the potential employment threat here extends to "white collar" workers
• Recent studies suggest that AI-related job displacement is likely to be fairly gradual, not immediate.
 – AI can automate certain types of tasks and boost productivity, but we are far away from full automation.
• Still, my own opinion is that it's useful to take a long view of potential outcomes and be deliberate about building skills that will support a working career that lasts.
Prompt Engineering
• Some futurists/technologists believe that prompt engineering for generative AI systems will eventually become a viable career opportunity (a sketch of what the work looks like appears after this slide)
 – Bloomberg has reported on the job title "AI Whisperer," with a salary of $355,000 annually (Anthropic)
• Predictions made by AI/marketing experts at the Technology Association of Georgia meeting last month:
 – More visible job openings will emerge
 – Schools will teach prompt engineering as a language (!!!)
• The reality is that prompt engineering will likely become a role or responsibility within existing jobs, but my editorial opinion is that it is not likely to be a viable standalone career.
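For context on what the work involves, here is a minimal sketch of the kind of artifact a prompt engineer typically maintains: a reusable, versioned prompt template rather than ad hoc chat messages. The template wording and field names are hypothetical.

```python
# Minimal sketch of a reusable prompt template -- the kind of artifact a
# "prompt engineer" maintains. Template wording and fields are hypothetical.

SUMMARY_PROMPT_V2 = (
    "You are a careful analyst. Summarize the following customer review in "
    "exactly three bullet points, then label the overall sentiment as "
    "positive, negative, or mixed.\n\n"
    "Review:\n{review_text}\n"
)

def build_prompt(review_text: str) -> str:
    """Fill the template; in practice this string is sent to an LLM API."""
    return SUMMARY_PROMPT_V2.format(review_text=review_text.strip())

print(build_prompt("Shipping was slow, but the product itself works great."))
```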

Scalability/Slowdown
• The rate of improvement of AI models is slowing down (new reporting in this area within the last week!)
 – Reporting focuses on OpenAI and Google
• This is a big deal because so much of the hype (and valuation) around AI is based on things it will be able to accomplish, not things it can currently accomplish
 – Companies are investing heavily in adopting AI now, but the products may not be ready to go live for 5-10 years (or more)
• AI companies are blaming a lack of sufficient training data for this problem
• There is also the potential that models will get even worse over time as more "synthetic" AI-generated content sneaks into new AI training datasets
Active Litigation Example
NYT vs. Microsoft/OpenAI
• NYT's case centers on the use of copyrighted works in the development of generative AI tools
 – The training set for advanced GPT models can run to trillions of words, the equivalent of a Microsoft Word document 3.7 billion pages long!
 – NYT alleges that this dataset contains a mass of Times copyrighted content
• The case is amplified by the fact that GPT also "memorizes" NYT articles and can reproduce them nearly verbatim, allowing users to use GPT to circumvent the NYT paywall (a sketch of how such overlap can be measured appears below)
• NYT is asking for the destruction of any model that uses its data in a training set
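The memorization claim is empirically testable: prompt the model with the start of an article and measure how much of its continuation matches the original text verbatim. A minimal sketch of that overlap measurement follows; the strings are placeholders, not actual Times text or real model output.

```python
# Minimal sketch of measuring near-verbatim overlap between a model's output
# and a source article, via the longest matching block. Strings are
# placeholders, not actual NYT text or real model output.
from difflib import SequenceMatcher

article_text = "The city council voted on Tuesday to approve the new budget..."
model_output = "The city council voted on Tuesday to approve the new budget plan."

match = SequenceMatcher(None, article_text, model_output).find_longest_match(
    0, len(article_text), 0, len(model_output)
)
overlap_fraction = match.size / len(article_text)

print(f"Longest verbatim run: {match.size} characters "
      f"({overlap_fraction:.0%} of the article excerpt)")
```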

Ethics Considerations (KPMG)
• Fairness: AI-enabled processes are free of bias and do not reinforce existing inequalities
• Accountability: Users, not just the technology, are responsible for the outcomes
• Sustainability: Consider the long-term impacts of AI technologies on individuals, society, and the environment
• Transparency: Make sure that users of AI-enabled solutions know how decisions are made and can trust the outcomes
 – This must be balanced against the need to protect trade secrets and to avoid the risk of users manipulating the model

AI Bill of Rights (2022 White House release)
• AI systems should be developed with input from the communities they impact to enhance both safety and effectiveness
• Companies should avoid AI discrimination and ensure that AI systems treat everyone fairly
• Companies should strive to ensure privacy: give individuals greater control over their information and get consent before engaging in data collection
• People should be notified when AI is involved in decisions that affect them, and they should be provided with explanations of the role of AI
• Human alternatives should exist to allow people to opt out of AI systems in favor of human evaluation where possible

Ethics Scenario
• From 2016 to 2022, a major social media company allowed advertisers to target (and exclude) audiences based on interests and attributes that are highly correlated with protected characteristics. Landlords used this tool when advertising their housing stock
 – The Fair Housing Act prohibits direct providers of housing (e.g., landlords and real estate companies) from discriminating on the basis of protected classes such as race, sex, religion, disability status, etc.
• For example, advertisers could prevent their ads from being seen by users with an interest in wheelchair ramps, or by users with Spanish-dominant language settings that placed them into a "Hispanic affinity group" category on the platform.
 – At no point did the platform allow ads with explicit discriminatory intent
• Now the company classifies any ad for housing as a "special ad category" and adds additional targeting restrictions to those ads, ideally preventing discriminatory behavior (a sketch of such a restriction check appears below)
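A minimal sketch of how a "special ad category" restriction might be enforced before an ad runs; the category names and targeting options below are hypothetical and are not the platform's actual API.

```python
# Minimal sketch of enforcing a "special ad category" targeting restriction.
# Category names and targeting options are hypothetical, not a real platform API.

RESTRICTED_CATEGORIES = {"housing", "employment", "credit"}
BLOCKED_TARGETING_KEYS = {"age", "gender", "zip_code", "multicultural_affinity",
                          "language", "disability_interest"}

def validate_ad(category: str, targeting: dict) -> list[str]:
    """Return the targeting options that must be removed before the ad can run."""
    if category not in RESTRICTED_CATEGORIES:
        return []
    return sorted(set(targeting) & BLOCKED_TARGETING_KEYS)

violations = validate_ad("housing", {"interests": ["apartments"],
                                     "zip_code": "30602",
                                     "language": "es"})
print("Remove before publishing:", violations)  # ['language', 'zip_code']
```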

Considerations
• Effects on society
• Company data
