
Exploring opportunities in the generative AI value chain

Generative AI is giving rise to an entire ecosystem, from hardware providers to application builders, that will help bring its potential for business to fruition.

This article is a collaborative effort by Tobias Härlin, Gardar Björnsson Rova, Alex Singla, Oleg Sokolov, and Alex Sukharevsky,
representing views from McKinsey Digital.


April 2023
Over the course of 2022 and early 2023, tech innovators unleashed generative AI en masse, dazzling business leaders, investors, and society at large with the technology’s ability to create entirely new and seemingly human-made text and images.

The response was unprecedented. In just five days, one million users flocked to ChatGPT, OpenAI’s generative AI language model that creates original content in response to user prompts. It took Apple more than two months to reach the same level of adoption for its iPhone. Facebook had to wait ten months and Netflix more than three years to build the same user base.

And ChatGPT isn’t alone in the generative AI industry. Stability AI’s Stable Diffusion, which can generate images based on text descriptions, garnered more than 30,000 stars on GitHub within 90 days of its release—eight times faster than any previous package.¹

This flurry of excitement isn’t just organizations kicking the tires. Generative AI use cases are already taking flight across industries. Financial services giant Morgan Stanley is testing the technology to help its financial advisers better leverage insights from the firm’s more than 100,000 research reports.² The government of Iceland has partnered with OpenAI in its efforts to preserve the endangered Icelandic language.³ Salesforce has integrated the technology into its popular customer-relationship-management (CRM) platform.⁴

The breakneck pace at which generative AI technology is evolving and new use cases are coming to market has left investors and business leaders scrambling to understand the generative AI ecosystem. While deep dives into CEO strategy and the potential economic value that the technology could create globally across industries are forthcoming, here we share a look at the generative AI value chain composition. Our aim is to provide a foundational understanding that can serve as a starting point for assessing investment opportunities in this fast-paced space. Our assessments are based on primary and secondary research, including more than 30 interviews with business founders, CEOs, chief scientists, and business leaders working to commercialize the technology; hundreds of market reports and articles; and proprietary McKinsey research data.

A brief explanation of generative AI

To understand the generative AI value chain, it’s helpful to have a basic knowledge of what generative AI is⁵ and how its capabilities differ from the “traditional” AI technologies that companies use to, for example, predict client churn, forecast product demand, and make next-best-product recommendations.

A key difference is its ability to create new content. This content can be delivered in multiple modalities, including text (such as articles or answers to questions), images that look like photos or paintings, videos, and 3-D representations (such as scenes and landscapes for video games).

Even in these early days of the technology’s development, generative AI outputs have been jaw-droppingly impressive, winning digital-art awards and scoring among or close to the top 10 percent of test takers in numerous tests, including the US bar exam for lawyers and the math, reading, and writing portions of the SATs, a college entrance exam used in the United States.⁶

Most generative AI models produce content in one format, but multimodal models that can, for example, create a slide or web page with both text and graphics based on a user prompt are also emerging.

All of this is made possible by training neural networks (a type of deep learning algorithm) on enormous volumes of data and applying “attention mechanisms,” a technique that helps AI models understand what to focus on. With these mechanisms, a generative AI system can identify word patterns, relationships, and the context of a user’s prompt (for instance, understanding that “blue” in the sentence “The cat sat on the mat, which was blue” represents the color of the mat and not of the cat). Traditional AI also might use neural networks and attention mechanisms, but these models aren’t designed to create new content. They can only describe, predict, or prescribe something based on existing content.
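To make the idea of attention less abstract, the short sketch below implements scaled dot-product attention, the core calculation behind these mechanisms, on a handful of toy word vectors. It is an illustration only, not the code of any production model: in a real transformer, the query, key, and value projections are learned from vast training data, which is what allows a trained model to route attention from "blue" to "mat."

```python
# A minimal sketch of the "attention" idea described above, using NumPy only.
# The tokens and vectors are made up for illustration; real systems learn
# their attention behavior from enormous training corpora.
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Return attention-weighted values and the attention weights."""
    d_k = queries.shape[-1]
    # Similarity of each query to every key, scaled for numerical stability.
    scores = queries @ keys.T / np.sqrt(d_k)
    # Softmax turns scores into weights that sum to 1 for each query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ values, weights

# Toy 4-dimensional vectors for the tokens in the example sentence.
rng = np.random.default_rng(0)
tokens = ["the", "cat", "sat", "on", "the", "mat", "which", "was", "blue"]
embeddings = rng.normal(size=(len(tokens), 4))

# In a transformer, queries, keys, and values are learned projections of the
# token embeddings; here we reuse the raw embeddings to keep the sketch short.
output, weights = scaled_dot_product_attention(embeddings, embeddings, embeddings)

# Each row of `weights` shows how strongly one token "attends to" every other.
for token, row in zip(tokens, weights):
    focus = tokens[int(row.argmax())]
    print(f"{token:>6} attends most to {focus:>6} (weight {row.max():.2f})")
```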

¹ Guido Appenzeller, Matt Bornstein, Martin Casado, and Yoko Li, “Art isn’t dead; it’s just machine generated,” Andreessen Horowitz, November 16, 2022.
² Hugh Son, “Morgan Stanley is testing an OpenAI-powered chatbot for its 16,000 financial advisors,” CNBC, March 14, 2023.
³ “Government of Iceland: How Iceland is using GPT-4 to preserve its language,” OpenAI, March 14, 2023.
⁴ “Salesforce announces Einstein GPT, the world’s first generative AI for CRM,” Salesforce, March 7, 2023.
⁵ “What is generative AI?,” McKinsey, January 19, 2023.
⁶ “GPT-4,” OpenAI, March 14, 2023.



The value chain: Six links, but one outshines them all

As the development and deployment of generative AI systems gets under way, a new value chain is emerging to support the training and use of this powerful technology. At a glance, one might think it’s quite similar to a traditional AI value chain. After all, of the six top-level categories—computer hardware, cloud platforms, foundation models, model hubs and machine learning operations (MLOps), applications, and services—only foundation models are a new addition (Exhibit 1).

However, a deeper look reveals some significant differences in market opportunities. To begin with, the underpinnings of generative AI systems are appreciably more complex than most traditional AI systems. Accordingly, the time, cost, and expertise associated with delivering them give rise to significant headwinds for new entrants and small companies across much of the value chain. While pockets of value exist throughout, our research suggests that many areas will continue to be dominated by tech giants and incumbents for the foreseeable future.

Exhibit 1
There are opportunities across the generative AI value chain, but the most significant is building end-user applications.

Generative AI value chain (opportunity size for new entrants in the next 3–5 years, scale of 1–5):

Services: services around specialized knowledge on how to leverage generative AI (eg, training, feedback, and reinforcement learning)

Applications: B2B or B2C products that use foundation models either largely as is or fine-tuned to a particular use case

Model hubs and MLOps: tools to curate, host, fine-tune, or manage the foundation models (eg, storefronts between applications and foundation models)

Foundation models: core models on which generative AI applications can be built

Cloud platforms: platforms to provide access to computer hardware

Computer hardware: accelerator chips optimized for training and running the models

McKinsey & Company



The generative AI application market is the section of the value chain expected to expand most rapidly and offer significant value-creation opportunities to both incumbent tech companies and new market entrants. Companies that use specialized or proprietary data to fine-tune applications can achieve a significant competitive advantage over those that don’t.

Computer hardware

Generative AI systems need knowledge—and lots of it—to create content. OpenAI’s GPT-3, the generative AI model underpinning ChatGPT, for example, was trained on about 45 terabytes of text data (akin to nearly one million feet of bookshelf space).⁷

It’s not something traditional computer hardware can handle. These types of workloads require large clusters of graphics processing units (GPUs) or tensor processing units (TPUs) with specialized “accelerator” chips capable of processing all that data across billions of parameters in parallel.

Once training of this foundational generative AI model is completed, businesses may also use such clusters to customize the models (a process called “tuning”) and run these power-hungry models within their applications. However, compared with the initial training, these latter steps require much less computational power.

While there are a few smaller players in the mix, the design and production of these specialized AI processors is concentrated. NVIDIA and Google dominate the chip design market, and one player, Taiwan Semiconductor Manufacturing Company Limited (TSMC), produces almost all of the accelerator chips. New market entrants face high start-up costs for research and development, and traditional hardware designers must develop the specialized skills, knowledge, and computational capabilities necessary to serve the generative AI market.

Cloud platforms

GPUs and TPUs are expensive and scarce, making it difficult and not cost-effective for most businesses to acquire and maintain this vital hardware platform on-premises. As a result, much of the work to build, tune, and run large AI models occurs in the cloud. This enables companies to easily access computational power and manage their spend as needed.

Unsurprisingly, the major cloud providers have the most comprehensive platforms for running generative AI workloads and preferential access to the hardware and chips. Specialized cloud challengers could gain market share, but not in the near future and not without support from a large enterprise seeking to reduce its dependence on hyperscalers.

Foundation models

At the heart of generative AI are foundation models. These large deep learning models are pretrained to create a particular type of content and can be adapted to support a wide range of tasks. A foundation model is like a Swiss Army knife—it can be used for multiple purposes. Once the foundation model is developed, anyone can build an application on top of it to leverage its content-creation capabilities. Consider OpenAI’s GPT-3 and GPT-4, foundation models that can produce human-quality text. They power dozens of applications, from the much-talked-about chatbot ChatGPT to software-as-a-service (SaaS) content generators Jasper and Copy.ai.

Foundation models are trained on massive data sets. These may include public data scraped from Wikipedia, government sites, social media, and books, as well as private data from large databases. OpenAI, for example, partnered with Shutterstock to train its image model on Shutterstock’s proprietary images.⁸
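To illustrate what building an application on top of a foundation model can look like in practice, here is a minimal sketch that wraps a small, openly downloadable text-generation model from a public model hub in a task-specific function. The Hugging Face transformers library and the GPT-2 model are used purely for illustration; none of the products named in this article are implied to work this way, and a production application would typically call a much larger hosted model through a provider's API.

```python
# A minimal sketch of an "application" layered on top of a foundation model.
# Assumes the Hugging Face `transformers` library is installed; GPT-2 is used
# only because it is small and openly downloadable.
from transformers import pipeline

# Download a pretrained text-generation model from the public model hub.
generator = pipeline("text-generation", model="gpt2")

def draft_marketing_email(product: str, audience: str) -> str:
    """Wrap the raw model in a task-specific prompt, the way an app would."""
    prompt = (
        f"Write a short, friendly marketing email introducing {product} "
        f"to {audience}.\n\n"
    )
    result = generator(prompt, max_new_tokens=80, do_sample=True)
    return result[0]["generated_text"]

if __name__ == "__main__":
    print(draft_marketing_email("a new budgeting app", "college students"))
```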

⁷ “What is generative AI?,” January 19, 2023; and Kindra Cooper, “OpenAI GPT-3: Everything you need to know,” Springboard, November 1, 2021.
⁸ “Shutterstock partners with OpenAI and leads the way to bring AI-generated content to all,” Shutterstock, October 25, 2022.



Developing foundation models requires deep expertise in several areas. These include preparing the data, selecting the model architecture that can create the targeted output, training the model, and then tuning the model to improve output (which entails labeling the quality of the model’s output and feeding it back into the model so it can learn).

Today, training foundation models in particular comes at a steep price, given the repetitive nature of the process and the substantial computational resources required to support it. At the beginning of the training process, the model typically produces random results. To bring its next output more in line with what is expected, the training algorithm adjusts the weights of the underlying neural network. It may need to do this millions of times to reach the desired level of accuracy. Currently, such training efforts can cost millions of dollars and take months. Training OpenAI’s GPT-3, for example, is estimated to have cost $4 million to $12 million.⁹ As a result, the market is currently dominated by a few tech giants and start-ups backed by significant investment (Exhibit 2). However, work is in progress toward smaller models that can deliver effective results for some tasks and toward more efficient training, both of which could eventually open the market to more entrants. Some start-ups have already achieved a measure of success in developing their own models: Cohere, Anthropic, and AI21, among others, build and train their own large language models (LLMs). Additionally, many big companies may want LLMs running within their own environments, for reasons such as greater data security and privacy, and some players (such as Cohere) already offer this kind of service around LLMs.

It’s important to note that many questions have yet to be answered regarding ownership and rights over the data used in the development of this nascent technology—as well as over the outputs produced—which may influence how the technology evolves (see sidebar, “Some of the key issues shaping generative AI’s future”).
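The training loop described above (produce an output, measure how far it falls from what was expected, adjust the weights, repeat) can be shown with a toy example. The sketch below fits a single linear "neuron" using plain NumPy; foundation models run the same basic cycle across billions of parameters and vast text corpora, which is where the months of time and millions of dollars go.

```python
# A toy illustration of the weight-adjustment loop described above.
# Real foundation models repeat the same cycle (predict, measure error,
# nudge weights) at a vastly larger scale.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: the relationship the model should learn is y = 3x + 1.
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 1.0 + rng.normal(scale=0.05, size=200)

# Start from random weights, so early predictions are essentially random.
w, b = rng.normal(), rng.normal()
learning_rate = 0.1

for step in range(1, 501):
    prediction = w * x + b
    error = prediction - y
    loss = float(np.mean(error**2))
    # Gradients tell the training algorithm which way to adjust each weight.
    grad_w = float(np.mean(2 * error * x))
    grad_b = float(np.mean(2 * error))
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b
    if step % 100 == 0:
        print(f"step {step:4d}  loss {loss:.5f}  w {w:.2f}  b {b:.2f}")
```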

Some of the key issues shaping generative AI’s future


Amid the enormous enthusiasm, many questions have emerged surrounding generative AI technology, the answers to which will likely shape
future development and use. Following are three of the most important questions to consider when evaluating how the generative AI
ecosystem will evolve:

— Can copyrighted or personal data be used to train models? When training foundation models, developers typically “scrape” data
from the internet. This can sometimes include copyrighted images, news articles, social media data, personal data protected by
the General Data Protection Regulation (GDPR), and more. Current laws and regulations are ambiguous in terms of the implications
of such practices. Precedents will likely evolve to place limits on scraping proprietary data that may be posted online or enable
data owners to restrict or opt out of search indexes so their data can’t easily be found online. New compensation models for data
owners will also likely emerge.

— Who owns the creative outputs? Current laws and regulations also do not clearly answer who owns the copyright on the final
“output” of a generative AI system. Several potential actors can share or own exclusive rights to the final outputs, such as the data
set owner, model developer, platform owner, prompt creator, or the designer who manually refines and delivers the final generative
AI output.

— How will organizations manage the quality of generative AI outputs? We have already seen numerous examples of systems
providing inaccurate, inflammatory, biased, or plagiarized content. It’s not clear whether models will be able to eliminate such
outputs. Ultimately, all companies developing generative AI applications will need processes for assessing outputs at the use case
level and determining where the potential harm should limit commercialization.

⁹ Kif Leswing and Jonathan Vanian, “ChatGPT and generative AI are booming, but the costs can be extraordinary,” CNBC, March 13, 2023; and Toby McClean, “Machines are learning from each other, but it’s a good thing,” Forbes, February 3, 2021.



Exhibit 2
Examples of generative AI models from some of the early providers show there are many options available for each modality, several of which are open source.

Models range from closed source¹ to closed source but available through APIs² to open source.³ Modalities covered: text; image; audio or music; 3-D; video; and protein structures or DNA sequences.

Microsoft: VALL-E (audio or music), RODIN Diffusion (3-D), GODIVA (video), MoLeR (protein structures or DNA sequences)
OpenAI⁴: GPT-4 (text), DALL-E 2 (image), Jukebox (audio or music), Point-E (3-D)
Meta: LLaMA (text), Make-a-scene (image), AudioGen (audio or music), Builder Bot (3-D), Make-a-video (video), ESMFold (protein structures or DNA sequences)
Google/DeepMind: LaMDA (text), Imagen (image), MusicLM (audio or music), DreamFusion (3-D), Imagen Video (video), AlphaFold2 (protein structures or DNA sequences)
Stability AI: StableLM (text), Stable Diffusion 2 (image), Dance Diffusion (audio or music), LibreFold (protein structures or DNA sequences)
Amazon: Lex (text), DeepComposer (audio or music)
Apple: GAUDI (3-D)
NVIDIA: MT-NLG (text), Edify (image, 3-D, and video), MegaMolBART (protein structures or DNA sequences)
Cohere: family of LLMs (text)
Anthropic: Claude (text)
AI21: Jurassic-2 (text)

Note: The list of products is provided for informational purposes only and does not reflect an endorsement from McKinsey & Company.
¹ “Closed source” defined as: model not publicly available; access is typically granted through a strict process, and usage may be governed by an NDA or other contract.
² “Closed source, available through APIs” defined as: the source code of the model is not available to the public, but the model is often accessible via API, where usage is typically governed by licensing agreements.
³ “Open source” defined as: the code of the models is available to the public and can be either freely used, distributed, and modified by anyone or restricted to noncommercial use.
⁴ OpenAI is backed by significant Microsoft investments.

McKinsey & Company



Model hubs and MLOps

To build applications on top of foundation models, businesses need two things. The first is a place to store and access the foundation model. Second, they may need specialized MLOps tooling, technologies, and practices for adapting a foundation model and deploying it within their end-user applications. This includes, for example, capabilities to incorporate and label additional training data or to build the APIs that allow applications to interact with the model.

Model hubs provide these services. For closed-source models, in which the source code is not made available to the public, the developer of the foundation model typically serves as a model hub. It will offer access to the model via an API through a licensing agreement. Sometimes the provider will also deliver MLOps capabilities so the model can be tuned and deployed in different applications.

For open-source models, which provide code that anyone can freely use and modify, independent model hubs are emerging to offer a spectrum of services. Some may act only as model aggregators, providing AI teams with access to different foundation models, including those customized by other developers. AI teams can then download the models to their servers and fine-tune and deploy them within their application. Others, such as Hugging Face and Amazon Web Services, may provide access to models and end-to-end MLOps capabilities, including the expertise to tune the foundation model with proprietary data and deploy it within their applications. This latter model fills a growing gap for companies eager to leverage generative AI technology but lacking the in-house talent and infrastructure to do so.

Applications

While one foundation model is capable of performing a wide variety of tasks, the applications built on top of it are what enable a specific task to be completed—for example, helping a business’s customers with service issues or drafting marketing emails (Exhibit 3). These applications may be developed by a new market entrant seeking to deliver a novel offering, an existing solution provider working to add innovative capabilities to its current offerings, or a business looking to build a competitive advantage in its industry.

There are many ways that application providers can create value. At least in the near term, we see one category of applications offering the greatest potential for value creation. And we expect applications developed for certain industries and functions to provide more value in the early days of generative AI.

Applications built from fine-tuned models stand out

Broadly, we find that generative AI applications fall into one of two categories. The first represents instances in which companies use foundation models largely as is within the applications they build—with some customizations. These could include creating a tailored user interface or adding guidance and a search index for documents that help the models better understand common customer prompts so they can return a high-quality output.

The second category represents the most attractive part of the value chain: applications that leverage fine-tuned foundation models—those that have been fed additional relevant data or had their parameters adjusted—to deliver outputs for a particular use case. While training foundation models requires massive amounts of data, is extremely expensive, and can take months, fine-tuning foundation models requires less data, costs less, and can be completed in days, putting it within reach of many companies.

Application builders may amass this data from in-depth knowledge of an industry or customer needs. For example, consider Harvey, the generative AI application created to answer legal questions. Harvey’s developers fed legal data sets into OpenAI’s GPT-3 and tested different prompts to enable the tuned model to generate legal documents that were far better than those that the original foundation model could create.
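The Harvey example follows a pattern many application builders can reproduce: take an existing foundation model and continue training it on domain-specific examples. The sketch below shows roughly what that workflow looks like with an open-source model and the Hugging Face transformers and datasets libraries. The base model and the file legal_examples.jsonl are illustrative placeholders, not a description of how Harvey or any other product was actually built, and real projects would add evaluation, data cleaning, and often parameter-efficient techniques such as LoRA to keep costs down.

```python
# A rough sketch of fine-tuning an open-source foundation model on domain data.
# Assumes the Hugging Face `transformers` and `datasets` libraries; the model
# name and the file "legal_examples.jsonl" (one {"text": ...} record per line)
# are illustrative placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "gpt2"  # stand-in for a larger foundation model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Domain-specific examples gathered by the application builder.
dataset = load_dataset("json", data_files="legal_examples.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="fine_tuned_model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    # Causal language modeling: the model learns to predict the next token.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
trainer.save_model("fine_tuned_model")
```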



Exhibit 3
There are many applications of generative AI across modalities.

Text
• Content writing: marketing (creating personalized emails and posts); talent (drafting interview questions and job descriptions)
• Chatbots or assistants: customer service (using chatbots to boost conversion on websites)
• Search: making web search more natural; corporate knowledge (enhancing internal search tools)
• Analysis and synthesis: sales (analyzing customer interactions to extract insights); risk and legal (summarizing regulatory documents)

Code
• Code generation: IT (accelerating application development and quality with automatic code recommendations)
• Application prototype and design: IT (quickly generating user interface designs)
• Data set generation: generating synthetic data sets to improve the quality of AI models

Image
• Stock image generator: marketing and sales (generating unique media)
• Image editor: marketing and sales (personalizing content quickly)

Audio
• Text-to-voice generation: training (creating educational voiceovers)
• Sound creation: entertainment (making custom sounds without copyright violations)
• Audio editing: entertainment (editing podcasts in post without having to rerecord)

3-D or other
• 3-D object generation: video games (writing scenes and characters); digital representation (creating interior-design mockups and virtual staging for architecture design)
• Product design and discovery: manufacturing (optimizing material design); drug discovery (accelerating the R&D process)

Video
• Video creation: entertainment (generating short-form videos for TikTok); training or learning (creating video lessons or corporate presentations using AI avatars)
• Video editing: entertainment (shortening videos for social media; removing background images and background noise in post); e-commerce (adding personalization to generic videos)
• Voice translation and adjustments: video dubbing (translating into new languages using AI-generated or original-speaker voices); live translation (for corporate meetings and video conferencing); voice cloning (replicating an actor’s voice or changing it for studio effects such as aging)
• Face swaps and adjustments: virtual effects (enabling rapid high-end aging and de-aging as well as cosmetic, wig, and prosthetic fixes); lip syncing or “visual” dubbing in post-production (editing footage to achieve release in multiple ratings or languages); face swapping and deep-fake visual effects; video conferencing (real-time gaze correction)

Note: This list is not exhaustive.

McKinsey & Company

Organizations could also leverage proprietary data from daily business operations. A software developer that has tuned a generative AI chatbot specifically for banks, for instance, might partner with its customers to incorporate data from call-center chats, enabling them to continually elevate the customer experience as their user base grows.

Finally, companies may create proprietary data from feedback loops driven by an end-user rating system, such as a star rating system or a thumbs-up, thumbs-down rating system. OpenAI, for instance, uses the latter approach to continuously train ChatGPT, and it reports that this helps to improve the underlying model. As customers rank the quality of the output they receive, that information is fed back into the model, giving it more “data” to draw from when creating a new output—which improves its subsequent responses. As the outputs improve, more customers are drawn to use the application and provide more feedback, creating a virtuous cycle of improvement that can result in a significant competitive advantage. (A simple sketch of such a feedback loop appears at the end of this section.)

In all cases, application developers will need to keep an eye on generative AI advances. The technology is moving at a rapid pace, and tech giants continue to roll out new versions of foundation models with even greater capabilities. OpenAI, for instance, reports that its recently introduced GPT-4 offers “broader general knowledge and problem-solving abilities” for greater accuracy. Developers must be prepared to assess the costs and benefits of leveraging these advances within their applications.

Pinpointing the first wave of application impact by function and industry

While generative AI will likely affect most business functions over the longer term, our research suggests that information technology, marketing and sales, customer service, and product development are most ripe for the first wave of applications.

— Information technology. Generative AI can help teams write code and documentation. Automated coders already on the market have improved developer productivity by more than 50 percent, helping to accelerate software development.¹⁰

— Marketing and sales. Teams can use generative AI applications to create content for customer outreach. Within two years, 30 percent of all outbound marketing messages are expected to be developed with the assistance of generative AI systems.¹¹

— Customer service. Natural-sounding, personalized chatbots and virtual assistants can handle customer inquiries, recommend swift resolutions, and guide customers to the information they need. Companies such as Salesforce, Dialpad, and Ada have already announced offerings in this area.

— Product development. Companies can use generative AI to rapidly prototype product designs. Life sciences companies, for instance, have already started to explore the use of generative AI to help generate sequences of amino acids and DNA nucleotides to shorten the drug design phase from months to weeks.¹²

In the near term, some industries can leverage these applications to greater effect than others. The media and entertainment industry can become more efficient by using generative AI to produce unique content (for example, localizing movies without the need for hours of human translation) and to rapidly develop ideas for new content and visual effects for video games, music, movie story lines, and news articles. Banking, consumer, telecommunications, life sciences, and technology companies are expected to experience outsize operational efficiencies given their considerable investments in IT, customer service, marketing and sales, and product development.
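The feedback loop described earlier is, at its core, a data-collection exercise: capture the prompt, the model's output, and the user's rating in a form that can later feed evaluation or further tuning. The sketch below shows one minimal way to do that; the file format and field names are illustrative choices, not a description of how any particular provider implements its feedback system.

```python
# A minimal sketch of capturing thumbs-up/thumbs-down feedback on model
# outputs so it can later be used for evaluation or further tuning.
# The JSONL file and field names are illustrative, not a standard.
import json
import time
from pathlib import Path

FEEDBACK_LOG = Path("feedback_log.jsonl")

def record_feedback(prompt: str, model_output: str, thumbs_up: bool) -> None:
    """Append one labeled example to the feedback log."""
    record = {
        "timestamp": time.time(),
        "prompt": prompt,
        "output": model_output,
        "rating": 1 if thumbs_up else 0,
    }
    with FEEDBACK_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def load_preferred_examples() -> list[dict]:
    """Return only the outputs users liked, e.g. to build a tuning data set."""
    if not FEEDBACK_LOG.exists():
        return []
    with FEEDBACK_LOG.open(encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    return [r for r in records if r["rating"] == 1]

if __name__ == "__main__":
    record_feedback("Summarize our refund policy.", "Refunds are issued within 14 days.", thumbs_up=True)
    record_feedback("Summarize our refund policy.", "I cannot help with that.", thumbs_up=False)
    print(f"{len(load_preferred_examples())} preferred examples collected so far")
```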

¹⁰ GitHub Product Blog, “Research: Quantifying GitHub Copilot’s impact on developer productivity and happiness,” blog entry by Eirini Kalliamvakou, September 7, 2022.
¹¹ Jackie Wiles, “Beyond ChatGPT: The future of generative AI for enterprises,” Gartner, January 26, 2023.
¹² NVIDIA Developer Technical Blog, “Build generative AI pipelines for drug discovery with NVIDIA BioNeMo Service,” blog entry by Vanessa Braunstein, March 21, 2023; and Alex Ouyang and Abdul Latif Jameel, “Speeding up drug discovery with diffusion generative models,” MIT News, March 31, 2023.



Services

As with AI in general, dedicated generative AI services will certainly emerge to help companies fill capability gaps as they race to build out their experience and navigate the business opportunities and technical complexities. Existing AI service providers are expected to evolve their capabilities to serve the generative AI market. Niche players may also enter the market with specialized knowledge for applying generative AI within a specific function (such as how to apply generative AI to customer service workflows), industry (for instance, guiding pharmaceutical companies on the use of generative AI for drug discovery), or capability (such as how to build effective feedback loops in different contexts).

While generative AI technology and its supporting ecosystem are still evolving, it is already quite clear that applications offer the most significant value-creation opportunities. Those who can harness niche—or, even better, proprietary—data in fine-tuning foundation models for their applications can expect to achieve the greatest differentiation and competitive advantage. The race has already begun, as evidenced by the steady stream of announcements from software providers—both existing and new market entrants—bringing new solutions to market. In the weeks and months ahead, we will further illuminate value-creation prospects in particular industries and functions as well as the impact generative AI could have on the global economy and the future of work.

Tobias Härlin and Gardar Björnsson Rova are partners in McKinsey’s Stockholm office, where Oleg Sokolov is an associate
partner; Alex Singla is a senior partner in the Chicago office; and Alex Sukharevsky is a senior partner in the London office.

Copyright © 2023 McKinsey & Company. All rights reserved.

