Summary Module 3

Uploaded by

ptkhang17122005

1. Understanding Large Language Models (LLMs)

 What LLMs Do: Large language models (LLMs) are AI systems trained on vast datasets
that include books, articles, and other text sources. The goal of training these models is
for them to understand patterns in human language, making them capable of generating
coherent text when prompted.
 How LLMs Generate Responses: LLMs use statistics to predict the next word in a
sequence. For example, if you input a sentence like "After it rained, the street was...", the
model calculates the probability of various words fitting that sentence. The word "wet" is
likely to be chosen because it makes the most sense based on the training data.
 Why Training Matters: The quality of the model’s responses depends on the diversity
and amount of data used to train it. If it is trained on a wide variety of text, it can
understand many different contexts. However, limitations like biased or insufficient
training data can lead to inaccurate or irrelevant results (Google AI Essentials).
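The next-word prediction described above can be sketched with a toy probability table. The numbers here are invented for illustration; a real model computes such probabilities over its whole vocabulary from the patterns in its training data.

```python
# Toy illustration (invented probabilities): after the prompt
# "After it rained, the street was...", the model assigns each
# candidate next word a probability and picks a likely one.
next_word_probs = {
    "wet": 0.72,
    "slippery": 0.15,
    "flooded": 0.08,
    "dry": 0.05,
}

def predict_next_word(probs):
    """Return the candidate word with the highest probability."""
    return max(probs, key=probs.get)

print(predict_next_word(next_word_probs))  # → wet
```

Real models sample from this distribution rather than always taking the top word, which is why the same prompt can produce different responses.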

2. Prompt Engineering
 What is Prompt Engineering?: Prompt engineering is the practice of crafting questions
or instructions (prompts) that guide the LLM to produce the desired outcome. Just like in
human interactions, how you phrase the question can significantly affect the response.
 Example of a Good Prompt: If you own a clothing store and want marketing ideas, you
wouldn’t just say, "Give me marketing ideas." Instead, a more specific prompt like, "I
own a clothing store that sells high-fashion womenswear. Help me brainstorm marketing
ideas to reach young professionals," will give better results. The AI will have a clearer
idea of what you're asking for (Google AI Essentials).
 Why Iteration is Important: In many cases, the first response may not be perfect. You
might need to refine the prompt based on the initial output. For example, if you ask the
AI for marketing strategies and get vague or unrelated responses, you can add more
context or examples to narrow down the result (Google AI Essentials).
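The vague-versus-specific contrast above can be made concrete with a small helper that assembles a prompt from the pieces a vague request leaves out. The function name and parameters are hypothetical, purely for illustration.

```python
def build_marketing_prompt(business, audience, goal):
    """Assemble a specific prompt from context a vague prompt omits."""
    return (
        f"I own {business}. "
        f"Help me brainstorm {goal} to reach {audience}."
    )

vague = "Give me marketing ideas."
specific = build_marketing_prompt(
    business="a clothing store that sells high-fashion womenswear",
    audience="young professionals",
    goal="marketing ideas",
)
print(specific)
```

The point is not the helper itself but the habit it encodes: state who you are, what you want, and who the output is for.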

3. Few-shot Prompting
 What is Few-shot Prompting?: This technique involves giving the model a few
examples of the kind of response you expect, so it understands the task better. For
example, if you want the AI to generate product descriptions, you might first provide two
or three examples of product descriptions that fit the style you want.
 Benefits and Limits: Few-shot prompting can help the model mimic the structure and
tone you’re looking for. However, giving too many examples might constrain the model’s
creativity or flexibility, as it will try too hard to fit the patterns from the examples,
making it less adaptable to new scenarios (Google AI Essentials).
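A few-shot prompt is ultimately just text: example input/output pairs followed by the new input. A minimal sketch of assembling one (the product names and descriptions are invented):

```python
def few_shot_prompt(examples, new_product):
    """Prepend example product/description pairs so the model can
    infer the expected style before completing the new entry."""
    shots = "\n\n".join(
        f"Product: {name}\nDescription: {desc}" for name, desc in examples
    )
    return f"{shots}\n\nProduct: {new_product}\nDescription:"

examples = [
    ("Linen blazer", "A breathable, tailored layer for warm-weather workdays."),
    ("Silk scarf", "A lightweight accent that dresses up any outfit."),
]
prompt = few_shot_prompt(examples, "Wool overcoat")
print(prompt)
```

Ending the prompt mid-pattern ("Description:") invites the model to continue in the same style as the two examples.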

4. Limitations of LLMs
 Biases in Training Data: Since LLMs are trained on data from various sources, they can
inherit biases from that data. For example, if an LLM is trained on articles that associate
certain professions with particular genders, it might generate text that reflects those
biases. These biases can manifest in subtle ways, such as stereotyping in the output.
 Hallucinations: Sometimes, LLMs generate incorrect or nonsensical information,
referred to as "hallucinations." For example, if you ask the AI to summarize an event and
it doesn't have sufficient data, it might fabricate details that seem plausible but are
entirely false.
 Knowledge Cutoff: LLMs only know information up until the point they were last
trained. This means they might not be aware of recent events or developments, leading to
outdated responses in certain scenarios (Google AI Essentials).

5. Iterative Process in Prompt Engineering


 Why Iteration Matters: Getting good results from an LLM isn’t always immediate. You
often need to go through a cycle of refining the prompt based on the AI’s responses. This
is called the iterative process.
 Example of Iteration: Suppose you want the AI to generate a list of universities offering
animation programs, but the initial response is a long paragraph without structure. You
can revise the prompt to say, “Please present the universities in a table format with
columns for name, location, and tuition fees.” With each iteration, you can make the
request clearer and more specific (Google AI Essentials).
 Learning Through Refinement: Each time you adjust the prompt, you improve the
model's ability to give you what you want. Iteration is particularly useful in creative tasks
like content writing, where feedback and refinement lead to better outcomes (Google AI
Essentials).
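The iterative cycle above can be sketched as a check-and-revise loop. Everything here is hypothetical: `generate()` is a placeholder standing in for a real LLM call, and the refinement check is a toy heuristic.

```python
def generate(prompt):
    # Placeholder standing in for a real LLM call.
    return "University A, University B, University C ..."

def needs_refinement(response):
    """Toy check: treat any response without table markup as unstructured."""
    return "|" not in response

prompt = "List universities offering animation programs."
response = generate(prompt)

if needs_refinement(response):
    # Revise the prompt with the structure we actually want.
    prompt += (" Please present the universities in a table format"
               " with columns for name, location, and tuition fees.")
    response = generate(prompt)
```

In practice the "check" is a human reading the output; the loop captures the habit of folding what was wrong with the last response into the next prompt.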

Key Takeaways:
 LLMs are powerful tools for generating human-like responses, but their effectiveness
depends on how well they are trained and the quality of the input prompt.
 Prompt engineering is an art that involves asking the right questions in the right way to
get the best results.
 Few-shot prompting can be a useful strategy when you need more precise results,
especially for tasks like writing or brainstorming.
 LLMs have limitations such as biases and hallucinations, so human oversight is always
required.
 Iteration is a core part of using LLMs effectively, allowing you to refine the model’s
output over multiple attempts.
