Summary Module 3
2. Prompt Engineering
What is Prompt Engineering?: Prompt engineering is the practice of crafting questions
or instructions (prompts) that guide the LLM to produce the desired outcome. Just like in
human interactions, how you phrase the question can significantly affect the response.
Example of a Good Prompt: If you own a clothing store and want marketing ideas, you
wouldn’t just say, "Give me marketing ideas." Instead, a more specific prompt like, "I
own a clothing store that sells high-fashion womenswear. Help me brainstorm marketing
ideas to reach young professionals," will give better results. The AI will have a clearer
idea of what you're asking for (Google AI Essentials).
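The contrast is easier to see side by side. Below is a minimal Python sketch; the generate function is a hypothetical placeholder for whichever LLM interface you use (it is not from Google AI Essentials or any specific library), and the prompts are the ones from the example above.

def generate(prompt: str) -> str:
    # Placeholder: swap in a call to whichever LLM you actually use.
    return "(model output would appear here)"

vague_prompt = "Give me marketing ideas."

specific_prompt = (
    "I own a clothing store that sells high-fashion womenswear. "
    "Help me brainstorm marketing ideas to reach young professionals."
)

# The specific prompt names the business, the product, and the audience,
# so the model has far more context to work from than the vague version.
ideas = generate(specific_prompt)
print(ideas)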
Why Iteration is Important: In many cases, the first response may not be perfect. You
might need to refine the prompt based on the initial output. For example, if you ask the
AI for marketing strategies and get vague or unrelated responses, you can add more
context or examples to narrow down the result (Google AI Essentials).
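As a rough sketch of that refinement loop, reusing the hypothetical generate helper from the sketch above (the added constraints are invented for illustration):

# Reuses the hypothetical generate() helper defined in the earlier sketch.
prompt = (
    "I own a clothing store that sells high-fashion womenswear. "
    "Help me brainstorm marketing ideas to reach young professionals."
)
first_attempt = generate(prompt)

# If the first answer is too vague, fold extra context and constraints
# into the next prompt instead of starting from scratch.
refined_prompt = prompt + (
    " Focus on low-budget social media campaigns, and give three ideas "
    "with a one-sentence rationale for each."
)
second_attempt = generate(refined_prompt)

Each pass keeps what worked in the previous prompt and adds only the missing detail, which is usually faster than rewriting the prompt from nothing.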
3. Few-shot Prompting
What is Few-shot Prompting?: This technique involves giving the model a few
examples of the kind of response you expect, so it understands the task better. For
example, if you want the AI to generate product descriptions, you might first provide two
or three examples of product descriptions that fit the style you want.
Benefits and Limits: Few-shot prompting can help the model mimic the structure and
tone you’re looking for. However, giving too many examples might constrain the model’s
creativity or flexibility: the model may stick too closely to the patterns in the examples,
making it less adaptable to new scenarios (Google AI Essentials).
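In practice, a few-shot prompt is often just the examples concatenated ahead of the new request. The sketch below again assumes the hypothetical generate helper from the earlier sketches; the two product descriptions are invented purely to show the pattern.

# Reuses the hypothetical generate() helper defined in the earlier sketch.
examples = [
    ("Silk midi dress",
     "A fluid silk midi dress that moves from boardroom to dinner without missing a beat."),
    ("Wool blazer",
     "A sharply tailored wool blazer that anchors a workwear wardrobe with quiet confidence."),
]

# Each example shows the model the structure and tone we want; the new
# product is appended with its description left blank for the model to fill in.
shots = "\n\n".join(
    f"Product: {name}\nDescription: {desc}" for name, desc in examples
)
few_shot_prompt = shots + "\n\nProduct: Leather tote bag\nDescription:"

description = generate(few_shot_prompt)

Keeping the example count to two or three, as here, reflects the trade-off noted above: enough examples to pin down structure and tone, but not so many that the model has no room to adapt.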
4. Limitations of LLMs
Biases in Training Data: Since LLMs are trained on data from various sources, they can
inherit biases from that data. For example, if an LLM is trained on articles that associate
certain professions with particular genders, it might generate text that reflects those
biases. These biases can manifest in subtle ways, such as stereotyping in the output.
Hallucinations: Sometimes, LLMs generate incorrect or nonsensical information,
referred to as "hallucinations." For example, if you ask the AI to summarize an event and
it doesn't have sufficient data, it might fabricate details that seem plausible but are
entirely false.
Knowledge Cutoff: LLMs only know information up until the point they were last
trained. This means they might not be aware of recent events or developments, leading to
outdated responses in certain scenarios (Google AI Essentials).
Key Takeaways:
LLMs are powerful tools for generating human-like responses, but their effectiveness
depends on how well they are trained and the quality of the input prompt.
Prompt engineering is an art that involves asking the right questions in the right way to
get the best results.
Few-shot prompting can be a useful strategy when you need more precise results,
especially for tasks like writing or brainstorming.
LLMs have limitations such as biases and hallucinations, so human oversight is always
required.
Iteration is a core part of using LLMs effectively, allowing you to refine the model’s
output over multiple attempts.