
Prompt Engineering by Google - Cheat Sheet - April 2025

The document outlines various prompting techniques for AI models, including zero-shot, few-shot, system, role, contextual, step-back, chain of thought, self-consistency, tree of thoughts, ReAct, and automatic prompt engineering. Each technique is described with examples, best use cases, and specific tasks it is suited for, such as summarizing text, translating languages, or solving mathematical problems. The document serves as a guide for effectively utilizing different prompting strategies to achieve desired outcomes in AI interactions.


Each technique below is listed with the same fields: how to use it, the general pattern, two prompt examples, what it is best for, and three representative use cases.
Zero-shot Prompting

How to use it: Describe the task you want to complete, then supply the single input to operate on.
Pattern: Perform task X on input Y.
Prompt example 1: "Summarize the following news article in one sentence: Input: The city council of Riverton voted 6‑1 on Tuesday to …"
Prompt example 2: "Translate to Spanish: Input: Where is the nearest pharmacy?"
Best for: Simple tasks, direct questions, and common instructions where the model has strong prior knowledge.
Use cases: Classifying movie review sentiment directly; answering a simple factual question ("What is the capital of France?"); summarizing a short, straightforward text.
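
As a minimal sketch of this pattern, the helper below simply concatenates the task and the input; the helper name is illustrative, and `call_model` stands in for whichever LLM client you use (neither appears in the original sheet).

```python
def zero_shot_prompt(task: str, input_text: str) -> str:
    # Zero-shot: a task description plus the single input, with no examples.
    return f"{task}\n\nInput: {input_text}"

prompt = zero_shot_prompt(
    "Summarize the following news article in one sentence:",
    "The city council of Riverton voted 6-1 on Tuesday to ...",
)
# response = call_model(prompt)  # call_model: hypothetical LLM client (str -> str)
print(prompt)
```
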
Few-shot Prompting

How to use it: Describe the task, provide one or more EXAMPLE input → output pairs, and end with the actual input followed by an arrow, signalling the model to continue the pattern.
Pattern: Task X. EXAMPLE: A₁ → B₁. EXAMPLE: A₂ → B₂. Y →
Prompt example 1: "Convert movie titles to emoji. EXAMPLE: 'Jaws' -> 🦈🌊 EXAMPLE: 'Titanic' -> 🚢🧊💔 'The Matrix' -> ?"
Prompt example 2: "Categorize emails as 'Work', 'Personal', or 'Spam'. EXAMPLE: '50% off shoes this weekend only!' -> Spam EXAMPLE: 'Can you send the Q2 budget file?' -> Work 'Grandma's apple pie recipe' ->"
Best for: Guiding output format/structure, tasks needing specific patterns, improving accuracy over zero-shot, and adapting to novel tasks.
Use cases: Parsing pizza orders into a specific JSON format (PDF example); translating sentences in a very specific, less common style demonstrated in the examples; performing a novel classification task based on the provided examples.
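
A small sketch of how the few-shot scaffold might be assembled programmatically; the `few_shot_prompt` helper and its argument names are illustrative, not from the cheat sheet.

```python
def few_shot_prompt(task, examples, new_input):
    # Few-shot: task description, then demonstration pairs, then the new
    # input ending in an arrow so the model continues the pattern.
    lines = [task]
    for source, target in examples:
        lines.append(f'EXAMPLE: "{source}" -> {target}')
    lines.append(f'"{new_input}" ->')
    return "\n".join(lines)

print(few_shot_prompt(
    "Convert movie titles to emoji.",
    [("Jaws", "🦈🌊"), ("Titanic", "🚢🧊💔")],
    "The Matrix",
))
```
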


System Prompting

How to use it: Provide an overall instruction or constraint that governs every reply (often set in the system channel), then follow with the user's actual prompt.
Pattern: System instruction: Always apply rule X. User prompt: Y.
Prompt example 1: System instruction: "You are a meticulous fact‑checker who always cites sources." User prompt: "List three surprising facts about honeybees."
Prompt example 2: System instruction: "Always answer in Shakespearean English." User prompt: "Explain photosynthesis."
Best for: Setting overall model behavior/constraints, defining mandatory output formats (like JSON), and enforcing safety/tone guidelines across interactions.
Use cases: Specifying that output must be returned in uppercase (PDF example); requiring all output to be valid JSON objects following a schema (PDF example); instructing the model to always answer respectfully.
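
In practice the system instruction usually travels as a separate role-tagged message. The sketch below assumes a generic chat-style API; `client.chat` is a hypothetical placeholder for your provider's actual call.

```python
messages = [
    # The system message governs every reply in the conversation.
    {"role": "system",
     "content": "You are a meticulous fact-checker who always cites sources."},
    # The user's actual prompt follows.
    {"role": "user",
     "content": "List three surprising facts about honeybees."},
]
# reply = client.chat(messages)  # client.chat: hypothetical chat-completion call
```
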
Role Prompting

How to use it: Tell the model to act as a specific role or profession, then ask it to perform a task that naturally belongs to that role's context.
Pattern: Act as X. Perform task Y.
Prompt example 1: "Act as a medieval blacksmith. Describe how you would forge a longsword."
Prompt example 2: "Act as a NASA flight director. Walk me through the launch go/no‑go poll."
Best for: Controlling output tone, style, and persona; leveraging role-specific knowledge patterns; framing the interaction.
Use cases: Acting as a travel guide to suggest locations (PDF example); acting as an expert Python programmer to explain complex code; acting as a skeptical historian to analyze a document.
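
The pattern is simple enough to template; `role_prompt` below is an illustrative helper, not something from the source.

```python
def role_prompt(role, task):
    # Role prompting: pin the persona first, then the task that belongs to it.
    return f"Act as {role}. {task}"

print(role_prompt("a NASA flight director",
                  "Walk me through the launch go/no-go poll."))
```
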
Contextual Prompting

How to use it: Supply background information or context, then ask the model to perform a task that depends on that context.
Pattern: Context: X. Task: Y.
Prompt example 1: "Context: You are reviewing a grant proposal that seeks $50 000 to build a community garden. Task: Write a 200‑word critique highlighting strengths and weaknesses."
Prompt example 2: "Context: The user's dietary restrictions: vegan, allergic to almonds. Task: Propose a three‑course dinner menu."
Best for: Providing specific background for a task, tailoring responses to the current situation or conversation, and clarifying nuances based on the provided information.
Use cases: Suggesting blog post topics based on the blog's specific theme (PDF example); answering questions based on a document snippet provided within the prompt; summarizing a meeting based on previously supplied meeting notes.
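
As a sketch, the context/task split maps directly onto a two-field template; the helper name is illustrative.

```python
def contextual_prompt(context, task):
    # Contextual prompting: labelled background first, then the dependent task.
    return f"Context: {context}\n\nTask: {task}"

print(contextual_prompt(
    "The user's dietary restrictions: vegan, allergic to almonds.",
    "Propose a three-course dinner menu.",
))
```
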
Step-back Prompting

How to use it: Ask for a high‑level or abstract answer to a general version of the problem, then immediately use that general answer as context for solving the specific instance.
Pattern: Broadly, what is X? Using X, solve Y.
Prompt example 1: "1. In general, what factors determine whether a coastal city floods during a hurricane? 2. Using those factors, assess the flood risk for Wilmington, NC, given a Category‑3 storm."
Prompt example 2: "1. Broadly, what makes a job offer attractive to software engineers? 2. Apply those criteria to critique this offer from ByteForge Inc."
Best for: Improving reasoning on complex tasks, activating broader knowledge, and reducing bias by focusing on principles before specifics.
Use cases: Generating a game storyline by first asking for key elements of the genre (PDF example); solving a physics problem by first asking for the underlying principles involved; evaluating a complex policy decision by first asking about the general relevant criteria.
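
Step-back is naturally a two-call chain: get the principles, then reuse them. The sketch below assumes a hypothetical `llm` callable (prompt string in, completion string out).

```python
def step_back(llm, general_question, specific_task):
    # Step 1: ask the general, abstract version of the problem.
    principles = llm(general_question)
    # Step 2: feed the general answer back as context for the specific case.
    return llm(f"{principles}\n\nUsing the factors above, {specific_task}")

# answer = step_back(
#     llm,  # hypothetical model client: str -> str
#     "In general, what factors determine whether a coastal city floods during a hurricane?",
#     "assess the flood risk for Wilmington, NC, given a Category-3 storm.",
# )
```
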
Chain of Thought (CoT)

How to use it: Present the problem or question, then add an explicit nudge to reason step by step, or supply few‑shot demonstrations that include full reasoning traces.
Pattern: Problem X. Let's think step by step to reach Y.
Prompt example 1: "If a train leaves Chicago at 60 mph and another leaves St Louis at 45 mph heading toward Chicago on the same track, 300 miles apart, when will they meet? Let's think step by step."
Prompt example 2: A few‑shot math word‑problem prompt that shows worked solutions, then ends with a new problem and "Let's think step by step".
Best for: Arithmetic, commonsense reasoning, and symbolic reasoning tasks where intermediate steps are crucial for accuracy; improving interpretability.
Use cases: Solving math word problems requiring intermediate calculations (PDF example); explaining the logical steps required to reach a specific conclusion; planning a sequence of actions to achieve a goal.
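
The zero-shot CoT variant is a one-line suffix; a minimal sketch (helper name is illustrative):

```python
def cot_prompt(problem):
    # Zero-shot CoT: append the step-by-step nudge to the problem statement.
    return f"{problem}\n\nLet's think step by step."

print(cot_prompt(
    "If a train leaves Chicago at 60 mph and another leaves St Louis at 45 mph "
    "heading toward Chicago on the same track, 300 miles apart, when will they meet?"
))
```
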
Self-consistency

How to use it: Issue a Chain‑of‑Thought prompt, run it several times internally (or via scripting) with higher randomness, then aggregate the diverse answers, choosing the one that appears most often or is best justified.
Pattern: Estimate X. Show your reasoning step by step to derive Y. (Run this prompt multiple times at higher temperature and aggregate the answers.)
Prompt example 1: "Solve the puzzle below. Show your reasoning step by step." (Run 10 times at temperature = 1.2, then majority‑vote the numeric answer.)
Prompt example 2: "Estimate the monthly LinkedIn Ads budget required to generate 500 qualified leads for our B2B SaaS product. Show your reasoning step by step." (Run this prompt 8 times at temperature = 1.0 and report the median budget estimate.)
Note: the aggregation step is usually handled in code rather than in a single written prompt.
Best for: Improving the accuracy and robustness of CoT results, especially for tasks with a single correct answer but multiple possible reasoning paths.
Use cases: Getting a more reliable classification for ambiguous inputs (PDF email example); verifying the result of a multi-step mathematical calculation; increasing confidence in the answer to a complex reasoning question.
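
A sketch of the sample-and-vote loop the note above refers to, assuming a hypothetical `llm` callable already configured for high temperature and an `extract` function that pulls the final answer out of each reasoning trace:

```python
from collections import Counter

def self_consistent(llm, prompt, extract, n=8):
    # Sample n independent chain-of-thought completions at high temperature,
    # pull the final answer out of each, and majority-vote across them.
    answers = [extract(llm(prompt)) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# best = self_consistent(
#     llm,
#     "Solve the puzzle below. Show your reasoning step by step. ...",
#     extract=lambda trace: trace.splitlines()[-1],  # naive: last line holds the answer
# )
```
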
Tree of Thoughts (ToT)

How to use it: Instruct the model to explore multiple reasoning branches, evaluating each intermediate "thought" before deciding to expand or prune it. Often implemented with an external controller loop.
Pattern: Generate several reasoning branches to accomplish X. For each branch, evaluate Y; expand the best branch into a detailed plan.
Prompt example 1: "Generate three distinct high‑level strategies for reducing urban traffic congestion. For each strategy, list pros and cons. After evaluating, choose the most promising strategy and elaborate a detailed 10‑step action plan."
Prompt example 2: "Generate three distinct growth strategies for a bootstrapped e‑commerce brand entering the EU market. For each strategy, list key steps, required resources, risks, and projected ROI. Evaluate all strategies, choose the one with the best ROI‑to‑risk ratio, then provide a detailed 90‑day execution roadmap."
Best for: Complex problem-solving requiring exploration and lookahead; planning; tasks where a single CoT path might be suboptimal.
Use cases: Creative writing tasks exploring different plot developments; solving complex logical puzzles or planning problems (e.g., Game of 24); generating diverse potential solutions to an open-ended design problem.
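
The external controller loop mentioned above might look like this sketch: generate branches, score each, expand only the best. The `llm` callable, the one-strategy-per-line format, and the single-number rating format are all assumptions for illustration.

```python
def tree_of_thoughts(llm, task, n_branches=3):
    # Branch: ask for several distinct high-level strategies, one per line.
    branches = [b for b in llm(
        f"Generate {n_branches} distinct high-level strategies for: {task}\n"
        "Write one strategy per line."
    ).splitlines() if b.strip()]
    # Evaluate: rate each branch before committing to it.
    scored = []
    for branch in branches:
        rating = llm(
            f"On a scale of 1-10, how promising is this strategy for '{task}'? "
            f"Reply with a single number.\nStrategy: {branch}"
        )
        scored.append((int(rating.strip()), branch))  # assumes the model obeys the format
    best = max(scored)[1]
    # Expand: elaborate only the most promising branch into a full plan.
    return llm(f"Elaborate a detailed 10-step action plan for: {best}")
```
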


ReAct (Reason & Act)

How to use it: Alternate between Thought: (reflection) and Action: (a call to a tool, search, calculation, etc.), and continue looping until the task is complete.
Pattern: Thought: I need X to achieve Y. Action: … Observation: … Thought: … Final Answer: …
Prompt example 1:
Thought: I need the current weather in Paris to recommend attire.
Action: weather_api("Paris")
Observation: 18 °C, light rain.
Thought: A light rain jacket is advisable.
Final Answer: Pack a waterproof jacket and an umbrella.
Prompt example 2:
Thought: I need the current AWS price for t3.medium instances in us‑east‑1 to estimate hosting costs.
Action: aws_pricing_api("t3.medium", "us-east-1")
Observation: $0.0416 per hour
Thought: Now calculate the monthly cost at 70 % utilization across 4 instances.
Action: calculator("0.0416*24*30*0.7*4")
Observation: 83.9
Thought: Add a 20 % buffer for bandwidth and storage.
Final Answer: Budget about $100 per month for compute; with bandwidth and storage, plan for roughly $120.
Best for: Tasks requiring external information retrieval, interaction with APIs/tools, and grounding responses in real-time or external data; agent-like behavior.
Use cases: Answering questions needing current information via web search (PDF Metallica example); using a calculator tool for precise mathematical operations within a larger task; interacting with a calendar API to schedule an event from a natural language request.
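
A sketch of the controller loop a ReAct agent runs outside the model: parse each Action, execute the matching tool, append the Observation, repeat. The `llm` callable and the toy `calculator` tool are assumptions, not a real agent framework; the regex only handles single-argument actions.

```python
import re

# Toy tool registry; real deployments would register search, pricing APIs, etc.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # demo only: never eval untrusted input
}

def react(llm, question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)  # model emits Thought:, then Action: or Final Answer:
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = re.search(r'Action:\s*(\w+)\("([^"]+)"\)', step)
        if match:
            name, arg = match.groups()
            transcript += f"Observation: {TOOLS[name](arg)}\n"
    return transcript  # budget exhausted without a final answer; return the trace
```
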
Automatic Prompt Engineering (APE)

How to use it: Ask the model to invent several candidate prompts for a task, evaluate each candidate on sample inputs, then select (or ensemble) the highest‑performing prompt.
Pattern: Design N candidate prompts that perform task X on data Y; return them ranked by expected performance.
Prompt example 1: Meta‑prompt: "You are designing prompts that convert product reviews into concise pros/cons lists. Generate five diverse prompts that could accomplish this." […model returns Prompt A … Prompt E…] Evaluate Prompts A‑E on held‑out reviews, pick the best F1, and deploy.
Prompt example 2: Meta‑prompt: "You are designing prompts that convert raw customer‑support chat logs into a JSON object with 'issue_type', 'priority', and 'next_action'. Generate six diverse candidate prompts suitable for busy SMB support teams."
Best for: Automating prompt discovery/optimization, generating diverse phrasings for training data augmentation, and finding effective instructions for complex tasks.
Use cases: Generating the various ways a user might phrase an e-commerce order (PDF example); creating diverse prompts for fine-tuning a sentiment analysis model; optimizing the instructional prompt for a complex data extraction task.
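
The generate-evaluate-select loop reduces to a few lines once you have a scoring function. Both `llm` and `score_on_holdout` below are assumptions: the latter would run a candidate prompt over labelled held-out data and return a metric such as F1.

```python
def ape(llm, meta_prompt, score):
    # 1. Ask the model to invent candidate prompts (assumed one per line here).
    candidates = [c for c in llm(meta_prompt).splitlines() if c.strip()]
    # 2. Evaluate each candidate on held-out samples; 3. keep the best one.
    return max(candidates, key=score)

# best_prompt = ape(
#     llm,
#     "You are designing prompts that convert product reviews into concise "
#     "pros/cons lists. Generate five diverse prompts, one per line.",
#     score=score_on_holdout,  # hypothetical: candidate prompt -> F1 on held-out reviews
# )
```
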
Code Prompting

How to use it: State the code‑related instruction (write, fix, explain, translate, etc.), provide any relevant snippet or specification, and optionally include constraints (language, style, performance, libraries).
Pattern: Write X code that accomplishes Y.
Prompt example 1: A snippet‑based request, e.g. asking the model to explain or fix:
import requests
def fetch(urls):
    return [requests.get(u).text for u in urls]
or a translation request such as "Translate this Bash one‑liner into Windows PowerShell."
Prompt example 2: "Write a Python function that pulls the last 30 days of Stripe payments using the Stripe API, aggregates revenue by day, and returns a pandas DataFrame ready for plotting."
Best for: Code generation, code explanation, translation between programming languages, debugging errors, and code review.
Use cases: Generating a bash script from requirements (PDF example); explaining what a specific Python function does (PDF example); debugging a Python script and suggesting fixes (PDF example).
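
A sketch of assembling the three ingredients (instruction, snippet, constraints) into one request; the helper is illustrative, and the snippet is labelled rather than fenced to keep the assembled prompt plain text.

```python
def code_prompt(instruction, snippet="", constraints=""):
    # Code prompting: instruction first, then any snippet, then constraints.
    parts = [instruction]
    if snippet:
        parts.append("Snippet:\n" + snippet)
    if constraints:
        parts.append("Constraints: " + constraints)
    return "\n\n".join(parts)

print(code_prompt(
    "Explain what this Python function does and point out any pitfalls.",
    "import requests\ndef fetch(urls):\n    return [requests.get(u).text for u in urls]",
    "Keep the explanation under 100 words.",
))
```
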

---

Feel free to copy these "pattern sheets" into your own playbook, or tweak the example prompts to suit your domain.
