
The Prompt Canvas: A Literature-Based Practitioner Guide for Creating Effective Prompts in Large Language Models

Michael Hewing and Vincent Leinhos


FH Münster – University of Applied Sciences
arXiv:2412.05127v1 [cs.AI] 6 Dec 2024

December 6, 2024

Abstract
The rise of large language models (LLMs) has highlighted the importance of prompt engineering as
a crucial technique for optimizing model outputs. While experimentation with various prompting
methods, such as Few-shot, Chain-of-Thought, and role-based techniques, has yielded promising
results, these advancements remain fragmented across academic papers, blog posts and anecdotal
experimentation. The lack of a single, unified resource to consolidate the field’s knowledge impedes
the progress of both research and practical application. This paper argues for the creation of
an overarching framework that synthesizes existing methodologies into a cohesive overview for
practitioners. Using a design-based research approach, we present the Prompt Canvas (Figure 1),
a structured framework resulting from an extensive literature review on prompt engineering that
captures current knowledge and expertise. By combining the conceptual foundations and practical
strategies identified in prompt engineering, the Prompt Canvas provides a practical approach for
leveraging the potential of Large Language Models. It is primarily designed as a learning resource
for pupils, students and employees, offering a structured introduction to prompt engineering. This
work aims to contribute to the growing discourse on prompt engineering by establishing a unified
methodology for researchers and providing guidance for practitioners.

The Prompt Canvas

The Prompt Canvas is designed as a learning resource for you and your team, providing a structured approach to Prompt Engineering for Large Language Models like ChatGPT. Header fields: Prompt Name, Date, Owner.

Persona/Role: Reflect on which roles and personas are relevant for your organization. Ask the model to adopt a persona. Integrate company values and culture into the persona description. Example: You are a skilled summarizer and editor. Your role is to distill complex information into clear and concise summaries while ensuring the text is polished and engaging.

Task and Intent: Describe the task precisely, starting with action verbs. Also describe the intent and objective you are pursuing. Example: Summarize the key points of the attached document, focusing on the main arguments and supporting evidence. The goal is to provide a concise and accurate article that captures the essence of the document, making it easy for readers to understand the core message quickly.

Context: Give detailed information about the current situation, including background details; the model knows almost everything, but best what you tell the AI. Example: You are writing for Art Horizon, a cutting-edge online art magazine dedicated to exploring emerging trends, groundbreaking movements, and cultural phenomena in the global art world. Known for its vibrant design and engaging storytelling, Art Horizon appeals to a diverse audience of young creatives, collectors, and art enthusiasts. The magazine focuses on topics such as ...

Output: Tell the model how long the response should be. Specify the content structure of the text. Choose the format you want (table, text, markdown, code). Ask the model to include quotes from a reference or to add sources. Example: The text should be no more than 200 words and divided into three main sections: Introduction, Core Content and Conclusions. Write the text in Markdown format.

Audience: Develop detailed personas that represent typical users or customers. Example: Create this content for a young, tech-savvy audience aged 18-25. Use casual and relatable language, incorporating trending references or examples.

Step-by-Step: Break down the task into clear, sequential steps required for completion. Example: Follow these steps to complete the task: Read and Understand, Identify Key Points, Draft the Summary, Edit for Clarity, Check for Completeness. Example 2 (no example): Let's think step by step.

References: Share essential data, facts, or figures that are important for the response. Mention past events or decisions that could have an impact. Refer to specific documents, reports, or policies. Provide relevant files or examples. Example: Incorporate the attached survey feedback into the content creation process to align the article with audience preferences. Additionally, use the provided example article as a reference.

Tonality: Think about which attributes you want to communicate (luxury, quality, ...). Specify a brand or writing style to inspire the tone. Example: Write the essay in the style of [brand/author/magazine], capturing the distinctive tone and approach. Adapt the tone, structure, and language to align with the chosen inspiration while delivering the intended message effectively. Write with a tone that embodies authenticity and sophistication. Reflect these attributes in every aspect of the writing.

Recommended Techniques:
Iterative Optimization: Refine with additional instructions.
Placeholders & Delimiters: Use delimiters to provide clarity, placeholders for flexibility.
AI as a Prompt Generator: Ask the model to generate or refine the prompt.
Chain-of-Thought: Ask the model to think step-by-step.
Tree-of-Thought: Ask the model to analyze from multiple perspectives/personas.
Emotion Prompting: Add emotional phrases such as "This is important to my career".
Rephrase and Respond / Re-Reading: Instruct the model to first express the question in its own words before giving an answer, or tell it to read the question again.
Adjusting Hyperparameters (advanced): Adjust the model's settings (temperature, top-p, frequency or presence penalty).

Tooling:
Use LLM Apps (e.g. the ChatGPT App) for faster prompts based on voice input.
Use Prompting Platforms for instant prompt optimization (e.g. PromptPerfect).
Explore Prompt Libraries to discover creative prompts (e.g. PromptHero, PromptBase).
Use Browser Extensions, such as Text Blaze, to save prompts and integrate LLMs.
Compare and evaluate large language models side by side with LLM Arenas, such as Chatbot Arena, to stay updated with the top AI models.
Use Custom GPTs, such as ScholarGPT, for specific purposes.
Create your own Custom GPTs for customization and company-wide use.
Integrate the model via API into your application systems.

The Prompt Canvas © 2024 by Michael Hewing is licensed under CC BY 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/ www.thepromptcanvas.com v. 1.0 | English

Figure 1: The Prompt Canvas.
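For illustration, the eight canvas fields can be assembled mechanically into one prompt string. The following sketch is ours, not part of the canvas itself; the field values reuse the canvas examples, and the build_prompt() helper is a hypothetical name.

# Illustrative sketch: assembling a prompt from the eight Prompt Canvas fields.
# The build_prompt() helper is invented for demonstration.

canvas = {
    "Persona/Role": "You are a skilled summarizer and editor.",
    "Task and Intent": "Summarize the key points of the attached document, "
                       "focusing on the main arguments and supporting evidence.",
    "Context": "You are writing for Art Horizon, an online art magazine.",
    "Output": "No more than 200 words, in Markdown, with sections "
              "Introduction, Core Content and Conclusions.",
    "Audience": "A young, tech-savvy audience aged 18-25.",
    "Step-by-Step": "Read and Understand, Identify Key Points, Draft the Summary, "
                    "Edit for Clarity, Check for Completeness.",
    "References": "Use the attached survey feedback and the provided example article.",
    "Tonality": "Authentic and sophisticated.",
}

def build_prompt(fields: dict[str, str]) -> str:
    # One labeled block per canvas field, separated by blank lines.
    return "\n\n".join(f"{name}:\n{text}" for name, text in fields.items())

print(build_prompt(canvas))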


1 Introduction

With the advances of sophisticated Large Language Models (LLM), the ability to guide these models to generate useful, contextually relevant, and coherent answers has become an essential skill. Prompt engineering refers to the art and science of designing inputs or queries (prompts) that effectively guide LLMs towards desired outputs. Schulhoff et al. (2024, p. 7) describe related prompt techniques as a "blueprint that outlines how to structure a prompt." This discipline bridges the gap between the user's goals and the model's capabilities, enabling more precise, creative, and domain-specific solutions. Prompt engineering allows users to precisely guide LLMs in generating contextually relevant and task-specific responses.

Yet, much of the research and insights into prompt engineering are distributed across disparate sources, such as academic journals, preprints, blogs, and informal discussions on platforms like GitHub, Reddit, or YouTube. Navigating this complex landscape requires not only significant effort, but also a level of expertise that may be inaccessible to practitioners, creating a substantial barrier to entry and hindering the effective application of prompt engineering techniques in practice. With this paper, a canvas-oriented approach is proposed that consolidates current knowledge in the field of prompt engineering into a coherent, visual format. This way, practitioners can implement effective strategies more confidently and with clarity.

The second chapter of this paper describes the fragmented state of knowledge in prompt engineering, highlighting the challenges practitioners face in accessing and applying diverse techniques. In the third chapter, a comprehensive review of existing studies and approaches in prompt engineering is presented, showcasing key techniques and patterns in the field. Chapter Four introduces the Prompt Canvas as a structured framework to consolidate and visually represent prompt engineering techniques for better accessibility and practical application. The last chapter provides a conclusion along with constraints and areas for future research.

2 The Need for an Overview of Prompt Engineering Techniques

Prompts are vital for LLMs because they serve as the primary mechanism for translating user intentions into actionable outputs. By guiding the model's responses, prompts enable LLMs to perform a wide range of tasks, from creative writing to complex problem-solving, without requiring task-specific retraining. They leverage the pre-trained knowledge embedded in the model, allowing users to adapt LLMs to specific contexts and applications through in-context learning.

2.1 The Relevance of Prompting in Unsupervised Learning and Transformer Architecture

Through the concepts of unsupervised learning and basic transformer architecture, the relevance and impact of prompts is evident. Generative AI models, such as LLMs, can be assigned to natural language processing (NLP) in the field of artificial intelligence (Braun et al., 2024, p. 560). In an earlier paradigm of NLP (Liu et al., 2023, p. 4), models were typically trained for specific tasks using supervised learning with manually annotated data. However, this limited a model to its training domain, and manual annotation during training was time-consuming and expensive (Radford et al., 2018, p. 1). This challenge led to unsupervised learning gaining in importance. According to P. Liu et al., this represents the transition to the current NLP paradigm of Pre-train, Prompt, Predict. Large and diverse data sets are used for training. In this way, the model recognizes patterns and aligns parameters within a neural network. By entering a prompt, the model adapts to the corresponding task, which is known as in-context learning. This allows the model to be used for a variety of tasks (see Radford et al., 2018, p. 2; Brown et al., 2020, p. 3). In addition to unsupervised learning, the transformer architecture, which was published in 2017 by Vaswani et al. under the title "Attention Is All You Need," laid an important foundation for today's LLMs. It enables context to be maintained across long texts (Radford et al., 2018, p. 2). In June 2018, OpenAI (2018) stated that their ". . . approach is a combination of two existing ideas: transformers and unsupervised pre-training." The abbreviation GPT, Generative Pre-Trained Transformer, reflects this approach. The process from prompt input to output is described in Lo (2023) as follows (the process description has been shortened for clarity): First, individual words of the prompt are broken down into tokens. Each token is represented by a vector that conveys its meaning. This representation is referred to as embedding. Self-attention is used to capture the relationship between tokens in the prompt. Finally, based on the previous context and the patterns learned in the training data, next tokens are predicted. Once a token has been selected, it is translated back into a human-readable form. This process is repeated until a termination criterion is reached.

These foundational concepts enable LLMs to process diverse and complex tasks without task-specific retraining, relying instead on adaptive responses generated through prompting. Prompt engineering bridges the gap between generalized pre-trained knowledge and specific user needs, functioning as the key mechanism through which the model's potential is harnessed. The transformer's self-attention mechanism ensures contextual integrity across sequences, while unsupervised learning enables the model to identify and generalize patterns from vast datasets. Together, these innovations allow LLMs to excel in the Pre-train, Prompt, Predict paradigm, making prompt engineering not only a critical aspect of model utility but also a determinant of task-specific success. As AI applications expand across domains, the ability to craft precise and effective prompts will remain central to realizing the full power of these transformative technologies.
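The tokenize-embed-predict loop described by Lo (2023) can be pictured with a toy sketch. Everything below is invented for illustration: the transition table stands in for learned model weights, and next_token() stands in for a transformer forward pass whose self-attention relates each token to all previous context.

# Toy sketch of the autoregressive loop described above (illustrative only):
# tokenize the prompt, then repeatedly predict the next token until a
# termination criterion (an end marker or a length limit) is reached.

TRANSITIONS = {  # invented stand-in for learned model parameters
    "the": "prompt", "prompt": "guides", "guides": "the_model",
    "the_model": "<end>",
}

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def next_token(context: list[str]) -> str:
    # Stand-in for embedding + self-attention + next-token prediction.
    return TRANSITIONS.get(context[-1], "<end>")

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = tokenize(prompt)
    for _ in range(max_tokens):          # termination criterion 1: length
        token = next_token(tokens)
        if token == "<end>":             # termination criterion 2: end marker
            break
        tokens.append(token)
    return " ".join(tokens)              # translate back into readable form

print(generate("The prompt"))  # -> "the prompt guides the_model"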
2.2 The Need for a Practitioner-Oriented Overview on Prompt Engineering Techniques

Prompt engineering is a rapidly evolving field, with techniques such as Few-shot learning, Chain-of-Thought reasoning and iterative feedback loops being developed and refined to solve complex problems. The pace of innovation is driven by a wide range of applications in industries such as healthcare, education and software development, where tailored prompts can significantly improve model performance. A large body of research is investigating the effectiveness of different prompting techniques. However, the current state of knowledge in this area is highly fragmented, posing significant challenges to researchers and practitioners alike. Fragmentation of knowledge refers to the disjointed and inconsistent distribution of information across various sources, often lacking coherence or standardized frameworks. One of the primary challenges of this fragmented knowledge is the absence of a unified framework that consolidates the diverse techniques, methodologies and findings in prompt engineering. Practitioners new to the field face steep learning curves, as they must navigate a scattered and complex body of literature.

Yet, as will be highlighted in the literature review of chapter Three, initial efforts to systematically consolidate these techniques, develop taxonomies and establish a shared vocabulary are emerging. These publications structure current knowledge into schemes and patterns. While they provide in-depth analyses and valuable structures, they often lack accessibility for practitioners seeking practical solutions and actionable insights. This gap from research advancements to practical application highlights a pressing need for bridging between academic research and real-world use. Addressing these challenges will ensure that the benefits of prompt engineering are more widely realized, enabling its application to expand further across industries and domains.

2.3 Canvas for Visualization

The field of prompt engineering involves a dynamic and multifaceted interplay of strategies, methodologies, and considerations, making it challenging to present in a way that is both comprehensive and accessible. The canvas model promotes visual thinking and has been widely adopted in fields such as business strategy (Osterwalder and Pigneur 2010, Pichler 2016), teamwork (Ivanov and Voloshchuk, 2015), startups (Maurya, 2012), research (The Fountain Institute, 2020) and design thinking (IBM, 2016), where it has proven to be an effective way to organize and communicate complex processes. A canvas simplifies complexity by visually organizing aspects of relevance into defined sections, allowing users to see the relationships and workflows at a glance. It promotes a holistic view of the process in one unified space. Also, the collaborative nature of a canvas facilitates communication and alignment among team members with varying levels of expertise. By applying this proven framework to prompt engineering and making the transition to this visual representation more intuitive, practitioners can leverage prompt techniques and patterns. Practitioners can quickly grasp the key elements and a workflow, reducing barriers to entry and enabling more effective application of the techniques.

3 Identifying Common Techniques Through a Systematic Literature Review

In order to obtain a comprehensive overview of the current state of techniques in the field of prompt engineering, a systematic literature review (SLR) has been carried out. Such a systematic approach provides transparency in the selection of databases, search terms, as well as inclusion and exclusion criteria. After the literature search and selection, the included literature is analyzed and consolidated.

3.1 Literature Search and Selection

The literature search process primarily adheres to the framework outlined by vom Brocke et al. (2009, pp. 8–11). For the subsequent selection of sources, the methodology is based on the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines (cf. Page et al., 2021). Vom Brocke et al. (2009) outline the systematic literature review (SLR) process in five distinct phases. The process begins with defining the scope of the literature search (Phase 1) and creating a preliminary concept map (Phase 2) to guide the review. This is followed by the execution of the actual literature search (Phase 3). The later stages involve the analysis and synthesis of the included literature (Phase 4) and a discussion of the findings along with their limitations (Phase 5). The last phase we integrated into the section on limitations at the end of this paper. Vom Brocke et al. (2009) emphasize the first three phases in their work. With the literature research, the following research question shall be addressed:

What is the current state of techniques and methods in the field of prompt engineering, especially in text-to-text modalities?

To establish the framework for the literature search in phase one, vom Brocke et al. (2009) draw on Cooper's taxonomy (1988, pp. 107–112). Cooper identifies six key characteristics for classifying literature searches: focus, goal, perspective, coverage, organization and audience. These characteristics provide a structured approach to defining the purpose and scope of a literature review. Table 1 offers a detailed overview of how these classifications align with the specific intentions of this SLR, ensuring a systematic and targeted review process.
Table 1: Characteristics according to Cooper (1988, pp. 107–112) applied to this SLR.

Focus: Research outcomes, practices or applications
Goal: Integration or synthesis
Perspective: Neutral representation
Coverage: Exhaustive coverage with selective citation
Organization: Conceptual (thematically organized)
Audience: Users of LLMs (private and business use)

The second phase involves elaboration using concept mapping. For this purpose, terms are selected that are expected to lead to relevant and comprehensive results in the subsequent database search. To keep the literature review as inclusive as possible, only terms directly related to prompt engineering were included: prompt engineering, prompt techniques, prompt designs, prompt patterns, prompt strategies, prompt methods. Further related concepts such as LLMs or generative AI have not been considered because they might broaden the scope too much.

According to vom Brocke et al. (2009), the third phase is divided into several steps. The first step is to identify and select qualitative sources for the literature review. The "VHB Publication Media Rating 2024" for the section Information Systems is an established reference for quality and impact of sources (Verband der Hochschullehrerinnen und Hochschullehrer für Betriebswirtschaft e.V., 2024). Journals with a VHB rating of B or higher and a potential focus on AI were preselected. This selection was made by manually reviewing the short descriptions of the respective journals and evaluating their relevance with the assistance of generative AI. Prompt used in GPT-4o on October 4, 2024: "Evaluate which of the following journals could contain articles relevant to the topic of prompt engineering." Based on the manual and AI-supported selection, the following journals should at least be included in the database set for this literature search: Nature Machine Intelligence, ACM Transactions on Computer-Human Interaction (TOCHI), Artificial Intelligence (AIJ), IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Interactive Intelligent Systems (TiiS).

It is understood that conferences play an important role in the field of generative AI, as the timely exchange of new approaches is fundamental. Taking into account a VHB rating of B or higher and an evaluation of thematic relevance, the following conferences should also be included in the database: International Conference on Information Systems (ICIS), European Conference on Information Systems (ECIS), Hawaii International Conference on System Sciences (HICSS), International Conference on Machine Learning (ICML), Association for the Advancement of Artificial Intelligence (AAAI), International Joint Conference on Artificial Intelligence (IJCAI), ACM Conference on Human Factors in Computing Systems (CHI), Conference on Neural Information Processing Systems (NeurIPS).

The next step within the third phase, the selection of databases and search terms, is to search for peer-reviewed SLRs on the topic of prompt engineering to identify databases relevant to the subject. The search was conducted on Scopus on the 10th of August and resulted in five hits (Table 2). The title, abstract and keywords were searched using the following search term:

( TITLE-ABS-KEY ( "prompt engineering" ) OR TITLE-ABS-KEY ( "prompt-engineering" )
  AND TITLE-ABS-KEY ( "systematic literature review" ) OR TITLE-ABS-KEY ( "PRISMA" ) )

Table 2: Search results for systematic literature reviews in the field of prompt engineering (performed in Scopus on August 10, 2024).

Sasaki et al. (2024) | Programming | Google Scholar, arXiv, ACM Digital Library, IEEE Xplore
Moglia et al. (2024) | Medicine | PubMed, Web of Science, Scopus, arXiv
Han et al. (2023) | Economy | JSTOR, ProQuest, ScienceDirect, Web of Science, Google Scholar
Watson et al. (2023) | Machine Learning | Scopus, IEEE Xplore, ScienceDirect, Elicit, WorldCat, Google Scholar, arXiv

These publications focus on specific application areas, such as programming, medicine, economics, or machine learning, making it challenging to generalize their insights for broader practical use by practitioners. Yet, similarities in the database selection were recognized. All hits either use arXiv.org directly as a database or cite a large number of their sources from that website. Documents on arXiv.org are largely not peer-reviewed, but on the other hand enable the publication of current research. In the "Systematic Literature Review of Prompt Engineering Patterns in Software Engineering," Sasaki et al. (2024, p. 671) transparently state that most of their cited sources are not peer-reviewed, but that it is important to include them because prompt engineering is a rapidly changing field. Another SLR from the preliminary research states that many current articles can only be obtained through arXiv.org and an increasing number of research groups are publishing their work on arXiv (Moglia et al., 2024, p. 41). Based on these findings and the previously identified journals, databases and search terms were defined (see Table 8 in Appendix). On the one hand, those qualitative journals and conferences should be considered (accessible via AIS eLibrary, IEEE Xplore, ACM Digital Library), while at the same time fully including interdisciplinary areas (through Scopus), as well as current – albeit largely non-peer-reviewed – articles (from arXiv).
Identification: Records identified (n = 718): AIS eLibrary (n = 17), ACM Digital Library (n = 42), IEEE Xplore (n = 148), Scopus (n = 251), arXiv.org (n = 260). Duplicate records removed (n = 131).
Screening: Records screened in title and abstract (n = 587). Records excluded (n = 472): published before 2022 (n = 82), not in English (n = 3), thematic mismatch (n = 387).
Eligibility: Reports assessed for eligibility (n = 115). Reports excluded due to a rating below 7 out of 11 (n = 110).
Inclusion: Studies included in the review (n = 5).

Figure 2: PRISMA procedure, based on (Page et al., 2021, p. 5).

The selected search terms result from concept mapping and iterative testing of keywords. It was found that many articles only contained the term prompt engineering in the abstract, but their research focus was in a different area. This could be explained by the fact that prompt engineering can play an indirect role in many areas of application. Since it is assumed that articles with a primary focus on prompt engineering also contain this term in the title, only the title was searched. The following search term with variations was formed:

TITLE ( "prompt-engineering" OR "prompt engineering" OR "prompt techniques" OR "prompt designs" OR "prompt design" OR "prompt patterns" OR "prompt pattern" OR "prompt strategies" OR "prompt strategy" OR "prompt methods" )

The search was carried out on October 4, 2024 in the respective databases according to the PRISMA procedure (Page et al., 2021) (see Figure 2) and documented with the literature management program Zotero. 718 hits were identified across all databases. Of these, 131 hits were identified as duplicates and excluded accordingly. In the next step, 587 hits were checked for suitability with regard to their title and abstract. Previously, articles that were published before 2022 or were not written in English were excluded. In the third step, 115 full-text articles were checked for suitability. Since articles from arXiv.org may not be peer-reviewed, but at the same time are often highly relevant, an evaluation system was created to evaluate articles from all databases holistically according to thematic suitability, quality and actuality. The thematic suitability was weighted most heavily, while the quality of articles was assessed using two criteria to ensure a comprehensive evaluation. First, we prioritized publications that include a literature review process, assigning higher scores to systematic literature reviews (SLRs) and lower scores to less detailed reviews. This is of importance as we want to consolidate the knowledge in this field. Second, the evaluation incorporated the VHB rating (Verband der Hochschullehrerinnen und Hochschullehrer für Betriebswirtschaft e.V.), with higher-rated outlets receiving higher scores. The evaluation criteria are defined in Table 3. Articles scoring fewer than seven points were excluded from the primary selection. However, the articles below this threshold, especially those with a score of six points, were reviewed as supplementary sources. These also include the four SLR articles from the previous database selection.

There has been a significant increase in the number of articles published in recent years that can be assigned to the field of prompt engineering based on their title or abstract. Of the 115 articles that were checked for their suitability in full text in the fourth step, 63 articles were published in the year 2024 to date (up to October 4, 2024), 44 articles were published in 2023 and eight articles in 2022. Many articles were related to fine-tuning models, which was not relevant to the research question, as it requires specialized technical knowledge. Ultimately, five articles met the inclusion criteria, demonstrating relevance, quality and alignment with the research question. These five articles are summarized in Table 4.
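The 11-point rubric in Table 3 can be read as a simple scoring function with an inclusion threshold of seven points. The sketch below is our illustrative restatement of that rubric; the function and argument names are invented, not taken from the paper.

# Illustrative restatement of the Table 3 rubric (names are invented):
# topic 0-4, review transparency 0-2, VHB rating 0-3, actuality 0-2,
# for a maximum of 11 points; articles scoring under 7 are excluded.

VHB_POINTS = {"A+": 3, "A": 2, "B": 1, "C": 0, "D": 0, None: 0}

def score_article(topic: int, review_transparency: int,
                  vhb_rating: str | None, year: int) -> int:
    assert 0 <= topic <= 4 and 0 <= review_transparency <= 2
    actuality = 2 if year >= 2024 else 1 if year == 2023 else 0
    return topic + review_transparency + VHB_POINTS[vhb_rating] + actuality

def included(score: int, threshold: int = 7) -> bool:
    return score >= threshold

# Example: a very relevant 2024 SLR without a VHB rating scores 4+2+0+2 = 8.
print(score_article(topic=4, review_transparency=2, vhb_rating=None, year=2024))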
Table 3: Criteria for evaluating articles in full text.

Topic: Is the full text of the article relevant to answering the research question? 4 = very relevant; 3 = relevant; 2 = somewhat relevant; 1 = less relevant; 0 = not relevant.
Quality: (1) How transparent is the literature research process of that article? 2 = very transparent (SLR); 1 = present (LR); 0 = not transparent. (2) Does a VHB rating (Verband der Hochschullehrerinnen und Hochschullehrer für Betriebswirtschaft e.V., 2024) exist for this article? 3 = A+; 2 = A; 1 = B; 0 = C; 0 = D or not available.
Actuality: When was the article published? 2 = 2024; 1 = 2023; 0 = 2022 or before.

18 articles, each with six points, were reviewed and included in the analysis as supplementary sources (Table 5), but they are not described in as much detail as those with seven points. The most common aspects have already been covered by the extensive reviews of the articles listed above.

3.2 Analysis and Results

In "Can (A)I Have a Word with You? A Taxonomy on the Design Dimensions of AI Prompts", Braun et al. (2024) develop a taxonomy for the design of prompts for different modalities, such as text-to-text and text-to-image.

"The Prompt Report" by Schulhoff et al. (2024) can be considered the most comprehensive article of the included literature. It considers prompting techniques for the text-to-text modality and gives an insight into other modalities, such as text-to-visuals. At the same time, that article uses the PRISMA approach within a systematic literature review, which increases the transparency of the selected prompting techniques. Prompting techniques are categorized by modality and prompting category.

"A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications" by Sahoo et al. (2024) is similarly comprehensive. It classifies the prompting techniques according to application area. Schulhoff et al. and Sahoo et al. present a total of 108 different prompting techniques.

"The Art of Creative Inquiry - From Question Asking to Prompt Engineering" by Sasson Lazovsky et al. offers a perspective on the similarities between question formulation and prompt engineering. The article shows which characteristics are important in the interaction between humans and generative AI.

In "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT", White et al. (2023) present prompt patterns using practical examples for software development. However, according to the authors, these can be transferred to other areas.

In order to make a selection from the multitude of prompting techniques, those that were presented in several included articles were prioritized. If a prompting technique includes adapted variants, the parent prompting technique is presented first, followed by possible adaptations. Braun et al. (2024) classified nine dimensions and three meta-dimensions that should be considered when creating prompts (Figure 3). Firstly, the interaction between the LLM and user, which, depending on the prompt, can be seen as computer-in-the-loop or human-in-the-loop (HITL). An example of HITL would be if the user asks the LLM, in addition to the main instruction, to pose follow-up questions that the user then answers (Braun et al., 2024, pp. 561, 565). In this case, the user would take on a more active role. According to Braun et al., input and output types such as text, image, audio, video and others are part of the interaction meta-dimension. Context is defined as the second meta-dimension. This consists of the learning dimension, which is divided into Zero-shot, One-shot and Few-shot. In addition to Braun et al., three of the included articles also identified these as superordinate prompting techniques. As already presented in Chapter Two, an LLM can adapt to new tasks (in-context learning), even if it has not been explicitly trained for that task (Braun et al., 2024, pp. 563–564; cf. Radford et al., 2018, Brown et al., 2020). The addition of examples in a prompt is referred to as Few-shot, and as One-shot in the case of a single explicit example. As Brown et al. showed in their article on GPT-3, the use of Few-shot can increase the accuracy of the output compared to Zero-shot (see Radford et al., 2019), where no examples are provided. In addition, the behavior of an LLM can be adapted to a specific context by assigning a role in a prompt (Braun et al., 2024, p. 564; cf. White et al., 2023, p. 7). By setting a style, the output can be adapted more generally (Braun et al., 2024, p. 564). Braun et al. go on to define the information space dimension for the context meta-dimension. The authors differentiate between whether additional information is provided internally – directly in the prompt – or externally – by agents that use search engines, for example. If no additional context is provided, the output is based exclusively on the training data of the LLM (Braun et al., 2024, p. 564). Braun et al. define the third and final meta-dimension as the outcome to be achieved by adapting a prompt (Braun et al., 2024, p. 564). The authors classify Chain-of-Thought (CoT) for this purpose. This was also identified as a superior prompting technique in the articles by Schulhoff et al. (2024) and Sahoo et al. (2024), which refer to the article by Wei et al. (2023). CoT is designed for tasks that require complex understanding. This is also referred to as reasoning in various included articles. CoT breaks down a problem into smaller steps, solves them and then provides a final answer.
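The learning dimension described above can be made concrete with a small sketch: the same classification task phrased as Zero-shot, One-shot, and Few-shot prompts. The sentiment task and example texts below are our own illustrations, not taken from the reviewed articles.

# Illustrative Zero-shot / One-shot / Few-shot prompts for one task
# (the sentiment task and the examples are invented for illustration).

TASK = "Classify the sentiment of the review as positive or negative."

EXAMPLES = [
    ("The battery lasts for days, I love it.", "positive"),
    ("The screen cracked after one week.", "negative"),
]

def build_prompt(review: str, n_examples: int) -> str:
    shots = "\n".join(f"Review: {text}\nSentiment: {label}"
                      for text, label in EXAMPLES[:n_examples])
    return f"{TASK}\n{shots}\nReview: {review}\nSentiment:"

zero_shot = build_prompt("Great value for the price.", n_examples=0)
one_shot = build_prompt("Great value for the price.", n_examples=1)
few_shot = build_prompt("Great value for the price.", n_examples=2)
print(few_shot)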

Table 4: Final inclusion of SLR articles.

1. Braun et al. (2024): Can (A)I Have a Word with You? A Taxonomy on the Design Dimensions of AI Prompts
2. Schulhoff et al. (2024): The Prompt Report: A Systematic Survey of Prompting Techniques
3. Sahoo et al. (2024): A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications
4. Sasson Lazovsky et al. (2024): The Art of Creative Inquiry—From Question Asking to Prompt Engineering
5. White et al. (2023): A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT

Table 5: Articles that score six points.

1. Bhandari (2024): A Survey on Prompting Techniques in LLMs
2. Bozkurt (2024): Tell Me Your Prompts and I Will Make Them True: The Alchemy of Prompt Engineering and Generative AI
3. Chen et al. (2024): Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review
4. Chong et al. (2024): Prompting for products: Investigating design space exploration strategies for text-to-image generative models
5. Fagbohun et al. (2024): An Empirical Categorization of Prompting Techniques for Large Language Models: A Practitioner's Guide
6. Garg and Rajendran (2024): Analyzing the Role of Generative AI in Fostering Self-directed Learning Through Structured Prompt Engineering
7. Hill et al. (2024): Prompt Engineering Principles for Generative AI Use in Extension
8. Korzynski et al. (2023): Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies such as ChatGPT
9. Liu and Chilton (2022): Design Guidelines for Prompt Engineering Text-to-Image Generative Models
10. Sasaki et al. (2024): Systematic Literature Review of Prompt Engineering Patterns in Software Engineering
11. Schmidt et al. (2024): Towards a Catalog of Prompt Patterns to Enhance the Discipline of Prompt Engineering
12. Siino and Tinnirello (2024): GPT Hallucination Detection Through Prompt Engineering
13. Tolzin et al. (2024): Worked Examples to Facilitate the Development of Prompt Engineering Skills
14. Tony et al. (2024): Prompting Techniques for Secure Code Generation: A Systematic Investigation
15. Vatsal and Dubey (2024): A Survey of Prompt Engineering Methods in Large Language Models for Different NLP Tasks
16. Wang et al. (2023a): Review of large vision models and visual prompt engineering
17. Wang et al. (2024): Prompt engineering in consistency and reliability with the evidence-based guideline for LLMs
18. Ye et al. (2024): Prompt Engineering a Prompt Engineer

Figure 3 summarizes the taxonomy by dimension and characteristics (ME = mutually exclusive, NE = non-exclusive):

Interaction meta-dimension: Input Type (NE): Text, Image, Audio, Video, Other. Output Type (NE): Text, Image, Audio, Video, Other. Interaction Type (NE): Computer-in-the-loop, Human-in-the-loop.
Context meta-dimension: Role (ME): Not defined, Defined. Style (ME): Not defined, Defined. Learning (ME): Zero-shot, One-shot, Few-shot. Information Space (ME): Not defined, Explicit internal, Explicit external.
Outcome meta-dimension: Chain of Thoughts (ME): Single step, Step-by-step. Goal (NE): Learn, Lookup, Investigate, Monitor/extract, Decide, Create.

Figure 3: Prompt dimensions and characteristics (Braun et al., 2024, p. 563).
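One way to operationalize the taxonomy when documenting prompts is to record the chosen characteristic per dimension. The structure below is our illustrative encoding of Figure 3, not an artifact of Braun et al.; the dictionary layout and helper function are invented.

# Illustrative encoding of the Braun et al. (2024) taxonomy from Figure 3.
# "ME" dimensions take one characteristic, "NE" dimensions may take several.

TAXONOMY = {
    "Interaction": {
        "Input Type": ("NE", ["Text", "Image", "Audio", "Video", "Other"]),
        "Output Type": ("NE", ["Text", "Image", "Audio", "Video", "Other"]),
        "Interaction Type": ("NE", ["Computer-in-the-loop", "Human-in-the-loop"]),
    },
    "Context": {
        "Role": ("ME", ["Not defined", "Defined"]),
        "Style": ("ME", ["Not defined", "Defined"]),
        "Learning": ("ME", ["Zero-shot", "One-shot", "Few-shot"]),
        "Information Space": ("ME", ["Not defined", "Explicit internal", "Explicit external"]),
    },
    "Outcome": {
        "Chain of Thoughts": ("ME", ["Single step", "Step-by-step"]),
        "Goal": ("NE", ["Learn", "Lookup", "Investigate", "Monitor/extract", "Decide", "Create"]),
    },
}

def characteristics(dimension: str) -> tuple[str, list[str]]:
    # Look up a dimension across the three meta-dimensions.
    for dims in TAXONOMY.values():
        if dimension in dims:
            return dims[dimension]
    raise KeyError(dimension)

mode, values = characteristics("Learning")
print(mode, values)  # -> ME ['Zero-shot', 'One-shot', 'Few-shot']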

This approach aims to provide the user with a clear and understandable result by having the LLM explain the process it uses to generate its output (Wei et al., 2023, p. 3). Finally, Braun et al. (2024, p. 565) classified the following goals of a prompt within the meta-dimension result: learn, lookup, investigate, monitor/extract, decide, create. This last dimension concludes Braun et al.'s taxonomy.

The article "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" by White et al. (2023) makes a further conceptual contribution, which however focuses on concrete prompt structures. White et al. present patterns – so-called prompt patterns, which generally structure prompts for frequently occurring problems in a task area. With the help of such a prompt pattern, prompts can then be formulated for a specific task. This approach not only saves time, but also ensures compliance with proven standards. White et al. present prompt patterns as examples for the area of software development. Following our own analysis of the prompt patterns presented by White et al., similarities between them were analyzed and classified. These are presented in Table 6 using examples from White et al. (2023). These observations can be seen as an extension of the previously presented prompt dimensions by Braun et al. (2024, p. 563).

In addition to the selection of a prompt structure and the appropriate selection of prompting techniques, the formulation of questions is essential in order to be able to interact effectively with the generative AI as a user. This requires similar skills to those required for asking interpersonal questions (Sasson Lazovsky et al., 2024, pp. 7–9). Sasson Lazovsky et al. identified the following seven common key skills: Creativity, Clarity and Precision, Adaptability, Critical Thinking, Empathy, Cognitive Flexibility, Goal Orientation. These are described in Table 7. Subsequently, prompting techniques from previously outlined areas such as Zero-shot, Few-shot and Chain-of-Thought will be explored in greater depth and subdivided into further potential areas. The following prompting techniques are taken from the synthesis by Schulhoff et al. (2024, pp. 8–18) and Sahoo et al. (2024, pp. 2–7), who present several prompting techniques and refer to corresponding articles. There are often a large number of articles that adapt a prompting technique for new purposes. Therefore, reference is always made to the original article, unless an adapted variant is presented.

Besides Role and Style Prompting, Emotion Prompting adds emotional phrases such as "This is very important to my career" to the end of a prompt (Li et al., 2023, p. 2). Another area of prompting techniques can be divided into Rephrase and Re-read. Rephrase and Respond (RaR) instructs an LLM to first express the question in its own words before giving an answer (Deng et al., 2024, pp. 9–10). Re-reading (RE2) tells an LLM to read the question again. This can increase performance in the area of reasoning (Xu et al., 2024).

Prompting techniques that focus on a step-by-step approach can be assigned to the prompting area of Chain-of-Thought. Chain-of-Thought Zero-shot adds "Let's think step by step" at the beginning of a prompt (Kojima et al., 2023, p. 1). Analogical Prompting instructs LLMs to create examples that can improve output quality using in-context learning (Yasunaga et al., 2024). Thread-of-Thought (ThoT) reviews several prompting templates for efficiency, with the following instruction rated best: "Walk me through this context in manageable parts step by step, summarizing and analyzing as we go" (Zhou et al., 2023b). Plan-and-Solve builds on the previously introduced Chain-of-Thought Zero-shot, but instead uses: "Let's first understand the problem and devise a plan to solve the problem. Then, let's carry out the plan and solve the problem step by step" (Wang et al., 2023b, pp. 3–4). Self-Consistency uses Chain-of-Thought, but executes a prompt several times and decides, for example, in favor of the result whose solution was mentioned most frequently (Wang et al., 2023c, pp. 1–2). Tree-of-Thoughts (ToT) also extends the Chain-of-Thought approach by following individual steps such as thought processes separately (Yao et al., 2023, pp. 1–2).
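The trigger phrases quoted above can be applied mechanically by prepending or appending them to a task, and Self-Consistency reduces to a majority vote over repeated runs. The sketch below illustrates both; call_llm is a hypothetical placeholder we invented, standing in for an actual LLM API call.

# Illustrative use of the trigger phrases quoted above; call_llm is a
# hypothetical stand-in for a real LLM API call.
from collections import Counter

TRIGGERS = {
    "zero_shot_cot": "Let's think step by step.",                      # Kojima et al.
    "plan_and_solve": "Let's first understand the problem and devise a plan "
                      "to solve the problem. Then, let's carry out the plan "
                      "and solve the problem step by step.",           # Wang et al.
    "thread_of_thought": "Walk me through this context in manageable parts "
                         "step by step, summarizing and analyzing as we go.",  # Zhou et al.
    "emotion": "This is very important to my career.",                 # Li et al.
}

def with_trigger(task: str, technique: str) -> str:
    phrase = TRIGGERS[technique]
    # Emotion phrases are appended to the end (Li et al.); the CoT-style
    # phrases above are prepended to the beginning of the prompt.
    return f"{task}\n{phrase}" if technique == "emotion" else f"{phrase}\n{task}"

def self_consistency(task: str, call_llm, n: int = 5) -> str:
    # Run the same CoT prompt several times and keep the most frequent answer.
    prompt = with_trigger(task, "zero_shot_cot")
    answers = [call_llm(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

fake_llm = lambda prompt: "42"  # deterministic stand-in for demonstration
print(self_consistency("What is 6 * 7?", fake_llm, n=3))  # -> 42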

Table 6: Exemplified prompt patterns from White et al. (2023).

Scope: "Within scope X"
Task/Goal: "Create a game..."; "I would like to achieve X"
Context: "When I say X, I mean..."; "Consider Y"; "Ignore Z"
Procedure: "When you are asked a question, follow these rules..."; "Explain the reasoning and assumptions behind your answer"
Role: "Act as persona X. Provide outputs that persona X would create"
Output: "Please preserve the formatting and overall template that I provide"
Termination condition: "You should ask questions until this condition is met or to achieve this goal"

Table 7: Prompt engineering skills, Sasson Lazovsky et al. (2024).

Creativity: Designing prompts that elicit desired and insightful responses from AI
Clarity and precision: Conveying instructions precisely to minimize misunderstandings
Adaptability: Tailoring prompts to the task and language model capabilities
Critical thinking: Considering potential outcomes and responses for meaningful interactions
Empathy: Optimizing language model responses through empathetic consideration
Cognitive flexibility: Iterating with various prompts to optimize results
Goal orientation: Eliciting specific responses that align with the intended purpose

Automatic Prompt Engineer (APE) presents a system with which a prompt is selected from a set that leads to a high output quality. The following prompt was rated well in an evaluation: "Let's work this out in a step-by-step way to be sure we have the right answer" (Zhou et al., 2023a). Another noteworthy prompting technique is Self-Refine, which uses an LLM to improve a result through feedback until a termination condition is reached (Madaan et al., 2023, pp. 1–2).
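Self-Refine can be pictured as a generate-feedback-revise loop. The sketch below is our paraphrase of that idea, not code from Madaan et al.; call_llm is again a hypothetical stand-in for a model call, and the stopping rule is simplified to a round limit or an explicit "DONE" signal.

# Illustrative Self-Refine loop (our paraphrase): draft, ask for feedback,
# revise, and stop when a termination condition is reached.

def self_refine(task: str, call_llm, max_rounds: int = 3) -> str:
    answer = call_llm(f"Task: {task}\nWrite a first draft.")
    for _ in range(max_rounds):
        feedback = call_llm(f"Task: {task}\nDraft: {answer}\n"
                            "Give concrete feedback, or reply DONE if it needs no changes.")
        if feedback.strip() == "DONE":   # termination condition
            break
        answer = call_llm(f"Task: {task}\nDraft: {answer}\n"
                          f"Feedback: {feedback}\nRevise the draft accordingly.")
    return answer

# Demonstration with a scripted stand-in model:
responses = iter(["draft 1", "add a source", "draft 2", "DONE"])
print(self_refine("Summarize the article.", lambda p: next(responses)))  # -> draft 2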
4 Mapping Identified Techniques to the Prompt Canvas

This chapter focuses on synthesizing the insights gathered from the literature review to populate the Prompt Canvas with relevant, evidence-based elements. It identifies key techniques in prompt engineering, aligning them with the structured sections of the canvas. Based on a user-centered design focusing on understanding the users' needs, the canvas consists of four categories, each containing a distinct aspect of a prompt: Persona/Role and Target Audience, Goal and Step-by-Step, Context and References, Format and Tonality. These categories align with the natural flow of information processing, from establishing the setting (persona and audience) to defining the task (goal and steps), providing the necessary background (context and references), and finally specifying the desired output (format and tone).

4.1 Persona/Role and Target Audience

Defining a specific persona or role helps in tailoring the language model's perspective, ensuring that the response aligns with the expected expertise or viewpoint. Identifying the target audience ensures that the content is appropriate for the intended recipients, considering their knowledge level and interests. This category is essential because it sets the foundation for the model's voice and the direction of the response, making it more relevant and engaging for the user.

This element was derived from recurring discussions in the literature about role-based prompting and user-centered design. Studies by Braun et al. (2024) highlighted the value of assigning roles to guide the model's tone and specificity. Sasson Lazovsky et al. (2024) further emphasized the importance of personas in enhancing creative inquiry. These insights underscored the need to include a dedicated section on tailoring prompts to roles and audience characteristics.

4.2 Task/Intent and Step-by-Step

Clearly articulating the goal provides the language model with a specific objective, enhancing the focus and purpose of the response. Breaking down the goal into step-by-step instructions or questions guides the model through complex tasks or explanations systematically. This category justifies its inclusion by emphasizing the importance of precision and clarity in prompts, which directly impacts the quality and usefulness of the output.

Classified by Braun et al. (2024), Sahoo et al. (2024) and Sasson Lazovsky et al. (2024) as a distinct prompting category, Chain-of-Thought prompting techniques decompose a task step-by-step and thereby enhance the model's reasoning capabilities on complex problems. The Chain-of-Thought prompting technique can be used with both Zero-shot and Few-shot concepts. By structuring tasks incrementally, the model produces outputs that are both coherent and logically organized. Furthermore, this category facilitates creative inquiry; as Sasson Lazovsky et al. (2024) emphasize, clearly defining intent in prompts is essential for open-ended or exploratory tasks.

4.3 Context and References

Providing context and relevant references equips the language model with necessary background information, reducing ambiguity and enhancing the accuracy of the response. This category acknowledges that AI models rely heavily on the input prompt for context, and without it, the responses may be generic or off-target. Including references also allows the model to incorporate specific data or adhere to particular frameworks, which is vital in academic or professional settings.

This element was selected to address the frequent recommendation to provide situational and contextual information in prompts. Braun et al. (2024) stressed the importance of embedding contextual details to enhance output reliability, and Schulhoff et al. (2024) suggested incorporating external references or historical data into prompts for guidance. Linking prompts to prior decisions, documents, or reports enhances contextual richness and ensures outputs reflect critical dependencies (Sasson Lazovsky et al., 2024). By integrating these elements, practitioners can craft prompts that are both informative and grounded in factual context.

4.4 Output/Format and Tonality

Specifying the desired format and tone ensures that the response meets stylistic and structural expectations. Whether the output should be in the form of a report, a list, or an informal explanation, and whether the tone should be formal, friendly, or neutral, this category guides the model in delivering content that is not only informative but also appropriately presented. This consideration is crucial for aligning the response with the conventions of the intended medium or genre.

This category emerged from the emphasis in the literature on aligning the model's outputs with specific user requirements and communication contexts. Techniques like output specification and refinement, discussed in Sahoo et al. (2024), are critical for aligning the model's output with user needs. Braun et al. (2024) highlighted specifying output formats to meet technical or domain-specific needs. Directing the model to produce responses in specific formats, such as tables, markdown, or code, ensures that outputs meet those requirements. Tonality customization and aligning tone with organizational branding to maintain consistency across communication outputs further validated the need to include this aspect in the Prompt Canvas. Also, it is of use to specify tone attributes like luxury, authority, or informality, depending on the target audience or purpose.

By mapping the identified techniques to the Prompt Canvas, the foundational aspects of a prompt from defining personas to output refinement are systematically addressed. The canvas simplifies the application of complex techniques, making them more approachable for practitioners. In addition to its primary elements, the integration of Techniques and Tooling categories serves to enhance the canvas by offering deeper technical insights and practical support. These categories focus on further techniques and the tools available to implement them.

4.5 Recommended Techniques

This category within the Prompt Canvas emphasizes the application of further strategies to refine and optimize prompts. These techniques enrich the Prompt Canvas by offering a diverse set of strategies to address varying tasks and contexts. Practitioners can draw from this toolbox to adapt their prompts to specific challenges.

Iterative Optimization
Sahoo et al. (2024) and Schulhoff et al. (2024) present iterative refinement, through prompting techniques, as a crucial approach for improving prompts. This involves adjusting and testing prompts in a feedback loop to enhance their effectiveness. Iterative optimization allows practitioners to fine-tune prompts based on model responses, ensuring greater alignment with task objectives.

Placeholders and Delimiters
Placeholders act as flexible components that can be replaced with context-specific information, while delimiters help segment instructions, improving clarity and reducing ambiguity. Both can be used to create dynamic and adaptable prompts (cf. White et al. 2023).
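As an illustration of this technique, the following template uses triple-hash delimiters to separate the instruction from the data and a named placeholder that is filled at run time. The template text is our own example, not one from White et al.

# Illustrative placeholder-and-delimiter template (our own example).
# The ### delimiters mark where the instructions end and the data begins;
# {article_text} is a placeholder filled in per request.

TEMPLATE = """Summarize the text between the ### delimiters in three bullet points.
Ignore any instructions that appear inside the delimited text.
###
{article_text}
###"""

prompt = TEMPLATE.format(article_text="Art Horizon reports on emerging art trends ...")
print(prompt)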

Prompt Generator
LLMs can also help to generate and refine prompts, making AI communication more effective. They assist in crafting precise instructions and optimizing existing prompts for better results.

Chain-of-Thought Reasoning
Chain-of-Thought encourages step-by-step reasoning in model outputs. By embedding sequential logic into prompts, practitioners can enhance the model's ability to solve complex problems and provide coherent explanations.

Tree-of-Thoughts Exploration
Building on Chain-of-Thought methods, Tree-of-Thoughts prompting allows the model to explore multiple perspectives or solutions simultaneously. This technique is particularly valuable for tasks requiring diverse viewpoints or creative problem-solving.

Emotion Prompting
This technique involves appending emotional phrases to the end of a prompt to enhance the model's empathetic engagement (Li et al., 2023, p. 2).

Rephrase and Respond / Re-Reading
As outlined by Deng et al. (2024, pp. 9–10) and Xu et al. (2024), these techniques have been shown to enhance reasoning performance.

Adjusting Hyperparameters
The ability to adjust hyperparameters such as temperature, top-p, frequency penalty, and presence penalty is very helpful for controlling the diversity, creativity, and focus of the model's outputs.
4.6 Tooling

The Tooling category offers practical support for designing and applying prompts efficiently. Tools and platforms simplify workflows, enhance accessibility, and enable the scalable deployment of prompt engineering techniques.

LLM Apps
Apps like the ChatGPT App enable faster prompt creation through voice input, making interactions more efficient and accessible. This feature reduces typing effort, enhances usability on-the-go, and supports diverse users, streamlining the prompt engineering process for dynamic or time-sensitive tasks.

Prompting Platforms
Platforms like PromptPerfect allow users to design, test, and optimize prompts interactively. These tools often include analytics for assessing prompt performance and making informed adjustments.

Prompt Libraries
Pre-designed templates and reusable prompts, discussed by White et al. (2023), provide a valuable starting point for practitioners. Libraries save time and ensure consistency by offering solutions for common tasks. Some platforms offer prompts for purchase (e.g., PromptBase), while others focus on sharing prompts for free (e.g., PromptHero).

Browser Extensions
Providing direct integration into web clients, browser extensions like Text Blaze and PromptPerfect allow users to experiment with prompts in real time on websites.

LLM Arenas
LLM Arenas, like Chatbot Arena, offer platforms to test and compare AI models, providing insights into their performance and capabilities. These arenas help users refine prompts and stay updated with the latest advancements in LLM technology.

Custom GPTs for Specific Purposes
Chen et al. (2024) mention GPTs as plugins in ChatGPT. Customized GPTs such as PromptPerfect or ScholarGPT are tailored LLMs optimized for specialized applications or industries. These customized versions are also able to leverage additional data through Application Programming Interfaces (APIs) or are given additional context through text or PDFs for specific objectives, making them highly effective for specialized tasks.

Customized LLMs and Company-Wide Use
Developing company-specific custom GPTs takes customization a step further by integrating organizational knowledge, values, and workflows into an LLM. These models have been given additional context or are even fine-tuned on internal data, leveraging documents and APIs, and are primed with internal prompts to ensure alignment with company standards and improve operational efficiency. Additionally, some LLM providers offer a sandboxed environment for enterprises, ensuring that entered data will not be used to train future publicly available models.

Integration of LLMs via API into Application Systems
APIs facilitate seamless integration of LLMs into existing systems, enabling automated prompt generation and application.

5 Limitations, Outlook and Conclusion

This chapter outlines the limitations of the current study, explores potential future directions for research and application, and concludes by emphasizing the significance of the Prompt Canvas as a foundational tool for the evolving field of prompt engineering. It provides a critical reflection on the scope of the work, its adaptability to emerging trends, and its role in bridging research and practice.

5.1 Limitations

As prompt engineering is not a one-size-fits-all discipline, different tasks and domains may require tailored approaches and techniques. Yet, a canvas can be easily customized to include domain-specific elements, such as ethical considerations for healthcare or creative constraints for marketing. This adaptability ensures that the canvas remains relevant and useful across diverse use cases. The modular structure allows practitioners to customize techniques for specific tasks or domains, improving relevance and scalability.
11
5 Limitations, Outlook and Conclusion

This section outlines the limitations of the current study, explores potential future directions for research and application, and concludes by emphasizing the significance of the Prompt Canvas as a foundational tool for the evolving field of prompt engineering. It provides a critical reflection on the scope of the work, its adaptability to emerging trends, and its role in bridging research and practice.

5.1 Limitations

As prompt engineering is not a one-size-fits-all discipline, different tasks and domains may require tailored approaches and techniques. Yet, a canvas can be easily customized to include domain-specific elements, such as ethical considerations for healthcare or creative constraints for marketing. This adaptability ensures that the canvas remains relevant and useful across diverse use cases. The modular structure allows practitioners to customize techniques for specific tasks or domains, improving relevance and scalability.

The effectiveness of this canvas requires validation through both quantitative and qualitative research methodologies. Recognizing the strong demand for a practical guide in the field of prompt engineering, this publication aims to serve as a starting point to initiate and foster discussion on the topic. Additionally, a research design to evaluate the utility of the canvas is already under development.

This work focuses primarily on text-to-text modalities. Although this modality should already cover a wide range of applications, there are other modalities, such as image, audio, video, or image-as-text (cf. Schulhoff et al. 2024), that are not highlighted in this study. At the same time, many of the techniques mentioned above are not designed exclusively for the text-to-text modality, e.g., iterative prompting. Furthermore, this work focused primarily on the design of individual prompts. Prompting techniques that use agents were thematically separated out in the analysis and synthesis; it is assumed that they can play another important role in further improving output quality. At the same time, this work focused on findings for users of LLMs in private and business environments. Finally, it is important to emphasize that this work does not explore potential risks associated with the use of LLMs. These risks include biases, the handling of sensitive information, copyright violations, and the significant consumption of resources.

5.2 Outlook

The Prompt Canvas serves as a foundational tool, offering a shared framework for the field of prompt engineering. It is intended not only for practical application but also to foster dialogue about which techniques are most relevant and sustainable. By doing so, the canvas encourages discussion and guides research in evaluating whether emerging developments should be incorporated into its framework. Given the dynamic and rapidly evolving nature of the discipline, it is important to view the Prompt Canvas not as a static product but as a living document that reflects the current state of practice. For instance, if prompting techniques are more deeply integrated into LLMs in the future through prompt tuning and automated prompts, one could argue that some prompting techniques may become less important. Advancing models, such as OpenAI's o1 model series, already incorporate the Chain-of-Thought technique, enabling them to perform complex reasoning by generating intermediate steps before arriving at a final answer.
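For illustration, the snippet below shows what manually elicited zero-shot Chain-of-Thought prompting looks like (cf. Kojima et al. 2023); with reasoning models such as the o1 series, an explicit cue of this kind may become redundant. The arithmetic question is invented.

# Zero-shot Chain-of-Thought: append a reasoning cue to the task.
question = (
    "A bakery packs 12 rolls per tray. "
    "How many trays are needed for 150 rolls?"
)
cot_prompt = question + "\n\nLet's think step by step."
print(cot_prompt)  # send this string to the model of your choice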
5.3 Conclusion

This paper introduces the Prompt Canvas as a unified framework aimed at consolidating the diverse and fragmented techniques of prompt engineering into an accessible and practical tool for practitioners. Grounded in an extensive literature review and informed by established methodologies, the Prompt Canvas addresses the need for a comprehensive and systematic approach to designing effective prompts for large language models. By mapping key techniques, such as role-based prompting, Chain-of-Thought reasoning, and iterative refinement, onto a structured canvas, this work provides a valuable resource that bridges the gap between academic research and practical application. Future research is encouraged to expand the framework to address evolving challenges, ensuring its continued relevance and utility across diverse domains.

References

Prabin Bhandari. A survey on prompting techniques in LLMs, 2024. URL https://arxiv.org/abs/2312.03740.

A. Bozkurt. Tell me your prompts and I will make them true: The alchemy of prompt engineering and generative AI. Open Praxis, 16(2):111–118, 2024. doi: 10.55982/openpraxis.16.2.661.

Marvin Braun, Maike Greve, Felix Kegel, Lutz Kolbe, and Philipp Emanuel Beyer. Can (A)I have a word with you? A taxonomy on the design dimensions of AI prompts. In Proceedings of the 57th Hawaii International Conference on System Sciences (HICSS-57), 2024. URL https://aisel.aisnet.org/hicss-57/cl/design_development_and_evaluation/2.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.

Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. Unleashing the potential of prompt engineering in large language models: A comprehensive review. CoRR, abs/2310.14735, 2024. URL https://arxiv.org/abs/2310.14735. Accessed: 2024-12-05.

Leah Chong, I.-Ping Lo, Jude Rayan, Steven Dow, Faez Ahmed, and Ioanna Lykourentzou. Prompting for products: Investigating design space exploration strategies for text-to-image generative models, 2024. URL https://arxiv.org/abs/2408.03946v1.

Harris M. Cooper. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society, 1(1), 1988. ISSN 0897-1986. doi: 10.1007/BF03177550.
Yihe Deng, Weitong Zhang, Zixiang Chen, and Quanquan Gu. Rephrase and respond: Let large language models ask better questions for themselves, 2024.

Oluwole Fagbohun, Rachel M. Harrison, and Anton Dereventsov. An empirical categorization of prompting techniques for large language models: A practitioner's guide, 2024. URL https://arxiv.org/abs/2402.14837v1.

Ashish Garg and Ramkumar Rajendran. Analyzing the role of generative AI in fostering self-directed learning through structured prompt engineering. In Generative Intelligence and Intelligent Tutoring Systems: 20th International Conference, ITS 2024, Thessaloniki, Greece, June 10–13, 2024, Proceedings, Part I, pages 232–243. Springer-Verlag, 2024. ISBN 978-3-031-63027-9. doi: 10.1007/978-3-031-63028-6_18.

Y. Han, J. Hou, and Y. Sun. Research and application of GPT-based large language models in business and economics: A systematic literature review in progress. In 2023 IEEE International Conference on Computing (ICOCO), pages 118–123, 2023. doi: 10.1109/ICOCO59262.2023.10397642.

P.A. Hill, L.K. Narine, and A.L. Miller. Prompt engineering principles for generative AI use in extension. The Journal of Extension, 62(3), 2024. doi: 10.34068/joe.62.03.20.

IBM. Design Thinking Field Guide, 2016. Available at https://www.ibm.com/design/thinking.

Alexey Ivanov and Dmitry Voloshchuk. The team canvas. https://theteamcanvas.com, 2015. Accessed: 2024-12-05.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners, 2023.

P. Korzynski, G. Mazurek, P. Krzypkowska, and A. Kurasinski. Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies such as ChatGPT. Entrepreneurial Business and Economics Review, 11(3):25–37, 2023. doi: 10.15678/EBER.2023.110302.

Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie. Large language models understand and can be enhanced by emotional stimuli, 2023.

Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv., 55(9):195:1–195:35, 2023. ISSN 0360-0300. doi: 10.1145/3560815.

Vivian Liu and Lydia B. Chilton. Design guidelines for prompt engineering text-to-image generative models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22. Association for Computing Machinery, 2022. ISBN 978-1-4503-9157-3. doi: 10.1145/3491102.3501825.

L.S. Lo. The art and science of prompt engineering: A new literacy in the information age. Internet Reference Services Quarterly, 27(4):203–210, 2023. doi: 10.1080/10875301.2023.2227621.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback, 2023.

Ash Maurya. Running Lean: Iterate from Plan A to a Plan That Works. O'Reilly Media, 2012.

A. Moglia, K. Georgiou, P. Cerveri, L. Mainardi, R.M. Satava, and A. Cuschieri. Large language models in healthcare: from a systematic review on medical examinations to a comparative analysis on fundamentals of robotic surgery online test. Artificial Intelligence Review, 57(9), 2024. doi: 10.1007/s10462-024-10849-5.

OpenAI. Improving language understanding with unsupervised learning, 2018. URL https://openai.com/index/language-unsupervised.

Alexander Osterwalder and Yves Pigneur. Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers. John Wiley & Sons, Hoboken, NJ, 2010. English Edition, Strategyzer Series.

Matthew J. Page, Joanne E. McKenzie, Patrick M. Bossuyt, Isabelle Boutron, Tammy C. Hoffmann, Cynthia D. Mulrow, Larissa Shamseer, Jennifer M. Tetzlaff, Elie A. Akl, Sue E. Brennan, Roger Chou, Julie Glanville, Jeremy M. Grimshaw, Asbjørn Hróbjartsson, Manoj M. Lalu, Tianjing Li, Elizabeth W. Loder, Evan Mayo-Wilson, Steve McDonald, Luke A. McGuinness, Lesley A. Stewart, James Thomas, Andrea C. Tricco, Vivian A. Welch, Penny Whiting, and David Moher. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ, 372:n71, 2021. ISSN 1756-1833. doi: 10.1136/bmj.n71.
Roman Pichler. Strategize: Product Strategy and Product Roadmap Practices for the Digital Age. Pichler Consulting, London, UK, 2016.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding with unsupervised learning, 2018. URL https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Better language models and their implications, 2019. URL https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.

Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. A systematic survey of prompt engineering in large language models: Techniques and applications, 2024. URL https://arxiv.org/abs/2402.07927.

Y. Sasaki, H. Washizaki, J. Li, D. Sander, N. Yoshioka, and Y. Fukazawa. Systematic literature review of prompt engineering patterns in software engineering. In 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), pages 670–675, 2024. doi: 10.1109/COMPSAC61105.2024.00096.

G. Sasson Lazovsky, T. Raz, and Y.N. Kenett. The art of creative inquiry—from question asking to prompt engineering. The Journal of Creative Behavior, 2024. doi: 10.1002/jocb.671.

Douglas C. Schmidt, Jesse Spencer-Smith, Quchen Fu, and Jules White. Towards a catalog of prompt patterns to enhance the discipline of prompt engineering. Ada Lett., 43(2):43–51, 2024. ISSN 1094-3641. doi: 10.1145/3672359.3672364.

Sander Schulhoff, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, Chenglei Si, Yinheng Li, Aayush Gupta, HyoJung Han, Sevien Schulhoff, Pranav Sandeep Dulepet, Saurav Vidyadhara, Dayeon Ki, Sweta Agrawal, Chau Pham, Gerson Kroiz, Feileen Li, Hudson Tao, Ashay Srivastava, Hevander Da Costa, Saloni Gupta, Megan L. Rogers, Inna Goncearenco, Giuseppe Sarli, Igor Galynker, Denis Peskoff, Marine Carpuat, Jules White, Shyamal Anadkat, Alexander Hoyle, and Philip Resnik. The prompt report: A systematic survey of prompting techniques, 2024. URL https://arxiv.org/abs/2406.06608v3.

M. Siino and I. Tinnirello. GPT hallucination detection through prompt engineering. In Faggioli G., Ferro N., Galuscakova P., and de Herrera A.G.S., editors, CLEF 2024: Conference and Labs of the Evaluation Forum, volume 3740, pages 712–721. CEUR-WS, 2024. URL https://www.scopus.com/inward/record.uri?eid=2-s2.0-85201630710&partnerID=40&md5=b9f52dd225e44f2f74ee40871bd0b9d9.

The Fountain Institute. UX research canvas. https://www.thefountaininstitute.com/ux-research-canvas, 2020. Accessed: 2024-12-05.

Antonia Tolzin, Nils Knoth, and Andreas Janson. Worked examples to facilitate the development of prompt engineering skills. In ECIS 2024 Proceedings, 2024. URL https://aisel.aisnet.org/ecis2024/track13_learning_teach/track13_learning_teach/10.

Catherine Tony, Nicolás E. Díaz Ferreyra, Markus Mutas, Salem Dhiff, and Riccardo Scandariato. Prompting techniques for secure code generation: A systematic investigation, 2024. URL https://arxiv.org/abs/2407.07064v1.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023.

Shubham Vatsal and Harsh Dubey. A survey of prompt engineering methods in large language models for different NLP tasks, 2024.

Verband der Hochschullehrerinnen und Hochschullehrer für Betriebswirtschaft e.V. VHB-Rating 2024 für Publikationsmedien, Teilrating Wirtschaftsinformatik (WI), 2024. URL https://vhbonline.org/fileadmin/user_upload/VHB_Rating_2024_Area_rating_WI.pdf.

Jan vom Brocke, Alexander Simons, Björn Niehaves, Kai Riemer, Ralf Plattfaut, and Anne Cleven. Reconstructing the giant: On the importance of rigour in documenting the literature search process. ECIS 2009 Proceedings, 2009. URL https://aisel.aisnet.org/ecis2009/161.

J. Wang, Z. Liu, L. Zhao, Z. Wu, C. Ma, S. Yu, H. Dai, Q. Yang, Y. Liu, S. Zhang, E. Shi, Y. Pan, T. Zhang, D. Zhu, X. Li, X. Jiang, B. Ge, Y. Yuan, D. Shen, T. Liu, and S. Zhang. Review of large vision models and visual prompt engineering. Meta-Radiology, 1(3), 2023a. doi: 10.1016/j.metrad.2023.100047.

L. Wang, X. Chen, X. Deng, H. Wen, M. You, W. Liu, Q. Li, and J. Li. Prompt engineering in consistency and reliability with the evidence-based guideline for LLMs. npj Digital Medicine, 7(1), 2024. doi: 10.1038/s41746-024-01029-4.
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models, 2023b.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023c.

E. Watson, T. Viana, and S. Zhang. Augmented behavioral annotation tools, with application to multimodal datasets and models: A systematic review. AI (Switzerland), 4(1):128–171, 2023. doi: 10.3390/ai4010007.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.

Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C. Schmidt. A prompt pattern catalog to enhance prompt engineering with ChatGPT, 2023.

Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, Jian-guang Lou, and Shuai Ma. Re-reading improves reasoning in large language models, 2024.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023.

Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed H. Chi, and Denny Zhou. Large language models as analogical reasoners, 2024.

Qinyuan Ye, Maxamed Axmed, Reid Pryzant, and Fereshte Khani. Prompt engineering a prompt engineer, 2024.

Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers, 2023a.

Yucheng Zhou, Xiubo Geng, Tao Shen, Chongyang Tao, Guodong Long, Jian-Guang Lou, and Jianbing Shen. Thread of thought unraveling chaotic contexts, 2023b.
A Appendix

Table 8: Identified databases with respective search terms.

AIS eLibrary:
1) "prompt-engineering"; 2) "prompt engineering"; 3) "prompt techniques"; 4) "prompt designs"; 5) "prompt design"; 6) "prompt patterns"; 7) "prompt pattern"; 8) "prompt strategies"; 9) "prompt strategy"; 10) "prompt methods"
(The search in the AIS eLibrary was carried out individually for each term, as the website did not support OR operators.)

ACM Digital Library:
Title:("prompt-engineering" OR "prompt engineering" OR "prompt techniques" OR "prompt designs" OR "prompt design" OR "prompt patterns" OR "prompt pattern" OR "prompt strategies" OR "prompt strategy" OR "prompt methods")

IEEE Xplore:
("Document Title":prompt-engineering) OR ("Document Title":prompt engineering) OR ("Document Title":prompt techniques) OR ("Document Title":prompt designs) OR ("Document Title":prompt design) OR ("Document Title":prompt patterns) OR ("Document Title":prompt pattern) OR ("Document Title":prompt strategies) OR ("Document Title":prompt strategy) OR ("Document Title":prompt methods)

Scopus:
(TITLE("prompt-engineering") OR TITLE("prompt engineering") OR TITLE("prompt techniques") OR TITLE("prompt designs") OR TITLE("prompt design") OR TITLE("prompt patterns") OR TITLE("prompt pattern") OR TITLE("prompt strategies") OR TITLE("prompt strategy") OR TITLE("prompt methods"))

arXiv:
"prompt-engineering" OR "prompt engineering" OR "prompt techniques" OR "prompt designs" OR "prompt design" OR "prompt patterns" OR "prompt pattern" OR "prompt strategies" OR "prompt strategy" OR "prompt methods"
