Prompt Format Commands

 

The Prompt Engineer's Lexicon: A Definitive Guide to 40+ Commands for Mastering Large Language Models



Introduction: From Art to Engineering


Prompt engineering has rapidly evolved from an intuitive art into a critical engineering discipline essential for harnessing the full capabilities of large language models (LLMs).1 As generative artificial intelligence (GenAI) systems become more integrated into diverse industries and research domains, the ability to guide their behavior with precision is paramount.3 The interaction with these models, which fundamentally operate by predicting the next most likely sequence of words based on a given input, is governed by the prompt.4 Consequently, the structure and content of this input—the prompt—are the primary levers for controlling the model's output.5

This guide moves beyond the perception of prompt crafting as mere guesswork, reframing it as a systematic practice built on a lexicon of actionable techniques, principles, and architectural patterns.6 These methods, referred to here as "prompt commands," constitute a comprehensive toolkit for developers, researchers, and practitioners. Mastering this lexicon enables the construction of more accurate, reliable, and sophisticated AI applications by transforming ambiguous requests into precise, machine-interpretable instructions.8

This report provides a structured and exhaustive overview of more than 40 distinct prompt engineering commands. It is organized into five parts, beginning with the universal principles that form the bedrock of any effective prompt. It then progresses to structural and formatting commands, techniques for in-context learning, methods for eliciting complex reasoning, and finally, advanced architectures for building multi-stage, self-optimizing AI systems. By understanding and applying these commands, practitioners can systematically enhance model performance, mitigate limitations such as factual inaccuracies, and unlock new capabilities in areas ranging from complex problem-solving to creative content generation.1


The Prompt Engineering Command Lexicon


The following table serves as a quick-reference index to the 40 commands detailed in this report. It provides a high-level map of the techniques, categorized by their primary function, allowing for both linear learning and targeted, non-linear reference.

Command # | Command Name | Category | Concise Definition
1 | Be Specific, Descriptive, and Detailed | Foundational | Provide precise details on desired context, outcome, length, format, and style.
2 | Provide Essential Context and Background | Foundational | Include relevant facts, data, or source information to ground the model.
3 | Define the Target Audience | Foundational | Specify the intended audience to guide the model's tone and complexity.
4 | Specify the Output Format | Foundational | Clearly define the desired structure of the output (e.g., list, Markdown).
5 | Specify Length and Constraints | Foundational | Set concrete limits on the output's length (e.g., word count, sentences).
6 | Use Affirmative Directives | Foundational | Instruct the model on what to do, rather than what not to do.
7 | Employ Iterative Refinement | Foundational | Treat prompt design as a cycle of testing, analyzing output, and refining the input.
8 | Use Clear, Simple Language | Foundational | Avoid ambiguous jargon to ensure the model correctly interprets the request.
9 | Set a Clear Goal or Objective | Foundational | Ensure the prompt has a single, well-defined purpose to avoid unfocused output.
10 | Provide Evaluation Criteria | Foundational | Ask the model to assess its own output against criteria you provide.
11 | Use Delimiters | Structural | Employ separators (e.g., ###, """) to distinguish different parts of the prompt.
12 | Position Instructions at the Beginning | Structural | Place the primary task instruction at the start of the prompt.
13 | Assign a Persona or Role | Structural | Instruct the model to adopt a specific persona (e.g., "Act as an expert...").
14 | Structure as a Conversation | Structural | Use System, User, and Assistant roles in chat-based models.
15 | Prime the Output with Leading Words | Structural | End the prompt with the beginning of the desired response to cue the format.
16 | Request Structured Data | Structural | Explicitly ask for machine-readable formats like JSON or XML with a schema.
17 | Create a Scenario or Narrative Context | Structural | Frame the request within a story or situation to generate richer responses.
18 | Give the Model an "Out" | Structural | Provide an alternative response path if the task cannot be completed.
19 | Zero-Shot Prompting | In-Context Learning | Ask the model to perform a task without providing any examples.
20 | One-Shot Prompting | In-Context Learning | Provide a single example of the desired input-output pair.
21 | Few-Shot Prompting | In-Context Learning | Provide multiple examples (exemplars) to demonstrate the task.
22 | Chain-of-Thought (CoT) Prompting | Advanced Reasoning | Guide the model to break down a problem into intermediate reasoning steps.
23 | Zero-Shot CoT | Advanced Reasoning | Trigger step-by-step reasoning by adding "Let's think step by step."
24 | Self-Consistency | Advanced Reasoning | Generate multiple reasoning paths and select the most consistent answer.
25 | Tree of Thoughts (ToT) | Advanced Reasoning | Allow the model to explore and evaluate multiple reasoning branches.
26 | Graph of Thoughts (GoT) | Advanced Reasoning | Generalize ToT to a graph structure, allowing for more complex reasoning paths.
27 | Generate Knowledge Prompting | Advanced Reasoning | Instruct the model to generate relevant facts before answering the main query.
28 | Directional Stimulus Prompting | Advanced Reasoning | Use subtle hints or cues to guide the model's output without being explicit.
29 | Plan-and-Solve Prompting | Advanced Reasoning | Instruct the model to first create a plan and then execute it to solve a problem.
30 | Re-Reading (Recitation) | Advanced Reasoning | Prompt the model to recite or re-read context to improve comprehension.
31 | Prompt Chaining | Advanced Architectures | Break a complex task into a sequence of prompts, using outputs as inputs.
32 | Reflection and Self-Correction | Advanced Architectures | Use a subsequent prompt to ask the model to critique and improve its own output.
33 | Meta-Prompting | Advanced Architectures | Ask the model to generate or refine a prompt for a given task.
34 | Clarification Prompting | Advanced Architectures | Instruct the model to ask questions when a prompt is ambiguous.
35 | Retrieval Augmented Generation (RAG) | Advanced Architectures | Augment the prompt with externally retrieved data from a knowledge base.
36 | Agentic Prompting and Tool Use | Advanced Architectures | Enable the model to use external tools (APIs, code execution) to complete tasks.
37 | Automatic Prompt Engineer (APE) | Advanced Architectures | Use an LLM to automatically generate and select optimal prompts for a task.
38 | Active-Prompt | Advanced Architectures | Dynamically select the most relevant few-shot examples for a given query.
39 | Program-Aided Language Models (PAL) | Advanced Architectures | Instruct the LLM to write and execute code to derive an answer.
40 | Multimodal Prompting | Advanced Architectures | Craft prompts that combine text with other modalities like images or audio.


Part I: The Bedrock of Effective Prompting: Foundational Commands


This section details the universal principles that are prerequisites for any effective prompt. These foundational commands are not advanced techniques but rather the essential building blocks upon which all successful prompt engineering is based. Mastering them is the first and most critical step toward achieving reliable and high-quality outputs from any large language model.


Command 1: Be Specific, Descriptive, and Detailed


The most universally cited principle in prompt engineering is the need for specificity. Vague or ambiguous prompts force the model to make assumptions, which often leads to generic, irrelevant, or incorrect responses.9 To guide the model effectively, the prompt must be as precise and descriptive as possible regarding the desired context, outcome, length, format, and style.10 Longer, more detailed prompts generally provide better context and clarity than shorter ones.12

This principle can be understood by considering the model's objective: to generate the most probable sequence of text following the prompt. A detailed prompt creates a highly specific statistical context, narrowing the range of probable and acceptable outputs. For instance, the prompt "Write a poem about OpenAI" is highly underspecified.10 The model must guess the desired tone (celebratory, critical, technical?), form (sonnet, haiku, free verse?), and focus.

A more effective prompt provides these details explicitly: "Write a short, inspiring poem about OpenAI, focusing on the recent DALL-E product launch (DALL-E is a text-to-image ML model) in the style of a {famous poet}".10 This command provides multiple layers of specificity: a topic (OpenAI), a sub-topic (DALL-E launch), a tone (inspiring), a length constraint (short), and a stylistic guide (in the style of a famous poet). Each detail added to the prompt further constrains the model's vast output space, dramatically increasing the likelihood of generating a response that aligns with the user's intent.


Command 2: Provide Essential Context and Background


LLMs do not possess real-world understanding or up-to-the-minute knowledge; they operate solely on the information provided within the prompt and their pre-trained data. Providing relevant context, facts, data, or background information is therefore crucial for grounding the model's response in a specific reality.13 This command involves augmenting the prompt with the necessary information for the model to perform its task accurately. This can include defining key terms, referencing specific source documents, or including relevant data points.13

For example, asking a generic question like "Discuss the consequences of rising temperatures" may yield a general, textbook-style answer. However, providing specific context focuses the model's attention and elicits a more targeted and useful response. A better prompt would be: "Given that global temperatures have risen by 1 degree Celsius since the pre-industrial era, discuss the potential consequences for sea level rise".13 This prompt provides a key piece of data (a 1 °C rise) and narrows the scope of the consequences to a specific area (sea level rise), guiding the model to generate a more analytical and relevant output. Similarly, for tasks involving document analysis, the prompt should explicitly reference the source: "Based on the attached financial report, analyze the company's profitability over the past five years".13


Command 3: Define the Target Audience


The intended audience of a response significantly influences its appropriate tone, vocabulary, and level of complexity. An explanation of quantum computing for a non-technical audience should be vastly different from one intended for graduate physics students. Explicitly defining the target audience in the prompt is a powerful command for controlling these stylistic and substantive aspects of the output.13

This command acts as a direct instruction to the model on how to tailor its communication style. For instance, a prompt could specify: "Explain the concept of quantum computing in simple terms, suitable for a non-technical audience".13 This instructs the model to avoid jargon, use analogies, and prioritize conceptual clarity over technical depth. Conversely, a prompt could state: "Describe the principles of quantum superposition and entanglement for an audience of undergraduate physics majors." This would signal the model to use more technical language and assume a higher level of foundational knowledge. This technique is essential for tasks ranging from marketing and communications ("Write a product description for a new line of organic skincare products, targeting young adults concerned with sustainability" 13) to education and technical writing.


Command 4: Specify the Output Format


For many applications, particularly those that involve programmatic processing of an LLM's output, the structure of the response is as important as its content. Explicitly specifying the desired output format is a critical command for ensuring the model's generation is immediately usable and reliable.7 This can range from simple formats like a bulleted list or a numbered list to more complex, machine-readable structures like JSON, XML, or Markdown tables.12

When the desired format is not specified, the model will default to a narrative or paragraph style, which can be difficult to parse automatically. A less effective prompt might say, "List the key findings." A far more effective prompt provides a clear template: "Summarize the text below as a bullet point list of the most important points".10 For more structured data extraction, providing an explicit schema is best practice. For example: "Extract the important entities mentioned in the text below. Desired format: Company names: <comma_separated_list_of_company_names>, People names: <comma_separated_list_of_people_names>, Specific topics: <comma_separated_list_of_topics>, General themes: <comma_separated_list_of_general_themes>".10 This level of format specification leaves little room for ambiguity and ensures the output can be reliably integrated into downstream software applications.7
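
When the prompt is issued programmatically, the format instruction is simply part of the request. The following is a minimal sketch using the OpenAI Python SDK; the model name and the placeholder article text are assumptions, and any chat-completion API would work the same way.

```python
# Minimal sketch: asking for a specific output format (a bullet list).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and article text are placeholders.
from openai import OpenAI

client = OpenAI()

article = "Paste the source text to be summarized here."

prompt = (
    "Summarize the text below as a bullet point list of the most important points.\n\n"
    f'Text: """{article}"""'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```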


Command 5: Specify Length and Constraints


Controlling the length and scope of the model's output is essential for creating concise, focused, and relevant content. Imprecise descriptions of length, such as "fairly short" or "not too much," are ineffective because they are subjective and difficult for the model to interpret.10 Instead, this command requires providing concrete, quantifiable constraints.

Effective length specifications can be given in terms of word count, sentence count, or paragraph count. For example, "Compose a 500-word essay" provides a clear target for the model.13 Similarly, "Use a 3 to 5 sentence paragraph to describe this product" sets a precise and actionable boundary.10 These constraints help prevent the model from generating overly verbose or unnecessarily brief responses. In addition to length, other constraints can be applied, such as limiting the response to information found only within a provided text, which helps to reduce the risk of the model introducing outside information or "hallucinating" facts.


Command 6: Use Affirmative Directives


The way instructions are framed has a significant impact on model behavior. A key principle is to use affirmative directives, telling the model what it should do, rather than what it should not do.10 This is not merely a matter of stylistic preference; it is a direct consequence of how LLMs are architected. These models are probabilistic sequence generators, trained to predict the next word based on the patterns in their training data.4 They do not process negations or rules in the same logical, deterministic way as traditional software.

A negative command, such as "DO NOT ASK USERNAME OR PASSWORD," introduces tokens ("not," "username," "password") that are statistically associated with the very action one is trying to prevent. This can create a confusing signal for the model, which must learn to "steer away" from a region in its vast probability space, an operation that is often less reliable. In contrast, an affirmative directive provides a clear, positive target. An improved version of the previous prompt would be: "The agent will attempt to diagnose the problem... whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article www.samplewebsite.com/help/faq".10 This command gives the model a constructive alternative, creating a strong probabilistic path toward the desired behavior. It is about creating the path of least resistance toward the correct output, rather than attempting to build fences around incorrect ones.


Command 7: Employ Iterative Refinement


Prompt engineering is rarely a one-shot process; it is an iterative cycle of design, testing, and refinement.9 The most effective prompts are often the result of progressive enhancement. This command embodies the workflow of starting with a simple, direct prompt, analyzing the model's response, and then incrementally adding more elements—such as context, examples, specificity, and constraints—to improve the results.11

This iterative approach is vital for several reasons. First, it helps in diagnosing why a prompt may be underperforming. By adding one element at a time, the engineer can observe its effect on the output. Second, it prevents over-engineering the prompt from the start. A simple prompt may be sufficient for a given task, and starting simply avoids unnecessary complexity. The process typically looks like this:

  1. Initial Prompt: Start with a basic instruction.

  2. Review Output: Analyze the response for accuracy, style, and format. Identify any deviations from the desired outcome.

  3. Refine Prompt: Based on the review, modify the prompt. This could involve adding more specific instructions (Command 1), providing context (Command 2), defining a format (Command 4), or including examples (Command 21).

  4. Repeat: Continue this cycle until the model consistently produces the desired output. This methodical process of refinement is fundamental to moving from a proof-of-concept to a production-ready application.


Command 8: Use Clear, Simple Language


Unless the task specifically involves a specialized domain, prompts should be written in clear, simple, and accessible language. The use of unnecessary jargon, complex sentence structures, or ambiguous vocabulary can confuse the model and lead to misinterpretation of the user's intent.12 The goal is to minimize the cognitive load on the model, ensuring that the instruction is as unambiguous as possible.

This principle is particularly important because LLMs do not "understand" language in a human sense; they process it as a sequence of tokens and statistical relationships. Ambiguity in the prompt introduces noise into this process. For example, using a highly technical term when a simpler synonym exists might cause the model to respond in an overly technical or incorrect way, especially if the term has multiple meanings across different domains. Sticking to straightforward language ensures that the prompt's core instruction is the strongest signal the model receives.


Command 9: Set a Clear Goal or Objective


An effective prompt is typically focused on a single, well-defined objective.12 Attempting to accomplish multiple, disparate tasks within a single prompt can lead to a diluted, unfocused, or incomplete response. The model may struggle to prioritize the different parts of the request or may only address one part of it thoroughly.

This command is closely related to the principle of dividing complex tasks (Command 31). If a task involves several distinct steps, such as summarizing a document, extracting key entities from the summary, and then classifying the sentiment of the original document, it is generally more effective to break this down into a chain of separate prompts rather than a single, convoluted one. Each prompt in the chain would have a clear, singular goal. This approach not only improves the reliability of each step but also makes the overall workflow more modular and easier to debug.


Command 10: Provide Evaluation Criteria


A more advanced foundational technique involves instructing the model to evaluate its own output based on a set of criteria provided within the prompt itself. This command forces the model to perform a degree of self-assessment and can lead to higher-quality, more refined responses.7 It essentially adds a meta-cognitive layer to the generation process.

For example, when asking for creative outputs like marketing taglines, one could include an evaluation step in the prompt: "Generate 5 catchy taglines for a new line of sustainable, plant-based protein bars. After generating each tagline, rate it on a scale of 1-10 for each of the following criteria: – Memorability: How likely is it to stick in someone's mind? – Relevance: How well does it communicate the product's key benefits? – Emotional Appeal: How effectively does it evoke positive feelings? – Uniqueness: How different is it from common taglines in the health food industry?".7 This prompt structure does two things: it generates the desired content (taglines) and provides a structured analysis of that content. This self-evaluation can help the user quickly identify the most promising options and provides a framework for further refinement.


Part II: Architecting the Prompt: Structure, Role, and Format


Beyond the content of the instructions, the structural organization of the prompt itself is a powerful tool for guiding a large language model. This part focuses on the "scaffolding" of a prompt—the syntactical and architectural elements that shape the interaction, reduce ambiguity, and direct the model's interpretation of the task. These commands are about designing the container for the content.


Command 11: Use Delimiters to Separate Prompt Components


Complex prompts often contain multiple distinct components: instructions, context, examples, and the final query. To prevent the model from confusing these components, it is a best practice to use clear delimiters to separate them.10 Delimiters are sequences of characters that create a visual and structural boundary within the prompt text. Common choices include triple quotes ("""), triple backticks (```), triple hashes (###), or XML-style tags (<context>, </context>).

Using delimiters makes the structure of the prompt explicit, helping the model understand which part of the text is the instruction, which part is the data to be processed, and so on. For example, a summarization prompt is significantly improved with delimiters:

Less effective: Summarize the text below as a bullet point list of the most important points. {text input here}

Better: Summarize the text below as a bullet point list of the most important points. Text: """{text input here}""".10

This clear separation is crucial for tasks that involve processing user-provided text, as it also serves as a basic defense against prompt injection attacks, where a user might include malicious instructions within the text to be processed.12
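
In application code, delimiters are easiest to apply when the prompt is assembled from parts. The snippet below is a plain Python sketch (the variable names are illustrative) that fences untrusted user text inside triple quotes so it reads as data rather than instructions.

```python
# Illustrative sketch: assembling a prompt with delimiters so the instruction
# and the data to be processed remain clearly separated.
instruction = "Summarize the text below as a bullet point list of the most important points."
user_supplied_text = "Any text pasted in by an end user, which may itself contain instructions."

prompt = f'''{instruction}

Text: """
{user_supplied_text}
"""'''

print(prompt)  # the delimited block marks the text as data, not as instructions
```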


Command 12: Position Instructions at the Beginning


The order of information within a prompt can influence the model's output, a phenomenon sometimes related to recency bias.4 A widely adopted best practice is to place the primary instruction or task command at the beginning of the prompt.10 This approach "primes" the model on its objective from the outset, setting the context for all subsequent information it processes.

By stating the main goal first—for example, "Translate the text below to Spanish:" followed by the text—the model immediately knows its primary function. All the text that follows is then interpreted through the lens of that instruction. While some models may be sensitive to information placed at the end of a prompt, starting with the core directive is a reliable and robust strategy for ensuring the model's focus is correctly aligned with the user's primary intent. Experimenting with repeating the instruction at the end can also be a useful strategy to test for and mitigate recency bias.4


Command 13: Assign a Persona or Role


Instructing the model to adopt a specific persona or role is one of the most powerful and widely used prompting techniques.16 By beginning a prompt with a command like "You are an expert copywriter," "Act as a senior SQL developer," or "You are an experienced wildlife biologist specializing in trees," the user can guide the model's response style, tone, and knowledge base.14 This technique is also known as role-based prompting.

The effectiveness of this command extends beyond mere stylistic imitation. It functions as a powerful method of "context-slicing." An LLM is trained on a vast and diverse corpus of text, containing everything from expert-level research papers to casual forum discussions.4 A generic prompt forces the model to average its response across this entire spectrum. Assigning a persona, however, acts as a filter. The tokens "expert copywriter" are statistically associated with a specific subset of the training data characterized by persuasive language, marketing knowledge, and a particular communication style. This command effectively instructs the model to prioritize the knowledge, vocabulary, and patterns found within that relevant subspace of its training data. This context-slicing dramatically reduces the search space for the "next most likely word," leading to a response that is not only stylistically appropriate but also more likely to be factually accurate and detailed within that specific domain.


Command 14: Structure as a Conversation


Modern chat-based LLMs are optimized for conversational interaction, and their APIs often expose a structured format of alternating roles, typically system, user, and assistant.4 Effectively using these roles is a key structural command.

  • The system message: This message is used to set the overall behavior, persona, and high-level instructions for the model throughout the entire conversation. It is the place to put commands like "You are a helpful assistant who always responds in JSON format" or to define the model's persona (Command 13).14

  • The user message: This represents the input from the end-user for a given turn in the conversation.

  • The assistant message: This represents the model's response. This role is also critically important for providing few-shot examples (Command 21). By crafting a sequence of user/assistant pairs in the initial prompt, the developer can demonstrate the desired conversational pattern or task format.4

Structuring the prompt using this conversational format allows for more complex and stateful interactions than a single, monolithic prompt, and it aligns directly with the model's fine-tuning for dialogue.
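
A minimal sketch of this role structure, using the OpenAI Python SDK, is shown below; the persona, the worked example, and the model name are all illustrative assumptions.

```python
# Minimal sketch of the system / user / assistant message structure.
# Assumes the OpenAI Python SDK; persona, example, and model name are illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    # system: persona and standing instructions for the whole conversation
    {"role": "system", "content": "You are a helpful assistant who always responds in JSON format."},
    # a worked user/assistant pair doubles as a one-shot example
    {"role": "user", "content": "Classify the sentiment: 'This is awesome!'"},
    {"role": "assistant", "content": '{"sentiment": "positive"}'},
    # the actual query for this turn
    {"role": "user", "content": "Classify the sentiment: 'Wow, that movie was rad!'"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```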


Command 15: Prime the Output with Leading Words


A subtle yet highly effective structural command is to end the prompt with the beginning of the desired response. This technique, also known as using a "cue," acts as a jumpstart for the model's output, strongly nudging it toward the correct format or content.4 The model's core function is to continue the text it is given, so providing the first few words of the expected output creates a powerful probabilistic path.

For example, if the desired output is a bulleted list, ending the prompt with "Here is the bulleted list of key points:\n-" strongly encourages the model to begin generating list items.4 In the context of code generation, a prompt designed to create a Python function can be made more reliable by ending with the word import. This primes the model to begin with the necessary library imports, a common pattern in Python code.10 Similarly, for a SQL query, ending with SELECT cues the model to start writing a query. This command is a simple way to increase the reliability of formatted output.
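
Because the model simply continues the text it receives, priming can be implemented by appending the cue to the end of the prompt string, as in this small illustrative sketch:

```python
# Illustrative sketch: priming the output by ending the prompt with a cue.
task = "List the key points of the meeting notes above."
cue = "Here is the bulleted list of key points:\n-"

primed_prompt = f"{task}\n\n{cue}"
# Sending primed_prompt to the model strongly encourages it to continue the
# bulleted list rather than reply in free-form prose.
print(primed_prompt)
```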


Command 16: Request Structured Data


For applications that require machine-readable output, it is essential to go beyond simply specifying a format and to explicitly request structured data like JSON, XML, or YAML. This command should ideally be paired with an example of the desired schema to ensure the model produces a valid and parsable output.7

A prompt for this purpose might look like: "Generate 5 ideas for eco-friendly product innovations. Present each idea in the following JSON format: { "productName": "Name of the product", "briefDescription": "A one-sentence description", "targetMarket": "Primary intended users" }".7 Providing the schema within the prompt serves as a clear template for the model to follow. This is far more robust than simply asking for "a list," as it defines the keys, expected value types (implicitly), and overall structure, making the output programmatically reliable and ready for integration into databases, APIs, or front-end applications.
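
The sketch below shows one way to request and parse such output with the OpenAI Python SDK; the model name and schema are illustrative, and real systems should validate or retry if the reply is not valid JSON.

```python
# Minimal sketch: requesting JSON that matches an explicit schema and parsing it.
# Assumes the OpenAI Python SDK; the model name and schema are illustrative.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate 5 ideas for eco-friendly product innovations. "
    "Respond with a JSON array only, where each element has this shape:\n"
    '{"productName": "Name of the product", '
    '"briefDescription": "A one-sentence description", '
    '"targetMarket": "Primary intended users"}'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

ideas = json.loads(response.choices[0].message.content)  # may need validation/retry in practice
for idea in ideas:
    print(idea["productName"], "-", idea["targetMarket"])
```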


Command 17: Create a Scenario or Narrative Context


Instead of issuing a dry, direct command, framing the request within a scenario or narrative can elicit more detailed, creative, and context-aware responses.12 This technique works by providing the model with a richer situational context, which helps it to infer details and constraints that might not be explicitly stated.

For example, instead of the direct command "List marketing channels for an eco-friendly water bottle," a more effective approach is to create a scenario: "We are going to create a marketing plan for a new eco-friendly water bottle. We will do this in steps. For the first step, define three detailed buyer personas for our product, including their demographics, psychographics, and key pain points".7 This narrative framing transforms the interaction from a simple query to a collaborative task. It helps the model understand the broader goal and generates a more thoughtful and strategically aligned response. This is particularly effective for complex tasks that require planning or creative brainstorming.


Command 18: Give the Model an "Out"


LLMs are designed to be helpful and will often attempt to answer a question even if they do not have the correct information, which can lead to factual inaccuracies or "hallucinations." A crucial command for improving reliability and reducing fabricated answers is to provide the model with an explicit alternative path—an "out"—if it cannot complete the task as requested.4

This is typically done by adding a conditional instruction to the prompt. For example, in a question-answering task based on a provided document, the prompt should include a directive such as: "Based on the text provided, answer the following question. If the answer is not present in the text, respond with 'Information not found.'".4 This command gives the model permission to not know the answer, which is a critical safeguard. Without this "out," the model's internal weighting might favor generating a plausible-sounding but incorrect answer over admitting a lack of information. This simple addition significantly improves the factuality and trustworthiness of the model's responses in closed-domain tasks.


Part III: In-Context Learning: Guiding by Example


In-context learning is a powerful capability of large language models where the model learns to perform a task simply by being shown examples (demonstrations or exemplars) within the prompt itself.18 This form of learning is transient—it conditions the model for the current inference only and does not permanently update its weights.4 The commands in this section represent a hierarchy of intervention, allowing the engineer to guide the model by providing zero, one, or multiple examples of the desired behavior.

This progression from zero-shot to few-shot prompting, and ultimately to fine-tuning, represents a clear and strategic trade-off. It is an optimization problem where the developer must balance the desired performance level against the increasing cost, complexity, and effort required for each method.

  1. Zero-Shot Prompting is the most economical and fastest approach. It requires minimal prompt design effort and consumes the fewest tokens. It is the first and most efficient step, leveraging the model's generalized pre-trained knowledge.

  2. Few-Shot Prompting represents a moderate intervention. It requires more effort to craft high-quality examples and consumes more tokens in the prompt context. However, it offers a significant performance boost for tasks where the general model is insufficient, without incurring the computational and data management costs of retraining.

  3. Fine-Tuning (a related but distinct process beyond prompting) is the most expensive and complex option. It requires a curated dataset, computational resources for training, and ongoing model management. It is the "heavy machinery" used when in-context learning hits its performance ceiling for a specific, high-value, and repetitive task.
    Understanding this hierarchy allows a developer to make a rational, cost-benefit decision, asking: "What is the minimum level of intervention required to achieve the target performance?"


Command 19: Zero-Shot Prompting


Zero-shot prompting is the most fundamental form of interaction with an LLM. It involves instructing the model to perform a task without providing any prior examples of how to do it.5 The model must rely entirely on its pre-trained knowledge and its ability to generalize from the instruction itself.4 For many straightforward tasks—such as summarization, translation, or answering general knowledge questions—zero-shot prompting is often sufficient.

An example of a zero-shot prompt is: "Extract keywords from the below text. Text: {text} Keywords:".10 The model is expected to understand the concept of "keywords" and apply it to the provided text without having been shown an example. The success of zero-shot prompting is a testament to the powerful generalization capabilities of modern LLMs. It should always be the starting point for any new task, as it is the simplest and most efficient method.10 If it yields satisfactory results, no further complexity is needed.


Command 20: One-Shot Prompting


When a task is slightly more ambiguous or requires a specific output format that is not easily described, providing a single example—a technique known as one-shot prompting—can significantly improve performance.5 A single demonstration is often enough to clarify the user's intent and guide the model toward the correct output structure or style.

For instance, if a zero-shot prompt for a sentiment classification task is not performing well, a one-shot prompt can provide the necessary clarity.

Prompt:

This is awesome! // Positive

Wow that movie was rad! //

The model can infer from the single example that it is expected to classify the sentiment of the sentence and use the // {Sentiment} format. One-shot prompting is particularly useful for teaching the model novel tasks or specific formatting conventions with minimal prompt overhead.18


Command 21: Few-Shot Prompting


Few-shot prompting extends the concept of one-shot prompting by providing multiple examples (typically 2-5, but sometimes more) of the desired input-output pairs.19 This is a highly effective technique for conditioning the model on more complex tasks, nuanced classifications, or specific stylistic patterns where a single example is insufficient.17 The multiple demonstrations provide a richer context for the model to learn from, enabling it to better understand the underlying pattern of the task.4

For example, to extract keywords from texts, a few-shot prompt would provide several text-keyword pairs before presenting the final text for which keywords are needed:

Prompt:

Extract keywords from the corresponding texts below.

Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.

Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications

##

Text 2: OpenAI has trained cutting-edge language models that are very good at understanding and generating text. Our API provides access to these models and can be used to solve virtually any task that involves processing language.

Keywords 2: OpenAI, language models, text processing, API

##

Text 3: {text}

Keywords 3:.10

Interestingly, research has shown that the format and the distribution of the labels in the examples are often more important for performance than the correctness of the labels themselves.18 This suggests that few-shot prompting is less about teaching the model new facts and more about demonstrating the expected structure and nature of the task.
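
One common way to express few-shot exemplars with chat-based APIs is as alternating user/assistant messages, as in the minimal sketch below; the exemplars and the model name are illustrative.

```python
# Minimal sketch: few-shot keyword extraction expressed as alternating
# user/assistant messages. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Extract keywords from the text the user provides, as a comma-separated list."},
    {"role": "user", "content": "Text: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications."},
    {"role": "assistant", "content": "Keywords: Stripe, payment processing, APIs, web developers, websites, mobile applications"},
    {"role": "user", "content": "Text: OpenAI has trained cutting-edge language models that are very good at understanding and generating text."},
    {"role": "assistant", "content": "Keywords: OpenAI, language models, understanding text, generating text"},
    {"role": "user", "content": "Text: {text}"},  # the new input to label
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```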


Part IV: Eliciting Complex Reasoning and Analysis


While foundational and structural commands can elicit high-quality information and formatted output, a distinct class of advanced techniques is required to unlock an LLM's ability to perform complex, multi-step reasoning. These commands are designed to guide the model through logical problem-solving processes, moving beyond simple information retrieval to genuine analysis and deduction.

The core principle unifying these techniques is the externalization of the model's "thought process." A standard prompt asks for a final answer, leaving the model's internal reasoning as an un-inspectable "black box".11 If an error occurs, it is difficult to diagnose. Advanced reasoning techniques fundamentally change the task from "find the answer" to "write a step-by-step explanation that leads to the answer." This leverages the model's primary strength—generating coherent, sequential text—to scaffold and improve its primary weakness, which is performing implicit, un-inspectable reasoning. Each reasoning step the model generates becomes part of the context for the next step, creating a feedback loop where the model's own output guides its subsequent generation. This externalization breaks a complex problem into simpler sub-problems, makes the process transparent and debuggable, and keeps the model focused.


Command 22: Chain-of-Thought (CoT) Prompting


Chain-of-Thought (CoT) prompting is a seminal technique that encourages the model to break down a complex problem into a series of intermediate, logical steps before arriving at a final answer.19 It is particularly effective for tasks requiring arithmetic, commonsense, and symbolic reasoning, where a direct leap to the answer is prone to error.5 CoT can be implemented in a few-shot setting by providing examples that include not just the question and answer, but also the step-by-step reasoning used to derive the answer.18

For example, when asked a math word problem, a standard prompt might lead to an incorrect answer. A CoT prompt, however, would demonstrate the reasoning process:

Standard Prompt Example:

Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?

A: The answer is 11.

CoT Prompt Example:

Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?

A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

By showing the model how to think through the problem, it is better able to apply a similar reasoning process to new, unseen problems.21


Command 23: Zero-Shot CoT


One of the most remarkable findings in prompt engineering is that the benefits of Chain-of-Thought reasoning can often be triggered without providing any examples. This technique, known as Zero-Shot CoT, involves simply appending a magical phrase to the end of the prompt: "Let's think step by step".18

This simple command acts as a trigger, prompting the model to externalize its reasoning process and generate a detailed, step-by-step breakdown of how it arrives at a solution, even for a problem it has never seen before.21 For example, given a complex logic puzzle, adding "Let's think step by step" will cause the model to first articulate the premises, then deduce intermediate conclusions, and finally state the final answer based on the generated chain of thought. This technique is incredibly powerful due to its simplicity and effectiveness, making it one of the most valuable tools for improving reasoning in a zero-shot context.
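
In code, the technique amounts to string concatenation. A minimal sketch with the OpenAI Python SDK, where the question and model name are placeholders:

```python
# Minimal sketch: zero-shot chain-of-thought by appending the trigger phrase.
from openai import OpenAI

client = OpenAI()

question = (
    "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question + "\n\nLet's think step by step."}],
)
print(response.choices[0].message.content)  # prints the reasoning followed by the answer
```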


Command 24: Self-Consistency


Self-consistency is an advanced technique that enhances the robustness of Chain-of-Thought prompting. The core idea is to replace the standard "greedy" approach (taking the single most likely output) with an ensemble method.18 Instead of generating just one reasoning path, the model is prompted to generate multiple, diverse chains of thought for the same problem. The final answer is then determined by a majority vote among the outcomes of these different paths.19

For example, for a complex math problem, the model might be run five times with a high "temperature" setting to encourage response diversity. This could result in five different reasoning chains. Three of them might arrive at the correct answer of "67," while two might make calculation errors and arrive at different answers. By taking the majority result, the system selects "67" as the most consistent and likely correct answer.18 This technique significantly boosts performance on tasks involving arithmetic and commonsense reasoning by mitigating the impact of occasional logical or calculation errors in any single reasoning chain.20
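
A minimal sketch of self-consistency follows, using the OpenAI Python SDK. The convention of ending each reply with "Answer: <number>" is an assumption made here so that final answers can be extracted and tallied; the temperature and the number of samples are likewise illustrative.

```python
# Minimal sketch of self-consistency: sample several reasoning chains at a
# higher temperature and take a majority vote over the final answers.
import re
from collections import Counter
from openai import OpenAI

client = OpenAI()

question = (
    "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?"
)
prompt = question + "\n\nLet's think step by step. End your reply with 'Answer: <number>'."

answers = []
for _ in range(5):  # five independent reasoning chains
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # encourage diverse reasoning paths
    )
    text = response.choices[0].message.content
    match = re.search(r"Answer:\s*(-?\d+)", text)
    if match:
        answers.append(match.group(1))

final_answer, votes = Counter(answers).most_common(1)[0]
print(f"Majority answer: {final_answer} ({votes}/{len(answers)} chains agree)")
```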


Command 25: Tree of Thoughts (ToT)


Tree of Thoughts (ToT) is a more advanced and deliberate problem-solving framework that generalizes Chain-of-Thought. While CoT explores a single reasoning path, ToT enables the model to explore multiple reasoning paths simultaneously, like branches on a tree.5 At each step of the problem, the model can generate multiple different "thoughts" or next steps. It can then evaluate the progress made along each of these branches and decide which paths to continue pursuing and which to prune.22

This process allows the model to perform a more strategic exploration of the solution space. It can backtrack from dead ends, compare different approaches, and plan ahead. For example, when solving a logic puzzle, the model might generate three possible next steps. It could then use a "value" prompt to evaluate which of these steps is most likely to lead to a solution, and then focus its subsequent generation on that most promising branch. This makes ToT a much more powerful problem-solving technique for tasks that require exploration, strategic planning, or trial and error.1


Command 26: Graph of Thoughts (GoT)


Graph of Thoughts (GoT) represents a further generalization of these reasoning frameworks, moving from the linear structure of CoT and the tree structure of ToT to a more flexible graph structure.1 In GoT, the "thoughts" (intermediate reasoning steps) are nodes in a graph. This architecture allows for more complex operations, such as merging different reasoning paths or transforming entire branches of thought.

For instance, the model could pursue two different lines of reasoning independently and then, at a later stage, generate a new "thought" that synthesizes the findings from both paths. This allows for a richer and more dynamic problem-solving process, where information can be combined and refined in arbitrary ways, much like a human mind mapping out a complex problem. While more complex to implement, GoT offers a glimpse into more powerful and flexible AI reasoning systems.


Command 27: Generate Knowledge Prompting


This technique addresses the limitation that a model might not retrieve all relevant facts from its memory before answering a question. Generate Knowledge Prompting instructs the model to first generate a set of relevant facts or background knowledge about the topic before attempting to answer the main question.1 This self-generated knowledge is then included as part of the context for the final answer.

For example, a prompt could be structured as: "Before explaining climate change, first list the key scientific principles related to it (e.g., the greenhouse effect, carbon cycle). Once done, use these principles to explain the concept, its causes, and its effects".19 This two-step process primes the model with the necessary information, effectively allowing it to "study up" just before answering. This leads to more informed, accurate, and comprehensive responses, as the final generation is grounded in a context that the model itself has just articulated.18


Command 28: Directional Stimulus Prompting


Directional Stimulus Prompting is a more subtle reasoning technique that involves providing hints, keywords, or cues to guide the LLM toward a desired output without being overly prescriptive or providing a full example.1 It is a way of gently "nudging" the model in the right direction. This can be particularly useful for creative tasks or for guiding the model's focus within a broad topic. For instance, in a prompt asking for a story, one might include a list of keywords like "ancient ruins, forgotten map, solar eclipse" to stimulate a particular narrative direction. This technique helps to shape the model's output while still allowing for a high degree of generative freedom.5


Command 29: Plan-and-Solve Prompting


Similar to Chain-of-Thought, Plan-and-Solve Prompting is a more explicit method for breaking down complex problems. This technique instructs the model to first devise a high-level plan to solve the problem and then execute that plan, showing its work for each step.22 This separates the strategic planning phase from the tactical execution phase. A prompt might state: "First, devise a plan to calculate the total surface area of the object described. Then, execute the steps in your plan to find the final answer." This encourages a more structured and methodical approach to problem-solving and has been shown to improve performance on zero-shot reasoning tasks.


Command 30: Re-Reading (Recitation)


For tasks that involve reasoning over a long and detailed context, models can sometimes fail to recall or correctly utilize information presented earlier in the prompt. Research has shown that prompting the model to "re-read" or recite key parts of the provided context can improve its ability to reason accurately.22 This command forces the model to pay closer attention to the provided information. For example, before asking a question about a long passage of text, one could instruct the model: "First, summarize the main argument of the third paragraph. Now, based on the entire text, answer the following question..." This act of forced recall and summarization appears to strengthen the model's internal representation of the context, leading to better performance on downstream reasoning tasks.


Part V: Advanced Architectures and Self-Optimization


The most sophisticated applications of prompt engineering move beyond single, static prompts to construct dynamic, multi-stage systems. The commands in this section represent architectural patterns and meta-cognitive techniques that treat prompting as a system-level design challenge. These approaches enable the creation of autonomous agents, self-optimizing workflows, and robust, production-grade AI applications.

The emergence of these techniques signals a fundamental shift in the role of the human prompt engineer. The focus is evolving from crafting the perfect execution prompt for a single task to designing the optimal meta-prompt that governs an entire autonomous system. Early prompt engineering focused on micro-managing the model's token-level output for a specific task. The evolution to prompt chaining saw the engineer become a static system architect, designing a fixed workflow. Now, with agentic and automated techniques, the prompt becomes a "mission briefing" for an autonomous agent. The LLM itself becomes the reasoner and planner, tasked with figuring out the steps, using tools, and even designing its own prompts. This trajectory represents a powerful abstraction of human effort, elevating the prompt engineer's role from a hands-on technician to a high-level architect and supervisor of intelligent, autonomous systems.


Command 31: Prompt Chaining


Prompt chaining, also known as creating multi-stage workflows, is the practice of breaking down a complex task into a sequence of simpler prompts, where the output of one prompt serves as the input for the next.5 This architectural pattern is essential for building any non-trivial application and is a more robust alternative to trying to accomplish everything in a single, overly complex prompt.1

For example, creating a comprehensive marketing plan could be broken down into a chain of prompts:

  1. Prompt 1: "Create 3 detailed buyer personas for our new eco-friendly water bottle...".7

  2. Prompt 2: "Based on the personas defined in the previous step: {output of Prompt 1}, craft a compelling Unique Selling Proposition (USP)...".7

  3. Prompt 3: "Using the USP from the previous step: {output of Prompt 2}, suggest 5 marketing channels...".7
    This modular approach allows for validation and correction at each step, improves reliability, and makes the overall system easier to design, debug, and maintain.8
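
A minimal sketch of such a chain, using the OpenAI Python SDK with abbreviated placeholder prompts, might look like this:

```python
# Minimal sketch of a three-step prompt chain for the marketing-plan example.
# Assumes the OpenAI Python SDK; the prompts are abbreviated and illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: personas
personas = ask("Create 3 detailed buyer personas for our new eco-friendly water bottle.")

# Step 2: USP, grounded in the output of step 1
usp = ask(f"Based on the personas below, craft a compelling Unique Selling Proposition.\n\nPersonas:\n{personas}")

# Step 3: channels, grounded in the output of step 2
channels = ask(f"Using the USP below, suggest 5 marketing channels and explain why each fits.\n\nUSP:\n{usp}")

print(channels)
```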


Command 32: Reflection and Self-Correction


Reflection is a powerful multi-stage technique that emulates the human process of iterative refinement. It involves a two-step process: first, the model generates an initial response to a prompt. Second, a subsequent prompt is used to ask the model to critique, evaluate, and correct its own initial output.17 This creates a feedback loop that can significantly improve the quality and accuracy of the final result.

For example, after generating a piece of code, a reflection prompt could be: "Review the code you just wrote. Are there any potential bugs? Could the efficiency be improved? Is the code well-documented? Provide a revised version of the code that addresses these points." This technique leverages the model's analytical capabilities to improve its own generative output, leading to a more robust and polished final product.8
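
A minimal two-call sketch of this generate-then-critique loop, assuming the OpenAI Python SDK and an illustrative coding task:

```python
# Minimal sketch of reflection: generate a draft, then ask the model to
# critique and revise it. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

task = "Write a Python function that returns the n-th Fibonacci number."

draft = ask(task)

revision = ask(
    "Review the code below. Are there any potential bugs? Could the efficiency be "
    "improved? Is the code well-documented? Provide a revised version that "
    f"addresses these points.\n\nCode:\n{draft}"
)

print(revision)
```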


Command 33: Meta-Prompting


Meta-prompting is a fascinating technique where the model's own intelligence is leveraged to improve the way it is prompted. It involves asking the model to generate or refine a prompt that would be effective for a given task.5 This command essentially asks the model, "How should I ask you to do this?"

An example of a meta-prompt would be: "I want to create a prompt that will help an AI explain the concept of climate change in simple terms. Create an effective prompt for this task".19 The model might then generate a detailed prompt that includes instructions on defining the audience, specifying the format, and suggesting the inclusion of analogies—incorporating many of the best practices discussed in this guide. This technique can accelerate the prompt engineering process by leveraging the model's vast "understanding" of language and tasks.5


Command 34: Clarification Prompting


Instead of a one-way instruction, this command transforms the interaction into a collaborative dialogue by instructing the model to ask clarifying questions if the initial prompt is ambiguous or lacks sufficient information.16 This proactive approach can prevent the model from making incorrect assumptions and generating a flawed response based on incomplete information.

The prompt can be framed as a directive that sets the mode of interaction: "From now on, when I give you a task, I want you to ask me clarifying questions until you have enough information to provide the needed output. Start by asking me what task I want to accomplish".16 This shifts the burden of providing all necessary details upfront from the user to a more natural, conversational exchange, leading to a more accurate final outcome.


Command 35: Retrieval Augmented Generation (RAG)


Retrieval Augmented Generation (RAG) is a powerful architectural pattern that grounds LLM responses in external, up-to-date, or proprietary knowledge.1 It is not a single prompt but a system that combines an information retrieval component with a language model. The process is as follows:

  1. Retrieval: When a user asks a query, the system first retrieves relevant documents or data chunks from an external knowledge base (such as a vector database containing company documents or recent news articles).

  2. Augmentation: The retrieved information is then dynamically inserted into the prompt as context.

  3. Generation: The LLM receives the augmented prompt (containing both the original query and the retrieved context) and generates an answer that is grounded in the provided information.

RAG is a critical technique for reducing hallucinations, enabling the model to answer questions about data it was not trained on, and providing citations for its answers.18
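
The toy sketch below illustrates the retrieve-augment-generate flow. The in-memory document list and keyword-overlap retriever are deliberately naive stand-ins for a real vector database, and the OpenAI SDK call and model name are placeholders.

```python
# Toy sketch of the retrieve-augment-generate loop with a naive keyword retriever.
from openai import OpenAI

client = OpenAI()

documents = [
    "Our refund policy allows returns within 30 days of purchase with a receipt.",
    "Support hours are 9am to 5pm, Monday through Friday, excluding holidays.",
    "Premium subscribers receive free shipping on all orders over $25.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (stand-in for a vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

query = "Can I return an item I bought three weeks ago?"
context = "\n".join(retrieve(query))

prompt = (
    "Answer the question using only the context below. "
    "If the answer is not in the context, say 'Information not found.'\n\n"
    f'Context:\n"""\n{context}\n"""\n\nQuestion: {query}'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```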


Command 36: Agentic Prompting and Tool Use


Agentic prompting represents a paradigm shift where the LLM is used not just as a generator of content, but as a reasoning engine that can take actions.8 This involves designing prompts that give the model a high-level goal and access to a set of external "tools" (such as APIs, calculators, code interpreters, or search engines).5 The model's task is to reason about the goal, decide which tool to use, generate the necessary input for that tool, execute it, and then process the output to determine the next step.18

Frameworks like ReAct (Reason and Act) demonstrate this pattern, where the model alternates between generating reasoning steps ("Thought:") and actions ("Action:").1 This allows the model to overcome its inherent limitations by offloading tasks like real-time information gathering or precise mathematical calculations to specialized tools.
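
The sketch below hand-rolls a ReAct-style loop with a single calculator tool. The Action/Observation conventions and the calculate() tool are assumptions made for illustration; production agents normally use a provider's native tool-calling interface and never eval untrusted expressions.

```python
# Illustrative sketch of a ReAct-style loop with one toy tool (a calculator).
# Assumes the OpenAI Python SDK; the protocol below is defined by this prompt,
# not by any library.
import re
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Answer the user's question. You may use a calculator by writing a line of the form\n"
    "Action: calculate(<python arithmetic expression>)\n"
    "After writing an Action, stop and wait for an Observation. "
    "When you know the final answer, write 'Final Answer: ...'."
)

def run_agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        action = re.search(r"Action:\s*calculate\((.+)\)", reply)
        if action:
            # toy tool: evaluate a bare arithmetic expression; never eval untrusted input in practice
            observation = eval(action.group(1), {"__builtins__": {}})
            messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "No final answer within the step limit."

print(run_agent("What is (1234 * 5678) + 91011?"))
```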


Command 37: Automatic Prompt Engineer (APE)


Automatic Prompt Engineer (APE) is an automated approach that uses an LLM to perform the task of prompt engineering itself.1 The process treats prompt generation as a search problem. Given a task description and a few examples of input-output pairs, an LLM is used to generate a diverse set of candidate instructions. These candidate prompts are then tested on a task, and the one that yields the best performance (based on some scoring metric) is selected as the optimal prompt.18 This technique automates the manual, iterative process of prompt refinement and can often discover highly effective but non-intuitive prompt phrasing.


Command 38: Active-Prompt


Active-Prompt is a technique for dynamically optimizing few-shot prompting. In standard few-shot prompting, the same set of examples is used for every query. However, some examples may be more relevant to certain queries than others. Active-Prompt addresses this by first analyzing the specific query being asked. It then selects a subset of the most relevant and informative examples from a larger pool of candidates to include in the prompt.1 This query-specific adaptation of the few-shot examples has been shown to significantly improve performance by providing the model with more targeted guidance.18


Command 39: Program-Aided Language Models (PAL)


Program-Aided Language Models (PAL) is a technique that combines the linguistic capabilities of LLMs with the deterministic execution of code.1 Instead of prompting the model to compute the final answer directly, PAL prompts the model to write a program (e.g., in Python) that solves the problem. The generated code is then executed by a standard interpreter, and the result of the execution is returned as the final answer. This approach offloads complex calculations, logical operations, or symbolic reasoning to the code interpreter, leveraging the LLM for understanding the problem and generating the solution logic, while relying on the interpreter for flawless execution.18 This dramatically improves accuracy on tasks involving math and complex logic.
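
A toy sketch of the PAL pattern with the OpenAI Python SDK follows: the model is asked to emit Python that stores its result in a variable named answer (a convention assumed here for the sketch), and the interpreter, not the model, performs the arithmetic. Executing model-generated code must be sandboxed in any real system.

```python
# Toy sketch of PAL: have the model write Python, execute it, and read the result.
from openai import OpenAI

client = OpenAI()

question = (
    "A bakery sold 135 muffins on Monday, 20% more on Tuesday, and half of "
    "Tuesday's total on Wednesday. How many muffins were sold in total?"
)

prompt = (
    "Write Python code that computes the answer to the question below. "
    "Store the final numeric result in a variable named answer. "
    "Return only code, with no explanation and no markdown fences.\n\n"
    f"Question: {question}"
)

code = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# In practice, strip any markdown fences the model adds despite the instruction,
# and run the code in a sandbox rather than the host process.
namespace = {}
exec(code, namespace)            # the interpreter, not the model, does the arithmetic
print("Answer:", namespace["answer"])
```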


Command 40: Multimodal Prompting


With the rise of multimodal models like GPT-4o and Gemini, prompt engineering is expanding beyond text to include other data types.5 Multimodal prompting involves crafting inputs that combine text with images, audio, or video.23 The fundamental principles of clarity, context, and specificity still apply, but they must now be extended across different modalities. For example, a multimodal prompt might consist of an image of a diagram and a textual question: "Based on the attached circuit diagram, explain the function of the resistor labeled R1." This requires the model to jointly process and reason over both the visual and textual information to generate a coherent response.18
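
A minimal sketch of a text-plus-image prompt in the OpenAI chat format is shown below; the image URL and model name are placeholders, and other providers expose analogous message structures.

```python
# Minimal sketch: a multimodal prompt combining text and an image reference.
# Assumes the OpenAI Python SDK and a vision-capable model.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for a vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Based on the attached circuit diagram, explain the function of the resistor labeled R1."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/circuit-diagram.png"}},  # placeholder URL
        ],
    }],
)
print(response.choices[0].message.content)
```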


Conclusion: The Future of Human-AI Collaboration


This comprehensive analysis of over 40 prompt engineering commands illustrates a clear and decisive shift in human-AI interaction—a transition from intuitive art to a structured, systematic engineering discipline. The journey from foundational principles of clarity and specificity to the design of complex, autonomous agentic systems underscores the rapid maturation of this field. The commands detailed herein are not merely a list of tips and tricks; they represent a lexicon for precise communication with large language models, enabling practitioners to control, guide, and collaborate with these powerful systems in increasingly sophisticated ways.

The evidence synthesized from extensive research and best practices reveals several key trajectories. First, the principle of externalizing thought—making the model's reasoning process explicit through techniques like Chain-of-Thought—has proven to be a fundamental breakthrough in unlocking complex problem-solving abilities. Second, the move toward multi-stage, chained architectures like RAG and agentic frameworks demonstrates that the future of AI applications lies not in single, monolithic prompts, but in the design of robust, modular systems where LLMs act as core reasoning components. Finally, the emergence of meta-cognitive and self-optimizing techniques such as Meta-Prompting and APE signals a new level of abstraction, where the human engineer's role is evolving from a micro-manager of token sequences to a high-level architect of autonomous reasoning systems.

Looking forward, the continued co-evolution of LLMs and the engineering practices used to interact with them will be critical. The challenges of ensuring AI safety, mitigating biases, and preventing malicious use through techniques like prompt injection will require even more sophisticated architectural and security-aware prompting strategies.5 The ultimate goal remains the transformation of human-AI interaction from guesswork into a precise, reliable, and scalable science.6 Mastering the lexicon of prompt engineering is the definitive path toward achieving that goal, paving the way for a future where AI can be more effectively and responsibly integrated into every facet of technology and society.

Works cited

  1. dair-ai/Prompt-Engineering-Guide - GitHub, accessed October 29, 2025, https://github.com/dair-ai/Prompt-Engineering-Guide

  2. Prompt Engineering Guide, accessed October 29, 2025, https://www.promptingguide.ai/

  3. The Prompt Report: A Systematic Survey of Prompt Engineering Techniques - arXiv, accessed October 29, 2025, https://arxiv.org/abs/2406.06608

  4. Prompt engineering techniques - Azure OpenAI | Microsoft Learn, accessed October 29, 2025, https://learn.microsoft.com/en-us/azure/ai-foundry/openai/concepts/prompt-engineering

  5. Prompt Engineering Guide | IBM, accessed October 29, 2025, https://www.ibm.com/think/prompt-engineering

  6. The First Principles of Prompt Engineering : r/PromptEngineering - Reddit, accessed October 29, 2025, https://www.reddit.com/r/PromptEngineering/comments/1mp0tv1/the_first_principles_of_prompt_engineering/

  7. 5 Timeless Prompt Engineering Principles for Reliable AI Outputs, accessed October 29, 2025, https://generalassemb.ly/blog/timeless-prompt-engineering-principles-improve-ai-output-reliability/

  8. Advanced Prompting Techniques: Accessing True AI Feature Richness - Hummingbird, accessed October 29, 2025, https://www.hummingbird.co/resources/advanced-prompting-techniques-accessing-true-ai-feature-richness

  9. Prompt engineering best practices for ChatGPT - OpenAI Help Center, accessed October 29, 2025, https://help.openai.com/en/articles/10032626-prompt-engineering-best-practices-for-chatgpt

  10. Best practices for prompt engineering with the OpenAI API | OpenAI ..., accessed October 29, 2025, https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api

  11. General Tips for Designing Prompts - Prompt Engineering Guide, accessed October 29, 2025, https://www.promptingguide.ai/introduction/tips

  12. 10 Best Practices for Prompt Engineering with Any Model - PromptHub, accessed October 29, 2025, https://www.prompthub.us/blog/10-best-practices-for-prompt-engineering-with-any-model

  13. Prompt Engineering for AI Guide | Google Cloud, accessed October 29, 2025, https://cloud.google.com/discover/what-is-prompt-engineering

  14. Effective Prompts for AI: The Essentials - MIT Sloan Teaching & Learning Technologies, accessed October 29, 2025, https://mitsloanedtech.mit.edu/ai/basics/effective-prompts/

  15. Best Practices for Writing Prompts | by Sravani Thota - Medium, accessed October 29, 2025, https://medium.com/@sravani.thota/best-practices-for-writing-prompts-1a79aa9a4a34

  16. 26 Prompt Engineering Principles for 2024 | by Dan Cleary - Medium, accessed October 29, 2025, https://medium.com/@dan_43009/26-prompt-engineering-principles-for-2024-775099ddfe94

  17. 17 Prompting Techniques to Supercharge Your LLMs - Analytics Vidhya, accessed October 29, 2025, https://www.analyticsvidhya.com/blog/2024/10/17-prompting-techniques-to-supercharge-your-llms/

  18. Prompting Techniques | Prompt Engineering Guide, accessed October 29, 2025, https://www.promptingguide.ai/techniques

  19. Prompt Engineering Techniques | IBM, accessed October 29, 2025, https://www.ibm.com/think/topics/prompt-engineering-techniques

  20. Advanced Prompt Engineering Techniques - Mercity AI, accessed October 29, 2025, https://www.mercity.ai/blog-post/advanced-prompt-engineering-techniques

  21. Prompt Design and Engineering: Introduction and Advanced Methods - arXiv, accessed October 29, 2025, https://arxiv.org/html/2401.14423v4

  22. Papers | Prompt Engineering Guide, accessed October 29, 2025, https://www.promptingguide.ai/papers

  23. Prompt design strategies | Gemini API | Google AI for Developers, accessed October 29, 2025, https://ai.google.dev/gemini-api/docs/prompting-strategies

  24. Prompt Engineering Techniques for Mitigating Cultural Bias Against Arabs and Muslims in Large Language Models: A Systematic Review - arXiv, accessed October 29, 2025, https://arxiv.org/abs/2506.18199
