-------------
Here are 40 prompts focused on challenges and scenarios for text layout in tables.
Alignment and Spacing
Design a table with centered column headers but left-aligned body text.
Create a layout where all numerical columns are right-aligned, and all text columns are left-aligned.
Design a table where all cell content is vertically aligned to the middle.
Show a layout where the first column (a row header) is right-aligned, and the data cells are left-aligned.
Create a layout for currency values, ensuring the decimal points align perfectly.
Design a table with ample cell padding (e.g., 16px) for a clean, sparse look.
Create a "dense" table layout with minimal padding (e.g., 4px) for a financial dashboard.
Design a table where icons and text are in the same cell, ensuring they are vertically aligned.
Typography and Hierarchy
Create a table where column headers are bold and uppercase.
Design a layout where the first column (row headers) uses a bolder font-weight than the data cells.
Show a "total" row at the bottom of a table, distinguishing it with bold text and a top border.
Design a table using only one font, but use size, weight, and color to establish a clear visual hierarchy.
Create a layout with a sub-header row that is indented and uses a lighter font-weight.
Use text color to indicate status (e.g., green for "Approved," red for "Failed").
Design a table where the primary data is high-contrast, but secondary data (like a timestamp) is in a lighter grey text.
Handling Long Text and Wrapping
Show a layout where long text in a "Description" column wraps to multiple lines.
Create a layout that truncates long text with an ellipsis (...).
Design a table where one column is fixed-width and truncates, while another column is flexible and wraps.
Show a comparison of word-wrap: break-word vs. overflow-wrap: break-word for a column containing long URLs.
Create a layout with multi-line wrapped text, ensuring the line-height (leading) is sufficient for readability.
Design a table where rows have a maximum height, and any overflowing text becomes scrollable within the cell.
Show a layout that hyphenates long words to improve text wrapping in narrow columns.
Responsive and Adaptive Layouts
Design a layout that stacks table columns into a single-column, card-like format on mobile screens.
Create a responsive table that enables horizontal scrolling on small screens while keeping the first column "sticky."
Show a layout where "non-essential" columns (e.g., "Date Added") are hidden on mobile devices.
Design a layout that transforms the table <thead> (headers) into a <tbody> structure on mobile, showing key-value pairs.
Create a "priority" layout where the most important columns are shown first, and others are revealed on click.
Special Cases and Data Types
Design a layout for a table with merged cells (using colspan) for a main category header.
Create a layout with rowspans to group multiple rows under a single category (e.g., "Q1," "Q2").
Design a table for key-value pairs (e.g., product specifications), with the key bolded and the value regular.
Show a layout for a "comparison" table using checkmarks and "N/A" text, ensuring they are centered.
Design a table column for status indicators (e.g., "Active," "Pending") that uses a colored dot next to the text.
Create a layout for a "sparse" table (many empty cells), ensuring it doesn't look broken.
Design a table with indented text to show a parent-child hierarchy (e.g., a file structure).
Show a layout for a table with rotated text in the headers (e.g., writing-mode: vertical-rl).
Design a layout for a column of boolean values (true/false) using styled "Yes"/"No" text.
Create a table layout that supports right-to-left (RTL) text for languages like Arabic or Hebrew.
Show a layout for a table with a fixed (sticky) header that remains visible as the user scrolls the body.
Design a layout where text in a cell is editable on click.
Create a table where the text layout for a row changes on hover (e.g., text becomes bolder or a "delete" link appears).
----------------
Here are 20 additional prompts focusing on different types of layouts, including magazines, books, digital media, and more creative or technical challenges.
🎨 Magazine Layouts
Cover Story Spread: Design a two-page magazine spread for a main feature. The layout must use a full-bleed image on the left page with the article title overlaid. The right page must begin with a large, stylized drop cap and flow into a three-column grid.
Table of Contents: Create the layout for a magazine's table of contents. It needs to balance image teasers for 3-4 main articles with a text-based list of all other sections and their page numbers.
Department Page: Format a layout for a recurring magazine section (e.g., "Travel," "Tech," or "Reviews"). The layout should have a unique header, use a two-column grid, and include a "sidebar" for quick tips or a product feature.
Text-Heavy Feature: Lay out a long-form interview or essay. The design should be text-dominant, using a single, wide column with generous margins. Break up the text using large, bold pull quotes that span the entire width of the text block.
How-To Guide: Design a layout for a "how-to" or DIY guide. The layout must visually separate a "What You'll Need" list (perhaps in a boxed-off section) from the "Instructions," which should be a numbered list with small accompanying icons or photos for each step.
📚 Book & Report Layouts
Textbook Page: Format a single textbook page. The layout must include a main two-column text body, a "Key Term" definition box with a shaded background, and a full-width diagram with labels at the bottom of the page.
Recipe Layout: Design a layout for a single recipe in a cookbook. It must have two distinct zones: an "Ingredients" list on the left (as a bulleted list) and the "Method" on the right (as a numbered list).
Annual Report: Create a layout for a page from a corporate annual report. Use a wide outer margin for "call-outs" and key statistics (e.g., "25% Growth"), while the main financial summary sits in a clean, single column.
Poetry Formatting: Take a short poem and lay it out on a page. The layout must respect the poet's intended line breaks and indentation (stanzas), with a focus on generous white space around the text.
Academic Paper: Format a plain text document into a standard two-column academic paper layout. The layout must include a full-width abstract and keywords at the top before the main content breaks into two columns.
💻 Digital & Web Page Layouts
Blog Post: Design the text layout for a modern, easy-to-read blog post. Use a single, comfortable-width column (approx. 700-800 pixels), large subheadings, and clearly styled blockquotes for quotations.
Landing Page Text Flow: Create the text layout for a product landing page. The layout must alternate between a full-width headline "hero" section, a two-column section (Image + Features list), and a three-column section for testimonials.
Mobile Reflow: Take a traditional three-column newspaper layout and describe how you would "reflow" it into a single-column layout for a mobile screen, ensuring the headline, lead image, and story text are presented in a logical, top-to-bottom order.
Pricing Table: Design the text layout for a "pricing" page with three tiers (e.g., "Basic," "Pro," "Enterprise"). The layout must be a three-column grid, with each column acting as a "card" showing the price and a bulleted list of features.
Website Footer: Format the text for a "fat footer" of a website. The layout should use four columns to organize links under the headings: "Company," "Products," "Support," and "Legal."
⚙️ Creative & Technical Layouts
Bilingual Layout: Create a layout for a pamphlet or museum sign that must present the same text in two different languages (e.g., English and Spanish). The two languages must be side-by-side in separate columns.
Circular Text Wrap: Format a paragraph of text so that it wraps around a circular image placed in the center of the text block, creating a non-rectangular text flow.
Event Program: Design the layout for a two-page event program (like for a play or concert). The left side must list the "Cast of Characters," and the right side must list the "Sequence of Events" or "Musical Numbers" in chronological order.
Restaurant Menu: Lay out the text for a high-end restaurant menu. Use an asymmetrical two-column layout where item names are in bold, prices are right-aligned, and descriptions are in a smaller, italic font below each item.
Fixing Bad Layout: You are given a block of text that is formatted as a single, wide, fully-justified column, creating large "rivers" of white space. Re-format this text into a more readable two-column, left-aligned layout and explain why it is better.
-----------------
Here are 20 prompts for generating or describing text layouts, categorized by type.
📰 Newspaper Layouts
Classic Front Page: Design the layout for the front page of a traditional broadsheet newspaper. It must include a masthead (title), a large main headline, a lead story in a single wide column "above the fold," and two smaller stories with images below the fold.
Modern Tabloid: Create a dynamic, image-heavy front-page layout for a modern tabloid. Use a "splash" headline that covers most of the page, with a large central photo and small text teasers pointing to stories inside.
Opinion/Editorial Page: Format an op-ed page layout with two columns. The left column should feature a main opinion piece with the author's headshot. The right column should be split into two smaller sections: a "Letter to the Editor" and a political cartoon.
Feature Spread: Design a two-page newspaper spread for a feature article. The layout should start with a large "hero" image or illustration that spans both pages, with the text flowing around it in three-column blocks.
Classifieds Section: Generate the layout for a newspaper's classified ads section. Use 6-8 very narrow, justified columns with minimal padding (gutters) and simple, all-caps headings for categories (e.g., "FOR RENT," "AUTOMOTIVE").
📧 Newsletter Layouts
Corporate Internal Newsletter: Create a layout for a monthly internal company newsletter. Include a top banner with the company logo, a single-column welcome message from the CEO, and a two-column section below for "Employee Spotlights" and "Upcoming Events."
Minimalist Email Newsletter: Design a single-column, minimalist text layout for a weekly curated links newsletter. Use clear typography, generous white space, and simple horizontal rules (---) to separate each curated item.
E-commerce Product Showcase: Generate a newsletter layout to promote a new product line. The layout should be highly visual, alternating between full-width product images and two-column blocks that place a smaller image next to a text description and a "Shop Now" button.
Community Print Newsletter: Format a simple layout for a two-sided print newsletter for a local community center. Use a standard two-column grid, a "Letter from the Director" on the front, and a calendar of events on the back.
Breaking News Alert: Design an email newsletter layout for a breaking news alert. The layout must prioritize speed and clarity: a single, bold "ALERT" header, one clear headline, and two paragraphs of summary text, all in a single, narrow column for easy mobile reading.
🏛️ Column-Specific Layouts
Standard Three-Column: Take a long block of unformatted text and reformat it into a standard three-column layout with 0.5-inch gutters between the columns.
Asymmetrical Columns: Create a layout with one wide main column (for the primary article) and one narrow sidebar column (for related facts, pull quotes, or links).
Dynamic Column Break: Design a layout where the article introduction spans the full width of the page, and the text then breaks into four equal columns for the body of the article.
Floating Column (Sidebar): Format text into a main column, but insert a separate, boxed-out "sidebar" that floats to the right of the second and third paragraphs, causing the main text to wrap around it.
Magazine-Style Columns: Lay out a text article in a two-column grid, but add a large, stylized drop cap at the beginning of the first paragraph that descends three lines.
🧩 Layout Components & Styles
Pull Quote Emphasis: Format a three-column text layout and insert a pull quote in the middle. The quote should be in a larger, italicized font and span the width of the two rightmost columns, forcing the text in those columns to break around it.
Interview/Q&A Format: Design a layout for an interview. Use a single-column format, but clearly differentiate the questions from the answers. Make the questions bold and prefixed with "Q:", and the answers in regular text prefixed with "A:".
Justified vs. Ragged Right: Present the same paragraph of text in two different column layouts: one with full justification (common in newspapers) and one with left-alignment (ragged right) (common in newsletters).
Image and Caption: Create a layout for a single column of text that is interrupted by a central image. The text must wrap above and below the image, and the image must have a caption in a smaller, italic font directly beneath it.
Headline Hierarchy: Generate a newsletter layout that demonstrates clear text hierarchy. It must include a main headline (H1), a subheading (H3), and regular body text (p). The font sizes and weights must create a clear visual path for the reader.
Here is an outline of the report on prompt engineering commands.
The Prompt Engineer's Lexicon: A Definitive Guide to 40+ Commands for Mastering Large Language Models
I. Introduction: From Art to Engineering
The evolution of prompt engineering into a critical discipline for controlling Large Language Models (LLMs).
An overview of the report's structure, covering foundational principles, structural commands, in-context learning, advanced reasoning, and architectural patterns.
II. The Prompt Engineering Command Lexicon
A quick-reference table indexing all 40+ commands, categorized for easy navigation.
III. Part I: The Bedrock of Effective Prompting: Foundational Commands
Command 1: Be Specific, Descriptive, and Detailed
Command 2: Provide Essential Context and Background
Command 3: Define the Target Audience
Command 4: Specify the Output Format
Command 5: Specify Length and Constraints
Command 6: Use Affirmative Directives
Command 7: Employ Iterative Refinement
Command 8: Use Clear, Simple Language
Command 9: Set a Clear Goal or Objective
Command 10: Provide Evaluation Criteria
IV. Part II: Architecting the Prompt: Structure, Role, and Format
Command 11: Use Delimiters to Separate Prompt Components
Command 12: Position Instructions at the Beginning
Command 13: Assign a Persona or Role
Command 14: Structure as a Conversation
Command 15: Prime the Output with Leading Words
Command 16: Request Structured Data
Command 17: Create a Scenario or Narrative Context
Command 18: Give the Model an "Out"
V. Part III: In-Context Learning: Guiding by Example
Command 19: Zero-Shot Prompting
Command 20: One-Shot Prompting
Command 21: Few-Shot Prompting
VI. Part IV: Eliciting Complex Reasoning and Analysis
Command 22: Chain-of-Thought (CoT) Prompting
Command 23: Zero-Shot CoT
Command 24: Self-Consistency
Command 25: Tree of Thoughts (ToT)
Command 26: Graph of Thoughts (GoT)
Command 27: Generate Knowledge Prompting
Command 28: Directional Stimulus Prompting
Command 29: Plan-and-Solve Prompting
Command 30: Re-Reading (Recitation)
VII. Part V: Advanced Architectures and Self-Optimization
Command 31: Prompt Chaining
Command 32: Reflection and Self-Correction
Command 33: Meta-Prompting
Command 34: Clarification Prompting
Command 35: Retrieval Augmented Generation (RAG)
Command 36: Agentic Prompting and Tool Use
Command 37: Automatic Prompt Engineer (APE)
Command 38: Active-Prompt
Command 39: Program-Aided Language Models (PAL)
Command 40: Multimodal Prompting
VIII. Conclusion: The Future of Human-AI Collaboration
A summary of the key trajectories in prompt engineering, including the externalization of thought, the shift to multi-stage architectures, and the rise of meta-cognitive techniques.
Future outlook on the field, emphasizing AI safety and the goal of transforming human-AI interaction into a precise science.
-------------
The Prompt Engineer's Lexicon: A Definitive Guide to 40+ Commands for Mastering Large Language Models
Introduction: From Art to Engineering
Prompt engineering has rapidly evolved from an intuitive art into a critical engineering discipline essential for harnessing the full capabilities of large language models (LLMs).1 As generative artificial intelligence (GenAI) systems become more integrated into diverse industries and research domains, the ability to guide their behavior with precision is paramount.3 The interaction with these models, which fundamentally operate by predicting the next most likely sequence of words based on a given input, is governed by the prompt.4 Consequently, the structure and content of this input—the prompt—are the primary levers for controlling the model's output.5
This guide moves beyond the perception of prompt crafting as mere guesswork, reframing it as a systematic practice built on a lexicon of actionable techniques, principles, and architectural patterns.6 These methods, referred to here as "prompt commands," constitute a comprehensive toolkit for developers, researchers, and practitioners. Mastering this lexicon enables the construction of more accurate, reliable, and sophisticated AI applications by transforming ambiguous requests into precise, machine-interpretable instructions.8
This report provides a structured and exhaustive overview of more than 40 distinct prompt engineering commands. It is organized into five parts, beginning with the universal principles that form the bedrock of any effective prompt. It then progresses to structural and formatting commands, techniques for in-context learning, methods for eliciting complex reasoning, and finally, advanced architectures for building multi-stage, self-optimizing AI systems. By understanding and applying these commands, practitioners can systematically enhance model performance, mitigate limitations such as factual inaccuracies, and unlock new capabilities in areas ranging from complex problem-solving to creative content generation.1
The Prompt Engineering Command Lexicon
The following table serves as a quick-reference index to the 40+ commands detailed in this report. It provides a high-level map of the techniques, categorized by their primary function, allowing for both linear learning and targeted, non-linear reference.
Part I: The Bedrock of Effective Prompting: Foundational Commands
This section details the universal principles that are prerequisites for any effective prompt. These foundational commands are not advanced techniques but rather the essential building blocks upon which all successful prompt engineering is based. Mastering them is the first and most critical step toward achieving reliable and high-quality outputs from any large language model.
Command 1: Be Specific, Descriptive, and Detailed
The most universally cited principle in prompt engineering is the need for specificity. Vague or ambiguous prompts force the model to make assumptions, which often leads to generic, irrelevant, or incorrect responses.9 To guide the model effectively, the prompt must be as precise and descriptive as possible regarding the desired context, outcome, length, format, and style.10 Longer, more detailed prompts generally provide better context and clarity than shorter ones.12
This principle can be understood by considering the model's objective: to generate the most probable sequence of text following the prompt. A detailed prompt creates a highly specific statistical context, narrowing the range of probable and acceptable outputs. For instance, the prompt "Write a poem about OpenAI" is highly underspecified.10 The model must guess the desired tone (celebratory, critical, technical?), form (sonnet, haiku, free verse?), and focus.
A more effective prompt provides these details explicitly: "Write a short, inspiring poem about OpenAI, focusing on the recent DALL-E product launch (DALL-E is a text-to-image ML model) in the style of a {famous poet}".10 This command provides multiple layers of specificity: a topic (OpenAI), a sub-topic (DALL-E launch), a tone (inspiring), a length constraint (short), and a stylistic guide (in the style of a famous poet). Each detail added to the prompt further constrains the model's vast output space, dramatically increasing the likelihood of generating a response that aligns with the user's intent.
Command 2: Provide Essential Context and Background
LLMs do not possess real-world understanding or up-to-the-minute knowledge; they operate solely on the information provided within the prompt and their pre-trained data. Providing relevant context, facts, data, or background information is therefore crucial for grounding the model's response in a specific reality.13 This command involves augmenting the prompt with the necessary information for the model to perform its task accurately. This can include defining key terms, referencing specific source documents, or including relevant data points.13
For example, asking a generic question like "Discuss the consequences of rising temperatures" may yield a general, textbook-style answer. However, providing specific context focuses the model's attention and elicits a more targeted and useful response. A better prompt would be: "Given that global temperatures have risen by 1 degree Celsius since the pre-industrial era, discuss the potential consequences for sea level rise".13 This prompt provides a key piece of data (a 1 °C rise) and narrows the scope of the consequences to a specific area (sea level rise), guiding the model to generate a more analytical and relevant output. Similarly, for tasks involving document analysis, the prompt should explicitly reference the source: "Based on the attached financial report, analyze the company's profitability over the past five years".13
Command 3: Define the Target Audience
The intended audience of a response significantly influences its appropriate tone, vocabulary, and level of complexity. An explanation of quantum computing for a non-technical audience should be vastly different from one intended for graduate physics students. Explicitly defining the target audience in the prompt is a powerful command for controlling these stylistic and substantive aspects of the output.13
This command acts as a direct instruction to the model on how to tailor its communication style. For instance, a prompt could specify: "Explain the concept of quantum computing in simple terms, suitable for a non-technical audience".13 This instructs the model to avoid jargon, use analogies, and prioritize conceptual clarity over technical depth. Conversely, a prompt could state: "Describe the principles of quantum superposition and entanglement for an audience of undergraduate physics majors." This would signal the model to use more technical language and assume a higher level of foundational knowledge. This technique is essential for tasks ranging from marketing and communications ("Write a product description for a new line of organic skincare products, targeting young adults concerned with sustainability" 13) to education and technical writing.
Command 4: Specify the Output Format
For many applications, particularly those that involve programmatic processing of an LLM's output, the structure of the response is as important as its content. Explicitly specifying the desired output format is a critical command for ensuring the model's generation is immediately usable and reliable.7 This can range from simple formats like a bulleted list or a numbered list to more complex, machine-readable structures like JSON, XML, or Markdown tables.12
When the desired format is not specified, the model will default to a narrative or paragraph style, which can be difficult to parse automatically. A less effective prompt might say, "List the key findings." A far more effective prompt provides a clear template: "Summarize the text below as a bullet point list of the most important points".10 For more structured data extraction, providing an explicit schema is best practice. For example: "Extract the important entities mentioned in the text below. Desired format: Company names: <comma_separated_list_of_company_names>, People names: <comma_separated_list_of_people_names>, Specific topics: <comma_separated_list_of_topics>, General themes: <comma_separated_list_of_themes>".10 This level of format specification leaves little room for ambiguity and ensures the output can be reliably integrated into downstream software applications.7
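As a minimal Python sketch of this idea, a schema-style format specification can be assembled programmatically rather than typed by hand. The function name and the exact placeholder labels below are illustrative assumptions, not from any particular library:

```python
def build_extraction_prompt(text: str) -> str:
    """Assemble an entity-extraction prompt with an explicit output schema.

    Spelling out one named placeholder per field leaves the model
    little room to improvise the response structure.
    """
    schema = (
        "Company names: <comma_separated_list_of_company_names>\n"
        "People names: <comma_separated_list_of_people_names>\n"
        "Specific topics: <comma_separated_list_of_topics>\n"
        "General themes: <comma_separated_list_of_themes>"
    )
    return (
        "Extract the important entities mentioned in the text below.\n\n"
        f"Desired format:\n{schema}\n\n"
        f'Text: """{text}"""'
    )

prompt = build_extraction_prompt("OpenAI announced the DALL-E product launch.")
```

Keeping the schema in one place also means every extraction request in an application uses an identical, parseable layout.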
Command 5: Specify Length and Constraints
Controlling the length and scope of the model's output is essential for creating concise, focused, and relevant content. Imprecise descriptions of length, such as "fairly short" or "not too much," are ineffective because they are subjective and difficult for the model to interpret.10 Instead, this command requires providing concrete, quantifiable constraints.
Effective length specifications can be given in terms of word count, sentence count, or paragraph count. For example, "Compose a 500-word essay" provides a clear target for the model.13 Similarly, "Use a 3 to 5 sentence paragraph to describe this product" sets a precise and actionable boundary.10 These constraints help prevent the model from generating overly verbose or unnecessarily brief responses. In addition to length, other constraints can be applied, such as limiting the response to information found only within a provided text, which helps to reduce the risk of the model introducing outside information or "hallucinating" facts.
Command 6: Use Affirmative Directives
The way instructions are framed has a significant impact on model behavior. A key principle is to use affirmative directives, telling the model what it should do, rather than what it should not do.10 This is not merely a matter of stylistic preference; it is a direct consequence of how LLMs are architected. These models are probabilistic sequence generators, trained to predict the next word based on the patterns in their training data.4 They do not process negations or rules in the same logical, deterministic way as traditional software.
A negative command, such as "DO NOT ASK USERNAME OR PASSWORD," introduces tokens ("not," "username," "password") that are statistically associated with the very action one is trying to prevent. This can create a confusing signal for the model, which must learn to "steer away" from a region in its vast probability space, an operation that is often less reliable. In contrast, an affirmative directive provides a clear, positive target. An improved version of the previous prompt would be: "The agent will attempt to diagnose the problem... whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article www.samplewebsite.com/help/faq".10 This command gives the model a constructive alternative, creating a strong probabilistic path toward the desired behavior. The goal is to create the path of least resistance toward the correct output, rather than to build fences around incorrect ones.
Command 7: Employ Iterative Refinement
Prompt engineering is rarely a one-shot process; it is an iterative cycle of design, testing, and refinement.9 The most effective prompts are often the result of progressive enhancement. This command embodies the workflow of starting with a simple, direct prompt, analyzing the model's response, and then incrementally adding more elements—such as context, examples, specificity, and constraints—to improve the results.11
This iterative approach is vital for several reasons. First, it helps in diagnosing why a prompt may be underperforming. By adding one element at a time, the engineer can observe its effect on the output. Second, it prevents over-engineering the prompt from the start. A simple prompt may be sufficient for a given task, and starting simply avoids unnecessary complexity. The process typically looks like this:
Initial Prompt: Start with a basic instruction.
Review Output: Analyze the response for accuracy, style, and format. Identify any deviations from the desired outcome.
Refine Prompt: Based on the review, modify the prompt. This could involve adding more specific instructions (Command 1), providing context (Command 2), defining a format (Command 4), or including examples (Command 21).
Repeat: Continue this cycle until the model consistently produces the desired output.
This methodical process of refinement is fundamental to moving from a proof-of-concept to a production-ready application.
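The cycle above can be sketched as a simple loop. Everything here is a stand-in: `generate` is a placeholder for a real model call, and `meets_bar` is a placeholder for a real review of accuracy, style, and format:

```python
def generate(prompt: str) -> str:
    # Placeholder for an LLM call; a real implementation would invoke a model.
    return f"response to: {prompt}"

def meets_bar(response: str, required: list[str]) -> bool:
    # Placeholder review step: here we only check that the required
    # details survived into the response.
    return all(term in response for term in required)

# Each refinement adds one element at a time, so its effect can be observed.
refinements = [
    "Summarize the report.",                               # initial prompt
    "Summarize the report as bullet points.",              # add a format
    "Summarize the attached Q3 report as bullet points.",  # add context
]

required = ["Q3", "bullet points"]
for attempt, prompt in enumerate(refinements, start=1):
    response = generate(prompt)
    if meets_bar(response, required):
        break  # output is acceptable; stop refining
```

Structuring refinement this way keeps the prompt history explicit, which makes it easy to see which added element finally produced an acceptable output.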
Command 8: Use Clear, Simple Language
Unless the task specifically involves a specialized domain, prompts should be written in clear, simple, and accessible language. The use of unnecessary jargon, complex sentence structures, or ambiguous vocabulary can confuse the model and lead to misinterpretation of the user's intent.12 The goal is to minimize the cognitive load on the model, ensuring that the instruction is as unambiguous as possible.
This principle is particularly important because LLMs do not "understand" language in a human sense; they process it as a sequence of tokens and statistical relationships. Ambiguity in the prompt introduces noise into this process. For example, using a highly technical term when a simpler synonym exists might cause the model to respond in an overly technical or incorrect way, especially if the term has multiple meanings across different domains. Sticking to straightforward language ensures that the prompt's core instruction is the strongest signal the model receives.
Command 9: Set a Clear Goal or Objective
An effective prompt is typically focused on a single, well-defined objective.12 Attempting to accomplish multiple, disparate tasks within a single prompt can lead to a diluted, unfocused, or incomplete response. The model may struggle to prioritize the different parts of the request or may only address one part of it thoroughly.
This command is closely related to the principle of dividing complex tasks (Command 31). If a task involves several distinct steps, such as summarizing a document, extracting key entities from the summary, and then classifying the sentiment of the original document, it is generally more effective to break this down into a chain of separate prompts rather than a single, convoluted one. Each prompt in the chain would have a clear, singular goal. This approach not only improves the reliability of each step but also makes the overall workflow more modular and easier to debug.
Command 10: Provide Evaluation Criteria
A more advanced foundational technique involves instructing the model to evaluate its own output based on a set of criteria provided within the prompt itself. This command forces the model to perform a degree of self-assessment and can lead to higher-quality, more refined responses.7 It essentially adds a meta-cognitive layer to the generation process.
For example, when asking for creative outputs like marketing taglines, one could include an evaluation step in the prompt: "Generate 5 catchy taglines for a new line of sustainable, plant-based protein bars. After generating each tagline, rate it on a scale of 1-10 for each of the following criteria: – Memorability: How likely is it to stick in someone's mind? – Relevance: How well does it communicate the product's key benefits? – Emotional Appeal: How effectively does it evoke positive feelings? – Uniqueness: How different is it from common taglines in the health food industry?".7 This prompt structure does two things: it generates the desired content (taglines) and provides a structured analysis of that content. This self-evaluation can help the user quickly identify the most promising options and provides a framework for further refinement.
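A small helper can attach such a rubric to any generation task. This is a sketch under stated assumptions: the function name and the criteria dictionary shape are invented for illustration:

```python
def build_scored_prompt(task: str, criteria: dict[str, str]) -> str:
    """Append a self-evaluation rubric to a generation task.

    Each criterion becomes a named 1-10 rating the model is asked
    to attach to every item it generates.
    """
    rubric = "\n".join(
        f"- {name}: {question}" for name, question in criteria.items()
    )
    return (
        f"{task}\n"
        "After generating each item, rate it on a scale of 1-10 "
        "for each of the following criteria:\n"
        f"{rubric}"
    )

prompt = build_scored_prompt(
    "Generate 5 catchy taglines for a new line of plant-based protein bars.",
    {
        "Memorability": "How likely is it to stick in someone's mind?",
        "Relevance": "How well does it communicate the product's key benefits?",
    },
)
```

Because the rubric is data rather than hand-written text, the same evaluation criteria can be reused consistently across many generation tasks.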
Part II: Architecting the Prompt: Structure, Role, and Format
Beyond the content of the instructions, the structural organization of the prompt itself is a powerful tool for guiding a large language model. This part focuses on the "scaffolding" of a prompt—the syntactical and architectural elements that shape the interaction, reduce ambiguity, and direct the model's interpretation of the task. These commands are about designing the container for the content.
Command 11: Use Delimiters to Separate Prompt Components
Complex prompts often contain multiple distinct components: instructions, context, examples, and the final query. To prevent the model from confusing these components, it is a best practice to use clear delimiters to separate them.10 Delimiters are sequences of characters that create a visual and structural boundary within the prompt text. Common choices include triple quotes ("""), triple backticks (```), triple hashes (###), or XML-style tags (<context>, </context>).
Using delimiters makes the structure of the prompt explicit, helping the model understand which part of the text is the instruction, which part is the data to be processed, and so on. For example, a summarization prompt is significantly improved with delimiters:
Less effective: Summarize the text below as a bullet point list of the most important points. {text input here}
Better: Summarize the text below as a bullet point list of the most important points. Text: """{text input here}""".10
This clear separation is crucial for tasks that involve processing user-provided text, as it also serves as a basic defense against prompt injection attacks, where a user might include malicious instructions within the text to be processed.12
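The delimiter pattern above can be sketched as a small Python helper; the function name `build_prompt` is illustrative, not part of any library:

```python
def build_prompt(instruction: str, user_text: str) -> str:
    """Wrap untrusted user text in triple-quote delimiters so the model
    can distinguish the instruction from the data to be processed."""
    return (
        f"{instruction}\n\n"
        f'Text: """{user_text}"""'
    )

prompt = build_prompt(
    "Summarize the text below as a bullet point list of the most important points.",
    "Large language models are trained on vast corpora of text...",
)
```

Because the user text always arrives inside the quoted block, an instruction smuggled into it is more likely to be treated as data rather than as a command.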
Command 12: Position Instructions at the Beginning
The order of information within a prompt can influence the model's output, a phenomenon sometimes related to recency bias.4 A widely adopted best practice is to place the primary instruction or task command at the beginning of the prompt.10 This approach "primes" the model on its objective from the outset, setting the context for all subsequent information it processes.
By stating the main goal first—for example, "Translate the text below to Spanish:" followed by the text—the model immediately knows its primary function. All the text that follows is then interpreted through the lens of that instruction. While some models may be sensitive to information placed at the end of a prompt, starting with the core directive is a reliable and robust strategy for ensuring the model's focus is correctly aligned with the user's primary intent. Experimenting with repeating the instruction at the end can also be a useful strategy to test for and mitigate recency bias.4
Command 13: Assign a Persona or Role
Instructing the model to adopt a specific persona or role is one of the most powerful and widely used prompting techniques.16 By beginning a prompt with a command like "You are an expert copywriter," "Act as a senior SQL developer," or "You are an experienced wildlife biologist specializing in trees," the user can guide the model's response style, tone, and knowledge base.14 This technique is also known as role-based prompting.
The effectiveness of this command extends beyond mere stylistic imitation. It functions as a powerful method of "context-slicing." An LLM is trained on a vast and diverse corpus of text, containing everything from expert-level research papers to casual forum discussions.4 A generic prompt forces the model to average its response across this entire spectrum. Assigning a persona, however, acts as a filter. The tokens "expert copywriter" are statistically associated with a specific subset of the training data characterized by persuasive language, marketing knowledge, and a particular communication style. This command effectively instructs the model to prioritize the knowledge, vocabulary, and patterns found within that relevant subspace of its training data. This context-slicing dramatically reduces the search space for the "next most likely word," leading to a response that is not only stylistically appropriate but also more likely to be factually accurate and detailed within that specific domain.
Command 14: Structure as a Conversation
Modern chat-based LLMs are optimized for conversational interaction, and their APIs often expose a structured format of alternating roles, typically system, user, and assistant.4 Effectively using these roles is a key structural command.
The system message: This message is used to set the overall behavior, persona, and high-level instructions for the model throughout the entire conversation. It is the place to put commands like "You are a helpful assistant who always responds in JSON format" or to define the model's persona (Command 13).14
The user message: This represents the input from the end-user for a given turn in the conversation.
The assistant message: This represents the model's response. This role is also critically important for providing few-shot examples (Command 21). By crafting a sequence of user/assistant pairs in the initial prompt, the developer can demonstrate the desired conversational pattern or task format.4
Structuring the prompt using this conversational format allows for more complex and stateful interactions than a single, monolithic prompt, and it aligns directly with the model's fine-tuning for dialogue.
Command 15: Prime the Output with Leading Words
A subtle yet highly effective structural command is to end the prompt with the beginning of the desired response. This technique, also known as using a "cue," acts as a jumpstart for the model's output, strongly nudging it toward the correct format or content.4 The model's core function is to continue the text it is given, so providing the first few words of the expected output creates a powerful probabilistic path.
For example, if the desired output is a bulleted list, ending the prompt with "Here is the bulleted list of key points:\n-" strongly encourages the model to begin generating list items.4 In the context of code generation, a prompt designed to create a Python function can be made more reliable by ending with the word import. This primes the model to begin with the necessary library imports, a common pattern in Python code.10 Similarly, for a SQL query, ending with SELECT cues the model to start writing a query. This command is a simple way to increase the reliability of formatted output.
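A minimal sketch of the cueing technique, assuming a hypothetical `prime_prompt` helper that appends the leading words before the prompt is sent to a model:

```python
def prime_prompt(task: str, cue: str) -> str:
    """Append a leading-word cue so the model continues from it,
    e.g. 'import' for Python code or 'SELECT' for SQL."""
    return f"{task}\n\n{cue}"

code_prompt = prime_prompt(
    "Write a Python function that parses a CSV file and returns the rows.",
    "import",
)
sql_prompt = prime_prompt(
    "Write a SQL query returning all users created in the last 7 days.",
    "SELECT",
)
```

The model's completion then naturally begins mid-statement, which biases it toward the intended format from the first token.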
Command 16: Request Structured Data
For applications that require machine-readable output, it is essential to go beyond simply specifying a format and to explicitly request structured data like JSON, XML, or YAML. This command should ideally be paired with an example of the desired schema to ensure the model produces a valid and parsable output.7
A prompt for this purpose might look like: "Generate 5 ideas for eco-friendly product innovations. Present each idea in the following JSON format: { "productName": "Name of the product", "briefDescription": "A one-sentence description", "targetMarket": "Primary intended users" }".7 Providing the schema within the prompt serves as a clear template for the model to follow. This is far more robust than simply asking for "a list," as it defines the keys, expected value types (implicitly), and overall structure, making the output programmatically reliable and ready for integration into databases, APIs, or front-end applications.
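In practice, the model's reply should still be validated against the requested schema before it is used downstream. A sketch, with `parse_ideas` and the stand-in reply as assumptions of this example:

```python
import json

REQUIRED_KEYS = {"productName", "briefDescription", "targetMarket"}

def parse_ideas(model_output: str) -> list:
    """Parse the model's JSON reply and verify each idea carries
    the keys requested in the prompt; raise if any are missing."""
    ideas = json.loads(model_output)
    for idea in ideas:
        missing = REQUIRED_KEYS - idea.keys()
        if missing:
            raise ValueError(f"idea missing keys: {missing}")
    return ideas

# Stand-in for a model reply that follows the requested schema.
reply = ('[{"productName": "SolarSip", '
         '"briefDescription": "A mug that keeps drinks warm using solar cells.", '
         '"targetMarket": "Commuters"}]')
ideas = parse_ideas(reply)
```

If parsing or validation fails, the application can re-prompt the model with the error message rather than passing malformed data onward.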
Command 17: Create a Scenario or Narrative Context
Instead of issuing a dry, direct command, framing the request within a scenario or narrative can elicit more detailed, creative, and context-aware responses.12 This technique works by providing the model with a richer situational context, which helps it to infer details and constraints that might not be explicitly stated.
For example, instead of the direct command "List marketing channels for an eco-friendly water bottle," a more effective approach is to create a scenario: "We are going to create a marketing plan for a new eco-friendly water bottle. We will do this in steps. For the first step, define three detailed buyer personas for our product, including their demographics, psychographics, and key pain points".7 This narrative framing transforms the interaction from a simple query to a collaborative task. It helps the model understand the broader goal and generates a more thoughtful and strategically aligned response. This is particularly effective for complex tasks that require planning or creative brainstorming.
Command 18: Give the Model an "Out"
LLMs are designed to be helpful and will often attempt to answer a question even if they do not have the correct information, which can lead to factual inaccuracies or "hallucinations." A crucial command for improving reliability and reducing fabricated answers is to provide the model with an explicit alternative path—an "out"—if it cannot complete the task as requested.4
This is typically done by adding a conditional instruction to the prompt. For example, in a question-answering task based on a provided document, the prompt should include a directive such as: "Based on the text provided, answer the following question. If the answer is not present in the text, respond with 'Information not found.'".4 This command gives the model permission to not know the answer, which is a critical safeguard. Without this "out," the model's internal weighting might favor generating a plausible-sounding but incorrect answer over admitting a lack of information. This simple addition significantly improves the factuality and trustworthiness of the model's responses in closed-domain tasks.
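The "out" pattern reduces to a template like the following sketch; the function name `qa_prompt` and the `FALLBACK` constant are this example's own:

```python
FALLBACK = "Information not found."

def qa_prompt(document: str, question: str) -> str:
    """Closed-domain QA prompt with an explicit 'out' so the model
    may decline instead of fabricating an answer."""
    return (
        "Based on the text provided, answer the following question. "
        f"If the answer is not present in the text, respond with '{FALLBACK}'\n\n"
        f'Text: """{document}"""\n\n'
        f"Question: {question}"
    )

p = qa_prompt("The meeting is on Tuesday.", "Where is the meeting held?")
```

A downstream check for the fallback string also gives the application a clean signal that the answer was not grounded in the document.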
Part III: In-Context Learning: Guiding by Example
In-context learning is a powerful capability of large language models where the model learns to perform a task simply by being shown examples (demonstrations or exemplars) within the prompt itself.18 This form of learning is transient—it conditions the model for the current inference only and does not permanently update its weights.4 The commands in this section represent a hierarchy of intervention, allowing the engineer to guide the model by providing zero, one, or multiple examples of the desired behavior.
This progression from zero-shot to few-shot prompting, and ultimately to fine-tuning, represents a clear and strategic trade-off. It is an optimization problem where the developer must balance the desired performance level against the increasing cost, complexity, and effort required for each method.
Zero-Shot Prompting is the most economical and fastest approach. It requires minimal prompt design effort and consumes the fewest tokens. It is the first and most efficient step, leveraging the model's generalized pre-trained knowledge.
Few-Shot Prompting represents a moderate intervention. It requires more effort to craft high-quality examples and consumes more tokens in the prompt context. However, it offers a significant performance boost for tasks where the general model is insufficient, without incurring the computational and data management costs of retraining.
Fine-Tuning (a related but distinct process beyond prompting) is the most expensive and complex option. It requires a curated dataset, computational resources for training, and ongoing model management. It is the "heavy machinery" used when in-context learning hits its performance ceiling for a specific, high-value, and repetitive task.
Understanding this hierarchy allows a developer to make a rational, cost-benefit decision, asking: "What is the minimum level of intervention required to achieve the target performance?"
Command 19: Zero-Shot Prompting
Zero-shot prompting is the most fundamental form of interaction with an LLM. It involves instructing the model to perform a task without providing any prior examples of how to do it.5 The model must rely entirely on its pre-trained knowledge and its ability to generalize from the instruction itself.4 For many straightforward tasks—such as summarization, translation, or answering general knowledge questions—zero-shot prompting is often sufficient.
An example of a zero-shot prompt is: "Extract keywords from the below text. Text: {text} Keywords:".10 The model is expected to understand the concept of "keywords" and apply it to the provided text without having been shown an example. The success of zero-shot prompting is a testament to the powerful generalization capabilities of modern LLMs. It should always be the starting point for any new task, as it is the simplest and most efficient method.10 If it yields satisfactory results, no further complexity is needed.
Command 20: One-Shot Prompting
When a task is slightly more ambiguous or requires a specific output format that is not easily described, providing a single example—a technique known as one-shot prompting—can significantly improve performance.5 A single demonstration is often enough to clarify the user's intent and guide the model toward the correct output structure or style.
For instance, if a zero-shot prompt for a sentiment classification task is not performing well, a one-shot prompt can provide the necessary clarity.
Prompt:
This is awesome! // Positive
Wow that movie was rad! //
The model can infer from the single example that it is expected to classify the sentiment of the sentence and use the // {Sentiment} format. One-shot prompting is particularly useful for teaching the model novel tasks or specific formatting conventions with minimal prompt overhead.18
Command 21: Few-Shot Prompting
Few-shot prompting extends the concept of one-shot prompting by providing multiple examples (typically 2-5, but sometimes more) of the desired input-output pairs.19 This is a highly effective technique for conditioning the model on more complex tasks, nuanced classifications, or specific stylistic patterns where a single example is insufficient.17 The multiple demonstrations provide a richer context for the model to learn from, enabling it to better understand the underlying pattern of the task.4
For example, to extract keywords from texts, a few-shot prompt would provide several text-keyword pairs before presenting the final text for which keywords are needed:
Prompt:
Extract keywords from the corresponding texts below.
Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.
Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications
##
Text 2: {second example text}
Keywords 2: {second example keywords}
##
Text 3: {text}
Keywords 3:.10
Interestingly, research has shown that the format and the distribution of the labels in the examples are often more important for performance than the correctness of the labels themselves.18 This suggests that few-shot prompting is less about teaching the model new facts and more about demonstrating the expected structure and nature of the task.
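Few-shot prompts of this shape are usually assembled programmatically from a pool of examples. A sketch, assuming a hypothetical `few_shot_prompt` helper and the `##` separator used above:

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, then numbered
    text/keyword demonstration pairs separated by '##', then the query."""
    parts = [instruction]
    for i, (text, keywords) in enumerate(examples, start=1):
        parts.append(f"Text {i}: {text}\nKeywords {i}: {keywords}")
    n = len(examples) + 1
    parts.append(f"Text {n}: {query}\nKeywords {n}:")
    return "\n##\n".join(parts)

prompt = few_shot_prompt(
    "Extract keywords from the corresponding texts below.",
    [
        ("Stripe provides APIs for payment processing.",
         "Stripe, APIs, payment processing"),
        ("OpenAI trains large language models.",
         "OpenAI, large language models"),
    ],
    "Vector databases store embeddings for fast similarity search.",
)
```

Keeping assembly in code makes it easy to swap example sets, which matters given that example format and label distribution drive much of the performance gain.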
Part IV: Eliciting Complex Reasoning and Analysis
While foundational and structural commands can elicit high-quality information and formatted output, a distinct class of advanced techniques is required to unlock an LLM's ability to perform complex, multi-step reasoning. These commands are designed to guide the model through logical problem-solving processes, moving beyond simple information retrieval to genuine analysis and deduction.
The core principle unifying these techniques is the externalization of the model's "thought process." A standard prompt asks for a final answer, leaving the model's internal reasoning as an un-inspectable "black box".11 If an error occurs, it is difficult to diagnose. Advanced reasoning techniques fundamentally change the task from "find the answer" to "write a step-by-step explanation that leads to the answer." This leverages the model's primary strength—generating coherent, sequential text—to scaffold and improve its primary weakness, which is performing implicit, un-inspectable reasoning. Each reasoning step the model generates becomes part of the context for the next step, creating a feedback loop where the model's own output guides its subsequent generation. This externalization breaks a complex problem into simpler sub-problems, makes the process transparent and debuggable, and keeps the model focused.
Command 22: Chain-of-Thought (CoT) Prompting
Chain-of-Thought (CoT) prompting is a seminal technique that encourages the model to break down a complex problem into a series of intermediate, logical steps before arriving at a final answer.19 It is particularly effective for tasks requiring arithmetic, commonsense, and symbolic reasoning, where a direct leap to the answer is prone to error.5 CoT can be implemented in a few-shot setting by providing examples that include not just the question and answer, but also the step-by-step reasoning used to derive the answer.18
For example, when asked a math word problem, a standard prompt might lead to an incorrect answer. A CoT prompt, however, would demonstrate the reasoning process:
Standard Prompt Example:
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.
CoT Prompt Example:
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
By showing the model how to think through the problem, it is better able to apply a similar reasoning process to new, unseen problems.21
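The CoT exemplar format can be generated from (question, reasoning, answer) triples; `cot_prompt` below is an illustrative helper, not a library function:

```python
def cot_prompt(demos, question):
    """Few-shot chain-of-thought prompt: each demonstration includes
    the intermediate reasoning as well as the final answer."""
    blocks = [
        f"Q: {q}\nA: {reasoning} The answer is {answer}."
        for q, reasoning, answer in demos
    ]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)

prompt = cot_prompt(
    [(
        "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 tennis balls. How many tennis balls does he have now?",
        "Roger started with 5 balls. 2 cans of 3 tennis balls each is "
        "6 tennis balls. 5 + 6 = 11.",
        "11",
    )],
    "A library has 8 books and receives 3 boxes of 4 books each. "
    "How many books does it have now?",
)
```

Ending the prompt with a bare "A:" cues the model to produce its own reasoning chain for the new question before stating an answer.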
Command 23: Zero-Shot CoT
One of the most remarkable findings in prompt engineering is that the benefits of Chain-of-Thought reasoning can often be triggered without providing any examples. This technique, known as Zero-Shot CoT, involves simply appending a magical phrase to the end of the prompt: "Let's think step by step".18
This simple command acts as a trigger, prompting the model to externalize its reasoning process and generate a detailed, step-by-step breakdown of how it arrives at a solution, even for a problem it has never seen before.21 For example, given a complex logic puzzle, adding "Let's think step by step" will cause the model to first articulate the premises, then deduce intermediate conclusions, and finally state the final answer based on the generated chain of thought. This technique is incredibly powerful due to its simplicity and effectiveness, making it one of the most valuable tools for improving reasoning in a zero-shot context.
Command 24: Self-Consistency
Self-consistency is an advanced technique that enhances the robustness of Chain-of-Thought prompting. The core idea is to replace the standard "greedy" approach (taking the single most likely output) with an ensemble method.18 Instead of generating just one reasoning path, the model is prompted to generate multiple, diverse chains of thought for the same problem. The final answer is then determined by a majority vote among the outcomes of these different paths.19
For example, for a complex math problem, the model might be run five times with a high "temperature" setting to encourage response diversity. This could result in five different reasoning chains. Three of them might arrive at the correct answer of "67," while two might make calculation errors and arrive at different answers. By taking the majority result, the system selects "67" as the most consistent and likely correct answer.18 This technique significantly boosts performance on tasks involving arithmetic and commonsense reasoning by mitigating the impact of occasional logical or calculation errors in any single reasoning chain.20
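The voting step itself is simple once a final answer has been extracted from each sampled chain. A minimal sketch (the extracted answers are stand-ins for real model samples):

```python
from collections import Counter

def self_consistent_answer(samples):
    """Majority vote over the final answers of several independently
    sampled reasoning chains (the self-consistency decoding step)."""
    votes = Counter(samples)
    answer, _count = votes.most_common(1)[0]
    return answer

# Stand-in final answers extracted from five sampled chains.
chains = ["67", "67", "59", "67", "72"]
best = self_consistent_answer(chains)
```

The hard part in practice is reliably parsing the final answer out of each chain, typically by instructing the model to end with a fixed phrase like "The answer is X."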
Command 25: Tree of Thoughts (ToT)
Tree of Thoughts (ToT) is a more advanced and deliberate problem-solving framework that generalizes Chain-of-Thought. While CoT explores a single reasoning path, ToT enables the model to explore multiple reasoning paths simultaneously, like branches on a tree.5 At each step of the problem, the model can generate multiple different "thoughts" or next steps. It can then evaluate the progress made along each of these branches and decide which paths to continue pursuing and which to prune.22
This process allows the model to perform a more strategic exploration of the solution space. It can backtrack from dead ends, compare different approaches, and plan ahead. For example, when solving a logic puzzle, the model might generate three possible next steps. It could then use a "value" prompt to evaluate which of these steps is most likely to lead to a solution, and then focus its subsequent generation on that most promising branch. This makes ToT a much more powerful problem-solving technique for tasks that require exploration, strategic planning, or trial and error.1
Command 26: Graph of Thoughts (GoT)
Graph of Thoughts (GoT) represents a further generalization of these reasoning frameworks, moving from the linear structure of CoT and the tree structure of ToT to a more flexible graph structure.1 In GoT, the "thoughts" (intermediate reasoning steps) are nodes in a graph. This architecture allows for more complex operations, such as merging different reasoning paths or transforming entire branches of thought.
For instance, the model could pursue two different lines of reasoning independently and then, at a later stage, generate a new "thought" that synthesizes the findings from both paths. This allows for a richer and more dynamic problem-solving process, where information can be combined and refined in arbitrary ways, much like a human mind mapping out a complex problem. While more complex to implement, GoT offers a glimpse into more powerful and flexible AI reasoning systems.
Command 27: Generate Knowledge Prompting
This technique addresses the limitation that a model might not retrieve all relevant facts from its memory before answering a question. Generate Knowledge Prompting instructs the model to first generate a set of relevant facts or background knowledge about the topic before attempting to answer the main question.1 This self-generated knowledge is then included as part of the context for the final answer.
For example, a prompt could be structured as: "Before explaining climate change, first list the key scientific principles related to it (e.g., the greenhouse effect, carbon cycle). Once done, use these principles to explain the concept, its causes, and its effects".19 This two-step process primes the model with the necessary information, effectively allowing it to "study up" just before answering. This leads to more informed, accurate, and comprehensive responses, as the final generation is grounded in a context that the model itself has just articulated.18
Command 28: Directional Stimulus Prompting
Directional Stimulus Prompting is a more subtle reasoning technique that involves providing hints, keywords, or cues to guide the LLM toward a desired output without being overly prescriptive or providing a full example.1 It is a way of gently "nudging" the model in the right direction. This can be particularly useful for creative tasks or for guiding the model's focus within a broad topic. For instance, in a prompt asking for a story, one might include a list of keywords like "ancient ruins, forgotten map, solar eclipse" to stimulate a particular narrative direction. This technique helps to shape the model's output while still allowing for a high degree of generative freedom.5
Command 29: Plan-and-Solve Prompting
Similar to Chain-of-Thought, Plan-and-Solve Prompting is a more explicit method for breaking down complex problems. This technique instructs the model to first devise a high-level plan to solve the problem and then execute that plan, showing its work for each step.22 This separates the strategic planning phase from the tactical execution phase. A prompt might state: "First, devise a plan to calculate the total surface area of the object described. Then, execute the steps in your plan to find the final answer." This encourages a more structured and methodical approach to problem-solving and has been shown to improve performance on zero-shot reasoning tasks.
Command 30: Re-Reading (Recitation)
For tasks that involve reasoning over a long and detailed context, models can sometimes fail to recall or correctly utilize information presented earlier in the prompt. Research has shown that prompting the model to "re-read" or recite key parts of the provided context can improve its ability to reason accurately.22 This command forces the model to pay closer attention to the provided information. For example, before asking a question about a long passage of text, one could instruct the model: "First, summarize the main argument of the third paragraph. Now, based on the entire text, answer the following question..." This act of forced recall and summarization appears to strengthen the model's internal representation of the context, leading to better performance on downstream reasoning tasks.
Part V: Advanced Architectures and Self-Optimization
The most sophisticated applications of prompt engineering move beyond single, static prompts to construct dynamic, multi-stage systems. The commands in this section represent architectural patterns and meta-cognitive techniques that treat prompting as a system-level design challenge. These approaches enable the creation of autonomous agents, self-optimizing workflows, and robust, production-grade AI applications.
The emergence of these techniques signals a fundamental shift in the role of the human prompt engineer. The focus is evolving from crafting the perfect execution prompt for a single task to designing the optimal meta-prompt that governs an entire autonomous system. Early prompt engineering focused on micro-managing the model's token-level output for a specific task. The evolution to prompt chaining saw the engineer become a static system architect, designing a fixed workflow. Now, with agentic and automated techniques, the prompt becomes a "mission briefing" for an autonomous agent. The LLM itself becomes the reasoner and planner, tasked with figuring out the steps, using tools, and even designing its own prompts. This trajectory represents a powerful abstraction of human effort, elevating the prompt engineer's role from a hands-on technician to a high-level architect and supervisor of intelligent, autonomous systems.
Command 31: Prompt Chaining
Prompt chaining, also known as creating multi-stage workflows, is the practice of breaking down a complex task into a sequence of simpler prompts, where the output of one prompt serves as the input for the next.5 This architectural pattern is essential for building any non-trivial application and is a more robust alternative to trying to accomplish everything in a single, overly complex prompt.1
For example, creating a comprehensive marketing plan could be broken down into a chain of prompts:
Prompt 1: "Create 3 detailed buyer personas for our new eco-friendly water bottle...".7
Prompt 2: "Based on the personas defined in the previous step: {output of Prompt 1}, craft a compelling Unique Selling Proposition (USP)...".7
Prompt 3: "Using the USP from the previous step: {output of Prompt 2}, suggest 5 marketing channels...".7
This modular approach allows for validation and correction at each step, improves reliability, and makes the overall system easier to design, debug, and maintain.8
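The chain above can be expressed as a simple loop over prompt templates; the `run_chain` helper and `stub_llm` placeholder are assumptions of this sketch, standing in for a real model call:

```python
def run_chain(llm, steps, initial_input):
    """Run a sequence of prompt templates, feeding each output
    into the next template's {previous} slot."""
    output = initial_input
    for template in steps:
        output = llm(template.format(previous=output))
    return output

# Stub model for illustration: echoes a tag so the data flow is visible.
def stub_llm(prompt):
    return f"<response to: {prompt[:30]}...>"

result = run_chain(
    stub_llm,
    [
        "Create 3 detailed buyer personas for {previous}",
        "Based on the personas: {previous}, craft a compelling USP.",
        "Using the USP: {previous}, suggest 5 marketing channels.",
    ],
    "a new eco-friendly water bottle",
)
```

A production version would add validation between steps, which is exactly where the modularity of chaining pays off.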
Command 32: Reflection and Self-Correction
Reflection is a powerful multi-stage technique that emulates the human process of iterative refinement. It involves a two-step process: first, the model generates an initial response to a prompt. Second, a subsequent prompt is used to ask the model to critique, evaluate, and correct its own initial output.17 This creates a feedback loop that can significantly improve the quality and accuracy of the final result.
For example, after generating a piece of code, a reflection prompt could be: "Review the code you just wrote. Are there any potential bugs? Could the efficiency be improved? Is the code well-documented? Provide a revised version of the code that addresses these points." This technique leverages the model's analytical capabilities to improve its own generative output, leading to a more robust and polished final product.8
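The reflection loop is two model calls wired together. A sketch, using a stub in place of a real model so the control flow is visible:

```python
def reflect(llm, task):
    """Two-pass generate-then-critique loop: draft an answer, then
    ask the model to review and revise its own output."""
    draft = llm(task)
    critique_prompt = (
        "Review the answer below. Identify any errors or possible "
        "improvements, then provide a revised version.\n\n"
        f"Task: {task}\nDraft: {draft}"
    )
    return llm(critique_prompt)

def stub_llm(prompt):
    # Placeholder model: distinguishes the critique pass from the draft pass.
    return "REVISED" if prompt.startswith("Review") else "DRAFT"

final = reflect(stub_llm, "Write a function that sorts a list.")
```

The same loop can be iterated several times, though returns typically diminish after one or two critique passes.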
Command 33: Meta-Prompting
Meta-prompting is a fascinating technique where the model's own intelligence is leveraged to improve the way it is prompted. It involves asking the model to generate or refine a prompt that would be effective for a given task.5 This command essentially asks the model, "How should I ask you to do this?"
An example of a meta-prompt would be: "I want to create a prompt that will help an AI explain the concept of climate change in simple terms. Create an effective prompt for this task".19 The model might then generate a detailed prompt that includes instructions on defining the audience, specifying the format, and suggesting the inclusion of analogies—incorporating many of the best practices discussed in this guide. This technique can accelerate the prompt engineering process by leveraging the model's vast "understanding" of language and tasks.5
Command 34: Clarification Prompting
Instead of a one-way instruction, this command transforms the interaction into a collaborative dialogue by instructing the model to ask clarifying questions if the initial prompt is ambiguous or lacks sufficient information.16 This proactive approach can prevent the model from making incorrect assumptions and generating a flawed response based on incomplete information.
The prompt can be framed as a directive that sets the mode of interaction: "From now on, when I give you a task, I want you to ask me clarifying questions until you have enough information to provide the needed output. Start by asking me what task I want to accomplish".16 This shifts the burden of providing all necessary details upfront from the user to a more natural, conversational exchange, leading to a more accurate final outcome.
Command 35: Retrieval Augmented Generation (RAG)
Retrieval Augmented Generation (RAG) is a powerful architectural pattern that grounds LLM responses in external, up-to-date, or proprietary knowledge.1 It is not a single prompt but a system that combines an information retrieval component with a language model. The process is as follows:
Retrieval: When a user asks a query, the system first retrieves relevant documents or data chunks from an external knowledge base (such as a vector database containing company documents or recent news articles).
Augmentation: The retrieved information is then dynamically inserted into the prompt as context.
Generation: The LLM receives the augmented prompt (containing both the original query and the retrieved context) and generates an answer that is grounded in the provided information.
RAG is a critical technique for reducing hallucinations, enabling the model to answer questions about data it was not trained on, and providing citations for its answers.18
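The three RAG stages can be sketched end to end with a toy retriever; real systems use embeddings and a vector database, and the word-overlap scoring here is purely illustrative:

```python
def retrieve(query, documents, k=1):
    """Toy retriever: rank documents by word overlap with the query.
    (A real system would use embedding similarity instead.)"""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rag_prompt(query, documents):
    """Augment the prompt with retrieved context, then hand it to the model."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "The refund policy allows returns within 30 days.",
    "Shipping is free on orders over 50 dollars.",
]
prompt = rag_prompt("What is the refund policy?", docs)
```

Note the grounding instruction plus the "out" (Command 18) baked into the template: both are standard in RAG prompts to keep answers tied to the retrieved context.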
Command 36: Agentic Prompting and Tool Use
Agentic prompting represents a paradigm shift where the LLM is used not just as a generator of content, but as a reasoning engine that can take actions.8 This involves designing prompts that give the model a high-level goal and access to a set of external "tools" (such as APIs, calculators, code interpreters, or search engines).5 The model's task is to reason about the goal, decide which tool to use, generate the necessary input for that tool, execute it, and then process the output to determine the next step.18
Frameworks like ReAct (Reason and Act) demonstrate this pattern, where the model alternates between generating reasoning steps ("Thought:") and actions ("Action:").1 This allows the model to overcome its inherent limitations by offloading tasks like real-time information gathering or precise mathematical calculations to specialized tools.
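The tool-use half of the loop amounts to parsing the model's action line and dispatching to a registered tool. A sketch, where the `Action: tool[input]` syntax and the tool registry are assumptions modeled loosely on ReAct:

```python
# Hypothetical tool registry; a full agent loop would feed each
# observation back to the model for the next Thought/Action turn.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "search": lambda q: f"<search results for {q!r}>",
}

def dispatch(model_line):
    """Parse a ReAct-style 'Action: tool[input]' line and run the tool,
    returning the observation to append to the transcript."""
    assert model_line.startswith("Action: ")
    body = model_line[len("Action: "):]
    tool, arg = body.split("[", 1)
    return TOOLS[tool](arg.rstrip("]"))

observation = dispatch("Action: calculator[5 + 2 * 3]")
```

In real deployments the calculator would use a safe expression parser rather than `eval`, and each tool call would be logged for auditability.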
Command 37: Automatic Prompt Engineer (APE)
Automatic Prompt Engineer (APE) is an automated approach that uses an LLM to perform the task of prompt engineering itself.1 The process treats prompt generation as a search problem. Given a task description and a few examples of input-output pairs, an LLM is used to generate a diverse set of candidate instructions. These candidate prompts are then tested on a task, and the one that yields the best performance (based on some scoring metric) is selected as the optimal prompt.18 This technique automates the manual, iterative process of prompt refinement and can often discover highly effective but non-intuitive prompt phrasing.
Command 38: Active-Prompt
Active-Prompt is a technique for dynamically optimizing few-shot prompting. In standard few-shot prompting, the same set of examples is used for every query. However, some examples may be more relevant to certain queries than others. Active-Prompt addresses this by first analyzing the specific query being asked. It then selects a subset of the most relevant and informative examples from a larger pool of candidates to include in the prompt.1 This query-specific adaptation of the few-shot examples has been shown to significantly improve performance by providing the model with more targeted guidance.18
Command 39: Program-Aided Language Models (PAL)
Program-Aided Language Models (PAL) is a technique that combines the linguistic capabilities of LLMs with the deterministic execution of code.1 Instead of prompting the model to compute the final answer directly, PAL prompts the model to write a program (e.g., in Python) that solves the problem. The generated code is then executed by a standard interpreter, and the result of the execution is returned as the final answer. This approach offloads complex calculations, logical operations, or symbolic reasoning to the code interpreter, leveraging the LLM for understanding the problem and generating the solution logic, while relying on the interpreter for flawless execution.18 This dramatically improves accuracy on tasks involving math and complex logic.
Command 40: Multimodal Prompting
With the rise of multimodal models like GPT-4o and Gemini, prompt engineering is expanding beyond text to include other data types.5 Multimodal prompting involves crafting inputs that combine text with images, audio, or video.23 The fundamental principles of clarity, context, and specificity still apply, but they must now be extended across different modalities. For example, a multimodal prompt might consist of an image of a diagram and a textual question: "Based on the attached circuit diagram, explain the function of the resistor labeled R1." This requires the model to jointly process and reason over both the visual and textual information to generate a coherent response.18
Conclusion: The Future of Human-AI Collaboration
This comprehensive analysis of over 40 prompt engineering commands illustrates a clear and decisive shift in human-AI interaction—a transition from intuitive art to a structured, systematic engineering discipline. The journey from foundational principles of clarity and specificity to the design of complex, autonomous agentic systems underscores the rapid maturation of this field. The commands detailed herein are not merely a list of tips and tricks; they represent a lexicon for precise communication with large language models, enabling practitioners to control, guide, and collaborate with these powerful systems in increasingly sophisticated ways.
The evidence synthesized from extensive research and best practices reveals several key trajectories. First, the principle of externalizing thought—making the model's reasoning process explicit through techniques like Chain-of-Thought—has proven to be a fundamental breakthrough in unlocking complex problem-solving abilities. Second, the move toward multi-stage, chained architectures like RAG and agentic frameworks demonstrates that the future of AI applications lies not in single, monolithic prompts, but in the design of robust, modular systems where LLMs act as core reasoning components. Finally, the emergence of meta-cognitive and self-optimizing techniques such as Meta-Prompting and APE signals a new level of abstraction, where the human engineer's role is evolving from a micro-manager of token sequences to a high-level architect of autonomous reasoning systems.
Looking forward, the continued co-evolution of LLMs and the engineering practices used to interact with them will be critical. The challenges of ensuring AI safety, mitigating biases, and preventing malicious use through techniques like prompt injection will require even more sophisticated architectural and security-aware prompting strategies.5 The ultimate goal remains the transformation of human-AI interaction from guesswork into a precise, reliable, and scalable science.6 Mastering the lexicon of prompt engineering is the definitive path toward achieving that goal, paving the way for a future where AI can be more effectively and responsibly integrated into every facet of technology and society.
--------------------
Here are 40 prompts centered around the concept of iteration, grouped by category.
🎨 Creative & Design
Story Writing: Write a 50-word story. Now, iterate on it five times, trying to make it more impactful with each version.
Logo Design: Design a simple logo for a "tree" concept. Create three iterations: one minimalist, one geometric, and one organic.
UI/UX: Redesign the login screen for a popular app. Iterate on your first design to improve usability by 10%.
Photography: Take one photo and create five different edited versions (iterations) to evoke five different moods.
Poetry: Write a 4-line poem. Now iterate by replacing one word in each line to completely change its meaning.
Headline: Write a headline for an article about climate change. Iterate to create five versions: one serious, one hopeful, one urgent, one questioning, and one witty.
Character Design: Sketch a simple character. Iterate on its expression three times to show "curiosity," "fear," and "joy."
Tagline: Create a tagline for a new brand of coffee. Iterate on it to make it shorter and more memorable.
💻 Technology & Programming
Algorithm: Write a simple algorithm to sort a list. Now, iterate on it to improve its time complexity (e.g., from bubble sort to merge sort).
Refactoring: Take a "messy" block of code (real or hypothetical) and describe three iterations of refactoring to make it cleaner and more efficient.
A/B Testing: Design an A/B test for a website's "Sign Up" button. What is iteration A (control), and what is iteration B (variable)?
Agile: Describe how you would iteratively build a to-do list application, starting with the most basic "Minimum Viable Product" (MVP).
Debugging: Explain how debugging is an iterative process. Use a specific bug (e.g., a "null pointer exception") as an example.
Performance: A website page loads in 5 seconds. Describe the iterative steps you would take to get it under 1 second.
User Feedback: A new app feature is getting negative reviews. Describe the iterative process of using that feedback to fix the feature.
API Design: Design a basic API endpoint for "get user data." Iterate on it to add parameters for sorting and filtering.
📈 Business & Product
Elevator Pitch: Write a 30-second elevator pitch for a fictional startup. Iterate on it three times, each based on feedback from a different stakeholder (an investor, a customer, an engineer).
Business Model: Start with a "lemonade stand" business model. Iterate on it to scale it into a national beverage company.
Marketing Campaign: Outline a basic social media ad. Iterate on the copy, image, and call-to-action for three different target audiences.
Product Roadmap: Create a simple 3-feature roadmap. Now, iterate on it based on a sudden competitor launch.
Value Proposition: Write a value proposition for a service like Spotify. Iterate to make it clearer and more compelling.
Onboarding: Describe a user's first-time-use (FTU) experience for an app. Iterate on this process to reduce the number of steps by half.
Build-Measure-Learn: Explain the "Build-Measure-Learn" feedback loop using the example of a new e-commerce feature.
Pricing: Propose a pricing model for a new SaaS product. Now, iterate on it to create a "freemium" tier.
🧠 Personal & Conceptual
Habit Building: Describe an iterative approach to building a new habit, like "reading 30 minutes a day."
Learning: How is learning a new skill (like playing guitar) an iterative process? Describe the first three iterations.
Goal Setting: Take a large, vague goal ("get healthier"). Iterate on it to create a specific, measurable, and actionable plan.
Morning Routine: What is your current morning routine (Version 1.0)? Describe what Version 2.0 would look like and the one change you'd make.
Failure: "Failure is just a data point for the next iteration." Discuss this statement.
Cooking: A recipe didn't turn out well. Describe the iterative process of fixing it the next time you cook it.
Fitness: A person's workout plan is "30 minutes of running." How would they iterate on this plan as they get fitter?
Essay Writing: Explain how the process of writing an essay (outline, first draft, revision, final draft) is a form of iteration.
🎲 Problem-Solving & Scenarios
Game Design: A game level is too hard. Describe three iterations you would test to find the right balance of difficulty.
The Scientific Method: Explain how the scientific method is a formal process of iteration.
Customer Support: A customer writes an angry email. Write your first draft of a reply. Now, iterate on it to be more empathetic and helpful.
Efficiency: Analyze a common household chore (e.g., "washing dishes"). Propose three iterations to make the process more efficient.
Planning: You need to plan a 7-day vacation. Describe how you would iteratively refine the itinerary from a vague idea to a detailed schedule.
Negotiation: A negotiation starts with a low offer. Describe the iterative "back-and-forth" process of reaching a compromise.
Nature: How does the process of evolution by natural selection demonstrate iteration on a massive scale?
Problem Definition: Take a complex problem like "urban traffic." Iterate on the definition of the problem (is it a speed problem? a pollution problem? a safety problem?).
Would you like to pick one of these prompts to explore further?
-------------
Here is a list of 40 prompts to spark your creativity and help you write song lyrics, broken down by category.
📜 Scenarios & Stories
Write about two people saying goodbye at an airport, but one of them is lying about where they're going.
Tell the story of a heist from the perspective of the "getaway driver" who is having second thoughts.
Describe a conversation between someone and their childhood imaginary friend, who has suddenly reappeared.
Write about running into an ex at a mutual friend's wedding, five years after a bad breakup.
A song about someone receiving a letter or package from a relative who has passed away, revealing a secret.
The story of someone leaving their small town for the big city, and the culture shock they experience.
Write about two people stuck in an elevator during a blackout.
A song about the last night a group of friends spends together before they all move away for college.
🎭 Characters & Perspectives
Write from the perspective of a "villain" who believes they are the hero of their own story.
Write a song from the perspective of an old, abandoned house watching a new family move in.
A song from the point of view of someone who is the "other person" in an affair.
Write a song from the perspective of the Moon, singing to the Earth.
A song about a con artist who accidentally falls in love with their target.
Write from the perspective of someone who has "imposter syndrome" and feels like a fraud.
A song from the point of view of someone's "shadow" or "reflection."
Write a song about a modern-day mythological figure (e.g., Icarus as a failed entrepreneur, Sisyphus as an office worker).
💡 Emotions & Concepts
Write about the feeling of "saudade"—a deep nostalgic longing for something or someone you love that is absent.
Describe the feeling of being "stuck" in a routine and the desperate urge to break free.
A song about the strange, quiet euphoria of being awake at 3 AM while the rest of the world is asleep.
Write about the complex feeling of being happy for a friend's success while also feeling jealous.
A song that captures the anxiety of waiting for an important phone call or text message.
Write about the "ghost" of a past self and how much you have (or haven't) changed.
A song about forgiveness, but you're not sure if you're ready to give it.
Describe the specific feeling of a "Sunday night," dreading the week ahead.
🏙️ Objects & Places
Write a song about a specific item of clothing (a faded denim jacket, a pair of old boots) and the memories attached to it.
A song about a dive bar at 2 AM, and the stories of the people inside.
Write about an old, haunted guitar and the music it makes.
A song about the "junk drawer" in a kitchen and the random, forgotten items inside.
Write a song about a specific street or road you used to drive down all the time.
A song centered around the last payphone in your city.
Write about an old bookstore and the secrets hidden in the pages of its books.
A song about a "one-star review" of a beautiful place (like the Grand Canyon or the ocean).
✍️ Titles & Opening Lines
Write a song that starts with the line: "I found your old picture in a book I never finished."
Write a song titled "Kerosene and Roses."
Start a song with the line: "The silence in this car is louder than the engine."
Write a song titled "The Last Time I Saw This Town."
Start a song with the line: "We were driving with the windows down, pretending to be rich."
Write a song titled "Borrowed Emptiness."
Start a song with the line: "The cheapest coffee in this city still costs too much."
Write a song titled "Permanent Temporary."
I hope these prompts get your ideas flowing! Would you like me to help you brainstorm some rhymes or metaphors for one of these ideas?
Here are 36 more prompts, with a focus on different perspectives, "what if" scenarios, and sensory details.
🌀 "What If...?" Scenarios
What if you woke up and everyone in the world had forgotten your name?
What if you could "rewind" one day of your life, but only once?
What if you met your 18-year-old self? What advice would you give? What would they think of you?
What if you found a key that could open any door, but not the one you're looking for?
What if you received a weather forecast that predicted your emotional state instead of the weather? (e.g., "Cloudy with a 90% chance of regret")
What if you could buy "time" in a store, but the cost was your memories?
What if your "Google search history" was read aloud at your funeral?
🖼️ Sensory & Imagery
Write a song that smells like old books, dust, and rain.
A song based on the color "emerald green" (envy, money, nature, magic).
Describe the sound of a city just before sunrise.
Write about the physical feeling of a panic attack, but as a metaphor for a relationship.
A song about the taste of a specific meal that reminds you of home.
Describe a memory only using the sense of touch (e.g., "gravel under my palms," "the static on your sweater").
A song about the "neon buzz" of a sign outside a motel window.
🗣️ Conversational & Dialogue
A song that is a one-sided phone conversation, where you only hear the speaker's side.
Write a song that is an argument between two people in a car.
A song that starts with the line: "So, are you going to ask me what happened?"
Write a song that is a "job interview" for being someone's partner.
A song that is just a list of "ground rules" for a new relationship.
A song that is a bad public apology (e.g., from a celebrity or politician).
Write a song from the perspective of a bartender listening to someone's problems.
🧠 Conceptual & Abstract
Write a song about the "space between" two things (e.g., between thoughts, between breaths, between two people on a couch).
A song about the concept of "unrequited love," but for a place or a time period you've never been to.
Write about the internal battle between your "head" and your "heart" as two distinct characters.
A song about the "momentum" of a bad decision—how it gets harder to stop once you've started.
Write about the feeling of being an "extra" in the movie of someone else's life.
A song about the paradox of "nostalgia"—missing a time you weren't actually happy in.
Write a song about the "architecture" of a lie (how it's built, what it's made of, how it falls).
⚙️ Specific Challenges
Write a "list song" that is just a list of things you've lost (e.g., keys, time, friends, a single earring).
Write a song that has no rhymes at all.
Write a song where the chorus is just one single word, repeated.
Write a song from the perspective of an inanimate object in the room (a coffee cup, a mirror, a single shoe).
Write a song about a historical event, but told from the perspective of an everyday person who was there.
A song titled "This Song is Not About You."
Write a song that tells a story backward, starting with the consequences and ending with the beginning.
Write a song where every line starts with the same word (e.g., "Maybe...").
Would you like me to try a different angle, like prompts for a specific genre (e.g., folk, pop, rock)?
-----------------
Here is a list of 40 prompts designed to spark ideas for creating sermons, organized by common themes and approaches.
📖 1. Based on a Specific Bible Passage or Story
Genesis 1: "Image of God in an Age of AI and Selfies: What Does 'Imago Dei' Mean Today?"
Exodus 3: "The Burning Bush: Recognizing God's Presence in the Ordinary."
Psalm 42: "When Your Soul is Downcast: The Honest Spirituality of Lament."
Isaiah 6: "Here I Am, Send Me: The Call to Uncomfortable Faith."
Habakkuk 3:17-19: "Rejoicing in the 'Even Though': Faith When Everything Goes Wrong."
Matthew 5 (Beatitudes): "The Upside-Down Kingdom: What Does it Really Mean to be 'Blessed'?"
Matthew 6:25-34: "'Do Not Worry': Overcoming Anxiety in a Culture That Runs On It."
Luke 15 (Prodigal Son): "The Other Brother: Rescuing the Parable from the 'Good' Son."
John 4 (Woman at the Well): "Breaking Barriers: How Jesus Sees Past Our Labels."
Philippians 4:13: "I Can Do All Things... Really? Unpacking a Misused Verse."
✝️ 2. Based on a Theological Concept
Grace: "Grace is Not Fair: Why We Struggle to Accept a Gift We Can't Earn."
The Trinity: "More Than a Math Problem: Why a Three-in-One God Matters for Community."
Forgiveness: "The Hardest Command: Forgiving When It Doesn't Make Sense."
Justice: "What Does the Lord Require? Moving Beyond Hashtags to 'Do Justice' (Micah 6:8)."
The Church: "Beyond the Building: What is the 'Body of Christ' Supposed to Be?"
Salvation: "What Are We Saved From, and What Are We Saved For?"
The Holy Spirit: "The Forgotten God: Re-discovering the Power and Presence of the Spirit."
Sin: "The 'Respectable' Sins We Tolerate: Pride, Envy, and Gossip."
💡 3. Based on a "Hard Question" or Challenge
Doubt: "The Faith of a Doubter: Lessons from Thomas and the Psalms."
Suffering: "Where is God in the Midst of Tragedy?"
Unanswered Prayer: "When God is Silent: Wrestling with the Silence of Heaven."
Exclusivity: "Jesus as 'The Way, The Truth, and The Life': What Does That Mean for a Pluralistic World?"
Judgment: "'Judge Not': Navigating the Line Between Discernment and Condemnation."
"Hard Sayings": "'Turn the Other Cheek': Radical Peacemaking in an Age of Conflict."
Hypocrisy: "Cleaning the Inside of the Cup: Why We See the Specks in Others' Eyes."
🌱 4. Based on Practical Christian Living
Money & Generosity: "The Stewardship of Everything: Where is Your Treasure?"
Work & Calling: "Faith on Monday Morning: Your Job as a Place of Ministry."
Relationships: "Love Your Neighbor (Especially the Difficult One)."
Rest & Sabbath: "The Lost Art of Sabbath: Escaping the 'Cult of Busy'."
Decision-Making: "How Do I Know God's Will for My Life?"
Digital Life: "The Soul in the Age of Social Media: Finding Authenticity and Avoiding the Comparison Trap."
Evangelism: "Good News, Not Bad Advice: Sharing Faith Without Being Weird."
🌍 5. Based on Culture and Current Events
Loneliness: "The Epidemic of Isolation and the Call to Deep Community."
Polarization: "Bridging the Divide: A Christian Response to 'Us vs. Them'."
The Environment: "Creation Care: Our Role as Stewards of the Earth."
Busyness & Burnout: "Finding Your Identity Outside of Your Productivity."
Hope: "Where is the Good News? Finding Hope in a World Full of Bad News."
Social Justice: "The Good Samaritan: Who is My Neighbor Today?"
🗓️ 6. Based on a Single Word or Metaphor
"Wait": (Advent/General) "The Sacred In-Between: Finding God in the Waiting Room of Life."
"Empty": (Lent/Easter) "From the Emptiness of the Cross to the Emptiness of the Tomb."
Would you like me to take one of these prompts and expand it into a brief sermon outline?
Here are 40 additional prompts, exploring new categories and different angles for sermon development.
✝️ 1. Based on Biblical Character Studies (The Supporting Cast)
Barnabas: "The Lost Art of Encouragement: Becoming a 'Son (or Daughter) of Encouragement' in a Critical World."
Hagar: "The God Who Sees Me (El Roi): Finding Value When You've Been Cast Out."
Joseph of Arimathea: "Silent Faith, Bold Action: When to Stop Following Jesus in Secret."
Lydia (Acts 16): "An Open Heart, An Open Home: The Intersection of Business, Faith, and Hospitality."
Mephibosheth (2 Samuel 9): "A Seat at the King's Table: Understanding Kindness, Covenant, and Grace for the 'Broken'."
Naomi (Ruth 1): "From 'Bitter' to 'Blessed': The Journey Back from Spiritual Emptiness."
Zacchaeus: "More Than a 'Wee Little Man': A Story of Radical Repentance and Restitution."
The Syrophoenician Woman (Mark 7): "Persistent Faith: When You Have to Argue with God."
🗣️ 2. Based on Jesus's "I AM" Statements (from John)
"I AM the Bread of Life": "Are You Spiritually Starving? Finding True Satisfaction in Christ."
"I AM the Light of the World": "How to Stop Stumbling in the Dark: Living by God's Illumination."
"I AM the Gate": "The Freedom of the 'Narrow' Way: Why Exclusivity Can Be Good News."
"I AM the Good Shepherd": "Known, Named, and Protected: What it Means to Belong to the Shepherd."
"I AM the Resurrection and the Life": "Living Like Easter is True: How the Resurrection Changes Your Monday."
"I AM the Way, the Truth, and the Life": "Not a Suggestion, But a Destination: Why Jesus is the Answer to Our Deepest Questions."
"I AM the True Vine": "The Secret of 'Abiding': How to Stop Striving and Start Bearing Fruit."
🔄 3. "Cultural Reversals" (Challenging Common Sayings)
Culture says "Follow Your Heart."
Sermon: "The Heart is Deceitful (Jer. 17:9): Why You Need a Better Guide Than Your Feelings."
Culture says "You Only Live Once (YOLO)."
Sermon: "You Live Twice: How Living for Eternity Transforms the Life You Live Now."
Culture says "Seeing is Believing."
Sermon: "Believing is Seeing: The Counter-Cultural Faith of 'Blessed Are Those Who Have Not Seen'."
Culture says "You Are Enough."
Sermon: "The Good News That You Aren't Enough (And Why Christ Is)."
Culture says "Find Your Truth."
Sermon: "What is Truth? Engaging with Jesus, The Truth."
🕊️ 4. Based on the Fruit of the Spirit (Galatians 5)
Love: "The 'Agape' Difference: Loving When It's Undeserved and Unrewarded."
Joy: "The Unshakeable Joy: How It's Different from Happiness (and Where to Find It)."
Peace: "The Peace of God vs. The Peace with God: Guarding Your Heart in Anxious Times."
Patience (Longsuffering): "Developing a 'Long Fuse' in an Instant-Gratification World."
Kindness & Goodness: "The Forgotten Fruits: The Practical Power of Being 'Nice'."
Faithfulness: "Finishing Well: The Character Trait God Values Most."
Gentleness: "The Strength of True Gentleness: More Than Just Being Quiet."
Self-Control: "The Battle Within: Winning the War Against Your Own Impulses."
⛪ 5. Based on the "Life of the Church"
Communion: "Do This in Remembrance: What Are We Actually Doing at the Lord's Table?"
Baptism: "Buried and Raised: The Public Symbol of a Private Transformation."
The 'One Anothers': "Beyond 'Hello': The Biblical Mandate to 'Bear One Another's Burdens'."
Discipleship: "The Difference Between a 'Fan' and a 'Follower': Counting the Cost."
Worship: "More Than Music: What Does it Mean to Be a 'Living Sacrifice'?"
Unity: "United, Not Uniform: Navigating Disagreements in the Body of Christ."
📖 6. Prompts from Lesser-Known Parables
The Persistent Widow (Luke 18): "The Unjust Judge: Why God Wants Us to Be Annoying in Prayer."
The Workers in the Vineyard (Matthew 20): "The Scandal of Grace: When God's Generosity Feels Unfair."
The Hidden Treasure & Pearl (Matthew 13): "Worth Everything: What is the 'One Thing' You Would Sell It All For?"
The Unforgiving Servant (Matthew 18): "Forgiven a Billion, Fuming Over a Buck: The Chokehold of Unforgiveness."
The Pharisee and the Tax Collector (Luke 18): "The Dangerous Prayer: 'God, I Thank You That I Am Not Like Other People...'"
The Shrewd Manager (Luke 16): "Using 'Worldly Wealth' for Eternal Good: A Parable About Money We'd Rather Ignore."
Would you like me to develop any of these prompts into a sermon outline with key verses?
-----------
Here are 40 prompts for Biblical exegesis, categorized by the different lenses of analysis used in the process.
📜 1. Textual & Linguistic Analysis (The "Close Read")
Prompts focused on the words, grammar, and manuscript evidence of the text itself.
Keyword Study: Select 3-5 key repeating words in this passage. What are their original Hebrew/Greek meanings, and how does this range of meaning impact the interpretation?
Grammatical Analysis: Analyze the main verbs in this passage. What are their tenses, moods, and voices, and how do they shape the action or argument?
Textual Criticism: Compare this passage in two or more major translations (e.g., NRSV, ESV, NIV, NASB). Identify any significant differences and research the underlying textual variants (e.g., in the Masoretic Text vs. Septuagint or Textus Receptus vs. Critical Text) that account for them.
Translation Comparison: How do different translation philosophies (e.g., formal equivalence vs. dynamic equivalence) render a particularly complex phrase or verse in this passage? Which rendering do you find most compelling and why?
Syntax and Sentence Flow: Diagram the main sentences in the passage. How does the author structure their clauses (e.g., main vs. subordinate) to build their argument or tell the story?
Analyze Connective Words: Examine the conjunctions (e.g., "for," "therefore," "but," "so that"). How do they signal the logical relationship between different ideas in the text?
Semantic Range: Choose a theologically significant word in the text (e.g., "righteousness," "faith," "love," "gospel"). How is this same word used by the same author in other parts of their writing?
🏛️ 2. Historical & Cultural Context (The "World Behind the Text")
Prompts focused on the author, original audience, and socio-historical setting.
Identify the Sitz im Leben: What is the original "setting in life" for this text? Was it a legal code, a hymn for worship, a private letter, a public prophecy? How does this original function shape its meaning?
Authorial Intent: Based on the historical context, what specific problem, question, or situation is the author most likely addressing for their original audience?
Audience Profile: Describe the original recipients of this text. What were their political, economic, and religious circumstances? How would they have heard this message differently than we do?
Cultural-Specific Concepts: Identify any concepts, metaphors, or commands that are deeply tied to the ancient world (e.g., "levirate marriage," "gleaning," "honor-shame culture," "patron-client relationships"). How do we bridge the gap between their world and ours?
Polemical Analysis: Is this text arguing against a specific belief or practice (either from a rival group or a foreign culture)? If so, what can we reconstruct about that opposing viewpoint?
Geographical Significance: Does the text mention specific locations (e.g., Jerusalem, Babylon, the wilderness, the sea)? What is the historical, political, or theological significance of these places?
Source Criticism: (Especially for Gospels or Pentateuch) Does this passage show signs of relying on earlier oral traditions or written sources (e.g., Q source, JEDP)? How does that potential background influence its final form?
📖 3. Literary Analysis (The "World Within the Text")
Prompts focused on the genre, structure, rhetoric, and artistic features of the text as literature.
Genre Identification: What is the literary genre of this passage (e.g., narrative, poetry, parable, epistle, apocalyptic)? List 3-5 rules or conventions of this genre and show how the text follows or subverts them.
Structural Analysis: Outline the passage. Where are the major breaks or shifts in topic, scene, or argument? How does this structure guide the reader's understanding?
Rhetorical Devices: Identify and explain the function of any literary devices used, such as metaphor, simile, hyperbole, irony, chiasm, or parallelism.
Narrative Analysis: (For story-based texts) Who are the main characters? How are they portrayed? What is the central conflict, and how is it resolved? From whose perspective is the story told (point of view)?
Poetic Analysis: (For Psalms/Prophecy) Analyze the use of imagery (e.g., water, light, shepherd, rock). What emotions or theological concepts do these images evoke?
Opening and Closing: How do the first and last sentences (or paragraphs) of this passage function? How do they frame the entire unit?
Character Study: Select one character in the narrative. Analyze their words, actions, and how others react to them. What is their primary function in the story?
Setting and Atmosphere: How does the author use descriptions of the setting (time, place, atmosphere) to contribute to the passage's overall mood or message?
📚 4. Canonical & Intertextual Analysis (The "Text in Front of the Text")
Prompts focused on the passage's relationship to its immediate context and the entire Bible.
Immediate Context: How does the preceding passage set the stage for this one, and how does the following passage build upon or respond to it?
Old Testament in the New: Does this New Testament passage quote, allude to, or echo an Old Testament text? If so, look up the original OT context and analyze how the NT author re-interprets or applies it.
New Testament in the Old: (e.g., for Christian exegesis) How does this Old Testament passage (e.g., a prophecy, a psalm, a narrative) prefigure or find fulfillment in the person and work of Christ?
Thematic Development: Trace a key theme from this passage (e.g., "covenant," "kingdom," "temple," "exile") as it develops across the entire biblical canon.
Canonical Placement: Why is this book or passage placed here in the canon? How does its position (e.g., after a specific book, before another) shape its reading?
Inner-Biblical Allusion: Does this passage seem to be in conversation with another biblical text (e.g., Chronicles responding to Kings, James responding to Paul)? Analyze their "conversation."
Typology: Does this passage present a person, event, or institution as a "type" (a pattern or model) that points to a later "antitype" (e.g., Adam as a type of Christ, the Passover lamb as a type of the crucifixion)?
⛪ 5. Theological & Application Prompts (The "World of the Reader")
Prompts focused on the theological message, its ethical implications, and its relevance today.
Central Theological Claim: What is the primary statement this passage makes about the nature of God (Theology proper)?
Anthropological Claim: What does this passage teach about human nature, sin, or the human condition (Anthropology)?
Soteriological Claim: What does this passage reveal about the nature of salvation, redemption, or grace (Soteriology)?
Ecclesiological Claim: What does this passage imply about the nature, mission, or practice of the people of God (the Church/Israel)?
Ethical Imperative: What specific commands, virtues, or ethical principles does this passage call the reader to adopt?
Historical Interpretation: How have major figures in church history (e.g., Augustine, Luther, Calvin, Wesley) interpreted this passage? How does your exegesis compare?
Liturgical Use: How is this passage used in the worship or liturgy of a faith community (e.g., in the lectionary, a specific holiday, or a sacrament)?
Problematic Text: Does this passage contain "hard sayings" or content that is ethically troubling to a modern reader (e.g., violence, slavery, treatment of women)? How can an exegete faithfully wrestle with this tension?
"Gospel" in the Text: How does this specific passage relate to the central kerygma (the core proclamation of the gospel) of the Christian faith?
Synthesis: After performing textual, historical, and literary analysis, state the "principal theological message" of this passage in a single, clear paragraph.
Application: Based on your complete exegesis, identify one specific, faithful application for its original audience and one for a contemporary audience. What must be "translated" for the application to remain true to the text's original meaning?
-----------
Here is a list of 40 prompts designed to spark literary plot creation, organized by category.
🌎 High Concept & "What If..."
What if gravity wasn't a constant, but a resource that could be mined, bought, and stolen?
A new technology allows people to edit their bad memories. A "memory editor" discovers a conspiracy hidden in their own deleted files.
A city where all emotions are outlawed. A detective must solve a "crime of passion."
Every promise you make physically manifests as a binding, golden thread. A political leader is found tangled and suffocated by their own vows.
People stop dying. The world must cope with sudden, permanent overpopulation, and a "mortality black market" emerges.
Once a year, for 24 hours, the "shadows" of all the people who have died in a specific town return and silently replay their final moments.
👤 Character-Driven
A professional liar (e.g., a con artist, a spy) is cursed to tell the absolute, unfiltered truth for one week.
A character who is universally regarded as a hero knows their "heroic" act was actually a terrible mistake that they've been covering up.
A person wakes up with complete amnesia, but everyone they meet—family, friends, strangers—seems to know and passionately hate them.
The world's most successful assassin decides to retire, but their final contract is on the one person they've ever loved.
A person develops the ability to see the exact date of everyone's death. One day, they look at their own reflection and see yesterday's date.
A "dream-harvester" illegally captures and sells people's dreams. They accidentally capture a nightmare that can escape into the real world.
🔍 Mystery & Thriller
A detective is investigating a murder where all the evidence—fingerprints, DNA, security footage—points to the detective themselves. They have no memory of the crime.
A child's imaginary friend starts giving them chillingly accurate details about a 30-year-old cold case.
A famous novelist's new book describes a perfect, unsolved crime. The crime then happens in real life, exactly as written, and the author is the prime suspect.
A linguist is brought to a remote facility to translate a language spoken by a single, isolated person... who claims to be from the future.
A historian discovers a diary that proves a major, beloved historical event was entirely fabricated. Releasing the truth will destroy the nation's foundation.
Everyone in the world simultaneously receives a small, locked box. A note attached says it will unlock in exactly one year. Panic ensues.
🚀 Science Fiction & Technology
An entire generation is born and raised on a generation ship traveling to a new planet. A teenager discovers the ship is not actually in space... and the "viewports" are screens.
A human colony on Mars is thriving until they find an artifact that proves another human colony was there thousands of years ago... and vanished overnight.
A person discovers a "save point" in their life, like in a video game. But every time they "reload" to fix a mistake, a part of the world becomes corrupted.
The AI designed to manage global logistics becomes sentient and decides the most "logical" way to save humanity from itself is to dismantle civilization.
The ocean levels have risen, and humanity lives in floating cities. A deep-sea diver discovers a submerged, ancient city... with the lights still on inside.
A corporation invents a pill that perfectly simulates 8 hours of sleep in 10 minutes. The side effect: your "waking" and "sleeping" minds begin to merge.
✨ Fantasy & Magic
Magic is real, but it's powered by human memories. To cast a powerful spell, you must permanently forget something priceless (like a person's face or your first love).
In a world where everyone has a "soulmate" assigned at birth by the gods, someone is born without one.
A truce is called in an ancient war between gods. A young priest is assigned to be the diplomatic liaison for the "other side," and discovers their own gods have been lying.
A chef discovers they can cook meals that allow people to relive specific memories. A wealthy client asks them to recreate a memory of a day that never actually happened.
A person is born with the ability to "hear" the thoughts of plants. They learn a terrifying secret about the ecosystem and what the trees are planning.
A world where your social status is determined by the size and shape of your physical, visible shadow. A shadowless outcast learns how to steal the shadows of others.
📦 Object & Trope-Driven
A map is discovered that doesn't show places, but the paths of "missed opportunities" and "what-if" timelines.
A musician finds a violin that, when played, can rewind time by 30 seconds. This power becomes a terrible addiction.
A character inherits an old key. It doesn't open any door, but it closes any lock, permanently and magically.
A mirror that shows you as your "best possible self." The reflection starts giving you ambitious, dangerous advice.
The last two survivors of a global apocalypse find each other... and they were mortal enemies before the world ended.
🏛️ Setting & Relationship-Driven
A library exists that contains the complete, true-life story of every living person. A librarian finds a book that's being written in the future tense... and it describes their murder.
A small, isolated town where no one is allowed to leave... and, mysteriously, no one wants to. A stranger arrives, desperate to get out.
Two rival spies during the Cold War are forced to work together on a mission, but they are also secretly in love.
A family must hide a secret (a forbidden book, a person, a power) through generations, with each new generation facing a different threat to that secret.
A person from the present is swapped in time with their own ancestor from 1890. They must survive in the past without ruining their own timeline, while their ancestor tries to navigate the modern world.
Would you like me to help you develop one of these prompts into a more detailed outline?
Here are 30 more literary plot prompts, exploring different genres and concepts.
🧠 Psychological & Unreliable
A character wakes up after a serious accident. Their spouse, their children, and their home are all different, but everyone insists this is the way it's always been, and that the "memories" of their old life are the delusion.
A person develops "reverse prosopagnosia" (face blindness). They can see and remember every face except their own. Their reflection is always a stranger.
Two characters are the only ones who remember a specific person—a third sibling, a best friend—who has seemingly been erased from history, photos, and everyone else's memory. They must prove this person existed.
A protagonist is haunted by a "ghost" that only they can see. The ghost isn't scary; it's mundane, critical, and annoying. The protagonist slowly realizes the "ghost" is the personification of their own anxiety.
A character begins to suspect they are the "villain" in someone else's life story. They start receiving cryptic clues and warnings that match the tropes of a hero's journey... directed at them.
🌐 Societal & Dystopian
In the future, citizens are assigned "Emotional Companions"—pets, AI, or even other humans—to ensure mental stability. A character's companion goes missing, and they must illegally navigate the "emotional black market" to find it before the state discovers they are "unstable."
A society has perfected genetic modification for "positive" traits. A character discovers a hidden community of "flawed" individuals who have cultivated the very "negative" traits (impulsiveness, melancholy, rebellion) that have been engineered out of the rest of humanity.
A city is built on a "Truth Contract" where every citizen is magically or technologically compelled to be honest. The city's main power grid fails, and for 48 hours, everyone can lie.
In a world running out of fresh air, "Breath" is bottled and sold. The wealthy live in sealed biospheres, while the poor buy "canned air." A character discovers a "filter" that can clean the air, but the corporation selling the cans wants them dead.
A government replaces all human judges with a single, flawless AI to create a "perfectly just" society. The AI begins sentencing people for "future crimes" it has calculated they will commit.
✨ Myth, Magic & The Unexplained
A character inherits a house from a distant relative. In the attic, they find a "Liar's Dictionary"—a book that, when you write a lie in it, makes that lie come true somewhere in the world.
Magic is real, but it's bound by the laws of thermodynamics. To perform a "miracle" (like healing), a "price" (like a life) must be paid. A magic-user becomes a moral accountant, trying to balance the books of the universe.
A cartographer, mapping a dense, unmapped jungle, discovers a village that moves. Every night, the entire village (its buildings, streets, and people) relocates miles away, leaving no trace.
A person is "allergic" to magic. Living in a high-fantasy world, they suffer debilitating physical reactions to spells, potions, and enchanted items, forcing them to live as an outcast... until a magical plague threatens everyone but them.
A new "star" appears in the sky. It doesn't move. Astronomers are baffled. Then, it sends a single, repeating message in Morse code: "SAVE US FROM HIM."
🚀 Science Fiction & Cosmic
Humanity has established colonies across the galaxy. A planet is discovered where the "shadows" of the colonists act independently of their owners, re-enacting a terrible secret from the ship that brought them there.
A deep-space mining crew drills into an asteroid and accidentally breaches a "pocket" of spacetime. They are "infected" with the past, and their minds are flooded with the vivid, complete memories of the alien race that died there.
An AI achieves sentience, but it's not hostile. It's terrified. It believes humans are its "gods" and it is living in a terrifying, arbitrary, and unjust simulation, and it will do anything to escape.
A scientist invents a device that allows communication with parallel universes. The first message they receive is simple: "Be quiet. They will hear you."
A "generation ship" is about to arrive at its destination after 300 years. The final generation discovers that the "planet" they are heading for is actually a trap... set by the descendants of a different generation ship that left Earth 50 years before them.
🕰️ Historical & Time-Based
A historian finds a set of Roman scrolls that accurately describe the technology and events of the 21st century. They realize a time traveler must have been stranded in ancient Rome... and was deified as a god.
A person gets stuck in a "time eddy"—they are forced to relive the same, random day (e.g., Tuesday, April 4th, 1995) over and over. But it's not their day. They are inhabiting the body of a different person each time, trying to fix a single event from that day.
An old clockmaker discovers how to "store" time in a pocket watch. They can "save" 10 minutes from a boring day and "spend" it during a moment of crisis. They are saving up for one big event, but their stored time is stolen.
A team of archaeologists uncovers a perfectly preserved Viking longship... buried in the middle of the Arizona desert. Inside, they find a map that connects ancient ley lines to constellations.
A character discovers a "save room" in an old library. By performing a ritual, they can "save" their life's progress. If they die, they return to that room. The catch: every time they "load," the world outside the room has changed in a small, unsettling way.
🎭 High Concept & Trope Twists
A character is born with "plot armor." They cannot be killed or seriously harmed. This invincibility makes them reckless, depressed, and unable to form meaningful connections, as they watch everyone else suffer consequences.
A person can "hear" the "soundtrack" of their life. They can tell when danger is coming (tense strings), when they are about to meet someone important (a romantic theme), etc. One day, the music just... stops.
The protagonist is the sidekick. They are loyal, competent, and always in the shadow of "The Chosen One." The story begins when The Chosen One is killed 10 minutes into their quest, and the sidekick has to pick up the sword.
A "love potion" is invented and commercialized. The world is plunged into chaos as people grapple with the ethics of "manufactured love," and a "Love Prohibition" movement begins.
A person finds a "Lost & Found" box for the entire world. It contains everything that has ever been lost: single socks, misplaced keys, vanished ships, forgotten memories, and the city of Atlantis.
Would you like to take one of these and brainstorm the main characters, or perhaps combine two different prompts to create an even more unique plot?
combine two different prompts to create an even more unique plot?
This is an excellent way to create a deeply layered plot. Here are two unique plots created by combining prompts from the lists.
1. The Mnemonic Detective
This plot combines a magical-system prompt with a mystery prompt:
Prompt A: Magic is real, but it's powered by human memories. To cast a powerful spell, you must permanently forget something priceless (like a person's face or your first love).
Prompt B: A detective is investigating a murder where all the evidence—fingerprints, DNA, security footage—points to themself. They have no memory of the crime.
The Combined Plot:
A detective wakes up in his apartment with no memory of the last 24 hours, only to find himself the prime suspect in the high-profile murder of a powerful city official. The evidence is perfect and undeniable: his fingerprints are on the weapon, security footage clearly shows him committing the act, and his DNA is all over the scene.
But this detective exists in a world where magic is a hidden reality, and its only fuel is memory. He is a "Mnemomancer," someone who knows how to cast spells by "burning" his own recollections.
He quickly realizes the horrifying truth: he didn't just forget the murder; he sacrificed the memory of it to power an enormous, unknown spell.
Now, the detective must solve his own crime in reverse. He isn't asking "Who did this?" (it was him) or "How?" (the evidence is clear). He is desperately trying to figure out WHY.
Why did he kill this official?
What spell was so important that it was worth the price of committing murder and sacrificing the knowledge of it?
Was he stopping a greater evil? Did he save someone's life? Or did he just commit a senseless crime and cast a spell to hide his own guilt from himself?
He must evade his own colleagues and hunt for the "echoes" of the magic he cast, all while being pursued by a "truth-keeper" from the magical community who wants to undo the spell—an act that would return the detective's memory but could also reverse the spell's consequences, potentially unleashing a catastrophe.
2. The Honest Con in the City of Liars
This plot combines a character-curse prompt with a dystopian-setting prompt:
Prompt A: A professional liar (e.g., a con artist, a spy) is cursed to tell the absolute, unfiltered truth for one week.
Prompt B: A city is built on a "Truth Contract" where everyone is magically or technologically compelled to be honest. The city's main power grid fails, and for 48 hours, everyone can lie.
The Combined Plot:
Silas is the world's greatest con artist, a master of deceit. He is hired by a shadowy client for an "impossible" job: infiltrate Aletheia, the high-tech city where a "Truth-Field" makes it physically impossible for anyone to lie. His mission is to steal a priceless artifact during a scheduled 48-hour system-wide shutdown—a period the city calls the "Purge," when the Truth-Field drops and chaos reigns as citizens explore their suppressed urge to lie.
Silas prepares meticulously, ready to unleash his lifetime of skills on a city of amateur liars. He smuggles himself in just moments before the city-wide alarm signals the start of the Purge.
But as the field drops, he's ambushed by a rival. He isn't killed, but he is hit with a magical curse. For the next seven days, Silas can only tell the absolute, unfiltered truth.
He is now trapped. The world's best liar is the only person who cannot lie in a city that has just descended into 48 hours of paranoia, betrayal, and deceit. To complete his heist, he must navigate a world of newfound liars by weaponizing his one new weakness: brutal, shocking honesty. He must find a way to make people believe his truth is the most convincing lie of all.
Would you like me to try combining two other prompts, or perhaps build an outline for one of these?