Daniel Flieger
QA Consultant

May 12, 2025

The era of Artificial Intelligence is in full swing, with Large Language Models (LLMs) at the heart of this revolution. However, to truly unlock the immense potential of these impressive technologies, more than just casual questioning is required. This is where Prompt Engineering comes into play – the art and science of crafting precise and effective instructions (prompts) to elicit the desired, high-quality responses from LLMs.

Lee Boonstra's white paper "Prompt Engineering," published by Google, serves as an excellent and comprehensive guide to this discipline. It's aimed at anyone working with LLMs and emphasizes that one doesn't need to be a data scientist to write good prompts – although crafting optimal prompts is an iterative and potentially complex process.

The Foundation: How LLMs Work and How We Shape Their Responses

The white paper begins by fundamentally explaining that LLMs are essentially prediction engines. They take text as input and, based on the vast datasets they were trained on, predict the most probable next word or token. Prompt engineering, therefore, is the process of skillfully guiding this predictive capability through clever input design.
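
To make the "prediction engine" idea concrete, here is a minimal toy sketch (my own illustration, not from the white paper): a hard-coded probability table stands in for a trained model, and generation is nothing more than repeated next-token prediction.

```python
# Toy stand-in for a trained model: for each token, the probability of
# plausible next tokens. A real LLM learns this from vast training data.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.4, "dog": 0.35, "weather": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"on": 0.7, "quietly": 0.3},
    "on": {"the": 0.8, "a": 0.2},
}

def greedy_generate(start: str, max_tokens: int = 5) -> str:
    """Always pick the most probable next token (like temperature -> 0)."""
    output = [start]
    for _ in range(max_tokens):
        candidates = NEXT_TOKEN_PROBS.get(output[-1])
        if not candidates:
            break
        output.append(max(candidates, key=candidates.get))
    return " ".join(output)

print(greedy_generate("the"))  # -> "the cat sat on the cat"
```

Notice how even this tiny deterministic chain starts to repeat itself: a toy version of the "repetition loop bug" discussed below, and one reason the sampling controls in the next section exist.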

Before diving into specific prompting techniques, it's crucial to understand the LLM output configuration parameters. These include:

  1. Output Length: Determines the maximum number of tokens to be generated.
  2. Sampling Controls: This category encompasses the following (see the code sketch after this list):
    • Temperature: Controls the degree of randomness. Low values lead to more deterministic, focused responses, while higher values encourage creativity at the risk of less accurate or less coherent output.
    • Top-K: Limits the selection of the next token to the K most probable options.
    • Top-P (Nucleus Sampling): Selects from the smallest set of tokens whose cumulative probability reaches a certain threshold.

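Here is a minimal NumPy sketch of how these three controls combine in a single sampling step; real inference stacks differ in implementation detail, but the mechanics are the same:

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0,
                      top_k: int | None = None,
                      top_p: float | None = None) -> int:
    """Pick a token index from raw model scores using the three controls."""
    # Temperature: flatten or sharpen the distribution; -> 0 approaches argmax.
    z = (logits - logits.max()) / max(temperature, 1e-6)
    probs = np.exp(z)
    probs /= probs.sum()
    # Top-K: zero out everything but the K most probable tokens.
    if top_k is not None:
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)
        probs /= probs.sum()
    # Top-P (nucleus): keep the smallest set whose cumulative mass >= top_p.
    if top_p is not None:
        order = np.argsort(probs)[::-1]
        cutoff_idx = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
        mask = np.zeros_like(probs)
        mask[order[:cutoff_idx]] = probs[order[:cutoff_idx]]
        probs = mask / mask.sum()
    return int(np.random.default_rng().choice(len(probs), p=probs))

logits = np.array([2.0, 1.0, 0.5, -1.0])  # raw scores for 4 candidate tokens
print(sample_next_token(logits, temperature=0.7, top_k=3, top_p=0.9))
```
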
The document explains how these settings interact and provides recommendations for starting points. It also warns about the "repetition loop bug," a state where models can get stuck in repetitive outputs due to inappropriate settings.

The Prompt Engineer's Toolkit: A Multitude of Techniques

The core of the white paper is a detailed presentation of various prompting techniques, which can be employed depending on the use case and desired outcome:

  • General Prompting / Zero-Shot: The simplest form, where the prompt contains only the task description without examples.
  • One-Shot & Few-Shot Prompting: The LLM is provided with one (One-Shot) or several (Few-Shot) examples of the desired output. This is often very effective for teaching the model the desired format and style (see the prompt sketch after this list).
  • System, Contextual, and Role Prompting: These techniques help define the framework of the interaction.
    • System Prompting sets the overall context or "personality" of the model (e.g., "You are an assistant that writes and explains Python code.").
    • Contextual Prompting provides specific background information for the current task.
    • Role Prompting assigns the LLM a specific role (e.g., "Act as an experienced travel journalist."), which strongly influences the tone and content of the response.

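As a small illustration of how these framing techniques and few-shot examples combine in practice, here is a sketch that assembles a system/role instruction and two examples into one prompt. The sentiment-classification task and wording are my own, not taken from the white paper:

```python
# Hypothetical system/role instruction plus few-shot examples.
SYSTEM = ("You are an assistant that classifies movie reviews as "
          "POSITIVE, NEUTRAL or NEGATIVE. Answer with the label only.")

EXAMPLES = [
    ("The plot was gripping and the acting superb.", "POSITIVE"),
    ("It was fine, nothing special.", "NEUTRAL"),
]

def build_prompt(review: str) -> str:
    """Concatenate system context, few-shot examples, and the new input."""
    parts = [SYSTEM, ""]
    for text, label in EXAMPLES:
        parts.append(f"Review: {text}\nSentiment: {label}\n")
    parts.append(f"Review: {review}\nSentiment:")
    return "\n".join(parts)

print(build_prompt("I walked out halfway through."))
```
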
For more complex challenges, the white paper introduces advanced approaches:

  • Step-Back Prompting: The LLM is asked to first answer a more general, related question, the insights from which are then incorporated into the specific prompt to activate relevant knowledge.
  • Chain of Thought (CoT) Prompting: The model is instructed to lay out its reasoning process or intermediate steps to solve a task before giving the final answer. This particularly improves accuracy in computational or logical tasks.
  • Self-Consistency: An extension of CoT, where multiple reasoning paths are generated, and the most frequent answer is selected as the most robust (both techniques are sketched in code after this list).
  • Tree of Thoughts (ToT): Allows LLMs to explore multiple different reasoning paths simultaneously, similar to a tree structure, which is useful for tasks requiring significant exploration.
  • ReAct (Reason & Act): A powerful paradigm that enables LLMs to solve complex tasks through a combination of logical reasoning (Reason) and the use of external tools (Act) – such as web search (a minimal loop is sketched after this list).
  • Automatic Prompt Engineering (APE): A fascinating approach where an LLM itself is used to generate effective prompts or optimize existing ones.
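
The following sketch combines CoT with self-consistency: the prompt asks for step-by-step reasoning, several completions are sampled at a non-zero temperature, and the most frequent final answer wins. `call_llm` is a placeholder for whatever client SDK you actually use:

```python
from collections import Counter

def call_llm(prompt: str, temperature: float) -> str:
    """Placeholder: wire this up to your actual LLM client."""
    raise NotImplementedError

COT_SUFFIX = ("Let's think step by step. "
              "Put the final answer on a last line starting with 'Answer:'.")

def self_consistent_answer(question: str, samples: int = 5) -> str:
    """Sample several CoT reasoning paths and majority-vote the answers."""
    answers = []
    for _ in range(samples):
        completion = call_llm(f"{question}\n{COT_SUFFIX}", temperature=0.7)
        for line in reversed(completion.splitlines()):
            if line.startswith("Answer:"):
                answers.append(line.removeprefix("Answer:").strip())
                break
    if not answers:
        return ""  # no parseable answers; handle as you see fit
    return Counter(answers).most_common(1)[0][0]
```

Note the interplay with the best practices below: a single CoT run is best sampled at temperature 0, whereas self-consistency deliberately uses a higher temperature to obtain diverse reasoning paths to vote over.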

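And here is a minimal skeleton of the ReAct loop with a stubbed search tool, reusing the `call_llm` placeholder from above; production frameworks add robust output parsing and safety rails on top of this basic pattern:

```python
def web_search(query: str) -> str:
    """Stub for an external tool; a real agent would call a search API."""
    return f"(search results for: {query})"

REACT_HEADER = ("Answer the question. Respond with lines of the form "
                "'Thought: ...', 'Action: search[query]' "
                "or 'Final Answer: ...'.\n")

def react_loop(question: str, max_steps: int = 5) -> str:
    """Alternate Reason -> Act -> Observe until the model commits to an answer."""
    transcript = REACT_HEADER + f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript, temperature=0.0)  # placeholder client
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action: search[" in step:
            query = step.split("Action: search[", 1)[1].split("]", 1)[0]
            transcript += f"Observation: {web_search(query)}\n"
    return "(no answer within the step budget)"
```
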
Specialized Applications and Overarching Concepts

The document also touches upon Code Prompting, demonstrating how LLMs can assist in writing, explaining, translating, and even debugging code. A brief mention of multimodal prompting, which utilizes input formats like images or audio in addition to text, rounds out this section.
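
A hedged example of what such a code prompt can look like (the task and wording here are invented for illustration):

```python
buggy_code = """
def average(xs):
    return sum(xs) / len(xs)   # crashes on an empty list
"""

prompt = ("You are an experienced Python reviewer. Explain what the "
          "following function does, point out bugs or unhandled edge "
          "cases, and suggest a corrected version:\n" + buggy_code)
```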

The Path to Mastery: Indispensable Best Practices

Finally, the white paper presents a valuable collection of best practices that should serve as a guide for every prompt engineer:

  1. Provide examples: One-shot or few-shot is often key to success.
  2. Simplicity and Precision (Design with simplicity & be specific): Prompts should be clear and concise, yet provide specific details about the desired output.
  3. Instructions over Constraints: Tell the model what to do, rather than just what to avoid.
  4. Control output length (Control the max token length): Adjust the length to suit the need.
  5. Utilize variables (Use variables in prompts): For dynamic and reusable prompts.
  6. Experiment (Experiment with input/output formats and writing styles): Test different approaches, including structured output formats like JSON, and use tools like json-repair or schema definitions (a templated JSON sketch follows this list).
  7. Adaptability (Adapt to model updates): Stay flexible as models evolve.
  8. Collaboration (Experiment together): Exchanging ideas with others can open new perspectives.
  9. Adhere to specific CoT rules: Set the temperature to 0 for CoT and place the answer after the reasoning.
  10. Documentation, Documentation, Documentation (Document the various prompt attempts): This point cannot be overemphasized. Systematically recording prompts, settings, model versions, and results is essential for learning progress and troubleshooting.

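Practices 5 and 6 combine naturally, as in this sketch: a reusable template with variables that requests structured JSON, plus defensive parsing. The template wording is illustrative, and `call_llm` is again the placeholder client from above; the white paper's suggestion of json-repair would slot into the except branch:

```python
import json
from string import Template

PROMPT_TEMPLATE = Template(
    "Extract the product name and price from the text below. "
    'Return only JSON of the form {"name": "...", "price": 0.0}.\n'
    "Text: $text"
)

def extract_product(text: str) -> dict:
    """Fill the template, query the model, and parse the JSON response."""
    raw = call_llm(PROMPT_TEMPLATE.substitute(text=text), temperature=0.0)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Malformed or truncated output is common; a library such as
        # json-repair can often salvage it before giving up.
        raise
```
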
Conclusion: Bridging the Gap to Intelligent Interaction

Lee Boonstra's "Prompt Engineering" white paper is an indispensable read for anyone looking to fully harness the potential of LLMs. It vividly demonstrates that effective prompting is a skill born from a blend of analytical thinking, creativity, and disciplined, iterative experimentation. The techniques presented offer a rich repertoire, and the best practices, especially the call for meticulous documentation, lay the foundation for sustainable success. Those who master the art of prompt engineering build the crucial bridge to truly intelligent and productive interaction with artificial intelligence.

Sources:

Lee Boonstra: "Prompt Engineering" (white paper, V4), Google.