
提示词工程

Information Technology | 2025-10-24 | Google

Author: Lee Boonstra

Acknowledgements

Content contributors: Michael Sherman, Yuan Cao, Erick Armbrust, Anant Nawalgaria, Antonio Gulli, Simone Cammel
Curators and editors: Antonio Gulli, Anant Nawalgaria, Grace Mollison
Technical writer: Joey Haymaker
Designer: Michael Lanning

Table of contents

Introduction
Prompt engineering
LLM output configuration
    Output length
    Sampling controls
        Temperature
        Top-K and top-P
        Putting it all together
Prompting techniques
    General prompting / zero shot
    One-shot & few-shot
    System, contextual and role prompting
        System prompting
        Role prompting
        Contextual prompting
    Step-back prompting
    Chain of Thought (CoT)
    Self-consistency
    Tree of Thoughts (ToT)
    ReAct (reason & act)
    Automatic Prompt Engineering
Code prompting
    Prompts for writing code
    Prompts for explaining code
    Prompts for translating code
    Prompts for debugging and reviewing code
What about multimodal prompting?
Best Practices
    Provide examples
    Design with simplicity
    Be specific about the output
    Use Instructions over Constraints
    Control the max token length
    Use variables in prompts
    Experiment with input formats and writing styles
    For few-shot prompting with classification tasks, mix up the classes
    Adapt to model updates
    Experiment with output formats
    JSON Repair
    Working with Schemas
    Experiment together with other prompt engineers
    CoT Best practices
    Document the various prompt attempts
Summary
Endnotes

Introduction

When thinking about a large language model input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. You don’t need to be a data scientist or a machine learning engineer – everyone can write a prompt. However, crafting the most effective prompt can be complicated.
Many aspects of your prompt affect its efficacy: the model you use, the model’s training data, the model configurations, your word choice, style and tone, structure, and context all matter. Therefore, prompt engineering is an iterative process. Inadequate prompts can lead to ambiguous, inaccurate responses, and can hinder the model’s ability to provide meaningful output.

When you chat with the Gemini chatbot,¹ you basically write prompts; however, this whitepaper focuses on writing prompts for the Gemini model within Vertex AI or by using the API, because by prompting the model directly you will have access to configuration settings such as temperature.

This whitepaper discusses prompt engineering in detail. We will look into the various prompting techniques to help you get started, and share tips and best practices to become a prompting expert. We will also discuss some of the challenges you can face while crafting prompts.

Prompt engineering

Remember how an LLM works; it’s a prediction engine. The model takes sequential text as an input and then predicts what the following token should be, based on the data it was trained on. The LLM is operationalized to do this over and over again, adding the previously predicted token to the end of the sequential text to predict the following token. The next token prediction is based on the relationship between what’s in the previous tokens and what the LLM has seen during its training.

When you write a prompt, you are attempting to set up the LLM to predict the right sequence of tokens. Prompt engineering is the process of designing high-quality prompts that guide LLMs to produce accurate outputs. This process involves tinkering to find the best prompt, optimizing prompt length, and evaluating a prompt’s writing style and structure in relation to the task. In the context of natural language processing and LLMs, a prompt is an input provided to the model to generate a response or prediction.
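The predict-and-append loop described above can be sketched in a few lines. This is a toy illustration only: `predict_next_token` below is a hypothetical stand-in for the model’s forward pass (here a tiny lookup table), not a real API.

```python
# Minimal sketch of the autoregressive loop an LLM runs.
# predict_next_token() is a hypothetical stand-in for the model's
# forward pass; a real model scores every vocabulary token against
# the full context it learned during training.
def predict_next_token(tokens):
    toy_model = {("Roses", "are"): "red", ("are", "red"): "<eos>"}
    return toy_model.get(tuple(tokens[-2:]), "<eos>")

def generate(prompt_tokens, max_new_tokens=8):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = predict_next_token(tokens)
        if next_token == "<eos>":   # model signals it is done
            break
        tokens.append(next_token)   # feed the prediction back in as context
    return tokens

print(generate(["Roses", "are"]))   # → ['Roses', 'are', 'red']
```

Each iteration conditions on everything generated so far, which is why the prompt you write shapes every subsequent token the model predicts.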
These prompts can be used to achieve various kinds of understanding and generation tasks such as text summarization, information extraction, question answering, text classification, language or code translation, code generation, and code documentation or reasoning.

Please feel free to refer to Google’s prompting guides²,³ with simple and effective prompting examples.

When prompt engineering, you will start by choosing a model. Prompts might need to be optimized for your specific model, regardless of whether you use Gemini language models in Vertex AI, GPT, Claude, or an open source model like Gemma or LLaMA. Besides the prompt, you will also need to tinker with the various configurations of an LLM.

LLM output configuration

Once you choose your model, you will need to figure out the model configuration. Most LLMs come with various configuration options that control the LLM’s output. Effective prompt engineering requires setting these configurations optimally for your task.

Output length

An important configuration setting is the number of tokens to generate in a response. Generating more tokens requires more computation from