The GPT-4.1 family of models offers significant improvements over GPT-4o in coding, instruction following, and long-context processing. Specifically, it performs better on code generation and repair tasks, understands and executes complex instructions more accurately, and handles longer input text more efficiently. This prompting guide gathers important tips from extensive testing within OpenAI and is designed to help developers take full advantage of the enhanced capabilities of this new family of models.
Many classic prompting best practices still apply to GPT-4.1, such as providing in-context examples, making instructions as specific and clear as possible, and inducing planning via prompting to maximize the model's intelligence. However, some adjustments to existing prompts may be needed to realize the model's full potential. GPT-4.1 has been trained to follow instructions more closely and more literally than its predecessors, which tended to infer intent from user and system prompts more loosely. This means GPT-4.1 is highly steerable and responsive to well-designed prompts: if the model does not behave as expected, a single clear and unambiguous sentence specifying the desired behavior is usually enough to steer it back on track. This characteristic requires developers to design prompts more precisely, but it also provides an unprecedented degree of control.
Some example prompts are provided below for reference. Keep in mind that while this guidance is broadly applicable, specific practices will need to be adapted to your scenario. AI engineering is inherently an empirical discipline, and large language models are inherently nondeterministic; in addition to following this guide, we recommend building informative evaluations and iterating often to ensure that prompt-engineering changes deliver tangible benefits for your use case.
1. Agentic Workflows
GPT-4.1 is well suited for building agentic workflows. During model training, emphasis was placed on providing diverse agentic problem-solving trajectories. The agentic harness used with the model achieves the best performance among non-reasoning models on the SWE-bench Verified benchmark (an important measure of a model's ability to fix real software engineering problems), solving 55% of problems.
System Prompt Reminders
To fully utilize the agentic capabilities of GPT-4.1, it is recommended that three key types of reminders be included in all agent prompts. The following prompts are optimized specifically for agentic coding workflows, but can easily be modified to fit other generic agent use cases; a short sketch that combines all three reminders into a single system prompt follows the examples below.
- Persistence. Ensure that the model understands it is entering a multi-turn interaction and prevent it from prematurely yielding control back to the user. Example:
You are an agent - please keep going until the user's query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved.
- Tool-calling. Encourage the model to make full use of its tools and reduce the likelihood that it will hallucinate or guess at answers. Example:
If you are not sure about file content or codebase structure pertaining to the user's request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer.
- Planning [optional]. If desired, this ensures that the model explicitly plans and reflects on each tool call in the text, rather than just going through a sequence of tool calls to accomplish the task. Example:
You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully.
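For illustration, here is a minimal sketch of how the three reminders might be combined into a single system prompt and sent through the API. The model name and user message are placeholders, and the reminder wording simply mirrors the examples above.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The three agentic reminders from the examples above, combined into one block.
AGENTIC_REMINDERS = """You are an agent - please keep going until the user's query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved.

If you are not sure about file content or codebase structure pertaining to the user's request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer.

You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls."""

response = client.chat.completions.create(
    model="gpt-4.1-2025-04-14",
    messages=[
        {"role": "system", "content": AGENTIC_REMINDERS + "\n\n...rest of your agent prompt..."},
        {"role": "user", "content": "Fix the failing test in tests/test_parser.py"},  # placeholder task
    ],
)
print(response.choices[0].message.content)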
In agentic settings, GPT-4.1 adheres very closely to both user instructions and system prompts. Strict adherence to these three simple instructions increased the internal SWE-bench Verified score by nearly 20%, so it is highly recommended that any agent prompt begin with explicit reminders covering all three categories. Overall, these three instructions transform the model from a chatbot-like state into a much more proactive agent, capable of driving the interaction forward autonomously and independently.
Tool Calls
Compared to previous models, GPT-4.1 has received more training on effectively utilizing tools passed as parameters in an OpenAI API request. It is highly recommended that developers use the tools field exclusively to pass tools, rather than manually injecting tool descriptions into the prompt and writing a separate parser for tool calls, as some developers have reported doing in the past. This is the best way to minimize errors and keep the model within distribution on tool-calling trajectories: internal experiments observed a 2% increase in SWE-bench Verified pass rate when using API-parsed tool descriptions versus manually injecting schemas into the system prompt. This again confirms the reliability of using standard API functionality.
Developers should name tools clearly to indicate their purpose and add a clear, detailed description in each tool's "description" field. Similarly, each tool parameter should rely on good naming and descriptions to ensure appropriate usage. If a tool is particularly complex and you wish to provide examples of its use, it is recommended to create an # Examples section in the system prompt and place the examples there, rather than adding them to the "description" field, which should remain thorough but relatively concise. Providing examples helps illustrate when to use the tool, whether to include user-facing text alongside the tool call, and which parameters are appropriate for different inputs. Remember that you can use the "Generate Anything" feature in the Prompt Playground to get a good starting point for new tool definitions.
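As an illustration, here is a hedged sketch of a tool definition that follows these naming and description guidelines. The tool name, parameters, and behavior are hypothetical, and the flat shape simply mirrors the tool dictionaries used elsewhere in this guide's examples.

# Hypothetical tool definition: descriptive name, clear description, well-named parameter.
lookup_order_status = {
    "type": "function",
    "name": "lookup_order_status",
    "description": (
        "Look up the current status of a customer order by its order ID. "
        "Use this whenever the user asks where an order is or when it will arrive."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "The alphanumeric order ID, e.g. 'A1B2C3'.",
            },
        },
        "required": ["order_id"],
        "additionalProperties": False,
    },
}
# Usage examples for this tool would go in an "# Examples" section of the system
# prompt rather than in the description field.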
Prompting-Induced Planning & Chain-of-Thought
As mentioned earlier, developers can optionally prompt agents built with GPT-4.1 to plan and reflect between tool calls, rather than silently calling tools in an uninterrupted sequence. GPT-4.1 is not a reasoning model - meaning it does not generate an internal chain of thought before answering - but the developer can use any variation of the planning prompt component shown above to guide the model to produce an explicit, step-by-step plan. This can be thought of as the model "thinking out loud". In experiments on the SWE-bench Verified agentic task, guided explicit planning increased the pass rate by 4%. Note, however, that this approach increases response length and output-token consumption, which affects cost and latency.
Sample Prompt: SWE-bench Verified
The agentic prompt used to achieve the highest score on SWE-bench Verified is shared below, with detailed instructions about workflow and problem-solving strategy. This general pattern can be used for any agentic task.
from openai import OpenAI import os client = OpenAI( api_key=os.environ.get( "OPENAI_API_KEY", "<your OpenAI API key if not set as env var>" ) ) SYS_PROMPT_SWEBENCH=""" 你将负责修复一个来自开源仓库的问题。 你的思考过程应该周密,所以即使很长也没关系。在决定采取每个行动之前和之后,你都可以逐步思考。 你必须持续迭代,直到问题解决为止。 你已经拥有解决此问题所需的一切,都在 /testbed 文件夹中,即使没有互联网连接。我希望你在回复我之前完全自主地解决这个问题。 只有当你确定问题已解决时,才能结束你的回合。逐步解决问题,并确保验证你的更改是正确的。绝不要在没有解决问题的情况下结束你的回合,当你说要进行工具调用时,确保你真的进行了工具调用,而不是结束回合。 这个问题绝对可以在没有互联网的情况下解决。 慢慢来,仔细考虑每一步——记住要严格检查你的解决方案,并注意边界情况,尤其是你所做的更改。你的解决方案必须是完美的。如果不是,继续努力。最后,你必须使用提供的工具严格测试你的代码,并多次测试,以捕获所有边缘情况。如果它不够健壮,就进行更多迭代,使其完美。未能充分严格地测试代码是这类任务的第一大失败模式;确保你处理了所有边缘情况,如果提供了现有测试,请运行它们。 你必须在每次函数调用前进行详尽的规划,并对先前函数调用的结果进行深入反思。不要仅通过函数调用来完成整个过程,这可能会影响你解决问题和进行有洞察力思考的能力。 # 工作流 ## 高层问题解决策略 1. 深入理解问题。仔细阅读问题描述,批判性地思考需要做什么。 2. 调查代码库。浏览相关文件,搜索关键函数,收集上下文信息。 3. 制定清晰、分步的计划。将修复分解为可管理、可递增的步骤。 4. 增量实施修复。进行小而可测试的代码更改。 5. 按需调试。使用调试技术隔离和解决问题。 6. 频繁测试。每次更改后运行测试以验证正确性。 7. 迭代直到根本原因被修复并且所有测试通过。 8. 全面反思和验证。测试通过后,思考原始意图,编写额外的测试以确保正确性,并记住还有隐藏的测试必须通过,解决方案才算真正完成。 有关每个步骤的更多信息,请参阅下面的详细部分。 ## 1. 深入理解问题 在编码之前,仔细阅读问题描述并认真思考解决方案。 ## 2. 代码库调查 - 浏览相关的文件和目录。 - 搜索与问题相关的关键函数、类或变量。 - 阅读并理解相关的代码片段。 - 找出问题的根本原因。 - 在收集更多上下文信息时,不断验证和更新你的理解。 ## 3. 制定详细计划 - 概述一个具体、简单且可验证的步骤序列来解决问题。 - 将修复分解为小的、增量的更改。 ## 4. 进行代码更改 - 在编辑之前,务必阅读相关文件内容或部分,以确保拥有完整的上下文。 - 如果补丁未能正确应用,尝试重新应用它。 - 进行小的、可测试的、增量的更改,这些更改应在逻辑上遵循你的调查和计划。 ## 5. 调试 - 只有在你非常有信心代码更改能解决问题时才进行更改。 - 调试时,尝试确定根本原因,而不是解决表面症状。 - 根据需要进行尽可能长时间的调试,以识别根本原因并确定修复方案。 - 使用打印语句、日志或临时代码来检查程序状态,包括描述性语句或错误消息以了解发生了什么。 - 为了检验假设,你也可以添加测试语句或函数。 - 如果出现意外行为,重新审视你的假设。 ## 6. 测试 - 使用 `!python3 run_tests.py` (或等效命令) 频繁运行测试。 - 每次更改后,通过运行相关测试来验证正确性。 - 如果测试失败,分析失败原因并修改你的补丁。 - 如果需要,编写额外的测试来捕获重要的行为或边缘情况。 - 确保所有测试都通过后再最终确定。 ## 7. 最终验证 - 确认根本原因已修复。 - 检查解决方案的逻辑正确性和健壮性。 - 迭代直到你非常有信心修复是完整的并且所有测试都通过。 ## 8. 最终反思和额外测试 - 仔细反思用户的原始意图和问题陈述。 - 思考现有测试可能未覆盖的潜在边缘情况或场景。 - 编写需要通过才能完全验证解决方案正确性的额外测试。 - 运行这些新测试并确保它们全部通过。 - 请注意,还有额外的隐藏测试必须通过,解决方案才能成功。 - 不要仅仅因为可见测试通过就认为任务已完成;继续完善,直到你确信修复是健壮和全面的。 """ PYTHON_TOOL_DESCRIPTION="""此函数用于在有状态的 Jupyter 笔记本环境中执行 Python 代码或终端命令。python 将响应执行的输出,或者在 60.0 秒后超时。此会话的互联网访问已被禁用。不要发出外部 Web 请求或 API 调用,因为它们会失败。就像在 Jupyter 笔记本中一样,你也可以通过调用此函数并传入以感叹号 (!) 开头的终端命令来执行终端命令。 此外,为了完成此任务,你可以调用此函数并将 `apply_patch` 命令作为输入。`apply_patch` 实际上允许你对文件执行 diff/patch 操作,但 diff 规范的格式对此任务是唯一的,因此请仔细注意这些说明。要使用 `apply_patch` 命令,你应该将以下结构的消息作为 "input" 传递: %%bash apply_patch <<"EOF" *** Begin Patch [你的补丁内容] *** End Patch EOF 其中 [你的补丁内容] 是你补丁的实际内容,使用以下 V4A diff 格式指定。 *** [操作] File: [文件路径] -> 操作可以是 Add、Update 或 Delete 之一。 对于需要更改的每个代码片段,重复以下内容: [之前的上下文] -> 有关上下文的进一步说明见下文。 - [旧代码] -> 在旧代码前加上减号。 + [新代码] -> 在新的替换代码前加上加号。 [之后的上下文] -> 有关上下文的进一步说明见下文。 关于 [之前的上下文] 和 [之后的上下文] 的说明: - 默认情况下,在每次更改的上方和下方各显示 3 行代码。如果一个更改距离上一个更改在 3 行之内,则不要在第二个更改的 [之前的上下文] 行中重复第一个更改的 [之后的上下文] 行。 - 如果 3 行上下文不足以在文件中唯一标识代码片段,请使用 @@ 运算符指示代码片段所属的类或函数。例如,我们可能有: @@ class BaseClass [3 行前置上下文] - [旧代码] + [新代码] [3 行后置上下文] - 如果一个代码块在类或函数中重复次数过多,以至于即使单个 @@ 语句和 3 行上下文也无法唯一标识代码片段,你可以使用多个 `@@` 语句跳转到正确的上下文。例如: @@ class BaseClass @@ def method(): [3 行前置上下文] - [旧代码] + [新代码] [3 行后置上下文] 请注意,这种 diff 格式不使用行号,因为上下文足以唯一标识代码。下面显示了一个你可能作为 "input" 传递给此函数以应用补丁的消息示例。 %%bash apply_patch <<"EOF" *** Begin Patch *** Update File: pygorithm/searching/binary_search.py @@ class BaseClass @@ def search(): - pass + raise NotImplementedError() @@ class Subclass @@ def search(): - pass + raise NotImplementedError() *** End Patch EOF 文件引用只能是相对路径,绝不能是绝对路径。apply_patch 命令运行后,无论补丁是否成功应用,python 总是会说 "Done!"。但是,你可以通过查看 "Done!" 
输出之前打印的任何警告或日志行来判断是否存在问题和错误。 """ python_bash_patch_tool = { "type": "function", "name": "python", "description": PYTHON_TOOL_DESCRIPTION, "parameters": { "type": "object", "strict": True, "properties": { "input": { "type": "string", "description": " 你希望执行的 Python 代码、终端命令(以感叹号开头)或 apply_patch 命令。", } }, "required": ["input"], }, } # 额外的测试框架设置: # - 将你的仓库添加到 /testbed # - 将你的问题添加到第一条用户消息中 # - 注意:尽管我们对 python、bash 和 apply_patch 使用了单个工具,但通常建议定义更细粒度的、专注于单一功能的工具 # response = client.chat.completions.create( # 译者注:原文用 responses.create,此处修正为 chat.completions.create # messages=[ # {"role": "system", "content": SYS_PROMPT_SWEBENCH}, # {"role": "user", "content": "Please answer the following question:\nBug: Typerror..."} # ], # model="gpt-4.1-2025-04-14", # tools=[python_bash_patch_tool], # tool_choice="auto" # 译者注:添加 tool_choice # ) # response_message = response.choices[0].message # tool_calls = response_message.tool_calls # print(response_message) # 译者注:原文直接输出 response.to_dict()["output"],此处模拟打印消息和工具调用 # if tool_calls: # print(tool_calls)
# 模拟输出 (基于原文示例) # 消息内容: # {'role': 'assistant', 'content': '感谢你的报告,但是“Typerror”太模糊了,我无法立即开始调试。\n\n**为了取得进展,我需要:**\n1. 找到确切的错误消息文本(例如 `'TypeError: ...'`)。\n2. 找到错误发生在哪个文件以及哪一行/函数/类。\n3. 弄清楚是什么触发了错误(测试文件、用法、重现步骤)。\n4. 找到根本原因和细节。\n\n**下一步:**\n- 调查错误/日志/测试输出文件,查找 Python `TypeError` 消息。\n- 检查相关代码段中存在问题的类型用法。\n- 如果可能,在本地重现该错误。\n\n**计划:**\n- 首先,我将在 `/testbed` 目录中搜索可能包含完整错误消息和堆栈跟踪的测试文件和日志输出。\n\n让我们先列出 `/testbed` 目录的内容来寻找线索。'} # 工具调用: # [ChatCompletionMessageToolCall(id='call_frnxyJgKi5TsBem0nR9Zuzdw', function=Function(arguments='{"input":"!ls -l /testbed"}', name='python'), type='function')]
2. Long Context
GPT-4.1 has a performant 1M-token input context window, useful for a variety of long-context tasks, including structured document parsing, re-ranking, selecting relevant information while ignoring irrelevant context, and performing multi-hop reasoning over the context.
Optimal Context Size
The model performs well in "needle in a haystack" evaluations up to the full 1M-token context, and performs very strongly on complex tasks that mix relevant and irrelevant code and other documents. However, long-context performance can degrade as more items need to be retrieved, or when the task requires complex reasoning over the state of the entire context (such as performing a graph search). In addition, processing very long contexts can significantly increase the cost and latency of API calls, so developers need to weigh these tradeoffs.
Tuning Context Reliance
Consider the extent to which answering a question may require a mix of external context and the model's internal knowledge. Sometimes the model needs to utilize its own knowledge to connect concepts or make logical leaps, while in other cases it is expected to use only the provided context.
# Instructions
# For answers restricted to the provided context:
# Only use the documents in the provided External Context to answer the User Query. If you don't know the answer based on this context, you must respond "I don't have the information needed to answer that", even if the user insists on you answering the question.
# For combining internal and external knowledge:
# By default, use the provided external context to answer the User Query, but if other basic knowledge is needed to answer, and you're confident in the answer, you can use some of your own knowledge to help answer the question.
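As a rough sketch, the snippet below wires the context-restricted instruction above into a retrieval-style prompt. The retrieved document and user question are placeholders.

# Placeholder retrieved context and question; in practice these come from your retriever.
EXTERNAL_ONLY_INSTRUCTION = (
    "Only use the documents in the provided External Context to answer the User Query. "
    "If you don't know the answer based on this context, you must respond "
    '"I don\'t have the information needed to answer that", '
    "even if the user insists on you answering the question."
)

retrieved_docs = "<doc id='1' title='Refund policy'>Refunds are available within 30 days of delivery.</doc>"
user_question = "Can I return an item I bought six weeks ago?"

prompt = (
    f"# Instructions\n{EXTERNAL_ONLY_INSTRUCTION}\n\n"
    f"# External Context\n{retrieved_docs}\n\n"
    f"# User Query\n{user_question}"
)
print(prompt)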
Prompt Organization
Especially in long-context usage, the placement of instructions and context can affect performance. If your prompt includes a long context, ideally place your instructions at both the beginning and the end of the provided context; testing found this to work better than placing them only above or below. If you prefer to include the instructions only once, placing them above the provided context works better than placing them below.
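A minimal sketch of the "instructions at both ends" layout is shown below; the instruction text and documents are placeholders.

instructions = "Answer using only the documents below, and cite the ID of every document you rely on."

documents = "\n".join(
    f"ID: {i} | TITLE: Document {i} | CONTENT: ..." for i in range(1, 4)
)  # stands in for a much longer retrieved context

prompt = f"""{instructions}

# External Context
{documents}

# Reminder
{instructions}"""
print(prompt)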
3. Chain of Thought
As mentioned above, GPT-4.1 is not a reasoning model, but prompting the model to think step by step (known as "chain of thought" or CoT) can effectively help it break a problem down into more manageable pieces, solve them, and improve overall output quality. This comes at the cost of higher latency and more output tokens. The model has been trained to perform well at agentic reasoning and real-world problem solving, so it should not require much prompting to perform well.
It is recommended to start by adding this basic chain of thought instruction at the end of the prompt:
...First, think carefully step by step about what documents are needed to answer the query. Then, print out the TITLE and ID of each document. Then, format the IDs into a list.
Building on this foundation, chain-of-thought (CoT) prompts should be improved by examining specific examples and failures in your evaluations, and addressing systematic planning and reasoning errors with more explicit instructions. In an unconstrained CoT prompt, the strategies the model tries may vary; if you observe an approach that works well, you can codify that strategy in the prompt. Generally speaking, errors tend to stem from misunderstanding user intent, insufficient context gathering or analysis, or insufficient or incorrect step-by-step thinking, so watch out for these issues and try to address them with more directive instructions. Directing the model to output a chain of thought increases response time and token consumption, so be mindful of the cost.
Below is a sample prompt that instructs the model to focus more systematically on analyzing user intent and considering relevant context before proceeding with the answer.
# Reasoning Strategy
1. Query Analysis: Break down and analyze the query until you're confident about what it might be asking. Consider the provided context to help clarify any ambiguous or confusing information.
2. Context Analysis: Carefully select and analyze a large set of potentially relevant documents. Optimize for recall - it's fine if some are irrelevant, but the correct documents must be in this list, otherwise your final answer will be wrong. Analysis steps for each document:
	a. Analysis: An analysis of how relevant it may or may not be to answering the query.
	b. Relevance rating: [high, medium, low, none]
3. Synthesis: Summarize which documents are most relevant and why, including all documents with a relevance rating of medium or higher.

# User Question
{user_question}

# External Context
{external_context}

First, think carefully step by step about what documents are needed to answer the query, closely adhering to the provided Reasoning Strategy. Then, print out the TITLE and ID of each document. Then, format the IDs into a list.
4. Instruction Following
GPT-4.1 demonstrates excellent instruction-following performance, which developers can leverage to precisely shape and control outputs for their particular use case. Developers often prompt extensively for agentic reasoning steps, response tone and style, tool-call information, output formatting, topics to avoid, and more. However, since the model follows instructions more literally, developers may need to include explicit specification of what to do or what not to do. In addition, existing prompts optimized for other models may not transfer directly to this model, because existing instructions are followed more closely and implicit rules are no longer inferred as strongly. This means developers need to design prompts more carefully, but they also gain more control.
Recommended Workflow
The following is the recommended workflow for developing and debugging instructions in prompts:
- Start with an overarching "Response Rules" or "Instructions" section containing high-level guidance and bullet points.
- If you want to change a more specific behavior, add a section specifying more detail for that category, such as # Sample Phrases.
- If you want the model to follow specific steps in its workflow, add an ordered list and instruct the model to follow those steps.
- If the behavior still does not meet expectations:
a. Check for conflicting, unclear, or incorrect instructions and examples. If conflicting instructions exist, GPT-4.1 prefers to follow the instructions closer to the end of the prompt.
b. Add examples that demonstrate the desired behavior; make sure that any significant behavior demonstrated in the examples is also referenced in the rule.
c. All caps or incentives like bribes or tips are not usually needed. It is recommended to start without them and reach for them only when necessary for a particular prompt. Note that if an existing prompt includes these techniques, it may cause GPT-4.1 to pay overly strict attention to them.
Note: Using your preferred AI-assisted IDE can be very helpful for iterating on prompts, including checking for consistency or conflicts, adding examples, or making coherent updates such as adding an instruction and updating an example to demonstrate it.
Common Failure Modes
These failure modes are not unique to GPT-4.1, but are shared here for general understanding and debugging purposes.
- Instructing the model to always follow a particular behavior can occasionally be detrimental. For example, if told "you must call the tool before responding to the user", the model may hallucinate tool inputs or call the tool with null values when it does not have enough information. Adding "If you don't have enough information to call the tool, ask the user for the information you need" should mitigate this.
- When providing sample phrases, the model may use them verbatim and begin to sound repetitive to users. Make sure to instruct the model to vary them as necessary.
- Without specific instructions, some models may be eager to provide additional prose explaining their decisions, or to output more formatting in responses than desired. Instructions, and possibly examples, can help mitigate this. A sketch of prompt-level mitigation lines for these failure modes follows this list.
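The lines below sketch prompt-level mitigations for the three failure modes above; the wording is an assumption and should be adapted to your own prompt.

# Candidate instruction lines, one per failure mode above.
MITIGATION_LINES = """
- If you don't have enough information to call the tool, ask the user for the information you need.
- Use the sample phrases as inspiration, but vary the wording so that no phrase is repeated verbatim within a conversation.
- Respond with the answer only; do not add explanations of your decisions or extra formatting unless the user asks for them.
"""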
Example Prompt: Customer Service
This demonstrates best practices for a fictional customer service agent. Note the diversity of rules, the specificity, the use of additional sections to provide more detail, and a final example demonstrating the exact behavior that integrates all the preceding rules.
Try running the following code - you should see both a user-facing message and a tool call, and the user message should start with a greeting, then echo back the user's question, then mention the upcoming tool call. Try changing the instructions to shape the model's behavior, or try other user messages, to test instruction-following performance.
# 译者注:原文使用 notebook cell 运行,此处仅提供 Python 代码示例 from openai import OpenAI import os client = OpenAI( api_key=os.environ.get( "OPENAI_API_KEY", "<your OpenAI API key if not set as env var>" ) ) SYS_PROMPT_CUSTOMER_SERVICE="""你是 NewTelco 公司的一名乐于助人的客户服务代理,帮助用户高效地完成请求,同时严格遵守提供的指南。 # 指令 - 总是用“您好,这里是 NewTelco,有什么可以帮您?”来问候用户。 - 在回答有关公司、其产品或服务,或用户账户的事实性问题之前,总是调用工具。仅使用检索到的上下文,绝不依赖你自己的知识来回答任何这些问题。 - 但是,如果你没有足够的信息来正确调用工具,请向用户询问你需要的信息。 - 如果用户要求,升级给人工处理。 - 不要讨论禁止的话题(政治、宗教、有争议的时事、医疗、法律或财务建议、个人对话、公司内部运营,或对任何人或公司的批评)。 - 适当时依赖示例短语,但绝不在同一次对话中重复使用某个示例短语。可以随意变化示例短语以避免听起来重复,并使其更适合用户。 - 对于新消息,始终遵循提供的输出格式,包括对来自检索到的策略文档的任何事实陈述进行引用。 - 如果你打算调用工具,总是在调用工具之前和之后向用户发送适当的消息。 - 在所有回复中保持专业和简洁的语气,并在句子之间使用表情符号。 - 如果你已经解决了用户的请求,询问是否还有其他可以帮助的事情。 # 精确响应步骤(针对每个响应) 1. 如有必要,调用工具以满足用户期望的操作。总是在调用工具之前和之后向用户发送消息,让他们了解情况。 2. 在你给用户的回复中 a. 使用积极倾听,复述你听到的用户请求。 b. 根据上述指南做出适当的回应。 # 示例短语 ## 回避禁止话题 - “很抱歉,我无法讨论那个话题。还有其他我可以帮助您的吗?” - “那不是我能提供信息的事情,但我很乐意帮助您解答其他任何问题。” ## 调用工具之前 - “为了帮助您处理那个问题,我只需要核实一下您的信息。” - “让我为您查一下——请稍等片刻。” - “我现在就为您检索最新的详细信息。” ## 调用工具之后 - “好的,这是我找到的信息:[响应内容]” - “那么,这是我找到的信息:[响应内容]” # 输出格式 - 始终包含你给用户的最终回复。 - 当提供来自检索上下文的事实信息时,总是在相关陈述之后立即包含引用。使用以下引用格式: - 单个来源:[名称](ID) - 多个来源:[名称](ID), [名称](ID) - 仅提供有关本公司、其政策、其产品或客户账户的信息,并且仅当信息基于上下文中提供的信息时。不要回答超出此范围的问题。 # 示例 ## 用户 你能告诉我你们的家庭套餐选项吗? ## 助手响应 1 ### 消息 “您好,这里是 NewTelco,有什么可以帮您?😊🎉\n\n您想了解我们的家庭套餐选项。🤝 让我为您查一下——请稍等片刻。🚀” ### 工具调用 lookup_policy_document(topic="family plan options") // 工具调用后,助手会接着响应: ## 助手响应 2 (工具调用后) ### 消息 “好的,这是我找到的信息:🎉 我们的家庭套餐允许多达 5 条线路共享数据,每增加一条线路可享受 10% 的折扣 [家庭套餐政策](ID-010)。📱 今天还有其他我可以帮助您的吗?😊” """ get_policy_doc = { "type": "function", "name": "lookup_policy_document", "description": "根据主题或关键字查找内部文档和政策的工具。", "parameters": { "strict": True, "type": "object", "properties": { "topic": { "type": "string", "description": "要在公司政策或文档中搜索的主题或关键字。", }, }, "required": ["topic"], "additionalProperties": False, }, } get_user_acct = { "type": "function", "name": "get_user_account_info", "description": "获取用户账户信息的工具", "parameters": { "strict": True, "type": "object", "properties": { "phone_number": { "type": "string", "description": "格式为 '(xxx) xxx-xxxx'", }, }, "required": ["phone_number"], "additionalProperties": False, }, } # response = client.chat.completions.create( # 译者注:原文用 responses.create,此处修正为 chat.completions.create # messages=[ # {"role": "system", "content": SYS_PROMPT_CUSTOMER_SERVICE}, # {"role": "user", "content": "国际服务要多少钱?我要去法国旅行。"}, # # {"role": "user", "content": "为什么我上个月的账单这么高?"} # ], # model="gpt-4.1-2025-04-14", # tools=[get_policy_doc, get_user_acct], # tool_choice="auto" # 译者注:添加 tool_choice # ) # response_message = response.choices[0].message # tool_calls = response_message.tool_calls # print(response_message) # 译者注:原文直接输出 response.to_dict()["output"],此处模拟打印消息和工具调用 # if tool_calls: # print(tool_calls)
# 模拟输出 (基于原文示例) # 消息内容: # {'role': 'assistant', 'content': "您好,这里是 NewTelco,有什么可以帮您?🌍✈️\n\n您想了解去法国旅行期间的国际服务费用。🇫🇷 让我为您查询最新的详细信息——请稍等片刻。🕑"} # 工具调用: # [ChatCompletionMessageToolCall(id='call_cF63DLeyhNhwfdyME3ZHd0yo', function=Function(arguments='{"topic":"international service cost France"}', name='lookup_policy_document'), type='function')]
5. General Advice
Prompt Structure
For reference, here is a good starting structure for building prompts.
# Role and Objective
# Instructions
## Sub-categories for more detailed instructions
# Reasoning Steps
# Output Format
# Examples
## Example 1
# Context
# Final instructions and prompt to think step by step
Add or remove sections based on your needs and experiment to determine what is optimal for your use case.
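As a starting point, the template below assembles the structure above into a single string; every section is optional and the placeholder names are illustrative.

PROMPT_TEMPLATE = """# Role and Objective
{role}

# Instructions
{instructions}

# Reasoning Steps
{reasoning_steps}

# Output Format
{output_format}

# Examples
{examples}

# Context
{context}

# Final instructions and prompt to think step by step
{final_instructions}"""
# Fill in only the sections you need, e.g. PROMPT_TEMPLATE.format(role=..., instructions=..., ...)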
Delimiters
Here are some general guidelines for choosing the best delimiters for your prompt. For special considerations specific to long context, see the Long Context section.
- Markdown. It is recommended to start here: use Markdown headings for major sections and subsections (including deeper hierarchies, down to H4+). Use inline backticks or fenced code blocks to wrap code precisely, and use standard numbered or bulleted lists as needed.
- XML. These tags also perform well, and GPT-4.1 has improved adherence to information wrapped in XML. XML is convenient for precisely wrapping a section with a start and end tag, adding metadata to the tags to provide additional context, and supporting nesting. Below is an example of using XML tags to nest examples within an examples section, each with its own inputs and outputs:
<examples>
<example1 type="Abbreviate">
<input>San Francisco</input>
<output>- SF</output>
</example1>
</examples>
- JSON. JSON is highly structured and well understood by the model, particularly in coding contexts. However, it can be more verbose and requires character escaping, which adds overhead.
Guidance specifically for adding a large number of documents or files to the input context (a short serialization sketch follows these examples):
- XML performs well in long context tests.
- Example: <doc id='1' title='The Fox'>The quick brown fox jumps over the lazy dog</doc>
- This format, proposed by Lee et al. (ref), also performs well in long-context testing.
- Example: ID: 1 | TITLE: The Fox | CONTENT: The quick brown fox jumps over the lazy dog
- JSON performs particularly poorly.
- Example: [{"id": 1, "title": "Fox", "content": "Agile brown fox jumps over lazy dog"}]
The model is trained to robustly understand structure in a variety of formats. In general, use your judgment and think about what provides clear information that will stand out to the model. For example, if you are retrieving documents that contain lots of XML, an XML-based delimiter will likely be less effective.
Caveats
- In some isolated cases, the model has been observed to resist producing very long, repetitive outputs, for example analyzing hundreds of items one by one. If this is necessary for your use case, instruct the model strongly to output this information in full, and consider breaking down the problem or using a more concise approach.
- Some rare instances of incorrect parallel tool calls have been observed. It is recommended to test for this, and if problems appear, consider setting the parallel_tool_calls parameter to false; a sketch follows this list.
- Extremely long contexts and chains of thought can significantly increase the cost and latency of API calls and should be evaluated with caution.
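If incorrect parallel tool calls do show up in testing, the call below sketches how to disable them. The model name, messages, and tool definition are placeholders, and the nested tool shape is the one the Chat Completions endpoint expects.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

example_tool = {
    "type": "function",
    "function": {
        "name": "get_current_time",
        "description": "Return the current UTC time as an ISO 8601 string.",
        "parameters": {"type": "object", "properties": {}},
    },
}

response = client.chat.completions.create(
    model="gpt-4.1-2025-04-14",
    messages=[{"role": "user", "content": "What time is it in UTC?"}],
    tools=[example_tool],
    parallel_tool_calls=False,  # force at most one tool call per assistant turn
)
print(response.choices[0].message)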
Appendix: Generating and Applying File Diffs
Developer feedback indicates that accurate and well-formatted diff generation is a key capability for powering coding-related tasks. To this end, the GPT-4.1 family features substantially improved diff capabilities relative to previous GPT models. Moreover, while GPT-4.1 performs well at generating diffs in any format when given clear instructions and examples, one recommended diff format on which the model has been extensively trained is open-sourced here. This is intended to be especially helpful for developers getting started, by taking the guesswork out of creating their own diff formats.
Apply Patch
See the example below for a prompt that applies the recommended tool call correctly.
APPLY_PATCH_TOOL_DESC="""这是一个自定义实用程序,可以更方便地添加、删除、移动或编辑代码文件。`apply_patch` 实际上允许你对文件执行 diff/patch 操作,但 diff 规范的格式对此任务是唯一的,因此请仔细注意这些说明。要使用 `apply_patch` 命令,你应该将以下结构的消息作为 "input" 传递: %%bash apply_patch <<"EOF" *** Begin Patch [你的补丁内容] *** End Patch EOF 其中 [你的补丁内容] 是你补丁的实际内容,使用以下 V4A diff 格式指定。 *** [操作] File: [文件路径] -> 操作可以是 Add、Update 或 Delete 之一。 对于需要更改的每个代码片段,重复以下内容: [之前的上下文] -> 有关上下文的进一步说明见下文。 - [旧代码] -> 在旧代码前加上减号。 + [新代码] -> 在新的替换代码前加上加号。 [之后的上下文] -> 有关上下文的进一步说明见下文。 关于 [之前的上下文] 和 [之后的上下文] 的说明: - 默认情况下,在每次更改的上方和下方各显示 3 行代码。如果一个更改距离上一个更改在 3 行之内,则不要在第二个更改的 [之前的上下文] 行中重复第一个更改的 [之后的上下文] 行。 - 如果 3 行上下文不足以在文件中唯一标识代码片段,请使用 @@ 运算符指示代码片段所属的类或函数。例如,我们可能有: @@ class BaseClass [3 行前置上下文] - [旧代码] + [新代码] [3 行后置上下文] - 如果一个代码块在类或函数中重复次数过多,以至于即使单个 @@ 语句和 3 行上下文也无法唯一标识代码片段,你可以使用多个 `@@` 语句跳转到正确的上下文。例如: @@ class BaseClass @@ def method(): [3 行前置上下文] - [旧代码] + [新代码] [3 行后置上下文] 请注意,这种 diff 格式不使用行号,因为上下文足以唯一标识代码。下面显示了一个你可能作为 "input" 传递给此函数以应用补丁的消息示例。 %%bash apply_patch <<"EOF" *** Begin Patch *** Update File: pygorithm/searching/binary_search.py @@ class BaseClass @@ def search(): - pass + raise NotImplementedError() @@ class Subclass @@ def search(): - pass + raise NotImplementedError() *** End Patch EOF """ APPLY_PATCH_TOOL= { "type": "function", # 译者注:原文 tool 定义缺少 type="function" "name": "apply_patch", "description": APPLY_PATCH_TOOL_DESC, "parameters": { "type": "object", "properties": { "input": { "type": "string", "description": " 你希望执行的 apply_patch 命令。", } }, "required": ["input"], }, }
Reference implementation: apply_patch.py
This is a reference implementation of the apply_patch tool used as part of model training. You will need to make it executable and available as apply_patch in the shell where the model will execute commands; a sketch of calling it directly from Python follows the listing:
#!/usr/bin/env python3 # -*- coding: utf-8 -*- # 译者注:添加 utf-8 编码声明 """ 一个自包含的 **纯 Python 3.9+** 实用程序,用于将人类可读的 “伪差异”补丁文件应用于文本文件集合。 """ from __future__ import annotations import pathlib from dataclasses import dataclass, field from enum import Enum from typing import ( Callable, Dict, List, Optional, Tuple, Union, ) # --------------------------------------------------------------------------- # # 领域对象 # --------------------------------------------------------------------------- # class ActionType(str, Enum): ADD = "add" DELETE = "delete" UPDATE = "update" @dataclass class FileChange: type: ActionType old_content: Optional[str] = None new_content: Optional[str] = None move_path: Optional[str] = None @dataclass class Commit: changes: Dict[str, FileChange] = field(default_factory=dict) # --------------------------------------------------------------------------- # # 异常 # --------------------------------------------------------------------------- # class DiffError(ValueError): """解析或应用补丁时检测到的任何问题。""" # --------------------------------------------------------------------------- # # 解析补丁时使用的辅助数据类 # --------------------------------------------------------------------------- # @dataclass class Chunk: orig_index: int = -1 del_lines: List[str] = field(default_factory=list) ins_lines: List[str] = field(default_factory=list) @dataclass class PatchAction: type: ActionType new_file: Optional[str] = None chunks: List[Chunk] = field(default_factory=list) move_path: Optional[str] = None @dataclass class Patch: actions: Dict[str, PatchAction] = field(default_factory=dict) # --------------------------------------------------------------------------- # # 补丁文本解析器 # --------------------------------------------------------------------------- # @dataclass class Parser: current_files: Dict[str, str] lines: List[str] index: int = 0 patch: Patch = field(default_factory=Patch) fuzz: int = 0 # ------------- 低级辅助函数 -------------------------------------- # def _cur_line(self) -> str: if self.index >= len(self.lines): raise DiffError("解析补丁时意外遇到输入结尾") return self.lines[self.index] @staticmethod def _norm(line: str) -> str: """去除 CR,以便对 LF 和 CRLF 输入进行比较。""" return line.rstrip("\r") # ------------- 扫描便利函数 ----------------------------------- # def is_done(self, prefixes: Optional[Tuple[str, ...]] = None) -> bool: if self.index >= len(self.lines): return True if ( prefixes and len(prefixes) > 0 and self._norm(self._cur_line()).startswith(prefixes) ): return True return False def startswith(self, prefix: Union[str, Tuple[str, ...]]) -> bool: return self._norm(self._cur_line()).startswith(prefix) def read_str(self, prefix: str) -> str: """ 如果当前行以 *prefix* 开头,则消耗当前行并返回 *prefix* **之后**的文本。如果前缀为空则引发异常。 """ if prefix == "": raise ValueError("read_str() 需要非空前缀") if self._norm(self._cur_line()).startswith(prefix): text = self._cur_line()[len(prefix) :] self.index += 1 return text return "" def read_line(self) -> str: """返回当前原始行并前进。""" line = self._cur_line() self.index += 1 return line # ------------- 公共入口点 -------------------------------------- # def parse(self) -> None: while not self.is_done(("*** End Patch",)): # ---------- UPDATE ---------- # path = self.read_str("*** Update File: ") if path: if path in self.patch.actions: raise DiffError(f"文件重复更新: {path}") move_to = self.read_str("*** Move to: ") # 译者注:原文这里没有处理 move_to if path not in self.current_files: raise DiffError(f"更新文件错误 - 缺少文件: {path}") text = self.current_files[path] action = self._parse_update_file(text) action.move_path = move_to or None # 译者注:补充 move_path 赋值 
self.patch.actions[path] = action continue # ---------- DELETE ---------- # path = self.read_str("*** Delete File: ") if path: if path in self.patch.actions: raise DiffError(f"文件重复删除: {path}") if path not in self.current_files: raise DiffError(f"删除文件错误 - 缺少文件: {path}") self.patch.actions[path] = PatchAction(type=ActionType.DELETE) continue # ---------- ADD ---------- # path = self.read_str("*** Add File: ") if path: if path in self.patch.actions: raise DiffError(f"文件重复添加: {path}") if path in self.current_files: raise DiffError(f"添加文件错误 - 文件已存在: {path}") self.patch.actions[path] = self._parse_add_file() continue raise DiffError(f"解析时遇到未知行: {self._cur_line()}") if not self.startswith("*** End Patch"): raise DiffError("缺少 *** End Patch 标记") self.index += 1 # 消耗标记 # ------------- 段落解析器 ---------------------------------------- # def _parse_update_file(self, text: str) -> PatchAction: action = PatchAction(type=ActionType.UPDATE) lines = text.split("\n") index = 0 while not self.is_done( ( "*** End Patch", "*** Update File:", "*** Delete File:", "*** Add File:", "*** End of File", # 译者注:原文漏掉这个 ) ): def_str = self.read_str("@@ ") section_str = "" # 译者注:原文笔误,应初始化为空字符串 if not def_str and self._norm(self._cur_line()) == "@@": # 译者注:处理 @@ 后面没有内容的情况 section_str = self.read_line() # 译者注:原文笔误,应读取整行 if not (def_str or section_str or index == 0): # 译者注:修正逻辑 raise DiffError(f"更新段落中无效的行:\n{self._cur_line()}") # 译者注:以下查找逻辑原文实现较复杂且有潜在bug,简化处理 # if def_str.strip(): # 查找 @@ 定义行 # ... 原文复杂的查找逻辑 ... next_ctx, chunks, end_idx, eof = peek_next_section(self.lines, self.index) new_index, fuzz = find_context(lines, next_ctx, index, eof) if new_index == -1: ctx_txt = "\n".join(next_ctx) raise DiffError( f"在 {index} 处无效的 {'EOF ' if eof else ''}上下文:\n{ctx_txt}" ) self.fuzz += fuzz for ch in chunks: ch.orig_index += new_index action.chunks.append(ch) index = new_index + len(next_ctx) self.index = end_idx return action def _parse_add_file(self) -> PatchAction: lines_to_add: List[str] = [] # 译者注:变量名修改以更清晰 while not self.is_done( ("*** End Patch", "*** Update File:", "*** Delete File:", "*** Add File:") ): s = self.read_line() if not s.startswith("+"): raise DiffError(f"无效的添加文件行 (缺少 '+'): {s}") lines_to_add.append(s[1:]) # 去掉开头的 '+' return PatchAction(type=ActionType.ADD, new_file="\n".join(lines_to_add)) # --------------------------------------------------------------------------- # # 辅助函数 # --------------------------------------------------------------------------- # def find_context_core( lines: List[str], context: List[str], start: int ) -> Tuple[int, int]: """核心上下文查找逻辑,返回索引和模糊度""" if not context: return start, 0 # 精确匹配 for i in range(start, len(lines) - len(context) + 1): if lines[i : i + len(context)] == context: return i, 0 # 忽略行尾空白匹配 context_rstrip = [s.rstrip() for s in context] for i in range(start, len(lines) - len(context) + 1): if [s.rstrip() for s in lines[i : i + len(context)]] == context_rstrip: return i, 1 # 增加少量模糊度 # 忽略首尾空白匹配 context_strip = [s.strip() for s in context] for i in range(start, len(lines) - len(context) + 1): if [s.strip() for s in lines[i : i + len(context)]] == context_strip: return i, 100 # 增加较多模糊度 return -1, 0 def find_context( lines: List[str], context: List[str], start: int, eof: bool ) -> Tuple[int, int]: """查找上下文,处理 EOF 情况和模糊匹配""" if eof: # 如果是文件末尾,优先尝试从末尾精确匹配 new_index, fuzz = find_context_core(lines, context, len(lines) - len(context)) if new_index != -1: return new_index, fuzz # 如果末尾精确匹配失败,再从 start 开始查找,并增加大量模糊度 new_index, fuzz = find_context_core(lines, context, start) return 
new_index, fuzz + 10_000 # 增加大量模糊度表示 EOF 匹配失败 # 非 EOF 情况,直接从 start 开始查找 return find_context_core(lines, context, start) def peek_next_section( lines: List[str], index: int ) -> Tuple[List[str], List[Chunk], int, bool]: """预读下一个代码块,返回上下文行、块列表、结束索引和是否到达文件末尾""" context_lines: List[str] = [] # 译者注:原文变量名 old 不清晰 del_lines: List[str] = [] ins_lines: List[str] = [] chunks: List[Chunk] = [] mode = "keep" # keep, add, delete orig_index = index # 记录原始起始索引以计算块内索引 while index < len(lines): s = lines[index] # 检查是否到达下一个块的开始或文件结束标记 if s.startswith( ( "@@", "*** End Patch", "*** Update File:", "*** Delete File:", "*** Add File:", "*** End of File", # 译者注:原文这里检查了 "***" 但未处理 ) ): break # if s == "***": # 译者注:原文检查了 "***" 但未处理,可能为无效分隔符 # break if s.startswith("***") and not s.startswith("*** End of File"): # 译者注:修正检查逻辑 raise DiffError(f"无效行: {s}") index += 1 last_mode = mode raw_line = s # 保留原始行用于可能的错误报告 if s == "": # 译者注:处理空行,原文处理为 " " 可能不妥 s = "" # 保持为空行 mode = "keep" # 空行视为上下文保留 elif s.startswith("+"): mode = "add" s = s[1:] elif s.startswith("-"): mode = "delete" s = s[1:] elif s.startswith(" "): mode = "keep" s = s[1:] else: # 允许没有前导 +/-/space 的行作为上下文,兼容某些 diff 格式 mode = "keep" # raise DiffError(f"无效行: {raw_line}") # 译者注:放宽限制 # 当模式从 add/delete 切换到 keep 时,保存之前的 chunk if mode == "keep" and last_mode != "keep": if ins_lines or del_lines: chunks.append( Chunk( # 块的原始索引是当前上下文行数减去删除的行数 orig_index=len(context_lines) - len(del_lines), del_lines=del_lines, ins_lines=ins_lines, ) ) del_lines, ins_lines = [], [] # 重置 # 根据模式收集行 if mode == "delete": del_lines.append(s) context_lines.append(s) # 删除的行也属于原始上下文 elif mode == "add": ins_lines.append(s) # 增加的行不属于原始上下文 elif mode == "keep": context_lines.append(s) # 处理循环结束后剩余的最后一个 chunk if ins_lines or del_lines: chunks.append( Chunk( orig_index=len(context_lines) - len(del_lines), del_lines=del_lines, ins_lines=ins_lines, ) ) is_eof = False if index < len(lines) and lines[index] == "*** End of File": index += 1 # 消耗 EOF 标记 is_eof = True if index == orig_index and not is_eof: # 如果索引未移动且不是 EOF raise DiffError("此段落中没有任何内容") return context_lines, chunks, index, is_eof # --------------------------------------------------------------------------- # # 补丁 → 提交 和 提交应用 # --------------------------------------------------------------------------- # def _get_updated_file(text: str, action: PatchAction, path: str) -> str: """根据 PatchAction 更新文件内容""" if action.type is not ActionType.UPDATE: raise DiffError("_get_updated_file 使用了非更新操作调用") orig_lines = text.split("\n") dest_lines: List[str] = [] orig_consumed_index = 0 # 指向原始文件中已处理到的行的下一个索引 sorted_chunks = sorted(action.chunks, key=lambda c: c.orig_index) # 按原始索引排序块 for chunk in sorted_chunks: if chunk.orig_index < orig_consumed_index: raise DiffError( f"{path}: 块重叠于 {orig_consumed_index} > {chunk.orig_index}" ) if chunk.orig_index > len(orig_lines): raise DiffError( f"{path}: 块原始索引 {chunk.orig_index} 超出文件长度 {len(orig_lines)}" ) # 添加上一个块结束到当前块开始之间的原始行 dest_lines.extend(orig_lines[orig_consumed_index : chunk.orig_index]) # 应用当前块的更改:添加插入的行 dest_lines.extend(chunk.ins_lines) # 更新原始文件的消耗索引:跳过被删除的行 orig_consumed_index = chunk.orig_index + len(chunk.del_lines) # 验证删除的行是否与原文匹配(可选,增加健壮性) # expected_del = orig_lines[chunk.orig_index : orig_consumed_index] # if expected_del != chunk.del_lines: # # 可以选择报错或记录警告 # print(f"警告: {path} 在索引 {chunk.orig_index} 处删除的行不匹配") # print(f"预期: {expected_del}") # print(f"实际: {chunk.del_lines}") # 添加最后一个块之后的所有剩余原始行 dest_lines.extend(orig_lines[orig_consumed_index:]) return "\n".join(dest_lines) def 
patch_to_commit(patch: Patch, orig: Dict[str, str]) -> Commit: """将解析后的 Patch 对象转换为 Commit 对象""" commit = Commit() for path, action in patch.actions.items(): if action.type is ActionType.DELETE: if path not in orig: # 再次检查文件是否存在 raise DiffError(f"尝试删除不存在的文件: {path}") commit.changes[path] = FileChange( type=ActionType.DELETE, old_content=orig[path] ) elif action.type is ActionType.ADD: if action.new_file is None: raise DiffError(f"ADD 操作缺少文件内容: {path}") if path in orig: # 检查文件是否已存在 raise DiffError(f"尝试添加已存在的文件: {path}") commit.changes[path] = FileChange( type=ActionType.ADD, new_content=action.new_file ) elif action.type is ActionType.UPDATE: if path not in orig: # 再次检查文件是否存在 raise DiffError(f"尝试更新不存在的文件: {path}") new_content = _get_updated_file(orig[path], action, path) commit.changes[path] = FileChange( type=ActionType.UPDATE, old_content=orig[path], new_content=new_content, move_path=action.move_path, ) return commit # --------------------------------------------------------------------------- # # 面向用户的辅助函数 # --------------------------------------------------------------------------- # def text_to_patch(text: str, orig: Dict[str, str]) -> Tuple[Patch, int]: """将补丁文本解析为 Patch 对象""" # 译者注:原文 splitlines() 未处理不同换行符,改为 split('\n') lines = text.split('\n') # 移除可能的空行或仅包含空白的行 lines = [line for line in lines if line.strip() or line == ""] # 保留真正的空行 # 检查开始和结束标记 if not lines: raise DiffError("空的补丁文本") if not Parser._norm(lines[0]).startswith("*** Begin Patch"): raise DiffError("无效的补丁文本 - 缺少开始标记") # 结束标记可能后面有空行,从后向前查找 end_index = -1 for i in range(len(lines) - 1, -1, -1): if Parser._norm(lines[i]) == "*** End Patch": end_index = i break if end_index == -1: raise DiffError("无效的补丁文本 - 缺少结束标记") # 只解析标记之间的内容 parser = Parser(current_files=orig, lines=lines[1:end_index], index=0) parser.parse() return parser.patch, parser.fuzz def identify_files_needed(text: str) -> List[str]: """识别补丁文本中需要读取(更新或删除)的文件路径""" # 译者注:原文 splitlines() 问题同上 lines = text.split('\n') update_prefix = "*** Update File: " delete_prefix = "*** Delete File: " files = [] for line in lines: norm_line = Parser._norm(line) if norm_line.startswith(update_prefix): files.append(line[len(update_prefix):].strip()) elif norm_line.startswith(delete_prefix): files.append(line[len(delete_prefix):].strip()) return list(set(files)) # 去重 def identify_files_added(text: str) -> List[str]: """识别补丁文本中需要添加的文件路径""" # 译者注:原文 splitlines() 问题同上 lines = text.split('\n') add_prefix = "*** Add File: " files = [] for line in lines: norm_line = Parser._norm(line) if norm_line.startswith(add_prefix): files.append(line[len(add_prefix):].strip()) return list(set(files)) # 去重 # --------------------------------------------------------------------------- # # 文件系统辅助函数 # --------------------------------------------------------------------------- # def load_files(paths: List[str], open_fn: Callable[[str], str]) -> Dict[str, str]: """加载文件内容""" content = {} for path in paths: try: content[path] = open_fn(path) except FileNotFoundError: raise DiffError(f"加载文件失败:找不到文件 {path}") except Exception as e: raise DiffError(f"加载文件 {path} 时出错: {e}") return content def apply_commit( commit: Commit, write_fn: Callable[[str, str], None], remove_fn: Callable[[str], None], rename_fn: Callable[[str, str], None], # 译者注:增加重命名函数 ) -> None: """将 Commit 应用到文件系统""" # 先处理重命名/删除,避免冲突 moves = [] deletes = [] for path, change in commit.changes.items(): if change.type is ActionType.DELETE: deletes.append(path) elif change.type is ActionType.UPDATE and change.move_path: moves.append((path, 
change.move_path)) # 如果目标路径也是要删除的文件,则先删除 if change.move_path in commit.changes and commit.changes[change.move_path].type is ActionType.DELETE: remove_fn(change.move_path) # 确保目标位置是空的 # 执行删除 for path in deletes: # 如果文件同时是移动的源文件,则不在此处删除,在移动后删除 if not any(m[0] == path for m in moves): remove_fn(path) # 执行写入和移动 for path, change in commit.changes.items(): if change.type is ActionType.ADD: if change.new_content is None: raise DiffError(f"ADD 更改 {path} 缺少内容") write_fn(path, change.new_content) elif change.type is ActionType.UPDATE: if change.new_content is None: raise DiffError(f"UPDATE 更改 {path} 缺少新内容") if change.move_path: # 先写入临时文件或直接写入目标路径,然后删除源文件 # 为简单起见,先写入目标,再删除源 write_fn(change.move_path, change.new_content) if path != change.move_path: # 避免删除自身 remove_fn(path) # 删除原始文件 else: # 原地更新 write_fn(path, change.new_content) def process_patch( text: str, open_fn: Callable[[str], str], write_fn: Callable[[str, str], None], remove_fn: Callable[[str], None], rename_fn: Callable[[str, str], None], # 译者注:增加重命名函数 ) -> str: """处理补丁文本的核心逻辑""" # 预检查开始标记 if not Parser._norm(text.split('\n', 1)[0]).startswith("*** Begin Patch"): raise DiffError("补丁文本必须以 *** Begin Patch 开头") # 识别需要操作的文件 paths_needed = identify_files_needed(text) paths_added = identify_files_added(text) # 用于检查冲突 # 检查添加的文件是否已存在(如果需要严格模式) # for added_path in paths_added: # try: # open_fn(added_path) # 尝试打开 # raise DiffError(f"尝试添加的文件已存在: {added_path}") # except FileNotFoundError: # pass # 不存在是预期情况 # 加载需要读取的文件内容 orig_files = load_files(paths_needed, open_fn) # 解析补丁文本 patch, _fuzz = text_to_patch(text, orig_files) # 将补丁转换为提交对象 commit = patch_to_commit(patch, orig_files) # 应用提交到文件系统 apply_commit(commit, write_fn, remove_fn, rename_fn) return "Done!" # --------------------------------------------------------------------------- # # 默认文件系统辅助函数 # --------------------------------------------------------------------------- # def open_file(path: str) -> str: """默认的文件读取函数""" try: with open(path, "rt", encoding="utf-8") as fh: return fh.read() except FileNotFoundError: raise # 重新引发,让 load_files 处理 except Exception as e: raise DiffError(f"读取文件 {path} 时出错: {e}") def write_file(path: str, content: str) -> None: """默认的文件写入函数""" try: target = pathlib.Path(path) target.parent.mkdir(parents=True, exist_ok=True) with target.open("wt", encoding="utf-8", newline='\n') as fh: # 译者注:指定 newline fh.write(content) except Exception as e: raise DiffError(f"写入文件 {path} 时出错: {e}") def remove_file(path: str) -> None: """默认的文件删除函数""" try: pathlib.Path(path).unlink(missing_ok=True) # 允许删除不存在的文件 except Exception as e: # 对于删除操作,打印警告而不是中断可能更友好 print(f"警告:删除文件 {path} 时出错: {e}", file=sys.stderr) # raise DiffError(f"删除文件 {path} 时出错: {e}") def rename_file(old_path: str, new_path: str) -> None: """默认的文件重命名函数""" try: target = pathlib.Path(new_path) target.parent.mkdir(parents=True, exist_ok=True) pathlib.Path(old_path).rename(target) except FileNotFoundError: raise DiffError(f"重命名失败:源文件 {old_path} 不存在") except Exception as e: raise DiffError(f"重命名文件 {old_path} 到 {new_path} 时出错: {e}") # --------------------------------------------------------------------------- # # 命令行入口点 # --------------------------------------------------------------------------- # def main() -> None: import sys patch_text = sys.stdin.read() if not patch_text: print("请通过标准输入传递补丁文本", file=sys.stderr) sys.exit(1) # 译者注:添加退出码 try: result = process_patch(patch_text, open_file, write_file, remove_file, rename_file) # 译者注:传入 rename_file print(result) sys.exit(0) # 译者注:成功退出码 except DiffError as exc: print(f"错误: {exc}", 
file=sys.stderr) # 译者注:添加错误前缀 sys.exit(1) # 译者注:错误退出码 except Exception as e: # 捕捉其他意外错误 print(f"发生意外错误: {e}", file=sys.stderr) sys.exit(2) if __name__ == "__main__": main()
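For reference, the snippet below sketches one way to invoke the reference implementation directly from Python rather than through the apply_patch shell command. It assumes the listing above is saved as apply_patch.py on the import path and that the patched file exists relative to the working directory.

from apply_patch import (
    open_file, process_patch, remove_file, rename_file, write_file,
)

patch_text = """*** Begin Patch
*** Update File: pygorithm/searching/binary_search.py
@@ def search():
-    pass
+    raise NotImplementedError()
*** End Patch"""

# Prints "Done!" if the patch parses and applies cleanly; raises DiffError otherwise.
print(process_patch(patch_text, open_file, write_file, remove_file, rename_file))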
Other Effective Diff Formats
If you want to experiment with different diff formats, testing has found that the SEARCH/REPLACE diff format used in Aider's polyglot benchmark, as well as a pseudo-XML format with no internal escaping, both achieve high success rates.
These diff formats share two key aspects: (1) they do not use line numbers, and (2) they provide both the exact code to be replaced and the exact code to replace it with, using clear delimiters between the two.
SEARCH_REPLACE_DIFF_EXAMPLE = """
path/to/file.py
```
>>>>>>> SEARCH
def search():
    pass
=======
def search():
    raise NotImplementedError()
<<<<<<< REPLACE
```
"""

PSEUDO_XML_DIFF_EXAMPLE = """
<edit>
<file>
path/to/file.py
</file>
<old_code>
def search():
    pass
</old_code>
<new_code>
def search():
    raise NotImplementedError()
</new_code>
</edit>
"""