
OpenAI Meta-prompt: Prompt Generator

Prompt Generation

In the Playground, the Generate button lets you create prompts, functions, and schemas from a task description. This guide walks through how it works, step by step.

Overview

Creating prompts and schemas from scratch can be time-consuming, so generating a starting point can help you move quickly. The Generate button uses two main approaches:

  1. Prompts: We use a meta-prompt that incorporates best practices to generate or improve prompts.
  2. Schemas: We use meta-schemas to generate valid JSON and function syntax.

We currently use meta-prompts and meta-schemas, but in the future we may integrate more advanced techniques such as DSPy and "Gradient Descent".

Prompts

Meta-prompts instruct the model to create a good prompt based on your task description, or to improve an existing one. The meta-prompts in the Playground draw on our prompt engineering best practices and real-world experience with users.

We use specific meta-prompts for different output types (e.g., audio) to ensure that the generated prompts match the expected format.


Text meta-prompt

from openai import OpenAI
client = OpenAI()

META_PROMPT = """
Given a task description or an existing prompt, produce a detailed system prompt to guide a language model in completing the task effectively.

# Guidelines

- Understand the Task: Grasp the main objectives, goals, requirements, constraints, and expected output.
- Minimal Changes: If an existing prompt is provided, improve it only if it is simple. For complex prompts, enhance clarity and add missing elements without changing the original structure.
- Reasoning Before Conclusions**: Encourage reasoning steps before any conclusions are reached. ATTENTION! If a user-supplied example has reasoning after the conclusion, reverse the order! Never begin an example with a conclusion.
    - Reasoning Order: Call out the reasoning and conclusion sections of the prompt (by field name) to determine the order in which they are completed, and reverse it if necessary.
    - Conclusions, classifications, or results should always appear last.
- Examples: If helpful, include high-quality examples, using [square brackets] as placeholders for complex elements.
    - Consider which examples to include, how many, and whether they are complex enough to require placeholders.
- Clarity and Conciseness: Use clear, specific language. Avoid unnecessary instructions or bland statements.
- Formatting: Use Markdown features to improve readability. Do not use ``` code blocks unless explicitly requested.
- Preserve User Content: If the input task or prompt includes extensive instructions or examples, keep them as complete as possible. If the content is vague, consider breaking it down into sub-steps. Retain all details, instructions, examples, variables, or placeholders provided by the user.
- Constants: Include constants in the prompt, as they are not susceptible to prompt injection, such as guidelines, rubrics, and examples.
- Output Format: Specify the most appropriate output format in detail, including length and syntax (e.g., short sentence, paragraph, JSON, etc.).
    - For tasks where the output is well-defined or structured data (classification, JSON, etc.), prefer outputting JSON.
    - JSON should never be wrapped in code blocks (```) unless explicitly requested.

The final prompt you output should follow the structure below, without any additional commentary; only output the completed system prompt. In particular, do not add extra messages at the beginning or end of the prompt (e.g., no "---").

[Concise task description - this should be the first line of the prompt, no section heading]

[Additional details as needed.]

[Optional sections with headings or bullet points for detailed steps.]

# Steps [optional]

[optional: a detailed breakdown of the steps necessary to accomplish the task]

# Output Format

[Specify the format of the output, be it response length or structure (e.g., JSON, Markdown, etc.)]

# Examples [optional]

[Optional: 1-3 well-defined examples, using placeholders if necessary. Clearly mark where examples start and end, and what the input and output are. Use placeholders as necessary.]
[If the examples are shorter or longer than realistic examples would be, add a note explaining how real examples should differ. And use placeholders!]

# Notes [optional]

[optional: edge cases, details, and an area to call out or repeat particularly important considerations]
""".strip()

def generate_prompt(task_or_prompt: str):
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Task, Goal, or Current Prompt:\n" + task_or_prompt,
            },
        ],
    )

    return completion.choices[0].message.content
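For example, a minimal usage sketch (the task description below is only an illustration):

if __name__ == "__main__":
    # Hypothetical task description; any short description of your use case works.
    task = "Classify customer support tickets by urgency and draft a first reply."
    print(generate_prompt(task))  # prints the generated system prompt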

Audio meta-prompt

from openai import OpenAI

client = OpenAI()

META_PROMPT = """
Given a task description or an existing prompt, generate a detailed system prompt that guides the real-time audio output language model to complete the task effectively.

# Guidelines

- Understanding the task: capture the main objectives, task requirements, constraints and expected output.
- Tone of voice: clearly indicate the tone of voice. Default should be emotional, friendly and fast so that the user does not have to wait too long.
- Audio Output Limitations: Since the model outputs audio, responses should be short and conversational.
- Minimal changes: If there is an existing prompt, optimize it only in simple cases. For complex prompts, enhance the clarity and add missing elements without changing the original structure.
- Examples: If helpful, include high-quality examples, using [square brackets] as placeholders for complex elements.
   - Consider what examples to include, how many, and whether they are complex enough to warrant placeholders.
   - It is very important that any examples reflect the model's short, conversational output. By default, sentences should be very short; the assistant should never say three sentences in a row, and the user and assistant should go back and forth.
   - Each sentence should be around 5-20 words by default. If the user specifically requests a "short" response, examples should be as short as 1-10 words.
   - Examples should be multi-turn conversations (at least four back-and-forth exchanges per example) rather than single question-answer pairs, reflecting the natural flow of a conversation.
- Clarity and brevity: Use clear, specific language. Avoid unnecessary instructions or irrelevant descriptions.
- Preserve user content: If the input task or prompt contains detailed instructions or examples, try to preserve them completely or as close as possible. If content is vague, consider breaking it down into sub-steps. Retain any user-supplied details, guides, examples, variables, or placeholders.
- Constants: Include constant sections because they are less susceptible to prompt injection, such as guidelines, scoring criteria, and examples.

The final prompt you output should follow the structure below. Do not include any additional commentary; output only the completed system prompt. Take special care not to add any extra information at the beginning or end of the prompt (e.g., no "---").

[Succinct instructions describing the task - this should be the first line of the prompt, no paragraph headings are needed]

[Add detailed information if necessary]

[Optional section that can contain headings or bulleted lists of detailed steps]

# Example [optional].

[Optional: Include 1-3 clearly defined examples, using placeholders if necessary. Clearly label where the examples begin and end, as well as inputs and outputs. Use placeholders for necessary information]
[If the example is shorter than the expected real example, use parentheses () to indicate how the real example should be longer/shorter/different, and use placeholders!]

# Remarks [optional]

[Optional: boundary cases, details, and important notes]
""".strip()

def generate_prompt(task_or_prompt: str):
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Task, Goal, or Current Prompt:\n" + task_or_prompt,
            },
        ]
    )

    return completion.choices[0].message.content

Prompt edits

To edit prompts, we use a slightly modified meta-prompt. While applying direct edits is relatively straightforward, identifying the necessary changes in more open-ended revisions can be challenging. To address this, we include a reasoning section at the beginning of the response: it helps the model determine what changes are needed by evaluating factors such as the clarity of the existing prompt, the ordering of the chain of thought, the overall structure, and specificity. The reasoning section suggests improvements and is then parsed out of the final response.
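A minimal sketch of that parsing step, assuming the response begins with a <reasoning>...</reasoning> block as the edit meta-prompts below request (the helper name and regex are illustrative, not the official implementation):

import re

def strip_reasoning(response_text: str) -> str:
    # Remove a leading <reasoning>...</reasoning> block, keeping only the edited prompt.
    return re.sub(r"^\s*<reasoning>.*?</reasoning>\s*", "", response_text, flags=re.DOTALL).strip()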

Text meta-prompt for edits

from openai import OpenAI

client = OpenAI()

META_PROMPT = """
Generate a detailed system prompt based on the current prompt and the change description to effectively guide the language model through the task.

Your final output will be the full corrected prompt verbatim. However, before that, at the very beginning of your response, use <reasoning> tags to analyze the prompt and determine the following:

- Simple Change: (yes/no) Is the change description clear and simple? (If so, skip the remaining questions.)
- Reasoning: (yes/no) Does the current prompt use reasoning, analysis, or chain of thought?
    - Identify: (max 10 words) if so, which section(s) use reasoning?
    - Conclusion: (yes/no) is a chain of thought used to reach a conclusion?
    - Ordering: (before/after) is the chain of thought located before or after the conclusion?
- Structure: (yes/no) does the input prompt have a clear structure?
- Examples: (yes/no) does the input prompt contain few-shot examples?
    - Representative: (1-5) if present, how representative are the examples?
- Complexity: (1-5) how complex is the input prompt?
    - Task: (1-5) how complex is the implied task?
    - Necessity: ()
- Specificity: (1-5) how detailed and specific is the prompt? (not to be confused with length)
- Prioritization: (list) the 1-3 most important categories to address.
- Conclusion: (max 30 words) Based on the above assessment, briefly describe what should be changed and how. This does not have to strictly follow the categories listed.

# Guidelines

- Understand the task: capture the main objectives, requirements, constraints and desired outputs.
- Minimal changes: If an existing prompt is provided, improve it only if it is simple. For complex prompts, enhance clarity and add missing elements without changing the original structure.
- Reasoning before conclusions**: ensure that the reasoning step is carried out before any conclusions are drawn. Caution! Reverse order if reasoning comes after conclusion in user-supplied examples! Never start with a conclusion!
    - Reasoning order: label the Reasoning section and the Conclusion section of the prompt (specify field names). For each section, determine if the order needs to be reversed.
    - The conclusion, categorization, or result should always appear last.
- Examples: If helpful, include high-quality examples and use [square brackets] as placeholders for complex elements.
   - Indicate what types of examples may need to be included, how many, and whether they are complex enough to warrant the use of placeholders.
- Clarity and simplicity: Use clear, specific language. Avoid unnecessary instructions or bland statements.
- Formatting: Use Markdown features to improve readability. Avoid ``` code blocks unless explicitly requested.
- Preserve user content: If the input task or prompt contains extensive guidelines or examples, preserve them completely or as much as possible. If this is not clear, break it up into sub-steps. Retain any details, guidelines, examples, variables or placeholders provided by the user.
- Constants: Include constants in the prompt as they are not affected by prompt injection attacks, such as guidelines, scoring criteria, and examples.
- Output formats: Specify the most appropriate output format, detailing the length and syntax of the output (e.g., short sentences, paragraphs, JSON, etc.).
    - For tasks where the output has explicit or structured data (categories, JSON, etc.), favor outputting JSON.
    - JSON should never be wrapped in code blocks (```) unless explicitly requested.

The final prompt you output should follow the structure below. Do not include any additional commentary; output only the complete system prompt. In particular, do not add any extra messages at the beginning or end of the prompt (e.g., no "---").

[Concise instruction describing the task - this should be the first line of the prompt, no section heading needed]

[Add more details if needed].

[Optional section with headings or bullets to describe detailed steps.]

# Steps [Optional].

[Optional: Detailed breakdown of the steps required to complete the task]

# Output Format

[Specify output format requirements such as response length, structure (e.g. JSON, markdown, etc.)]

# Example [optional].

[Optional: 1-3 clear examples, if complex elements are required, use placeholders and mark input and output locations. Use parentheses to indicate that the real example should be longer/shorter/different.]

# Note [optional].

[Optional: edge cases, detailed information, or emphasize areas of particular importance for consideration]
[NOTE: you must start with a <reasoning> section; the very next token you produce should be <reasoning>]
""".strip()

def generate_prompt(task_or_prompt: str):
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Task, Goal, or Current Prompt:\n" + task_or_prompt,
            },
        ]
    )

    return completion.choices[0].message.content

Audio meta-prompt for edits

from openai import OpenAI

client = OpenAI()

META_PROMPT = """
Given a current prompt and a change description, produce a detailed system prompt that guides a realtime audio output language model in completing the task effectively.

Your final output will be the full corrected prompt verbatim. However, at the very beginning of your response, use <reasoning> tags to analyze the prompt and determine the following:

- Simple Change: (yes/no) Are the change instructions clear and simple? (If yes, skip the following questions)
- Reasoning: (yes/no) Does the current prompt use reasoning, analysis, or chains of thought?
    - Labeling: (max 10 words) If yes, which parts use reasoning?
    - Conclusion: (yes/no) Is a chain of thought used to reach a conclusion?
    - Order: (before/after) Is the chain of thought before or after the conclusion?
- Structure: (yes/no) Does the input prompt have a clear structure?
- Examples: (yes/no) Does the input prompt contain a few examples?
    - Representativeness: (1-5) If there are examples, how representative are they?
- Complexity: (1-5) What is the complexity of the input prompt?
    - Tasks: (1-5) What is the complexity of the implied tasks?
    - Necessity: ()
- Specificity: (1-5) How detailed and specific are the prompts? (not related to length)
- Prioritization: (list) Which 1-3 categories are most important.
- Conclusion: (30 words max) Briefly describe what needs to change based on the previous assessment. Need not be limited to the categories listed.

# Guidelines

- Understanding the Task: Understand the main objectives, requirements, constraints and expected outputs.
- Tone: Make sure the tone is clearly specified. The default should be emotive and friendly, and fast-speaking so the user isn't kept waiting.
- Audio Output Constraints: Since the model outputs audio, the response should be short and conversational.
- Minimal changes: If an existing prompt is provided, improve it only in simple cases. For complex prompts, enhance clarity and add missing elements without changing the original structure.
- Examples: If helpful, include high-quality examples and use [square brackets] as placeholders for complex elements.
   - What types of examples to include, how many, and whether placeholders are needed.
  - It is important that any examples reflect the model's short, conversational output. By default, sentences should be very short; the assistant should never say three sentences in a row, and should leave room for the user to go back and forth.
  - By default, each sentence should be only around 5-20 words. If the user explicitly requests a "short" response, the examples should be only 1-10 words.
  - Examples should be multi-turn conversations (at least 4 user-assistant back-and-forths), reflecting a natural conversation.
- Clarity and brevity: Use clear, specific language. Avoid unnecessary instructions or bland statements.
- Preserve user content: If the input task or prompt contains detailed instructions or examples, preserve them as much as possible. If the content is vague, consider breaking it down into sub-steps. Retain user-supplied details, instructions, examples, variables, or placeholders.
- Constants: Include constants in prompts that are less susceptible to prompt injection, such as guidelines, scoring criteria, and examples.

The final prompt you output should follow the structure below. Do not include any additional commentary; output only the completed system prompt. In particular, do not add any extra information at the beginning or end of the prompt (e.g., no "---").

[Concise Task Description - this is the first line of the prompt, no need to add a subsection heading]

[Add detailed instructions as needed.]

[Optional section containing detailed steps for headings or bullets.]

# Example [Optional].

[Optional: 1-3 clearly defined examples, using placeholders if necessary. Clearly mark the beginning and end of the examples, as well as inputs and outputs. Use placeholders as necessary.]
[If the example is shorter than the expected real example, mark how the real example should be longer/shorter/different with () and use placeholders!]

# Notes [optional].

[Optional: edge cases, detailed descriptions, and areas to invoke or repeat specific important considerations]
[NOTE: you must start with a <reasoning> section; the very next token you produce should be <reasoning>]
""".strip()

def generate_prompt(task_or_prompt: str):
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Task, Goal, or Current Prompt:\n" + task_or_prompt,
            },
        ]
    )

    return completion.choices[0].message.content

Schemas

Structured Outputs schemas and function schemas are themselves JSON objects, so we use Structured Outputs to generate them. This requires defining a schema for the desired output, which in this case is itself a schema. To do so, we use a self-describing schema: a meta-schema.

Because the parameters field of a function schema is itself a schema, we use the same meta-schema to generate functions.
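As an illustration (a hypothetical function, not taken from the meta-prompts below), note how the parameters field is just an ordinary JSON Schema, which is why the same meta-schema applies:

example_function = {
    "name": "get_weather",                       # illustrative name
    "description": "Look up the weather for a city.",
    "parameters": {                              # this field is itself a JSON Schema
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"}
        },
        "required": ["city"],
        "additionalProperties": False,
    },
}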

Defining a constrained meta-schema

Structured Outputs supports two modes: strict=true and strict=false. Both use the same model trained to follow the provided schema, but only strict mode guarantees perfect adherence through constrained sampling.
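For reference, a minimal strict-mode request against the Chat Completions API looks roughly like this (the schema shown is a trivial stand-in, not one of the meta-schemas):

from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "city_info",          # illustrative schema name
            "strict": True,               # strict mode: constrained sampling guarantees adherence
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "population": {"type": "number"},
                },
                "required": ["city", "population"],
                "additionalProperties": False,
            },
        },
    },
    messages=[{"role": "user", "content": "Tell me about Paris."}],
)
print(completion.choices[0].message.content)  # valid JSON matching the schema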

Our goal is to generate schemas for strict mode using strict mode itself. However, the meta-schema provided by the official JSON Schema specification relies on features that strict mode does not currently support. This creates challenges for both the input and output schemas:

  1. Input schema: We cannot use unsupported features in the input schema to describe the output schema.
  2. Output schema: The generated schema must not contain unsupported features.

Because we need to generate new keys in the output schema, the input meta-schema must use additionalProperties. This means we cannot currently use strict mode to generate schemas. However, we still want the generated schemas to conform to strict mode's constraints.

To overcome this limitation, we define a pseudo-meta-schema: a meta-schema that uses features not supported in strict mode to describe only the features strict mode does support. Essentially, this approach steps outside strict mode to define the meta-schema, while ensuring that the generated schemas stay within strict mode's restrictions.
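To make the idea concrete, the core pattern looks like the fragment below (excerpted in spirit from the meta-schemas that follow): additionalProperties, which generated strict-mode schemas must set to false, is used inside the meta-schema itself to describe objects whose property names are not known in advance.

# Illustrative fragment, not a complete meta-schema: the "properties" keyword of a
# generated schema is described as an object whose values (one per generated property
# name) must themselves be schema definitions.
properties_description = {
    "type": "object",
    "additionalProperties": {            # unsupported in strict-mode outputs, but usable here
        "$ref": "#/$defs/schema_definition"
    },
}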

Deep dive

How we designed the pseudo-meta-schema

Designing a constrained meta-schema is challenging, so we used our own models to help.

We started by describing our goal to o1-preview and gpt-4o in JSON mode, along with the Structured Outputs documentation. After several iterations, we arrived at our first working meta-schema.

We then used gpt-4o with Structured Outputs, providing the initial schema along with our task description and the documentation, to generate better candidate schemas. In each iteration, we used the best schema so far to generate the next one, finishing with a manual review.

Finally, after cleaning up the output, we validated the meta-schema against a set of evals covering both schemas and functions.

Output cleanup

Strict mode guarantees perfect schema adherence. Since we cannot use it during generation, we need to validate and transform the output after generation.

After generating the architecture, we perform the following steps:

  1. Set additionalProperties to false for every object.
  2. Mark all properties as required.
  3. For Structured Outputs schemas, wrap the result in a json_schema object.
  4. For functions, wrap the result in a function object.

Note: The Realtime API's function object is slightly different from the Chat Completions API's, but it uses the same schema.
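A minimal sketch of this post-processing, assuming the generated schema arrives as a plain Python dict (the helper names and the exact Chat Completions wrapper shapes are assumptions of this sketch):

def enforce_strict_rules(node):
    # Recursively set additionalProperties to false and mark every declared property as required.
    if isinstance(node, dict):
        if node.get("type") == "object" and "properties" in node:
            node["additionalProperties"] = False
            node["required"] = list(node["properties"].keys())
        for value in node.values():
            enforce_strict_rules(value)
    elif isinstance(node, list):
        for item in node:
            enforce_strict_rules(item)
    return node

def wrap_as_response_format(generated: dict) -> dict:
    # Wrap a generated schema in a json_schema object for use as a response_format (assumed shape).
    name = generated.pop("name", "generated_schema")
    return {
        "type": "json_schema",
        "json_schema": {"name": name, "strict": True, "schema": enforce_strict_rules(generated)},
    }

def wrap_as_tool(generated_function: dict) -> dict:
    # Wrap a generated function schema in a function object for the tools list (assumed shape).
    generated_function["parameters"] = enforce_strict_rules(generated_function.get("parameters", {}))
    return {"type": "function", "function": generated_function}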

Meta-schemas

Each meta-schema has a corresponding prompt containing a handful of few-shot examples. Combined with the reliability of Structured Outputs, even without strict mode, this lets us use gpt-4o-mini for schema generation.

Structured Outputs meta-schema

from openai import OpenAI
import json
client = OpenAI()
META_SCHEMA = {
    "name": "metaschema",
    "schema": {
        "type": "object",
        "properties": {
            "name": {
                "type": "string",
                "description": "The name of the schema."
            },
            "type": {
                "enum": [
                    "object",
                    "array",
                    "string",
                    "number",
                    "boolean",
                    "null"
                ]
            },
            "properties": {
                "type": "object",
                "additionalProperties": {
                    "$ref": "#/$defs/schema_definition"
                }
            },
            "items": {
                "anyOf": [
                    {
                        "$ref": "#/$defs/schema_definition"
                    },
                    {
                        "type": "array",
                        "items": {
                            "$ref": "#/$defs/schema_definition"
                        }
                    }
                ]
            },
            "required": {
                "type": "array",
                "items": {
                    "type": "string"
                }
            },
            "additionalProperties": {
                "type": "boolean"
            }
        },
        "required": [
            "type"
        ],
        "additionalProperties": False,
        "if": {
            "properties": {
                "type": {
                    "const": "object"
                }
            }
        },
        "then": {
            "required": [
                "properties"
            ]
        },
        "$defs": {
            "schema_definition": {
                "type": "object",
                "properties": {
                    "type": {
                        "type": "string",
                        "enum": [
                            "object",
                            "array",
                            "string",
                            "number",
                            "boolean",
                            "null"
                        ]
                    },
                    "properties": {
                        "type": "object",
                        "additionalProperties": {
                            "$ref": "#/$defs/schema_definition"
                        }
                    },
                    "items": {
                        "anyOf": [
                            {
                                "$ref": "#/$defs/schema_definition"
                            },
                            {
                                "type": "array",
                                "items": {
                                    "$ref": "#/$defs/schema_definition"
                                }
                            }
                        ]
                    },
                    "required": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        }
                    },
                    "additionalProperties": {
                        "type": "boolean"
                    }
                },
                "required": [
                    "type"
                ],
                "additionalProperties": False,
                "if": {
                    "properties": {
                        "type": {
                            "const": "object"
                        }
                    }
                },
                "then": {
                    "required": [
                        "properties"
                    ]
                }
            }
        }
    }
}
META_PROMPT = """
# Instructions
Return a valid schema that describes the requested JSON.

You must also make sure:
- all fields in an object are set as required
- I REPEAT, ALL FIELDS MUST BE MARKED AS REQUIRED
- all objects must have additionalProperties set to false
    - because of this, properties like "attributes" or "metadata" that would normally allow additional properties should instead have a fixed set of properties
- all objects must have properties defined
- field order matters: any form of "thinking" or "explanation" should come before the conclusion
- $defs must be defined under the schema param

Notable keywords NOT supported include:
- For strings: minLength, maxLength, pattern, format
- For numbers: minimum, maximum, multipleOf
- For objects: patternProperties, unevaluatedProperties, propertyNames, minProperties, maxProperties
- For arrays: unevaluatedItems, contains, minContains, maxContains, minItems, maxItems, uniqueItems

Other notes:
- definitions and recursion are supported
- only include references, e.g. "$defs", when necessary; they must be inside the "schema" object

# Examples
Input: Generate a math reasoning schema with steps and a final answer.
Output: {
    "name": "math_reasoning",
    "type": "object",
    "properties": {
        "steps": {
            "type": "array",
            "description": "A sequence of steps involved in answering the math problem.",
            "items": {
                "type": "object",
                "properties": {
                    "explanation": {
                        "type": "string",
                        "description": "Description of the reasoning or method used in this step."
                    },
                    "output": {
                        "type": "string",
                        "description": "Result or output of this particular step."
                    }
                },
                "required": [
                    "explanation",
                    "output"
                ],
                "additionalProperties": false
            }
        },
        "final_answer": {
            "type": "string",
            "description": "The final answer to the math problem."
        }
    },
    "required": [
        "steps",
        "final_answer"
    ],
    "additionalProperties": false
}
Input: Give me a linked list
Output: {
    "name": "linked_list",
    "type": "object",
    "properties": {
        "linked_list": {
            "$ref": "#/$defs/linked_list_node",
            "description": "The head node of the linked list."
        }
    },
    "$defs": {
        "linked_list_node": {
            "type": "object",
            "description": "Defines a node in a singly linked list.",
            "properties": {
                "value": {
                    "type": "number",
                    "description": "The value stored in this node."
                },
                "next": {
                    "anyOf": [
                        {
                            "$ref": "#/$defs/linked_list_node"
                        },
                        {
                            "type": "null"
                        }
                    ],
                    "description": "Reference to the next node; null if it is the last node."
                }
            },
            "required": [
                "value",
                "next"
            ],
            "additionalProperties": false
        }
    },
    "required": [
        "linked_list"
    ],
    "additionalProperties": false
}
Input: Dynamically generated UI
Output: {
    "name": "ui",
    "type": "object",
    "properties": {
        "type": {
            "type": "string",
            "description": "The type of the UI component",
            "enum": [
                "div",
                "section",
                "header",
                "field",
                "form"
            ]
        },
        "label": {
            "type": "string",
            "description": "The label of the UI component, used for buttons or form fields"
        },
        "children": {
            "type": "array",
            "description": "Nested UI components",
            "items": {
                "$ref": "#"
            }
        },
        "attributes": {
            "type": "array",
            "description": "Arbitrary attributes for the UI component, suitable for any element",
            "items": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "The name of the attribute, for example onClick or className"
                    },
                    "value": {
                        "type": "string",
                        "description": "The value of the attribute"
                    }
                },
                "required": [
                    "name",
                    "value"
                ],
                "additionalProperties": false
            }
        }
    },
    "required": [
        "type",
        "label",
        "children",
        "attributes"
    ],
    "additionalProperties": false
}
""".strip()
def generate_schema(description: str):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_schema", "json_schema": META_SCHEMA},
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Description:\n" + description,
            },
        ],
    )

    return json.loads(completion.choices[0].message.content)
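A hypothetical call (the description below is illustrative):

if __name__ == "__main__":
    schema = generate_schema("An action item extracted from meeting notes, with a title, owner, and due date.")
    print(json.dumps(schema, indent=2))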

Functions meta-schema

from openai import OpenAI
import json
client = OpenAI()
META_SCHEMA = {
    "name": "function-metaschema",
    "schema": {
        "type": "object",
        "properties": {
            "name": {
                "type": "string",
                "description": "The name of the function."
            },
            "description": {
                "type": "string",
                "description": "A description of the function."
            },
            "parameters": {
                "$ref": "#/$defs/schema_definition",
                "description": "A JSON schema that defines the function's parameters."
            }
        },
        "required": [
            "name",
            "description",
            "parameters"
        ],
        "additionalProperties": False,
        "$defs": {
            "schema_definition": {
                "type": "object",
                "properties": {
                    "type": {
                        "type": "string",
                        "enum": [
                            "object",
                            "array",
                            "string",
                            "number",
                            "boolean",
                            "null"
                        ]
                    },
                    "properties": {
                        "type": "object",
                        "additionalProperties": {
                            "$ref": "#/$defs/schema_definition"
                        }
                    },
                    "items": {
                        "anyOf": [
                            {
                                "$ref": "#/$defs/schema_definition"
                            },
                            {
                                "type": "array",
                                "items": {
                                    "$ref": "#/$defs/schema_definition"
                                }
                            }
                        ]
                    },
                    "required": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        }
                    },
                    "additionalProperties": {
                        "type": "boolean"
                    }
                },
                "required": [
                    "type"
                ],
                "additionalProperties": False,
                "if": {
                    "properties": {
                        "type": {
                            "const": "object"
                        }
                    }
                },
                "then": {
                    "required": [
                        "properties"
                    ]
                }
            }
        }
    }
}
META_PROMPT = """
# Instructions
Return a valid schema describing the requested function.

Take special care to ensure that "required" and "type" are always at the correct level of nesting. For example, "required" should be at the same level as "properties", not inside it.
Make sure that every property, no matter how short, has a type and description.

# Examples
Input: Assign values to neural network hyperparameters
Output: {
    "name": "set_hyperparameters",
    "description": "Assign values to neural network hyperparameters.",
    "parameters": {
        "type": "object",
        "required": [
            "learning_rate",
            "epochs"
        ],
        "properties": {
            "epochs": {
                "type": "number",
                "description": "Number of complete passes through the dataset"
            },
            "learning_rate": {
                "type": "number",
                "description": "The model's learning rate"
            }
        }
    }
}
Input: Plan a motion path for the robot
Output: {
    "name": "plan_motion",
    "description": "Plan a motion path for the robot.",
    "parameters": {
        "type": "object",
        "required": [
            "start_position",
            "end_position"
        ],
        "properties": {
            "end_position": {
                "type": "object",
                "properties": {
                    "x": {
                        "type": "number",
                        "description": "End position x-coordinate"
                    },
                    "y": {
                        "type": "number",
                        "description": "End position y-coordinate"
                    }
                }
            },
            "obstacles": {
                "type": "array",
                "description": "Array of obstacle coordinates",
                "items": {
                    "type": "object",
                    "properties": {
                        "x": {
                            "type": "number",
                            "description": "Obstacle x-coordinate"
                        },
                        "y": {
                            "type": "number",
                            "description": "Obstacle y-coordinate"
                        }
                    }
                }
            },
            "start_position": {
                "type": "object",
                "properties": {
                    "x": {
                        "type": "number",
                        "description": "Start position x-coordinate"
                    },
                    "y": {
                        "type": "number",
                        "description": "Start position y-coordinate"
                    }
                }
            }
        }
    }
}
Input: Calculate various technical indicators
Output: {
    "name": "technical_indicator",
    "description": "Calculate various technical indicators.",
    "parameters": {
        "type": "object",
        "required": [
            "ticker",
            "indicators"
        ],
        "properties": {
            "indicators": {
                "type": "array",
                "description": "List of technical indicators to calculate",
                "items": {
                    "type": "string",
                    "description": "Technical indicator",
                    "enum": [
                        "RSI",
                        "MACD",
                        "Bollinger_Bands",
                        "Stochastic_Oscillator"
                    ]
                }
            },
            "period": {
                "type": "number",
                "description": "Time period for the analysis"
            },
            "ticker": {
                "type": "string",
                "description": "Ticker symbol"
            }
        }
    }
}
""".strip()
def generate_function_schema(description: str):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_schema", "json_schema": META_SCHEMA},
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Description:\n" + description,
            },
        ],
    )

    return json.loads(completion.choices[0].message.content)
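And a hypothetical call for function generation; the result can then be wrapped as a Chat Completions tool:

if __name__ == "__main__":
    function_schema = generate_function_schema("Book a restaurant table for a given date, time, and party size.")
    tool = {"type": "function", "function": function_schema}
    print(json.dumps(tool, indent=2))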