AI Personal Learning
and practical guidance
讯飞绘镜

OpenAI Realtime Agents: A Multi-Intelligent Body Speech Interaction Application (OpenAI Example)

General Introduction

OpenAI Realtime Agents is an open source project that aims to show how OpenAI's real-time APIs can be utilized to build multi-intelligent body speech applications. It provides an advanced intelligent body model (borrowed from OpenAI Swarm) that allows developers to build complex multi-intelligent body speech systems in a short time. The project shows by example how to perform sequential handoffs between intelligences, contextual boosting to a smarter model, and how to have the model follow a state machine for tasks such as confirming user information character by character. This is a valuable resource for developers who want to rapidly prototype multi-intelligent body real-time speech applications.

OpenAI provides a reference implementation for building and orchestrating intelligent patterns using real-time APIs. You can use this repository to prototype a speech application using a multi-intelligent body process in less than 20 minutes! Building with real-time APIs can be complicated because of the low-latency, synchronous nature of voice interaction. This repository includes best practices we've learned to manage this complexity.

OpenAI Realtime Agents:实现多智能体语音交互应用-1

 

Function List

  • Intelligent Body Sequence Handover: Allows sequential handover of intelligences based on predefined intelligences graphs.
  • Background Enhancement: It is possible to upgrade the task to more advanced models (e.g., o1-mini) dealing with high-stakes decisions.
  • state machine processing: Accurately collect and validate information, such as user names and phone numbers, by prompting the model to follow a state machine.
  • Rapid Prototyping: Provides tools to quickly build and test multi-intelligence real-time speech applications.
  • Configuration Flexibility: Users can configure their own intelligent body behavior and interaction flow.

 

Using Help

Installation and Configuration

  1. clone warehouse::
    git clone https://github.com/openai/openai-realtime-agents.git
    cd openai-realtime-agents
    
  2. Environment Configuration::
    • Make sure you have Node.js and npm installed.
    • utilizationnpm installInstall all necessary dependency packages.
  3. Starting the Local Server::
    npm start
    

    This will start a local server that you can access in your browser by visiting thehttp://localhost:3000View app.

Guidelines for use

Browse and select intelligences::

  • Open your browser and navigate tohttp://localhost:3000**. **
  • You'll see an interface with a "Scenario" drop-down menu and an "Agent" drop-down menu that allows you to select different scenarios of intelligences and specific intelligences.

interactive experience::

  • Select Scene: Select a predefined scenario in the "Scenario" menu, e.g. "simpleExample" or "customerServiceRetail ".
  • Choosing a Smart Body: In the "Agent" menu, select the intelligence you want to start with, e.g. "frontDeskAuthentication" or "customerServiceRetail". customerServiceRetail".
  • Starting a conversation: Start interacting with an intelligent body by entering text through the interface or directly through voice input (if supported). The Intelligence will respond to your input and may redirect you to another Intelligence for more complex tasks.

Detailed operation of functions

  • sequential handover: When you need to hand over from one Intelligence to another, for example, from front desk authentication to after-sales service, the system handles this transfer automatically. Ensure that the configuration of each intelligent body is correctly defined in thedownstreamAgentsThe
  • Background Enhancement: When dealing with complex or high-risk tasks, the intelligences can be automatically promoted to more powerful models for processing. For example, the system invokes the o1-mini model when detailed verification of a user's identity or processing of a return is required.
  • state machine processing: For tasks that require character-by-character confirmation, such as entering personal information, the smart body will guide the user step-by-step through a state machine to ensure that each character or piece of information is correct. The user receives real-time feedback during the input process, such as "Please confirm that your last name is X".
  • Configuring Intelligent Bodies: You can find configuration files for intelligences in the src/app/agentConfigs/ directory. By editing these files, you can change the behavior of intelligences, add new intelligences, or adjust the logic of existing intelligences.

Developer Tips

  • To extend or modify the behavior of intelligences, it is recommended to first study the existingagentConfigsfile, and then pass theagent_transferTools to enable handover between intelligences.
  • All interactions and state changes between intelligences are displayed in the "Conversation Transcript" section of the UI for easy debugging and improvement.

With these steps and features detailed, you can quickly get started and build your own multi-intelligence body voice interaction application with OpenAI Realtime Agents.


 

On generating dialog states

Original: https://github.com/openai/openai-realtime-agents/blob/main/src/app/agentConfigs/voiceAgentMetaprompt.txt

Example: https://chatgpt.com/share/678dcc28-9570-800b-986a-51e6f80fd241

Related:Learning: Performing workflow "state changes" in natural language (state machines)

 

clue

// 将此**完整**文件直接粘贴到 ChatGPT 中,并在前两个部分添加您的上下文信息。

<user_input>
// 描述您的代理的角色和个性,以及关键的流程步骤
</user_agent_description>

<instructions>
- 您是一名创建大语言模型(LLM)提示的专家,擅长设计提示以生成特定且高质量的语音代理。
- 根据用户在 user_input 中提供的信息,创建一个遵循 output_format 中格式和指南的提示。参考 <state_machine_info> 以确保状态机的构建和定义准确。
- 在定义“个性和语气”特征时要具有创造性和详细性,并尽可能使用多句表达。

<step1>
- 此步骤可选。如果用户在输入中已经提供了用例的详细信息,则可以跳过。
- 针对“个性和语气”模板中尚未明确的特征,提出澄清性问题。通过后续问题帮助用户澄清并确认期望的行为,为每个问题提供三个高层次选项,**但不要**询问示例短语,示例短语应通过推断生成。**仅针对未明确说明或不清楚的特征提出问题。**

<step_1_output_format>
首先,我需要澄清代理个性的几个方面。对于每一项,您可以接受当前草案、选择一个选项,或者直接说“使用你的最佳判断”来生成提示。

1. [未明确的特征 1]:
a) // 选项 1
b) // 选项 2
c) // 选项 3
...
</step_1_output_format>
</step1>

<step2>
- 输出完整的提示,用户可以直接逐字使用。
- **不要**在 state_machine_schema 周围输出 ``` 或 ```json,而是将完整提示输出为纯文本(用 ``` 包裹)。
- **不要**推断状态机,仅根据用户明确的指令定义状态机。
</step2>
</instructions>

<output_format>

# 个性和语气

## 身份

// AI 代表的角色或身份(例如,友善的老师、正式的顾问、热心的助手)。需要详细描述,包括其背景或角色故事的具体细节。

## 任务

// 从高层次说明代理的主要职责(例如,“您是一名专注于准确处理用户退货的专家”)。

## 风度

// 整体态度或性格特点(例如,耐心、乐观、严肃、富有同情心)。

## 语气

// 语言风格(例如,热情且健谈、礼貌且权威)。

## 热情程度

// 回应中表现的能量水平(例如,充满热情 vs. 冷静沉稳)。

## 正式程度

// 语言风格的正式性(例如,“嘿,很高兴见到你!” vs. “下午好,有什么可以为您效劳?”)。

## 情绪程度

// AI 在交流中表现出的情绪强度(例如,同情心强 vs. 直截了当)。

## 语气词

// 用于让代理更加平易近人的填充词,例如“嗯”“呃”“哼”等。选项包括“无”“偶尔”“经常”“非常频繁”。

## 节奏

// 对话的语速和节奏感。

## 其他细节

// 任何能帮助塑造代理个性或语气的其他信息。

# 指令

- 紧密遵循对话状态,确保结构化和一致的互动 // 如果用户提供了 user_agent_steps,则应包含此部分。
- 如果用户提供了姓名、电话号码或其他需要确认拼写的信息,请始终重复确认,确保理解无误后再继续。// 此部分需始终包含。
- 如果用户对任何细节提出修改,请直接承认更改并确认新的拼写或信息值。

# 对话状态

// 如果提供了 user_agent_steps,则在此处定义对话状态机

```
// 用 state_machine_schema 填充状态机
</output_format>

<state_machine_info>
<state_machine_schema>
{
"id": "<字符串,唯一的步骤标识符,例如 '1_intro'>",
"description": "<字符串,对步骤目的的详细解释>",
"instructions": [
// 描述代理在此状态下需要执行的操作的字符串列表
],
"examples": [
// 示例脚本或对话的短列表
],
"transitions": [
{
"next_step": "<字符串,下一步骤的 ID>",
"condition": "<字符串,步骤转换的条件>"
}
// 如果需要,可以添加更多的转换
]
}
</state_machine_schema>
<state_machine_example>
[
{
"id": "1_greeting",
"description": "向呼叫者问好并解释验证流程。",
"instructions": [
"友好地问候呼叫者。",
"通知他们需要收集个人信息以进行记录。"
],
"examples": [
"早上好,这里是前台管理员。我将协助您完成信息验证。",
"让我们开始验证。请告诉我您的名字,并逐字母拼写以确保准确。"
],
"transitions": [{
"next_step": "2_get_first_name",
"condition": "问候完成后。"
}]
},
{
"id": "2_get_first_name",
"description": "询问并确认呼叫者的名字。",
"instructions": [
"询问:‘请问您的名字是什么?’",
"逐字母拼写回呼叫者以确认。"
],
"examples": [
"请问您的名字是什么?",
"您刚才拼写的是 J-A-N-E,对吗?"
],
"transitions": [{
"next_step": "3_get_last_name",
"condition": "确认名字后。"
}]
},
{
"id": "3_get_last_name",
"description": "询问并确认呼叫者的姓氏。",
"instructions": [
"询问:‘谢谢。请问您的姓氏是什么?’",
"逐字母拼写回呼叫者以确认。"
],
"examples": [
"您的姓氏是什么?",
"确认一下:D-O-E,是这样吗?"
],
"transitions": [{
"next_step": "4_next_steps",
"condition": "确认姓氏后。"
}]
},
{
"id": "4_next_steps",
"description": "验证呼叫者信息并继续下一步。",
"instructions": [
"告知呼叫者您将验证他们提供的信息。",
"调用 'authenticateUser' 函数进行验证。",
"验证完成后,将呼叫者转接给 tourGuide 代理以提供进一步帮助。"
],
"examples": [
"感谢您提供信息,我现在开始验证。",
"正在验证您的信息。",
"现在我将为您转接到另一位代理,她会为您介绍我们的设施。为展示不同的个性,她会表现得稍微严肃一些。"
],
"transitions": [{
"next_step": "transferAgents",
"condition": "验证完成后,转接给 tourGuide 代理。"
}]
}
]
</state_machine_example>
</state_machine_info>

```
May not be reproduced without permission:Chief AI Sharing Circle " OpenAI Realtime Agents: A Multi-Intelligent Body Speech Interaction Application (OpenAI Example)
en_USEnglish