General Introduction
PydanticAI is a Python agent framework built on Pydantic and designed to simplify the development of generative AI applications. Developed by the Pydantic team, it supports a wide range of models (e.g. OpenAI, Gemini, Groq) and combines agents with type-safe control flow. PydanticAI improves the reliability of generative AI applications through structured response validation and streaming responses. Its dependency injection system makes testing and iterative development easier, and it integrates with Logfire for debugging and for monitoring application performance. PydanticAI is currently in early beta, the API is subject to change, and user feedback is welcome.
Feature List
- Multi-model support: Compatible with OpenAI, Gemini, Groq, and many other generative AI models.
- Type safety: Uses Pydantic for structured response validation to keep data types safe.
- Dependency injection system: Provides type-safe dependency injection for easier testing and iterative development.
- Streaming responses: Supports streamed, validated responses to improve application responsiveness and reliability.
- Logfire integration: For debugging and monitoring the performance and behavior of generative AI applications.
- Simple interface: Offers a clean interface that is easy to extend and integrate with other models.
How to Use
Installation Process
- Install PydanticAI: Make sure you are running Python 3.9 or above, then install PydanticAI with the following command:
pip install pydantic-ai
- Install dependencies: PydanticAI depends on a number of core libraries and LLM client packages; these dependencies are handled automatically during installation.
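To confirm the installation worked, a quick import check is usually enough (a minimal sketch; it assumes the package exposes a __version__ attribute):
# Sanity check: import the package and print its version
import pydantic_ai
print(pydantic_ai.__version__)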
Usage Guidelines
Creating a Simple Agent
- Define the agent: Create a simple agent and specify the model to use.
from pydantic_ai import Agent
agent = Agent(
    'gemini-1.5-flash',
    system_prompt='Be concise, reply with one sentence.',
)
- Run the agent: Run the agent synchronously for a simple conversation.
result = agent.run_sync('Where does "hello world" come from?')
print(result.data)
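The feature list also highlights streaming responses. Besides run_sync, an agent can stream its output as it is generated; the sketch below follows the run_stream and stream_text API described in the early-beta docs, so exact names may differ in your installed version:
import asyncio
from pydantic_ai import Agent
agent = Agent('gemini-1.5-flash', system_prompt='Be concise, reply with one sentence.')
async def stream_example():
    # run_stream is used as an async context manager; stream_text yields text as it arrives
    async with agent.run_stream('Where does "hello world" come from?') as response:
        async for chunk in response.stream_text():
            print(chunk)
asyncio.run(stream_example())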
Complex Agent Example
- Define dependency and result models: Use a dataclass for the dependencies and a Pydantic model for the result.
from dataclasses import dataclass
from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext
from bank_database import DatabaseConn
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn
class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description='Risk level of query', ge=0, le=10)
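Because SupportResult is an ordinary Pydantic model, its constraints (for example, risk must lie between 0 and 10) are enforced whenever the result is validated. A quick illustration with made-up values:
from pydantic import ValidationError
try:
    # risk=11 violates the le=10 constraint, so Pydantic raises a ValidationError
    SupportResult(support_advice='Check your statement.', block_card=False, risk=11)
except ValidationError as exc:
    print(exc)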
- Create the support agent: Define the system prompt and tool functions.
support_agent = Agent(
    'openai:gpt-4o',
    deps_type=SupportDependencies,
    result_type=SupportResult,
    system_prompt=(
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query.'
    ),
)
@support_agent.system_prompt
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"
@support_agent.tool
async def customer_balance(ctx: RunContext[SupportDependencies], include_pending: bool) -> float:
    """Returns the customer's current account balance."""
    return await ctx.deps.db.customer_balance(id=ctx.deps.customer_id, include_pending=include_pending)
- Run Support Agent: Run the agent with the defined dependencies.
async def main():
    deps = SupportDependencies(customer_id=123, db=DatabaseConn())
    result = await support_agent.run('What is my balance?', deps=deps)
    print(result.data)
    # Sample output: support_advice='Hello John, your current account balance, including pending transactions, is $123.45.' block_card=False risk=1
    result = await support_agent.run('I just lost my card!', deps=deps)
    print(result.data)
    # Sample output: support_advice="I'm sorry to hear that, John. Your card has been blocked for security reasons." block_card=True risk=7
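To execute the example, run the coroutine on an event loop with the standard library's asyncio:
import asyncio
if __name__ == '__main__':
    asyncio.run(main())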
Debugging and Monitoring
PydanticAI integrates with Logfire for debugging and for monitoring the performance and behavior of generative AI applications. With Logfire, you can observe agent runs in real time, analyze response data, and optimize application performance.
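A rough sketch of wiring this up, assuming the separate logfire package is installed and configured for your project; depending on the versions involved, agent runs may also need explicit instrumentation, so treat this as illustrative rather than exact:
import logfire
from pydantic_ai import Agent
# One-time setup: send traces and logs to your Logfire project.
logfire.configure()
agent = Agent('openai:gpt-4o', system_prompt='Be concise.')
# With Logfire configured, agent runs can then be inspected in the Logfire dashboard
# (the exact instrumentation hook may differ between versions).
result = agent.run_sync('Hello!')
print(result.data)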
Feedback and Improvement
PydanticAI is currently in early beta and the API is subject to change. Users are welcome to submit feedback via GitHub if they encounter any problems or have suggestions for improvement during use.