General Introduction
Dynamiq is an open-source AI orchestration framework for agentic AI and Large Language Model (LLM) applications. It is designed to simplify the development of AI-driven applications, especially Retrieval-Augmented Generation (RAG) and the orchestration of LLM agents. Dynamiq provides a rich set of functional modules and detailed documentation to help developers get up to speed quickly and build complex AI applications efficiently.
Key Concepts
Dynamiq enables AI to solve real-world problems by combining the reasoning capabilities of LLMs (the brain) with the ability to perform specific actions (the hands). This lets an AI system understand a task, reason about it, and take practical action, much as a human would.
Definition of ReAct:
- ReAct is a pattern that combines the reasoning capabilities of LLMs with the ability to perform actions
- It enables AI to understand, plan, and interact with the real world
How ReAct agents work — they integrate two key components:
- Brain (reasoning capabilities provided by the LLM)
- Hands (the ability to perform actions through tools)
Framework components:
- Task
- Agent (intelligence, including the LLM and tools)
- Environment
- Response
Practical application example:
The authors illustrate the workflow of a ReAct agent with a scenario that determines whether an umbrella is needed:
- Receive a task from the user asking whether to bring an umbrella
- Use a tool to check the weather report
- Reason over the observations
- Return a suggested answer
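The umbrella scenario above can be sketched as a toy ReAct-style loop in plain Python. This is illustrative only — it is not Dynamiq's API, the "brain" is scripted rather than an LLM, and `check_weather` is a stub standing in for a real tool:

```python
# Toy ReAct loop for the umbrella scenario (illustrative, not Dynamiq code).

def check_weather(city):
    # Stand-in tool: a real implementation would call a weather API.
    return {"city": city, "forecast": "rain", "chance_of_rain": 0.8}

def react_umbrella_agent(task):
    # Thought: the task asks about an umbrella, so check the weather.
    observation = check_weather("London")  # Action + Observation
    # Reason over the observation to produce the final answer.
    if observation["chance_of_rain"] > 0.5:
        pct = int(observation["chance_of_rain"] * 100)
        return f"Yes, bring an umbrella: rain is likely ({pct}% chance)."
    return "No umbrella needed: the forecast looks dry."

print(react_umbrella_agent("Do I need an umbrella today?"))
```

In a real ReAct agent, the LLM produces the "Thought" and chooses the tool; here those steps are hard-coded to show the loop's shape.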
The Dynamiq framework (shared by Akshay):
Dynamiq is a comprehensive framework for next-generation AI development. It focuses on streamlining the development of AI applications, with key capabilities for orchestrating and managing RAG pipelines and LLM agent systems.
Main features:
- All-in-one: a one-stop framework that integrates the various tools and features needed to develop AI applications
Areas of specialization:
- Orchestration of RAG systems
- Management of LLM agents
- Optimization of the AI application development process
Positioning:
- Acts as an orchestration framework, focusing on coordinating and managing individual AI components
- Built for agentic AI applications
- Simplifies the complexity developers face when building AI applications
Function List
- Installation and configuration: Provides a detailed installation guide for Python environments.
- Documentation and examples: Rich documentation and sample code to help users get started quickly.
- Simple LLM workflow: Provides simple LLM workflow examples for easy understanding and use.
- ReAct agent: An agent that supports complex coding tasks via an integrated E2B code interpreter.
- Multi-agent orchestration: Supports multi-agent collaboration to solve complex tasks.
- RAG document indexing and retrieval: Supports preprocessing, vector embedding, and storage of PDF documents, as well as retrieval of related documents and question answering.
- Chatbot with memory: A simple chatbot that stores and retrieves conversation history.
Usage Guide
Installation and configuration
- Installing Python: Make sure you have Python installed on your computer.
- Installing Dynamiq:
pip install dynamiq
Or build from source code:
git clone https://github.com/dynamiq-ai/dynamiq.git
cd dynamiq
poetry install
Usage Examples
Simple LLM Process
Below is a simple LLM workflow example:
from dynamiq.nodes.llms.openai import OpenAI
from dynamiq.connections import OpenAI as OpenAIConnection
from dynamiq import Workflow
from dynamiq.prompts import Prompt, Message
# Define a translation prompt template
prompt_template = """
Translate the following text into English: {{ text }}
"""
prompt = Prompt(messages=[Message(content=prompt_template, role="user")])
# Setting up the LLM node
llm = OpenAI(
    id="openai",
    connection=OpenAIConnection(api_key="$OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.3,
    max_tokens=1000,
    prompt=prompt,
)
# Create the workflow object
workflow = Workflow()
workflow.flow.add_nodes(llm)
# Run the workflow
result = workflow.run(input_data={"text": "Hola Mundo!"})
print(result.output)
ReAct Agent
The following is an example of a ReAct agent that supports complex coding tasks:
from dynamiq.nodes.llms.openai import OpenAI
from dynamiq.connections import OpenAI as OpenAIConnection, E2B as E2BConnection
from dynamiq.nodes.agents.react import ReActAgent
from dynamiq.nodes.tools.e2b_sandbox import E2BInterpreterTool
# Initialize the E2B tool
e2b_tool = E2BInterpreterTool(connection=E2BConnection(api_key="$API_KEY"))
# Setting up the LLM
llm = OpenAI(
    id="openai",
    connection=OpenAIConnection(api_key="$API_KEY"),
    model="gpt-4o",
    temperature=0.3,
    max_tokens=1000,
)
# Create the ReAct agent
agent = ReActAgent(
    name="react-agent",
    llm=llm,
    tools=[e2b_tool],
    role="Senior Data Scientist",
    max_loops=10,
)
# Run the agent
result = agent.run(input_data={"input": "Add the first 10 numbers and tell if the result is prime."})
print(result.output.get("content"))
Multi-agent orchestration
The following is an example of multiple agents working together:
from dynamiq.connections import OpenAI as OpenAIConnection, ScaleSerp as ScaleSerpConnection, E2B as E2BConnection
from dynamiq.nodes.llms import OpenAI
from dynamiq.nodes.agents.orchestrators.adaptive import AdaptiveOrchestrator
from dynamiq.nodes.agents.orchestrators.adaptive_manager import AdaptiveAgentManager
from dynamiq.nodes.agents.react import ReActAgent
from dynamiq.nodes.agents.reflection import ReflectionAgent
from dynamiq.nodes.tools.e2b_sandbox import E2BInterpreterTool
from dynamiq.nodes.tools.scale_serp import ScaleSerpTool
# Initialize tools
python_tool = E2BInterpreterTool(connection=E2BConnection(api_key="$E2B_API_KEY"))
search_tool = ScaleSerpTool(connection=ScaleSerpConnection(api_key="$SCALESERP_API_KEY"))
# Initialize LLM
llm = OpenAI(connection=OpenAIConnection(api_key="$OPENAI_API_KEY"), model="gpt-4o", temperature=0.1)
# Define the agent
coding_agent = ReActAgent(
    name="coding-agent",
    llm=llm,
    tools=[python_tool],
    role="Expert agent with coding skills. Goal is to provide the solution to the input task using Python software engineering skills.",
    max_loops=15,
)
planner_agent = ReflectionAgent(
    name="planner-agent",
    llm=llm,
    role="Expert agent with planning skills. Goal is to analyze complex requests and provide a detailed action plan.",
)
search_agent = ReActAgent(
    name="search-agent",
    llm=llm,
    tools=[search_tool],
    role="Expert agent with web search skills. Goal is to provide the solution to the input task using web search and summarization skills.",
    max_loops=10,
)
# Initialize the Adaptive Agent Manager
agent_manager = AdaptiveAgentManager(llm=llm)
# Create the orchestrator
orchestrator = AdaptiveOrchestrator(
    name="adaptive-orchestrator",
    agents=[coding_agent, planner_agent, search_agent],
    manager=agent_manager,
)
# Define the input task
input_task = (
"Use coding skills to gather data about Nvidia and Intel stock prices for the last 10 years, "
"calculate the average per year for each company, and create a table. Then craft a report "
"and add a conclusion: what would have been better if I had invested $100 ten years ago?"
)
# Run the orchestrator
result = orchestrator.run(input_data={"input": input_task})
print(result.output.get("content"))
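The orchestration idea above — a manager that routes each task to the best-suited agent — can be sketched in plain Python. This is a toy illustration of the routing concept, not Dynamiq's `AdaptiveAgentManager`: a real manager asks an LLM to pick the agent, whereas here a crude keyword rule stands in:

```python
# Toy sketch of adaptive orchestration (illustrative, not Dynamiq's logic).

def coding_agent(task):
    return f"[coding-agent] wrote code for: {task}"

def search_agent(task):
    return f"[search-agent] searched the web for: {task}"

AGENTS = {"code": coding_agent, "search": search_agent}

def manager(task):
    # A real manager would ask an LLM which agent fits the task;
    # here a keyword rule stands in for that decision.
    name = "code" if "calculate" in task.lower() else "search"
    return AGENTS[name](task)

print(manager("Calculate average stock prices"))
print(manager("Find the latest Nvidia news"))
```

The orchestrator's value is exactly this separation: agents stay single-purpose, and the manager owns the decision of who handles what.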
RAG document indexing and retrieval
Dynamiq supports Retrieval-Augmented Generation (RAG), which is accomplished in two steps:
- Document preprocessing: convert input PDF files into vector embeddings and store them in a vector database.
- Document retrieval: retrieve relevant documents and generate answers based on user queries.
Below is a simple example of a RAG workflow:
from io import BytesIO
from dynamiq import Workflow
from dynamiq.connections import OpenAI as OpenAIConnection, Pinecone as PineconeConnection
from dynamiq.nodes.converters import PyPDFConverter
from dynamiq.nodes.splitters.document import DocumentSplitter
from dynamiq.nodes.embedders import OpenAIDocumentEmbedder
from dynamiq.nodes.writers import PineconeDocumentWriter
# Initialize Workflow
rag_wf = Workflow()
# PDF document converter
converter = PyPDFConverter(document_creation_mode="one-doc-per-page")
rag_wf.flow.add_nodes(converter)
# Document Splitter
document_splitter = (
DocumentSplitter(split_by="sentence", split_length=10, split_overlap=1)
.inputs(documents=converter.outputs.documents)
.depends_on(converter)
)
rag_wf.flow.add_nodes(document_splitter)
# OpenAI vector embedding
embedder = (
OpenAIDocumentEmbedder(connection=OpenAIConnection(api_key="$OPENAI_API_KEY"), model="text-embedding-3-small")
.inputs(documents=document_splitter.outputs.documents)
.depends_on(document_splitter)
)
rag_wf.flow.add_nodes(embedder)
# Pinecone vector store
vector_store = (
PineconeDocumentWriter(connection=PineconeConnection(api_key="$PINECONE_API_KEY"), index_name="default", dimension=1536)
.inputs(documents=embedder.outputs.documents)
.depends_on(embedder)
)
rag_wf.flow.add_nodes(vector_store)
# prepare input PDF files
file_paths = ["example.pdf"]
input_data = {
"files": [BytesIO(open(path, "rb").read()) for path in file_paths],
"metadata": [{"filename": path} for path in file_paths],
}
# Run the RAG indexing process
rag_wf.run(input_data=input_data)
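The workflow above covers indexing; the retrieval step then embeds the user's query and ranks stored chunks by vector similarity. The following is a toy, dependency-free illustration of that ranking step — the vectors and texts are made up, and this is not Dynamiq's retriever API:

```python
# Toy illustration of the RAG retrieval step: score stored chunks by
# cosine similarity to the query embedding and return the best matches.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend these vectors came from an embedding model during indexing.
store = {
    "Dynamiq supports RAG workflows.": [0.9, 0.1, 0.0],
    "The weather is sunny today.":     [0.0, 0.2, 0.9],
}

def retrieve(query_vec, top_k=1):
    # Rank every stored chunk by similarity to the query vector.
    ranked = sorted(store.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

print(retrieve([1.0, 0.0, 0.1]))
```

In a real Dynamiq RAG workflow, the query embedding and vector search are handled by embedder and retriever nodes backed by the vector store (Pinecone in the example above), and the retrieved chunks are passed to an LLM to generate the answer.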
Chatbot with Memory
Below is an example of a simple chatbot with a memory function:
from dynamiq.nodes.llms import OpenAI
from dynamiq.connections import OpenAI as OpenAIConnection
from dynamiq.memory import Memory
from dynamiq.memory.backend.in_memory import InMemory
from dynamiq.nodes.agents.simple import SimpleAgent

AGENT_ROLE = "helpful assistant, goal is to provide useful information and answer questions"

llm = OpenAI(
    connection=OpenAIConnection(api_key="$OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.1,
)

memory = Memory(backend=InMemory())

agent = SimpleAgent(
    id="agent",
    name="Agent",
    llm=llm,
    role=AGENT_ROLE,
    memory=memory,
)

def main():
    print("Welcome to the AI Chat! (Type 'exit' to end)")
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            break
        response = agent.run({"input": user_input})
        response_content = response.output.get("content")
        print(f"AI: {response_content}")

if __name__ == "__main__":
    main()
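Conceptually, an in-memory backend like the one used above is just a per-process message store that is replayed into the prompt on each turn. The sketch below shows that idea in plain Python — it is illustrative only, and much simpler than Dynamiq's `InMemory` backend:

```python
# Minimal sketch of an in-memory chat history backend (illustrative only).
class InMemoryHistory:
    def __init__(self):
        self.messages = []  # lives only in process memory; lost on restart

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def as_context(self):
        # Flatten the stored turns into a prompt context string.
        return "\n".join(f"{m['role']}: {m['content']}" for m in self.messages)

history = InMemoryHistory()
history.add("user", "Hola Mundo!")
history.add("assistant", "Hello World!")
print(history.as_context())
```

Because the history lives only in memory, it disappears when the process exits; persistent backends (e.g. a database) trade that simplicity for durable conversations.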