Agent Orchestrator

Built with Microsoft Agent Framework

A2A Protocol v0.3.0 Compliant Server

Powered by the Microsoft Agent Framework, this orchestrator enables seamless agent-to-agent communication using the standardized A2A protocol. Built on top of FastAPI and the official A2A SDK, it provides dynamic workflow orchestration, multi-agent coordination, and comprehensive health monitoring for distributed agent systems.

The orchestrator uses Agent Framework Workflows, a graph-based architecture for defining complex agent interactions. Workflows are composed of nodes (representing agents) and edges (representing connections and message flow between agents), enabling sophisticated multi-step reasoning and task delegation patterns.
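The nodes-and-edges graph described above can be sketched as plain JSON-serializable structures, in the same shape the WORKFLOW_NODES and WORKFLOW_EDGES variables expect (a hedged example; the "end" node is illustrative, and "research-agent" mirrors the example configuration further down this page):

```python
import json

# Minimal workflow graph: start -> agent -> end.
nodes = [
    {"id": "start", "type": "start"},
    {"id": "agent1", "type": "agent", "agent_ref": "research-agent"},
    {"id": "end", "type": "end"},
]
edges = [
    {"source": "start", "target": "agent1"},  # start hands the task to the agent
    {"source": "agent1", "target": "end"},    # agent output terminates the flow
]

# Serialized, these become the environment-variable values.
workflow_nodes_env = json.dumps(nodes)
workflow_edges_env = json.dumps(edges)
print(workflow_nodes_env)
print(workflow_edges_env)
```

Every edge's source and target must name a declared node id; the orchestrator follows the edges to decide which agent runs next.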

GitHub Repository Documentation

API Endpoints & Documentation

Explore the A2A protocol endpoints, agent cards, and workflow orchestration APIs.
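As one way to explore the agent card programmatically, the sketch below fetches it over HTTP. The well-known path follows the A2A spec convention, but the exact path and port for this server are assumptions; confirm them against the API documentation linked above:

```python
import json
import urllib.request

# Assumed well-known location of the A2A agent card; verify against this server.
AGENT_CARD_PATH = "/.well-known/agent-card.json"

def agent_card_url(base_url: str, path: str = AGENT_CARD_PATH) -> str:
    """Build the agent-card URL from the orchestrator's base URL."""
    return base_url.rstrip("/") + path

def fetch_agent_card(base_url: str) -> dict:
    """Download and decode the JSON agent card (requires a running server)."""
    with urllib.request.urlopen(agent_card_url(base_url)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_agent_card("http://localhost:5015"))
```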


Agent Orchestrator Dashboard

Live monitoring of all subsystems and services. Updates every 30 seconds.



🚀 Quick Start: Chat with Agent

Use this curl command to send a message to the orchestrator agent via the A2A protocol. The agent will return a task ID that you can use to check the status and results.

Terminal Command
curl -X POST http://localhost:5015/invoke \
-H "Content-Type: application/json" \
-d '{"input": {"message": "Explain quantum computing in simple terms"}}'
✅ Response:
{
  "task_id": "abc-123...",
  "task_url": "http://localhost:5015/tasks/abc-123...",
  "status": {"state": "running"}
}
📝 Next Step: Click or curl the task_url to check status and get results.
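The same invoke-then-poll flow can be scripted in Python using only the standard library (a hedged sketch: the request and response shapes mirror the curl example above, but the terminal task states beyond "running" are assumptions):

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:5015"  # local development port from the example

def build_invoke_payload(message: str) -> dict:
    """Request body matching the curl Quick Start example."""
    return {"input": {"message": message}}

def post_json(url: str, payload: dict) -> dict:
    """POST a JSON body and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def invoke_and_wait(message: str, poll_seconds: float = 2.0) -> dict:
    """Invoke the orchestrator, then poll task_url until the task stops running."""
    task = post_json(f"{BASE_URL}/invoke", build_invoke_payload(message))
    while task["status"]["state"] == "running":
        time.sleep(poll_seconds)
        with urllib.request.urlopen(task["task_url"]) as resp:
            task = json.load(resp)
    return task

if __name__ == "__main__":
    print(invoke_and_wait("Explain quantum computing in simple terms"))
```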

⚙️ Workflow Configuration

Configure the agent orchestrator to run multi-agent workflows.

Environment Variables

Configure these parameters to enable full orchestrator functionality:

Workflow Configuration

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| WORKFLOW_NODES | Required | [] | JSON array defining workflow nodes. Each node represents a step in the workflow (start, agent, end). |
| WORKFLOW_EDGES | Required | [] | JSON array defining connections between nodes. Controls workflow execution flow. |
| WORKFLOW_AGENT_LIST | Required | [] | JSON array of A2A agents. Defines available agents with their URLs and capabilities. |

Azure OpenAI Configuration

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| LLM_ENDPOINT | Required | "" | Azure OpenAI endpoint URL (e.g., https://your-resource.openai.azure.com/) |
| LLM_KEY | Required | "" | Azure OpenAI API key for authentication |
| LLM_DEPLOYMENT_NAME | Optional | gpt-4o | Azure OpenAI deployment name |
| LLM_VERSION | Optional | 2024-02-01 | Azure OpenAI API version |
| LLM_MODEL | Optional | gpt-4 | Azure OpenAI model name |

Embedding Service Configuration

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| EMBEDDING_BASE_URL | Optional | "" | Base URL for the embedding service |
| EMBEDDING_API_KEY | Optional | "" | API key for embedding service authentication |
| EMBEDDING_PROVIDER | Optional | azure | Embedding provider (azure, openai, etc.) |

Database Configuration

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| DB_HOST or PG_HOST | Optional | localhost | PostgreSQL database host |
| DB_PORT or PG_PORT | Optional | 5432 | PostgreSQL database port |
| DB_NAME | Optional | orchestrator | PostgreSQL database name |
| DB_USERNAME or PG_USER | Optional | postgres | PostgreSQL database username |
| DB_PASSWORD or PG_PASSWORD | Optional | "" | PostgreSQL database password |

Server Configuration

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| PORT | Optional | 8080 | Server port (5015 for local development) |
| HOST | Optional | 0.0.0.0 | Server host address |
| ENVIRONMENT | Optional | development | Application environment (development, production) |
| DEBUG | Optional | false | Enable debug mode |

Example Configuration

.env file
# Workflow Configuration
WORKFLOW_NODES='[{"id":"start","type":"start"},{"id":"agent1","type":"agent","agent_ref":"research-agent"}]'
WORKFLOW_EDGES='[{"source":"start","target":"agent1"}]'
WORKFLOW_AGENT_LIST='[{"agentName":"research-agent","agentUrl":"http://localhost:5016"}]'

# Azure OpenAI Configuration
LLM_ENDPOINT=https://your-openai.openai.azure.com/
LLM_KEY=your-api-key-here
LLM_DEPLOYMENT_NAME=gpt-4o
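Because the three workflow variables are JSON strings, a quick sanity check before starting the server can catch dangling references. This is a hedged sketch that only uses the field names shown in the example above, and it assumes the .env file has already been exported into the process environment (e.g. via a dotenv loader):

```python
import json
import os

def load_workflow_config(env=os.environ):
    """Parse and cross-check WORKFLOW_NODES, WORKFLOW_EDGES, WORKFLOW_AGENT_LIST."""
    nodes = json.loads(env.get("WORKFLOW_NODES", "[]"))
    edges = json.loads(env.get("WORKFLOW_EDGES", "[]"))
    agents = json.loads(env.get("WORKFLOW_AGENT_LIST", "[]"))

    # Every edge must connect two declared node ids.
    node_ids = {n["id"] for n in nodes}
    for e in edges:
        if e["source"] not in node_ids or e["target"] not in node_ids:
            raise ValueError(f"edge references unknown node: {e}")

    # Every agent node must resolve to a registered A2A agent.
    agent_names = {a["agentName"] for a in agents}
    for n in nodes:
        if n.get("type") == "agent" and n.get("agent_ref") not in agent_names:
            raise ValueError(f"node {n['id']} references unknown agent")

    return nodes, edges, agents
```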
📚 Need Help?
For detailed workflow configuration and examples, see DETAILS.md.