Agent-to-Agent (A2A) orchestration is where multiple autonomous agents collaborate by passing tasks, context, or results to one another. Instead of one monolithic agent, you compose a workflow of specialized agents. This is very relevant to LangGraph (or LangChain agents) where you model workflows as graphs.
Let me walk you through it step by step with sample code.
🔹 Conceptual Flow
- ResearchAgent
  - Takes a query.
  - Fetches/searches data (stubbed in the example).
  - Produces structured output.
- SummarizerAgent
  - Takes the research output.
  - Summarizes it into digestible text.
- ReviewerAgent
  - Checks the summary for clarity and correctness.
  - Adds the final polish.
- Orchestrator (LangGraph / simple Python driver)
  - Connects these agents in sequence.
  - Ensures one agent's output feeds the next.
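Before reaching for LangGraph, the same hand-off pattern can be sketched as a plain Python driver, matching the "simple Python driver" option above. This is a minimal sketch with all three agents stubbed out (no LLM calls); the function names mirror the agents above but are illustrative, not library APIs:

```python
# Minimal A2A driver: each agent is a function that reads from and
# writes to a shared state dict; the orchestrator runs them in order.

def research_agent(state):
    # Stubbed research step; a real agent would search or fetch data here.
    return {"research": f"3 facts about '{state['query']}'"}

def summarizer_agent(state):
    # Stubbed summarizer; a real agent would call an LLM here.
    return {"summary": f"Summary of: {state['research']}"}

def reviewer_agent(state):
    # Stubbed reviewer; a real agent would call an LLM here.
    return {"final": f"Polished: {state['summary']}"}

def run_pipeline(query):
    state = {"query": query}
    for agent in (research_agent, summarizer_agent, reviewer_agent):
        state.update(agent(state))  # explicit hand-off via shared state
    return state

state = run_pipeline("Quantum computing in education")
print(state["final"])
```

Each agent only reads the keys it needs and returns the keys it produces; the driver is the only place the ordering lives. LangGraph formalizes exactly this pattern.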
🔹 Minimal Python Example (LangGraph)

```python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

# --- Define Agents ---
llm = ChatOpenAI(model="gpt-4o-mini")

def research_agent(state):
    query = state["query"]
    return {"research": f"Stub: Collected 3 facts about '{query}'."}

def summarizer_agent(state):
    research = state["research"]
    response = llm.invoke(f"Summarize this research: {research}")
    return {"summary": response.content}

def reviewer_agent(state):
    summary = state["summary"]
    response = llm.invoke(f"Review and polish this summary: {summary}")
    return {"final": response.content}

# --- Build Orchestration Graph ---
workflow = StateGraph(dict)
workflow.add_node("research", research_agent)
workflow.add_node("summarizer", summarizer_agent)
workflow.add_node("reviewer", reviewer_agent)

workflow.set_entry_point("research")
workflow.add_edge("research", "summarizer")
workflow.add_edge("summarizer", "reviewer")
workflow.add_edge("reviewer", END)

graph = workflow.compile()

# --- Run Example ---
result = graph.invoke({"query": "Quantum computing in education"})
print(result["final"])
```
🔹 How Orchestration Happens (Step-by-Step)
- State Management: Each agent updates the shared `state` dict, which makes the hand-off explicit. (LangGraph manages this shared state automatically.)
- Directed Graph: `workflow.add_edge("A", "B")` defines the A→B flow. Agents only need to know their inputs/outputs, not each other.
- LLM + Function Nodes: Some nodes make LLM calls (`ChatOpenAI`); others can be deterministic code (`research_agent`).
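In practice the shared state is usually declared as a `TypedDict` rather than a bare `dict`, so each key is explicit; you would then pass that class to `StateGraph(...)` in place of `dict`. A sketch of the schema and of the per-node merge LangGraph performs (pure `typing`, no LangGraph required; `PipelineState` is an illustrative name):

```python
from typing import TypedDict

class PipelineState(TypedDict, total=False):
    # total=False lets each agent return only the keys it produces;
    # the orchestrator merges each partial update into the full state.
    query: str
    research: str
    summary: str
    final: str

# Simulate the merge that happens after one node runs:
state: PipelineState = {"query": "Quantum computing in education"}
update: PipelineState = {"research": "Stub: collected 3 facts."}
state = {**state, **update}
print(sorted(state.keys()))
```

A typed schema also catches misspelled keys early, which matters once several agents share one state object.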
🔹 Why This Is A2A Orchestration
- Autonomy: Each agent is independent, doesn’t know who comes next.
- Orchestration: The graph coordinates their execution.
- Composability: Swap out `SummarizerAgent` for another summarizer without touching the orchestration logic.
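The composability point can be demonstrated directly: because agents agree only on state keys, swapping one implementation for another changes the pipeline list, not the driver. A stubbed sketch (agent names are illustrative):

```python
def research_agent(state):
    return {"research": f"facts about '{state['query']}'"}

def prose_summarizer(state):
    return {"summary": f"Prose summary of {state['research']}"}

def bullet_summarizer(state):
    # Drop-in replacement: same input key ("research"), same output key ("summary").
    return {"summary": f"- bullet summary of {state['research']}"}

def reviewer_agent(state):
    return {"final": f"Reviewed: {state['summary']}"}

def run(agents, query):
    state = {"query": query}
    for agent in agents:
        state.update(agent(state))
    return state

# Swapping the summarizer touches only the pipeline list, never run():
out_a = run([research_agent, prose_summarizer, reviewer_agent], "AI")
out_b = run([research_agent, bullet_summarizer, reviewer_agent], "AI")
```

In LangGraph terms, this is `workflow.add_node("summarizer", bullet_summarizer)` instead of the original node function, with every edge left unchanged.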