# Session 2: Role Design and Inter-Agent Communication

## Synopsis
Covers how to define agent roles, communication protocols, message formats, and shared objectives. Learners examine how clear role boundaries reduce confusion and duplication in collaborative systems.
## Session Content
## Session Overview
In this session, learners will explore how to design effective roles for GenAI agents and enable clear communication patterns between them. The focus is on building a small multi-agent workflow in Python using the OpenAI Responses API and the gpt-5.4-mini model. By the end of the session, learners will understand how role design affects agent behavior, how to structure message passing between agents, and how to orchestrate simple agent collaboration patterns.
## Duration
~45 minutes
## Learning Objectives
By the end of this session, learners will be able to:
- Explain why role design matters in agentic systems
- Define distinct responsibilities for multiple agents
- Implement structured inter-agent communication in Python
- Use the OpenAI Responses API to build agent-to-agent workflows
- Evaluate the benefits and trade-offs of multi-agent designs
## 1. Why Role Design Matters
In single-agent systems, one model handles everything: planning, reasoning, writing, reviewing, and summarizing. This can work for simple tasks, but as workflows become more complex, assigning specialized roles often leads to more predictable and maintainable systems.
### Common Agent Roles
Some common roles in agentic applications include:
- Planner: Breaks tasks into steps
- Researcher: Gathers or organizes information
- Writer: Produces user-facing output
- Reviewer: Critiques, validates, or improves output
- Router: Chooses which agent should act next
- Tool Specialist: Calls external systems or APIs
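The roles above can be sketched as plain Python data: a mapping from role name to a one-line system prompt that an orchestrator looks up before each call. The prompt wording here is a hypothetical illustration, not a fixed convention.

```python
# Illustrative sketch: the common roles listed above, each mapped to a
# one-line system prompt. The exact wording is a made-up example.
ROLE_PROMPTS = {
    "planner": "Break the user's request into 3-5 actionable steps. Do not answer it.",
    "researcher": "Gather and organize the information needed to answer the request.",
    "writer": "Produce the final user-facing answer from the plan and notes.",
    "reviewer": "Critique the draft for correctness and clarity; return structured feedback.",
    "router": "Decide which specialist role should handle the request next.",
    "tool_specialist": "Call external systems or APIs and report the results.",
}

# An orchestrator looks up the prompt for whichever role it runs next.
print(ROLE_PROMPTS["planner"])
```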
### Benefits of Clear Role Design
- Better separation of concerns
- Easier debugging
- More reusable components
- More predictable outputs
- Simpler evaluation of agent behavior
### Risks of Poor Role Design
- Role overlap and duplication
- Agents producing inconsistent outputs
- Confusion in handoff points
- Excessive token usage from unnecessary discussion
- Harder orchestration logic
### Good Role Prompt Characteristics
A well-designed role prompt should include:
- The agent’s purpose
- Its allowed scope
- Expected output format
- Constraints or guardrails
- When to defer or hand off
### Example Role Definitions
- Planner: “Break the user’s request into 3–5 actionable steps. Do not write the final answer.”
- Writer: “Write a concise final response based only on the approved plan and notes.”
- Reviewer: “Check the writer’s output for correctness, clarity, and missing details. Return structured feedback.”
## 2. Inter-Agent Communication Patterns

When multiple agents collaborate, they need a communication pattern. The simplest approach is to have your Python code orchestrate all handoffs.

### Common Communication Patterns

#### A. Sequential Pipeline

One agent produces output, and the next agent uses it.

Example:
1. Planner creates a plan
2. Writer drafts a response
3. Reviewer critiques the draft

#### B. Hub-and-Spoke

A central orchestrator manages all communication.

Example:
- The orchestrator sends task context to each agent
- The orchestrator stores outputs
- Agents never directly talk to each other

This is the most practical pattern for Python applications.

#### C. Iterative Review Loop

A writer and reviewer repeatedly refine output until criteria are met.

#### D. Router Pattern

A classifier or router agent decides which specialist agent should handle the task.
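To make the control flow concrete before any API calls, here is a minimal sketch of the sequential and router patterns using stub agents (plain functions). The function names and the routing rule are illustrative assumptions; the point is that ordinary Python stays in charge of every handoff, which is also what hub-and-spoke looks like in practice.

```python
# Stub "agents": plain functions standing in for model calls, so the
# orchestration logic is visible on its own. All names are illustrative.
def planner(task: str) -> str:
    return f"plan for: {task}"

def writer(context: str) -> str:
    return f"draft based on: {context}"

def reviewer(draft: str) -> dict:
    return {"approved": True, "issues": []}

# A. Sequential pipeline. This function is also the hub in a
# hub-and-spoke design: the agents never talk to each other directly.
def run_pipeline(task: str) -> str:
    plan = planner(task)
    draft = writer(plan)
    review = reviewer(draft)
    return draft if review["approved"] else "needs revision"

# D. Router: plain Python picks which path a task takes.
def route(task: str) -> str:
    if task.lower().startswith("write"):
        return writer(task)        # simple tasks go straight to the writer
    return run_pipeline(task)      # everything else gets the full pipeline

print(route("Explain decorators"))
```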
## 3. Designing Structured Agent Messages

Free-form text works, but structured outputs are easier to pass between agents.

### Why Structure Matters

Structured outputs help with:
- Reliable parsing
- Easier debugging
- Validation of agent outputs
- Reduced ambiguity in handoffs

### Recommended Structure

Use JSON-like schemas in prompts and parse outputs safely in Python.

Example structure for a planner:

```json
{
  "goal": "string",
  "steps": ["step 1", "step 2", "step 3"],
  "risks": ["risk 1", "risk 2"]
}
```

Example structure for a reviewer:

```json
{
  "approved": true,
  "issues": ["issue 1"],
  "suggestions": ["suggestion 1"]
}
```

### Prompting for Structured Output

A strong prompt often includes:
- A clear role
- Exact schema
- Instructions to avoid extra prose
- A reminder to stay within scope

Example:

```
Return valid JSON with keys: goal, steps, risks.
Do not include markdown fences or extra commentary.
```
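Once an agent replies in the planner schema, a few lines of Python can check the structure before handing it on. The `raw_reply` string below is a stand-in for model output; in a real run it would come from the API.

```python
import json

# Stand-in for a planner reply; in a real workflow this comes from the model.
raw_reply = '{"goal": "explain decorators", "steps": ["define", "show example", "explain syntax"], "risks": []}'

plan = json.loads(raw_reply)

# Verify the keys and types the next agent relies on before using the plan.
if set(plan) == {"goal", "steps", "risks"} and isinstance(plan["steps"], list):
    print("valid plan with", len(plan["steps"]), "steps")
else:
    print("reject: plan does not match the schema")
```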
## 4. Hands-On Exercise 1: Build Two Specialized Agents

### Goal

Create two agents:
- A Planner Agent that turns a user request into a short plan
- A Writer Agent that turns the plan into a polished answer

### What You Will Learn

- How to define role prompts
- How to call the OpenAI Responses API in Python
- How one agent’s output becomes another agent’s input

### Setup

Install the OpenAI Python SDK:

```bash
pip install openai python-dotenv
```

Create a `.env` file:

```
OPENAI_API_KEY=your_api_key_here
```

### Python Code
"""
Session 2 - Exercise 1
Build two specialized agents:
1. Planner Agent
2. Writer Agent
This example uses the OpenAI Responses API with model gpt-5.4-mini.
"""
import os
from dotenv import load_dotenv
from openai import OpenAI
# Load environment variables from .env
load_dotenv()
# Create the OpenAI client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
def run_agent(system_prompt: str, user_prompt: str) -> str:
"""
Runs a single agent using the OpenAI Responses API.
Args:
system_prompt: The role and behavior definition for the agent
user_prompt: The task input for the agent
Returns:
The text output from the model
"""
response = client.responses.create(
model="gpt-5.4-mini",
input=[
{
"role": "system",
"content": [
{
"type": "input_text",
"text": system_prompt,
}
],
},
{
"role": "user",
"content": [
{
"type": "input_text",
"text": user_prompt,
}
],
},
],
)
return response.output_text
# Define specialized role prompts
planner_system_prompt = """
You are a Planner Agent.
Your job is to break a user request into a short, actionable plan.
Rules:
- Do not write the final answer.
- Return exactly 3 bullet points.
- Each bullet should be a clear action step.
- Keep the plan concise.
""".strip()
writer_system_prompt = """
You are a Writer Agent.
Your job is to write a polished response using the provided plan.
Rules:
- Use the plan as your guide.
- Write clearly and concisely.
- Do not mention that you are using a plan.
- Produce a user-facing answer only.
""".strip()
def main():
# Example user task
user_request = "Explain to a beginner how Python decorators work."
print("USER REQUEST:")
print(user_request)
print("\n" + "=" * 60)
# Step 1: Planner creates a plan
planner_output = run_agent(
system_prompt=planner_system_prompt,
user_prompt=user_request,
)
print("PLANNER OUTPUT:")
print(planner_output)
print("\n" + "=" * 60)
# Step 2: Writer uses the plan to create a final answer
writer_input = f"""
User request:
{user_request}
Plan from Planner Agent:
{planner_output}
""".strip()
writer_output = run_agent(
system_prompt=writer_system_prompt,
user_prompt=writer_input,
)
print("WRITER OUTPUT:")
print(writer_output)
if __name__ == "__main__":
main()
### Example Output

````
USER REQUEST:
Explain to a beginner how Python decorators work.

============================================================
PLANNER OUTPUT:
- Define what a decorator is and explain that it wraps another function.
- Show a simple example where a decorator adds behavior before or after a function call.
- Explain the use of `@decorator_name` syntax and why decorators are useful.

============================================================
WRITER OUTPUT:
A Python decorator is a function that takes another function and extends or changes its behavior without modifying the original function directly. This is useful when you want to add reusable functionality, such as logging, timing, or access control.

Here is a simple example:

```python
def my_decorator(func):
    def wrapper():
        print("Before the function runs")
        func()
        print("After the function runs")
    return wrapper

@my_decorator
def say_hello():
    print("Hello!")

say_hello()
```

When you run this code, Python applies my_decorator to say_hello. The @my_decorator syntax is a shortcut for:

```python
say_hello = my_decorator(say_hello)
```

Decorators are useful because they let you add common behavior to many functions in a clean and reusable way.
````
### Exercise Tasks
1. Change the user request to:
- “Explain list comprehensions to a beginner”
- “Write a short introduction to REST APIs”
2. Modify the planner to return 5 steps instead of 3
3. Update the writer prompt so it produces:
- A beginner-friendly explanation
- One code example
- A short summary
---
## 5. Hands-On Exercise 2: Add a Reviewer Agent
### Goal
Extend the pipeline with a third agent:
- **Planner**
- **Writer**
- **Reviewer**
The reviewer will evaluate the writer’s answer and provide structured feedback.
### What You Will Learn
- How to add a review stage
- How to design a critique-oriented role
- How to create simple quality checks in multi-agent systems
### Python Code
```python
"""
Session 2 - Exercise 2
Add a Reviewer Agent to a multi-agent pipeline.
This script demonstrates:
1. Planning
2. Writing
3. Reviewing
"""
import os
import json
from dotenv import load_dotenv
from openai import OpenAI
# Load environment variables
load_dotenv()
# Create API client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
def run_agent(system_prompt: str, user_prompt: str) -> str:
"""
Executes an agent with a system prompt and user prompt.
Args:
system_prompt: Agent role definition
user_prompt: Task input
Returns:
Text output from the model
"""
response = client.responses.create(
model="gpt-5.4-mini",
input=[
{
"role": "system",
"content": [{"type": "input_text", "text": system_prompt}],
},
{
"role": "user",
"content": [{"type": "input_text", "text": user_prompt}],
},
],
)
return response.output_text
planner_system_prompt = """
You are a Planner Agent.
Return exactly 3 bullet points describing how to answer the user's request.
Do not write the final answer.
""".strip()
writer_system_prompt = """
You are a Writer Agent.
Write a clear answer based only on the user request and the provided plan.
Include:
- a simple explanation
- one short example
- a brief conclusion
""".strip()
reviewer_system_prompt = """
You are a Reviewer Agent.
Review the writer's response for:
- correctness
- clarity
- completeness
Return valid JSON only with this structure:
{
"approved": true,
"issues": ["issue1", "issue2"],
"suggestions": ["suggestion1", "suggestion2"]
}
Rules:
- Use true or false for approved.
- If there are no issues, return an empty list for issues.
- Do not include markdown fences.
- Do not add extra text.
""".strip()
def safe_parse_json(text: str) -> dict:
"""
Safely parse JSON output from the model.
Args:
text: Raw text expected to contain JSON
Returns:
Parsed dictionary, or a fallback structure if parsing fails
"""
try:
return json.loads(text)
except json.JSONDecodeError:
return {
"approved": False,
"issues": ["Reviewer returned invalid JSON."],
"suggestions": ["Update the reviewer prompt or add stricter parsing."],
}
def main():
user_request = "Explain how Python context managers work."
print("USER REQUEST:")
print(user_request)
print("\n" + "=" * 60)
# Step 1: Planner
plan = run_agent(planner_system_prompt, user_request)
print("PLAN:")
print(plan)
print("\n" + "=" * 60)
# Step 2: Writer
writer_input = f"""
User request:
{user_request}
Plan:
{plan}
""".strip()
draft = run_agent(writer_system_prompt, writer_input)
print("DRAFT:")
print(draft)
print("\n" + "=" * 60)
# Step 3: Reviewer
reviewer_input = f"""
User request:
{user_request}
Draft response:
{draft}
""".strip()
review_raw = run_agent(reviewer_system_prompt, reviewer_input)
review = safe_parse_json(review_raw)
print("REVIEW RESULT:")
print(json.dumps(review, indent=2))
if __name__ == "__main__":
main()
### Example Output

````
USER REQUEST:
Explain how Python context managers work.

============================================================
PLAN:
- Define a context manager and describe its purpose in resource handling.
- Show a simple example using the `with` statement.
- Explain how context managers help ensure cleanup happens automatically.

============================================================
DRAFT:
A context manager in Python is a tool that helps you set up and clean up resources automatically. It is commonly used with files, database connections, or locks. The main idea is that Python handles the cleanup for you, even if an error occurs.

Here is a simple example:

```python
with open("example.txt", "r") as file:
    contents = file.read()
    print(contents)
```

In this example, Python opens the file and assigns it to file. When the block ends, Python automatically closes the file. This makes code safer and cleaner.

In short, context managers are useful because they simplify resource management and reduce the chance of forgetting cleanup steps.

============================================================
REVIEW RESULT:
{
  "approved": true,
  "issues": [],
  "suggestions": [
    "Optionally mention that custom context managers can be created with classes or the contextlib module."
  ]
}
````
### Exercise Tasks

- Change the reviewer schema to also include a numeric score from 1 to 5
- Make the writer revise its draft if `approved` is `false`
- Add a maximum of 2 review-revision loops
## 6. Hands-On Exercise 3: Implement a Review-Revision Loop

### Goal

Build a simple orchestration loop where the writer improves its response based on reviewer feedback.

### What You Will Learn

- How to use agent feedback iteratively
- How to stop a loop with clear conditions
- How to keep agent responsibilities focused

### Python Code
"""
Session 2 - Exercise 3
Implement a writer-reviewer loop with a maximum number of revisions.
"""
import os
import json
from dotenv import load_dotenv
from openai import OpenAI
# Load configuration
load_dotenv()
# Initialize client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
def run_agent(system_prompt: str, user_prompt: str) -> str:
"""
Call an OpenAI model with a role prompt and task prompt.
Args:
system_prompt: The agent's role instructions
user_prompt: The task-specific content
Returns:
The model's text output
"""
response = client.responses.create(
model="gpt-5.4-mini",
input=[
{
"role": "system",
"content": [{"type": "input_text", "text": system_prompt}],
},
{
"role": "user",
"content": [{"type": "input_text", "text": user_prompt}],
},
],
)
return response.output_text
def safe_parse_json(text: str) -> dict:
"""
Safely parse JSON, returning a fallback if parsing fails.
"""
try:
return json.loads(text)
except json.JSONDecodeError:
return {
"approved": False,
"score": 1,
"issues": ["Invalid JSON from reviewer."],
"suggestions": ["Tighten the reviewer output constraints."],
}
writer_system_prompt = """
You are a Writer Agent.
Write a clear beginner-friendly response.
Include:
- a simple explanation
- one short Python example
- a short summary
If reviewer feedback is provided, revise the answer to address all issues.
""".strip()
reviewer_system_prompt = """
You are a Reviewer Agent.
Evaluate the draft for:
- correctness
- clarity
- completeness
- beginner friendliness
Return valid JSON only in this format:
{
"approved": true,
"score": 5,
"issues": ["issue1"],
"suggestions": ["suggestion1"]
}
Rules:
- approved must be true or false
- score must be an integer from 1 to 5
- do not include markdown fences
- do not add any extra commentary
""".strip()
def main():
user_request = "Explain Python generators to a beginner."
max_revisions = 2
draft = ""
for attempt in range(max_revisions + 1):
if attempt == 0:
writer_input = f"""
User request:
{user_request}
Write the initial draft.
""".strip()
else:
writer_input = f"""
User request:
{user_request}
Previous draft:
{draft}
Reviewer feedback:
{json.dumps(review, indent=2)}
Revise the draft to address the feedback.
""".strip()
draft = run_agent(writer_system_prompt, writer_input)
review_input = f"""
User request:
{user_request}
Draft:
{draft}
""".strip()
review_raw = run_agent(reviewer_system_prompt, review_input)
review = safe_parse_json(review_raw)
print(f"\nATTEMPT {attempt + 1}")
print("=" * 60)
print("DRAFT:")
print(draft)
print("\nREVIEW:")
print(json.dumps(review, indent=2))
if review.get("approved") is True:
print("\nFinal draft approved.")
break
else:
print("\nReached maximum revisions without approval.")
if __name__ == "__main__":
main()
### Example Output

````
ATTEMPT 1
============================================================
DRAFT:
A generator in Python is a special kind of function that returns values one at a time instead of all at once. This can save memory and make your programs more efficient when working with large amounts of data.

Here is a simple example:

```python
def count_up_to(n):
    for i in range(1, n + 1):
        yield i

for number in count_up_to(3):
    print(number)
```

This code prints one number at a time. The yield keyword pauses the function and remembers its state so it can continue later.

In summary, generators are useful when you want to produce values lazily instead of storing everything in memory at once.

REVIEW:
{
  "approved": true,
  "score": 5,
  "issues": [],
  "suggestions": [
    "Optionally mention that generators return iterator objects."
  ]
}

Final draft approved.
````
### Exercise Tasks

- Change the user request to:
  - “Explain Python iterators”
  - “Explain `*args` and `**kwargs`”
- Add logic to save each draft and review in a Python list
- Print a final history report after the loop completes
## 7. Best Practices for Inter-Agent Communication

### Keep Roles Narrow

A role should do one thing well. If a single agent plans, writes, and reviews, role boundaries become unclear.
### Use Explicit Handoffs
Pass clearly labeled context between agents, for example:
- Original user request
- Planner output
- Draft version
- Reviewer feedback
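One way to keep these handoffs explicit is a single context object that every orchestration step reads from and writes to. This is an illustrative sketch; the field names simply mirror the list above and are not a required format.

```python
from dataclasses import dataclass, field

# Illustrative container for the labeled handoffs listed above.
@dataclass
class HandoffContext:
    user_request: str
    plan: str = ""
    draft: str = ""
    reviewer_feedback: list = field(default_factory=list)

ctx = HandoffContext(user_request="Explain Python generators to a beginner.")
ctx.plan = "- define generators\n- show an example\n- summarize"  # set by the planner step
print(ctx.plan)
```

Passing one object like this also makes it easy to log exactly what each agent saw.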
### Prefer Structured Outputs for Control Points
Whenever Python needs to make a decision, such as whether to continue looping, use structured data like JSON.
### Keep the Orchestrator in Charge
Let your application code control:
- Which agent runs next
- When the workflow stops
- How outputs are stored
- What happens on failures
### Add Validation
Always validate structured outputs before using them in logic.
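As a sketch, a stricter companion to `safe_parse_json` can check types as well as keys before a review dict drives control flow. The expected keys match the reviewer schema used in the exercises.

```python
# Sketch: validate a parsed review before using it in loop logic.
def validate_review(review: dict) -> bool:
    return (
        isinstance(review.get("approved"), bool)
        and isinstance(review.get("issues"), list)
        and isinstance(review.get("suggestions"), list)
    )

print(validate_review({"approved": True, "issues": [], "suggestions": []}))   # True
print(validate_review({"approved": "yes", "issues": [], "suggestions": []}))  # False: wrong type
```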
### Limit Iterations
Review loops should have a maximum number of retries to prevent runaway cost and latency.
## 8. Common Design Mistakes

### Mistake 1: Vague Role Definitions

**Bad:** “Help solve the problem.”

**Better:** “Produce a 3-step plan for answering the user’s request. Do not write the final answer.”
### Mistake 2: Passing Too Much Context
Too much context can make outputs noisy and expensive. Pass only what the next agent needs.
### Mistake 3: Letting Agents Decide Everything
If the model decides all control flow, the system becomes harder to test and debug. Keep logic in Python.
### Mistake 4: Unstructured Review Output
If the reviewer returns long prose, your code must guess what to do next. Prefer JSON for machine-readable decisions.
## 9. Mini Challenge
Build a 3-agent system for this task:
“Create a beginner-friendly explanation of Python virtual environments.”
### Requirements
- Planner returns 3 bullet points
- Writer produces the explanation
- Reviewer returns JSON with:
  - `approved`
  - `score`
  - `issues`
  - `suggestions`

### Stretch Goal

If `score` is less than 4, revise the draft once.
## 10. Recap
In this session, you learned how to:
- Design focused agent roles
- Structure communication between agents
- Build a sequential multi-agent workflow
- Add a reviewer for quality control
- Implement a revision loop with stop conditions
These patterns form the foundation of more advanced agentic systems, where multiple specialized agents collaborate under application-level orchestration.
## Useful Resources
- OpenAI Responses API guide: https://developers.openai.com/api/docs/guides/migrate-to-responses
- OpenAI API reference: https://platform.openai.com/docs/api-reference
- OpenAI Python SDK: https://github.com/openai/openai-python
- Python `json` module: https://docs.python.org/3/library/json.html
- Python `dotenv` package: https://pypi.org/project/python-dotenv/
## Suggested Homework
- Refactor the examples into reusable Python classes:
  - `PlannerAgent`
  - `WriterAgent`
  - `ReviewerAgent`
- Add logging for:
  - Prompt inputs
  - Raw outputs
  - Parsed review decisions
- Build a router agent that decides whether a task needs:
  - only a writer
  - planner + writer
  - planner + writer + reviewer
- Compare single-agent vs. multi-agent behavior for the same prompt and note:
  - output quality
  - latency
  - code complexity
  - token usage