Session 3: Coordinator, Specialist, and Router Patterns
Synopsis
Explains architectural patterns in which one component delegates work to specialized subcomponents, tools, or prompts. Learners understand how to organize complexity by assigning clear responsibilities within a system.
Session Content
Session Overview
In this session, you will learn three foundational multi-agent design patterns used in GenAI applications:
- Coordinator pattern: one agent manages the workflow and delegates tasks
- Specialist pattern: dedicated agents handle specific domains or skills
- Router pattern: an agent or rule-based layer decides where requests should go
These patterns are essential for building reliable, maintainable, and scalable agentic systems in Python.
Learning Objectives
By the end of this session, you will be able to:
- Explain the difference between coordinator, specialist, and router patterns
- Implement each pattern in Python using the OpenAI Responses API
- Design workflows that split tasks across multiple agents
- Choose the right pattern for a given use case
- Build a simple multi-agent system with practical routing logic
Agenda (~45 minutes)
- 0–10 min: Theory: why multi-agent patterns matter
- 10–20 min: Coordinator pattern
- 20–30 min: Specialist pattern
- 30–38 min: Router pattern
- 38–45 min: Hands-on mini project: combine the patterns
1. Why These Patterns Matter
As GenAI applications become more capable, a single prompt to a single model is often not enough.
Common problems include:
- Prompts becoming too large and hard to maintain
- One model response trying to do too many things at once
- Difficulty enforcing domain-specific behavior
- Poor observability when complex workflows fail
- Higher cost due to unnecessary model calls
Multi-agent patterns help address these issues by separating responsibilities.
Real-World Analogy
Imagine a consulting firm:
- A coordinator project manager receives the client request and breaks the work into steps
- Specialists such as a finance expert or legal expert handle their own parts
- A router receptionist sends incoming requests to the right team
This same idea works well for LLM-powered systems.
2. Pattern 1: Coordinator
What It Is
A coordinator is the central agent or control layer that:
- understands the user request
- decides what work needs to happen
- calls other agents or functions
- combines the results into a final answer
The coordinator usually does not perform every task itself. Its main job is orchestration.
When to Use It
Use a coordinator when:
- a request has multiple steps
- work must happen in a specific order
- outputs from one step become inputs to another
- you want one place to manage state, retries, and logging
Example Use Cases
- Research assistant that gathers facts, summarizes, and drafts an answer
- Support workflow that classifies the issue, retrieves the relevant policy, and drafts a response
- Content pipeline that plans, writes, and reviews text
Coordinator Responsibilities
A coordinator commonly handles:
- task decomposition
- state management
- sequencing
- error handling
- response composition
Strengths
- clear orchestration logic
- easier debugging
- centralized control
- reusable specialist components
Trade-Offs
- can become a bottleneck
- may be overly complex if too much logic is centralized
- needs careful prompt and workflow design
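The orchestration idea can be sketched without any model calls. The specialist functions below are hypothetical stubs, not part of any API; a real coordinator would replace them with model or tool calls like the ones in the exercises later in this session:

```python
# Minimal coordinator sketch using plain Python stubs instead of model calls.
# The stub specialists are illustrative placeholders, not real APIs.

def research_step(request: str) -> str:
    """Stub specialist: pretend to gather facts for the request."""
    return f"facts about: {request}"

def summarize_step(facts: str) -> str:
    """Stub specialist: pretend to condense the gathered facts."""
    return f"summary of ({facts})"

def coordinator(request: str) -> str:
    """Orchestrate: sequence the steps, pass outputs along, compose a result."""
    facts = research_step(request)       # step 1: gather
    summary = summarize_step(facts)      # step 2: condense (input = step 1 output)
    return f"Final answer based on {summary}"  # step 3: compose

print(coordinator("pricing trends"))
```

Notice that the coordinator contains no domain logic of its own; it only sequences and composes, which is the core of the pattern.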
3. Pattern 2: Specialist
What It Is
A specialist is an agent designed for a narrow task or domain.
Examples:
- code reviewer
- SQL assistant
- compliance reviewer
- math explainer
- summarizer
- sentiment classifier
Instead of making one general-purpose prompt do everything, you create focused agents with clear instructions.
Why Specialists Work Well
Specialists improve:
- consistency
- prompt clarity
- output quality
- maintainability
- testability
Design Principles for Specialists
A good specialist should have:
- a narrow responsibility
- a stable instruction set
- a predictable output format
- limited scope
- clear input/output expectations
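These principles can be captured in code with a small factory that binds a stable instruction set to a callable. `call_model` and `make_specialist` below are illustrative names chosen for this sketch, not part of the OpenAI SDK; in practice `call_model` would wrap `client.responses.create` as in the exercises that follow:

```python
# Sketch: turn a fixed instruction set into a narrow specialist callable.
# `call_model` is a placeholder that would wrap a real model call in practice.

def call_model(instructions: str, user_input: str) -> str:
    """Placeholder model call: echoes the role and the input."""
    return f"[{instructions}] {user_input}"

def make_specialist(instructions: str):
    """Bind one stable instruction set so each specialist has one narrow role."""
    def specialist(user_input: str) -> str:
        return call_model(instructions, user_input)
    return specialist

summarizer = make_specialist("Summarize into 3 bullets")
risk_reviewer = make_specialist("Review for risks")

print(summarizer("quarterly report text"))
```

Because the instruction set is fixed at creation time, each specialist stays narrow, testable, and easy to version.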
Good Example
A "support tone rewriter" specialist:
- input: raw support draft
- output: professional, empathetic customer-facing rewrite
Poor Example
A "do everything" specialist:
- classify issues
- write SQL
- answer legal questions
- summarize reports
That is not a specialist. That is just an overloaded general agent.
4. Pattern 3: Router
What It Is
A router decides where a request should go.
Routing can be:
- rule-based: simple Python logic
- LLM-based: use a model to classify intent
- hybrid: rules first, LLM fallback
When Routing Is Useful
Use routing when:
- different requests need different specialists
- you want to reduce unnecessary model calls
- you want to separate simple vs complex flows
- you have multiple tools or backends
Examples
- finance question → finance specialist
- HR policy question → HR specialist
- code debugging question → engineering specialist
Rule-Based vs LLM-Based Routing
Rule-Based Routing
Best when categories are simple and stable.
Pros:
- fast
- cheap
- deterministic
Cons:
- brittle for ambiguous input
- hard to scale to nuanced classification
LLM-Based Routing
Best when requests are ambiguous or varied.
Pros:
- flexible
- handles natural language variation
- easier to extend semantically
Cons:
- extra cost
- less deterministic
- requires careful output constraints
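The two approaches combine naturally into a hybrid router: deterministic rules handle the cheap, unambiguous cases, and an LLM classifier handles the rest. In this sketch, `llm_classify` is a placeholder; in practice it would wrap a model call like the one in Exercise 4:

```python
# Hybrid routing sketch: rules first, LLM classifier as fallback.
# `llm_classify` is a stand-in stub, not a real model call.

FINANCE_KEYWORDS = {"budget", "revenue", "invoice"}

def llm_classify(text: str) -> str:
    """Placeholder for an LLM-based classifier (always returns 'general' here)."""
    return "general"

def hybrid_route(text: str) -> str:
    lowered = text.lower()
    # Rules decide the unambiguous cases deterministically and for free.
    if any(word in lowered for word in FINANCE_KEYWORDS):
        return "finance"
    # Everything else falls through to the (more expensive) LLM classifier.
    return llm_classify(text)

print(hybrid_route("What is our Q3 budget?"))  # rules match -> finance
print(hybrid_route("Summarize this memo"))     # no rule match -> LLM fallback
```

This ordering keeps cost and latency low while preserving flexibility for ambiguous input.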
5. Comparing the Patterns
| Pattern | Core Role | Best For | Strength | Risk |
|---|---|---|---|---|
| Coordinator | Orchestrates steps | Multi-step workflows | Centralized control | Bottleneck |
| Specialist | Handles one domain/task | Focused expertise | Better quality per task | Too many agents to manage |
| Router | Chooses destination | Request triage | Efficient dispatch | Misrouting |
Important Note
These patterns are often combined:
- A router sends a request to the right workflow
- A coordinator manages the workflow
- specialists perform the individual tasks
This combined design is common in production-grade agent systems.
6. Hands-On Exercise 1: Build a Simple Specialist Agent
Goal
Create a specialist that rewrites rough customer support text into a professional reply.
What You Will Practice
- calling the OpenAI Responses API from Python
- writing focused instructions for a specialist
- generating a predictable output
Code
from openai import OpenAI
# Initialize the OpenAI client.
# Make sure your OPENAI_API_KEY environment variable is set.
client = OpenAI()
def support_tone_specialist(raw_message: str) -> str:
"""
Rewrite a rough customer-support draft into a professional, empathetic reply.
Args:
raw_message: The draft text to rewrite.
Returns:
A polished support response as a string.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a customer support communication specialist. "
"Rewrite the user's draft into a professional, concise, empathetic email reply. "
"Preserve meaning, avoid adding policies not mentioned, and output only the rewritten message."
),
input=raw_message,
)
return response.output_text
if __name__ == "__main__":
draft = (
"your refund is late because the billing team is busy. "
"wait a few more days and check again."
)
rewritten = support_tone_specialist(draft)
print("Rewritten message:\n")
print(rewritten)
Example Output
Rewritten message:
Hi,
Thank you for your patience. Your refund is taking longer than expected because our billing team is currently experiencing a high volume of requests.
Please allow a few more days for processing, and feel free to check back with us if you do not see an update.
Best regards,
Support Team
Discussion
This is a good specialist because it has:
- one job
- a clear tone
- constrained behavior
- direct input/output
7. Hands-On Exercise 2: Build Multiple Specialists
Goal
Create two specialists:
- a summarizer specialist
- a risk-review specialist
This shows how distinct prompts can improve role clarity.
Code
from openai import OpenAI
client = OpenAI()
def summarizer_specialist(text: str) -> str:
"""
Summarize a piece of text into 3 bullet points.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a summarization specialist. "
"Summarize the input into exactly 3 concise bullet points. "
"Output only the bullet list."
),
input=text,
)
return response.output_text
def risk_review_specialist(text: str) -> str:
"""
Review text for operational or business risks.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a business risk review specialist. "
"Identify potential operational, legal, or reputational risks in the input. "
"Output a short list with labels: Operational, Legal, Reputational. "
"If none are found for a category, say 'None noted'."
),
input=text,
)
return response.output_text
if __name__ == "__main__":
document = """
We plan to launch the new payment feature next week without a full rollback plan.
The marketing team has already announced the release date publicly.
Customer transaction logs will be retained for analysis, but the data handling review is still pending.
"""
print("=== Summary Specialist ===")
print(summarizer_specialist(document))
print("\n=== Risk Review Specialist ===")
print(risk_review_specialist(document))
Example Output
=== Summary Specialist ===
- A new payment feature is scheduled to launch next week.
- The release has been publicly announced by the marketing team.
- Data handling review is still pending while transaction logs will be retained for analysis.
=== Risk Review Specialist ===
Operational: Launching without a full rollback plan creates deployment and incident recovery risk.
Legal: Data handling review is still pending, which may create compliance or privacy concerns.
Reputational: Publicly announcing the release before readiness may harm trust if delays or issues occur.
Reflection Questions
- Why is a summarizer prompt not ideal for risk review?
- How does narrowing scope improve consistency?
- What output formats make specialists easier to combine?
8. Hands-On Exercise 3: Build a Rule-Based Router
Goal
Route user requests to the right specialist using plain Python rules.
Why Start with Rules?
Rule-based routers are simple and often sufficient for early-stage systems.
Code
from openai import OpenAI
client = OpenAI()
def finance_specialist(question: str) -> str:
"""
Answer finance-related questions at a high level.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a finance specialist for internal business users. "
"Provide a concise, practical answer. "
"If the topic requires legal or tax advice, clearly say that a qualified professional should review it."
),
input=question,
)
return response.output_text
def engineering_specialist(question: str) -> str:
"""
Answer software engineering-related questions.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a software engineering specialist. "
"Provide a practical and technically accurate answer for Python developers. "
"Use short paragraphs or bullets where helpful."
),
input=question,
)
return response.output_text
def route_request(user_input: str) -> str:
"""
Route a request using simple keyword-based logic.
Returns:
The route name: 'finance' or 'engineering'
"""
text = user_input.lower()
finance_keywords = ["budget", "revenue", "profit", "expense", "forecast", "invoice"]
engineering_keywords = ["python", "bug", "api", "database", "function", "error"]
if any(keyword in text for keyword in finance_keywords):
return "finance"
if any(keyword in text for keyword in engineering_keywords):
return "engineering"
# Default route
return "engineering"
def handle_request(user_input: str) -> str:
"""
Route the request and call the matching specialist.
"""
route = route_request(user_input)
if route == "finance":
return finance_specialist(user_input)
return engineering_specialist(user_input)
if __name__ == "__main__":
requests = [
"How should we think about quarterly revenue forecasting for a new product?",
"I keep getting a Python API timeout error. Where should I start debugging?",
]
for request in requests:
print(f"\nUser request: {request}")
print("Response:")
print(handle_request(request))
Example Output
User request: How should we think about quarterly revenue forecasting for a new product?
Response:
Start with a few scenario-based forecasts rather than a single estimate. For a new product, build conservative, expected, and optimistic cases using assumptions such as customer acquisition, conversion rate, pricing, and churn. Review assumptions regularly and update the forecast as real data becomes available.
User request: I keep getting a Python API timeout error. Where should I start debugging?
Response:
Start by checking whether the timeout is coming from your HTTP client, your application server, or the upstream API. Then inspect request latency, retry behavior, payload size, and network reliability. In Python, also confirm that timeout values are explicitly set and that you are logging the failing endpoint and response timing.
Limitations of This Router
- keyword matching is brittle
- cannot easily handle ambiguity
- difficult to scale as domains grow
This leads us to LLM-based routing.
9. Hands-On Exercise 4: Build an LLM Router
Goal
Use the model to classify the user request into one of several routes.
Design Strategy
We will ask the model to output only one of these labels:
- finance
- engineering
- general
This is a practical pattern because it reduces output ambiguity.
Code
from openai import OpenAI
client = OpenAI()
VALID_ROUTES = {"finance", "engineering", "general"}
def llm_route_request(user_input: str) -> str:
"""
Use the model to classify the incoming request into a route label.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a routing classifier. "
"Classify the user's request into exactly one of these labels: "
"finance, engineering, general. "
"Output only the label and nothing else."
),
input=user_input,
)
route = response.output_text.strip().lower()
# Defensive validation: never trust model output blindly.
if route not in VALID_ROUTES:
return "general"
return route
def finance_specialist(question: str) -> str:
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a finance specialist. Provide a concise answer for internal business users."
),
input=question,
)
return response.output_text
def engineering_specialist(question: str) -> str:
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a software engineering specialist for Python developers."
),
input=question,
)
return response.output_text
def general_specialist(question: str) -> str:
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a general business assistant. Provide a concise and useful answer."
),
input=question,
)
return response.output_text
def handle_request(user_input: str) -> tuple[str, str]:
"""
Route the request, then call the appropriate specialist.
Returns:
A tuple of (route, answer)
"""
route = llm_route_request(user_input)
if route == "finance":
return route, finance_specialist(user_input)
if route == "engineering":
return route, engineering_specialist(user_input)
return route, general_specialist(user_input)
if __name__ == "__main__":
examples = [
"How can we reduce cloud infrastructure costs next quarter?",
"Why does my Python function raise a KeyError when parsing JSON?",
"What makes an internal project update more effective?",
]
for text in examples:
route, answer = handle_request(text)
print(f"\nInput: {text}")
print(f"Route: {route}")
print("Answer:")
print(answer)
Example Output
Input: How can we reduce cloud infrastructure costs next quarter?
Route: finance
Answer:
Start by identifying the largest cost drivers, such as compute, storage, and data transfer. Then evaluate rightsizing, reserved capacity, workload scheduling, and usage monitoring. Set targets by team and review savings against forecast regularly.
Input: Why does my Python function raise a KeyError when parsing JSON?
Route: engineering
Answer:
A KeyError usually means your code is accessing a dictionary key that is missing. Check the JSON structure you actually received, confirm the expected keys exist, and consider using dict.get() or validation before access.
Input: What makes an internal project update more effective?
Route: general
Answer:
An effective internal project update is clear, concise, and action-oriented. It should include current status, major accomplishments, risks or blockers, next steps, and any decisions needed from stakeholders.
10. Hands-On Exercise 5: Build a Coordinator
Goal
Create a coordinator that:
- takes a project update
- asks a summarizer specialist for key points
- asks a risk specialist for concerns
- combines both into a final management-ready response
Why This Is a Coordinator
The coordinator does not do all the analysis directly. It delegates work and composes the output.
Code
from openai import OpenAI
client = OpenAI()
def summarizer_specialist(text: str) -> str:
"""
Generate a concise summary of the project update.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a project update summarization specialist. "
"Summarize the input into exactly 3 bullet points. "
"Output only the bullets."
),
input=text,
)
return response.output_text
def risk_specialist(text: str) -> str:
"""
Identify project risks from the update.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a project risk specialist. "
"Identify the top risks from the update. "
"Output 2 to 4 bullet points. Focus on schedule, scope, dependency, and compliance risks."
),
input=text,
)
return response.output_text
def coordinator(project_update: str) -> str:
"""
Orchestrate multiple specialists and combine their outputs into
a management-ready response.
"""
summary = summarizer_specialist(project_update)
risks = risk_specialist(project_update)
combined_prompt = f"""
Project update:
{project_update}
Summary from summarizer specialist:
{summary}
Risk analysis from risk specialist:
{risks}
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a coordinator agent preparing a management-ready project brief. "
"Using the provided specialist outputs, produce a response with exactly these sections:\n"
"1. Executive Summary\n"
"2. Key Risks\n"
"3. Recommended Next Actions\n"
"Be concise, clear, and professional."
),
input=combined_prompt,
)
return response.output_text
if __name__ == "__main__":
update = """
The mobile checkout project is 80% complete. Frontend development is on track,
but the payment gateway integration is delayed because the vendor sandbox has been unstable.
Marketing still expects launch by the end of the month.
The team has not yet completed the fallback workflow for failed payments.
A privacy review of retained payment metadata is scheduled for next week.
"""
print(coordinator(update))
Example Output
1. Executive Summary
The mobile checkout project is largely complete, with frontend work on track. However, the payment gateway integration is delayed due to vendor sandbox instability, creating potential launch risk. Additional work remains on failed-payment fallback handling and privacy review.
2. Key Risks
- Vendor instability may delay integration testing and overall launch readiness.
- The missing fallback workflow for failed payments creates operational and customer experience risk.
- The pending privacy review may raise compliance concerns related to retained payment metadata.
- Public launch expectations may create pressure to release before full readiness.
3. Recommended Next Actions
- Escalate with the payment vendor and define contingency testing options.
- Prioritize completion of the failed-payment fallback workflow.
- Complete the privacy review before launch approval.
- Reassess the end-of-month launch date based on integration and compliance readiness.
Key Takeaway
The coordinator pattern creates cleaner architecture by separating:
- orchestration
- task execution
- synthesis
11. Mini Project: Combine Router + Coordinator + Specialists
Goal
Build a small system where:
- an LLM router classifies the request
- a coordinator handles project-update style requests
- specialists answer direct finance or engineering questions
Architecture
- Router
  - sends project updates to the coordinator
  - sends finance questions to the finance specialist
  - sends engineering questions to the engineering specialist
  - otherwise falls back to the general specialist
- Coordinator
  - for project updates, creates summary + risks + final brief
Code
from openai import OpenAI
client = OpenAI()
VALID_ROUTES = {"project_update", "finance", "engineering", "general"}
def llm_route_request(user_input: str) -> str:
"""
Classify the request into one route label.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a routing classifier. "
"Classify the user's input into exactly one label from this set: "
"project_update, finance, engineering, general. "
"Use project_update for status updates, progress reports, or project summaries needing analysis. "
"Output only the label."
),
input=user_input,
)
route = response.output_text.strip().lower()
if route not in VALID_ROUTES:
return "general"
return route
def finance_specialist(question: str) -> str:
"""
Handle finance-related questions.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a finance specialist. "
"Provide concise, practical business finance guidance. "
"Avoid pretending to provide regulated financial advice."
),
input=question,
)
return response.output_text
def engineering_specialist(question: str) -> str:
"""
Handle engineering-related questions.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a software engineering specialist for Python developers. "
"Provide practical, technically accurate guidance."
),
input=question,
)
return response.output_text
def general_specialist(question: str) -> str:
"""
Handle general requests.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a general-purpose business assistant. "
"Provide clear, concise, practical answers."
),
input=question,
)
return response.output_text
def summarizer_specialist(text: str) -> str:
"""
Summarize a project update.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a project summarization specialist. "
"Summarize the input into exactly 3 bullet points. "
"Output only the bullet list."
),
input=text,
)
return response.output_text
def risk_specialist(text: str) -> str:
"""
Identify risks in a project update.
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a project risk specialist. "
"Identify the top 2 to 4 project risks in bullet points."
),
input=text,
)
return response.output_text
def project_update_coordinator(project_update: str) -> str:
"""
Coordinate specialist outputs into a final project brief.
"""
summary = summarizer_specialist(project_update)
risks = risk_specialist(project_update)
coordinator_input = f"""
Original update:
{project_update}
Summary:
{summary}
Risks:
{risks}
"""
response = client.responses.create(
model="gpt-5.4-mini",
instructions=(
"You are a coordinator agent. "
"Create a management-ready project brief with these exact sections:\n"
"1. Executive Summary\n"
"2. Key Risks\n"
"3. Recommended Next Actions\n"
"Use concise professional language."
),
input=coordinator_input,
)
return response.output_text
def handle_request(user_input: str) -> tuple[str, str]:
"""
Route the request to the right handler.
Returns:
A tuple of (route, response_text)
"""
route = llm_route_request(user_input)
if route == "project_update":
return route, project_update_coordinator(user_input)
if route == "finance":
return route, finance_specialist(user_input)
if route == "engineering":
return route, engineering_specialist(user_input)
return route, general_specialist(user_input)
if __name__ == "__main__":
examples = [
"""
The analytics dashboard redesign is mostly complete, but QA found performance issues
on large reports. The data team is waiting on a schema change from the platform team.
Leadership still wants a demo on Friday.
""",
"How should we estimate next quarter's operating expenses for a new team?",
"What is a practical way to debug intermittent API failures in Python?",
"How can I make a weekly team update more useful?",
]
for idx, example in enumerate(examples, start=1):
route, answer = handle_request(example.strip())
print(f"\n=== Example {idx} ===")
print(f"Route: {route}")
print("Answer:")
print(answer)
Example Output
=== Example 1 ===
Route: project_update
Answer:
1. Executive Summary
The analytics dashboard redesign is nearing completion, but QA has identified performance issues on large reports. Progress is also dependent on a pending schema change from the platform team. Leadership expectations for a Friday demo create schedule pressure.
2. Key Risks
- Performance issues may affect readiness for the planned demo.
- Dependency on the platform team's schema change may delay completion.
- Demo expectations may outpace actual delivery readiness.
3. Recommended Next Actions
- Prioritize investigation and mitigation of report performance issues.
- Escalate and confirm delivery timing for the schema change dependency.
- Align leadership on demo scope and any readiness constraints.
=== Example 2 ===
Route: finance
Answer:
Estimate next quarter's operating expenses by separating fixed and variable costs. Start with headcount, tools, infrastructure, vendor contracts, and planned one-time expenses. Build a baseline forecast, add a contingency range, and compare assumptions against current run-rate data.
=== Example 3 ===
Route: engineering
Answer:
A practical approach is to add structured logging around every failing API call, including endpoint, payload size, retry count, timeout, and response status. Then check for patterns in failure timing, upstream latency, concurrency spikes, and rate limits. In Python, also verify timeout settings, retry strategy, and exception handling.
=== Example 4 ===
Route: general
Answer:
A useful weekly team update should highlight progress, current priorities, blockers, risks, and decisions needed. Keep it concise, focus on what changed since the last update, and make ownership and next steps clear.
12. Best Practices
Coordinator Best Practices
- keep orchestration logic separate from specialist logic
- log all delegation steps
- validate outputs before combining them
- keep the coordinator focused on flow, not domain depth
Specialist Best Practices
- make each specialist narrow and testable
- use explicit formatting instructions
- avoid overlapping responsibilities when possible
- version specialist prompts over time
Router Best Practices
- start with rules if routing is simple
- use LLM routing for ambiguity
- validate route labels
- maintain a safe fallback route
- log route decisions for debugging and evaluation
13. Common Mistakes
1. Creating Too Many Agents Too Early
Start simple. If two specialists solve the problem, do not build ten.
2. Weak Role Separation
If multiple specialists have nearly identical responsibilities, routing becomes unclear and quality drops.
3. Trusting Router Output Blindly
Always validate model-generated route labels.
4. Letting the Coordinator Become the Entire System
A coordinator should orchestrate, not absorb all specialist behavior.
5. Ignoring Output Structure
Unstructured specialist outputs are harder to combine and test.
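One way to avoid this (an illustrative choice, not a fixed schema) is to ask each specialist for JSON and parse it defensively, degrading gracefully when the model returns free text instead:

```python
import json

# Sketch: parse a specialist's JSON output defensively. The expected keys
# ("summary", "risks") are an assumed schema for this example.

def parse_specialist_output(raw: str) -> dict:
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Degrade gracefully: treat unparseable output as a plain summary.
        return {"summary": raw.strip(), "risks": []}
    return {
        "summary": data.get("summary", ""),
        "risks": data.get("risks", []),
    }

good = parse_specialist_output('{"summary": "on track", "risks": ["vendor delay"]}')
bad = parse_specialist_output("Sorry, here is some free text instead.")
print(good["risks"])   # ['vendor delay']
print(bad["summary"])  # the raw free text, preserved as the summary
```

A coordinator that consumes outputs in this shape can combine and test them without caring which specialist produced them.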
14. Choosing the Right Pattern
Use a Coordinator When
- tasks have multiple stages
- you need sequencing
- you need to combine intermediate outputs
Use Specialists When
- tasks are clearly separable
- each task benefits from its own instructions
- you want reusable components
Use a Router When
- different requests need different handlers
- you want to reduce unnecessary work
- request types vary significantly
A Good Rule of Thumb
- One task, one role → specialist
- Many steps, one workflow → coordinator
- Many possible destinations → router
15. Practice Challenges
Try these on your own after the session:
- Add a legal specialist and extend the router.
- Convert the rule-based router into a hybrid router:
- rules first
- LLM fallback
- Add logging to your coordinator:
- input
- route chosen
- specialist outputs
- Make specialist outputs structured as JSON-like text for easier downstream parsing.
- Add a retry wrapper for model calls.
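For the retry challenge, a minimal wrapper might look like the sketch below. The attempt count and backoff values are illustrative; for real API calls you would tune them against actual rate limits and timeouts:

```python
import time

# Sketch: retry a callable with simple exponential backoff.
# Values are illustrative, not production defaults.

def with_retries(fn, attempts: int = 3, base_delay: float = 0.01):
    """Call fn(), retrying on exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

calls = {"count": 0}

def flaky_call():
    """Simulated model call that fails twice, then succeeds."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise TimeoutError("transient failure")
    return "ok"

print(with_retries(flaky_call))  # succeeds on the third attempt
```

Wrapping each specialist's model call this way keeps transient API failures from breaking the whole workflow.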
Useful Resources
- OpenAI Responses API migration guide: https://developers.openai.com/api/docs/guides/migrate-to-responses
- OpenAI API docs: https://platform.openai.com/docs
- OpenAI Python SDK: https://github.com/openai/openai-python
- Prompting guide: https://platform.openai.com/docs/guides/text
- Python virtual environments: https://docs.python.org/3/library/venv.html
Session Summary
In this session, you learned:
- Coordinator pattern for orchestration
- Specialist pattern for focused task execution
- Router pattern for dispatching requests to the right handler
You also implemented:
- a focused specialist
- multiple domain specialists
- a rule-based router
- an LLM router
- a coordinator workflow
- a combined mini-project using all three patterns
These patterns form a core foundation for agentic application design. In later sessions, they can be extended with tools, memory, evaluation, and more robust workflow control.