Session 1: Planning and Task Decomposition

Synopsis

Covers how agents break large goals into manageable sub-tasks and how planners can improve execution quality. Learners will understand when explicit planning helps and when it adds unnecessary overhead.

Session Content

Session Overview

Duration: ~45 minutes
Audience: Python developers with basic programming knowledge
Goal: Learn how GenAI systems break complex problems into manageable steps, and implement simple planning workflows using the OpenAI Responses API with gpt-5.4-mini.

Learning Objectives

By the end of this session, learners will be able to:

  • Explain what planning and task decomposition mean in GenAI and agentic systems
  • Distinguish between a goal, subtask, dependency, and deliverable
  • Use an LLM to decompose a complex task into structured steps
  • Build a Python script that generates a task plan using the OpenAI Responses API
  • Validate and refine a generated plan for clarity and usefulness

1. Why Planning Matters in Agentic Systems

Agentic systems are designed to work toward goals rather than only answering one-off questions. To do that well, they often need to:

  1. Understand the user’s objective
  2. Break the objective into smaller tasks
  3. Order those tasks logically
  4. Decide what information is needed
  5. Execute or assist with execution step by step

Example

A user request like:

"Help me launch a beginner Python workshop for my local community"

is not a single-step task. It includes many smaller pieces:

  • Define workshop goals
  • Identify audience
  • Choose venue or online platform
  • Prepare agenda
  • Create sign-up form
  • Promote event
  • Gather materials

A good agent should not jump directly into one action. It should first create a workable plan.

Key Idea

Planning is the process of transforming a broad objective into an actionable sequence of smaller steps.

Task decomposition is the technique of splitting a large task into manageable subtasks.


2. Core Concepts

2.1 Goal

The high-level outcome the user wants.

Example:
"Create a study plan for learning Python in 6 weeks"

2.2 Subtask

A smaller action that contributes to the goal.

Examples:

  • Assess current skill level
  • Divide topics by week
  • Add exercises and milestones

2.3 Dependency

A subtask that must happen before another subtask.

Example:

  • You must define the learner’s current level before customizing the study plan.

2.4 Deliverable

The output produced by a task.

Examples:

  • Weekly study schedule
  • Resource list
  • Practice checklist

2.5 Constraints

Conditions that affect the plan.

Examples:

  • Budget limit
  • Time limit
  • Skill level
  • Available tools
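
The five concepts above can be sketched as plain Python data, which previews the structured plans used later in this session. The field names here are illustrative choices, not a required schema:

```python
# A plan expressed as plain Python data. Field names are illustrative.
plan = {
    "goal": "Create a study plan for learning Python in 6 weeks",
    "constraints": ["6 weeks total", "beginner skill level", "free resources only"],
    "subtasks": [
        {"id": 1, "title": "Assess current skill level", "dependencies": []},
        {"id": 2, "title": "Divide topics by week", "dependencies": [1]},
        {"id": 3, "title": "Add exercises and milestones", "dependencies": [2]},
    ],
    "deliverable": "Weekly study schedule",
}

# A dependency means a subtask can start only after the tasks it depends on.
ready = [t["title"] for t in plan["subtasks"] if not t["dependencies"]]
print(ready)  # → ['Assess current skill level']
```

Only task 1 has no dependencies, so it is the only task that is immediately ready.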

3. What Makes a Good Plan?

A useful plan is usually:

  • Clear: easy to understand
  • Ordered: steps appear in a logical sequence
  • Actionable: each task can be acted on
  • Complete enough: covers the important work
  • Constrained: respects time, budget, or resource limits
  • Adaptable: can be revised as new information appears

Weak Plan Example

  1. Work on project
  2. Research stuff
  3. Finish it

Stronger Plan Example

  1. Define the project scope and expected output
  2. Identify the target audience and their needs
  3. Research 3 comparable examples for inspiration
  4. Draft the first version of the content outline
  5. Review the outline for gaps and sequencing
  6. Produce the final version

4. How LLMs Help With Planning

LLMs are useful for planning because they can:

  • Interpret broad natural-language goals
  • Suggest subtasks based on prior patterns
  • Reformat plans into structured outputs
  • Revise plans when constraints change

However, LLM-generated plans are not always correct or complete. They may:

  • Omit important steps
  • Produce vague subtasks
  • Miss dependencies
  • Suggest unrealistic actions

So planning with LLMs works best when you:

  • Give clear prompts
  • Ask for structured outputs
  • Review and refine the plan
  • Add domain-specific constraints

5. Planning Prompt Design

When asking an LLM to decompose a task, include:

  • The goal
  • Context
  • Constraints
  • Desired output format

Better Prompt Template

You are a planning assistant.

Break the following goal into ordered subtasks.

Goal: Launch a beginner Python workshop
Constraints:
- Budget under $200
- Audience: complete beginners
- Timeframe: 3 weeks
- Format: in-person

Return:
- A numbered list of subtasks
- A short rationale for each subtask
- Any dependencies between tasks

Why This Works

This prompt reduces ambiguity and asks for specific structure. That makes the output easier to use in code.
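
The template above can be wrapped in a small helper so the same structure is reused for any goal. The function name and parameters below are one possible design, not part of any API:

```python
def build_planning_prompt(goal: str, constraints: list[str], outputs: list[str]) -> str:
    """Assemble a planning prompt from a goal, constraints, and desired outputs."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    output_lines = "\n".join(f"- {o}" for o in outputs)
    return (
        "You are a planning assistant.\n\n"
        "Break the following goal into ordered subtasks.\n\n"
        f"Goal: {goal}\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Return:\n{output_lines}"
    )


prompt = build_planning_prompt(
    goal="Launch a beginner Python workshop",
    constraints=["Budget under $200", "Audience: complete beginners"],
    outputs=["A numbered list of subtasks", "A short rationale for each subtask"],
)
print(prompt)
```

Keeping the prompt in one function makes it easy to experiment: change a constraint, re-run, and compare the resulting plans.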


6. Hands-On Exercise 1: Generate a Simple Task Plan

Objective

Use the OpenAI Responses API to decompose a user goal into an ordered task list.

What You Will Build

A Python script that:

  • Sends a planning prompt to gpt-5.4-mini
  • Receives a plan
  • Prints the result

Setup

Install the OpenAI Python SDK:

pip install openai

Set your API key:

export OPENAI_API_KEY="your_api_key_here"

On Windows PowerShell:

$env:OPENAI_API_KEY = "your_api_key_here"

(`setx OPENAI_API_KEY "your_api_key_here"` persists the key for future sessions, but it does not affect the current one.)

Code

import os
from openai import OpenAI

# Initialize the OpenAI client.
# The SDK automatically reads the API key from the OPENAI_API_KEY environment variable.
client = OpenAI()

# Define the user's goal and planning constraints.
goal = "Plan a 4-week beginner Python study group for working adults"
constraints = [
    "Participants have only 4 hours per week",
    "Budget is minimal",
    "Sessions should be practical",
    "The plan should include preparation and follow-up"
]

# Build a clear prompt for planning.
constraint_lines = "\n".join(f"- {c}" for c in constraints)

prompt = f"""
You are a helpful planning assistant.

Break the following goal into an ordered list of practical subtasks.

Goal:
{goal}

Constraints:
{constraint_lines}

Return:
1. A numbered list of subtasks
2. A 1-2 sentence explanation for each subtask
3. Important dependencies between tasks
4. A final section called "Risks or Gaps to Check"
""".strip()

# Call the Responses API using the requested model.
response = client.responses.create(
    model="gpt-5.4-mini",
    input=prompt
)

# Print the model's text output.
print("=== Generated Plan ===")
print(response.output_text)

Example Output

=== Generated Plan ===
1. Define the study group's learning goals
   Clarify whether the group should focus on Python basics, problem-solving, or project-building so the content stays focused.

2. Identify participant availability and experience level
   Confirm when working adults can attend and whether they are complete beginners to set the right pace.

3. Select a simple weekly structure
   Design each week with a short lesson, guided coding practice, and a take-home exercise.

4. Choose free or low-cost learning materials
   Gather beginner-friendly Python resources, exercises, and setup instructions that fit the budget.

5. Prepare the environment setup guide
   Ensure participants can install Python and run code before the first session.

6. Create the 4-week session outline
   Assign a practical beginner topic to each week, such as variables, conditionals, loops, and functions.

7. Plan follow-up and accountability
   Decide how to share notes, homework, reminders, and encouragement between sessions.

Dependencies:
- Participant availability and skill level should be confirmed before finalizing the weekly structure.
- Materials should be chosen before the session outline is finalized.
- Environment setup must happen before the first learning session.

Risks or Gaps to Check:
- Participants may need extra setup support
- The pace may be too fast for complete beginners
- Working adults may miss sessions, so make materials reusable

Exercise Tasks

  1. Run the script as-is
  2. Change the goal to a different real-world project
  3. Add or change constraints
  4. Compare the quality of the plan before and after adding constraints

Reflection Questions

  • Which constraints changed the output most?
  • Were any subtasks too vague?
  • Did the model include practical preparation steps?

7. From Free-Form Output to Structured Planning

Free-form text is useful for people to read, but structured output is easier for programs to validate and reuse.

A planning system often benefits from returning data like:

  • task name
  • description
  • priority
  • dependencies
  • estimated effort

For this session, we will ask the model to produce JSON.
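
One way to make the expected structure explicit in code is a `TypedDict`. The type names below are one possible choice for documenting the schema, not a requirement of the API:

```python
from typing import TypedDict


class Task(TypedDict):
    id: int
    title: str
    description: str
    dependencies: list[int]
    estimated_effort: str  # "low", "medium", or "high"


class Plan(TypedDict):
    goal: str
    tasks: list[Task]
    risks: list[str]


# TypedDicts document the shape; at runtime they behave like normal dicts.
example: Plan = {
    "goal": "Example goal",
    "tasks": [
        {
            "id": 1,
            "title": "First task",
            "description": "Placeholder description",
            "dependencies": [],
            "estimated_effort": "low",
        }
    ],
    "risks": ["Example risk"],
}
print(example["tasks"][0]["title"])  # → First task
```

Static type checkers such as mypy can then flag missing or misspelled fields before the plan is ever used.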


8. Hands-On Exercise 2: Generate a Structured Plan in JSON

Objective

Create a Python script that requests a structured task plan and parses it as JSON.

What You Will Build

A script that:

  • Requests a JSON plan from the model
  • Parses it safely in Python
  • Prints tasks in a readable format

Code

import json
from openai import OpenAI

client = OpenAI()

goal = "Create a weekend community workshop that teaches children the basics of coding with Python"

prompt = f"""
You are a planning assistant.

Create a task plan for the following goal:

Goal: {goal}

Return ONLY valid JSON in this exact structure:
{{
  "goal": "string",
  "tasks": [
    {{
      "id": 1,
      "title": "string",
      "description": "string",
      "dependencies": [1, 2],
      "estimated_effort": "string"
    }}
  ],
  "risks": ["string"]
}}

Requirements:
- Include 5 to 8 tasks
- Tasks must be ordered logically
- Dependencies must reference earlier task IDs only
- estimated_effort should be one of: "low", "medium", "high"
- Do not include markdown fences
""".strip()

response = client.responses.create(
    model="gpt-5.4-mini",
    input=prompt
)

raw_text = response.output_text.strip()

print("=== Raw JSON Response ===")
print(raw_text)

# Parse the returned JSON into a Python dictionary.
# json.loads raises json.JSONDecodeError if the output is not valid JSON.
plan = json.loads(raw_text)

print("\n=== Parsed Plan Summary ===")
print(f"Goal: {plan['goal']}\n")

for task in plan["tasks"]:
    print(f"Task {task['id']}: {task['title']}")
    print(f"  Description: {task['description']}")
    print(f"  Dependencies: {task['dependencies']}")
    print(f"  Estimated effort: {task['estimated_effort']}")
    print()

print("Risks:")
for risk in plan["risks"]:
    print(f"- {risk}")

Example Output

=== Raw JSON Response ===
{
  "goal": "Create a weekend community workshop that teaches children the basics of coding with Python",
  "tasks": [
    {
      "id": 1,
      "title": "Define workshop goals and age range",
      "description": "Clarify the learning outcomes, target age group, and skill level so the workshop content is appropriate and focused.",
      "dependencies": [],
      "estimated_effort": "low"
    },
    {
      "id": 2,
      "title": "Choose venue, schedule, and capacity",
      "description": "Select a location, decide the workshop duration, and determine how many children can be supported safely and effectively.",
      "dependencies": [1],
      "estimated_effort": "medium"
    },
    {
      "id": 3,
      "title": "Design beginner-friendly lesson activities",
      "description": "Prepare simple coding exercises, demonstrations, and interactive tasks that are suitable for children learning Python for the first time.",
      "dependencies": [1],
      "estimated_effort": "high"
    },
    {
      "id": 4,
      "title": "Prepare technical setup instructions",
      "description": "Decide which laptops, browsers, or coding tools will be used and create simple setup guidance for helpers or participants.",
      "dependencies": [2, 3],
      "estimated_effort": "medium"
    },
    {
      "id": 5,
      "title": "Recruit volunteers or assistants",
      "description": "Identify people who can help supervise, answer questions, and support children during hands-on activities.",
      "dependencies": [2],
      "estimated_effort": "medium"
    },
    {
      "id": 6,
      "title": "Promote registration and collect participant information",
      "description": "Share the event with the community and gather sign-ups, age information, and any accessibility needs.",
      "dependencies": [2, 3],
      "estimated_effort": "medium"
    },
    {
      "id": 7,
      "title": "Run the workshop and gather feedback",
      "description": "Deliver the session, observe what worked well, and collect feedback from families and volunteers for improvement.",
      "dependencies": [4, 5, 6],
      "estimated_effort": "high"
    }
  ],
  "risks": [
    "The content may be too advanced for some children",
    "Technical setup problems could reduce workshop time",
    "Not enough volunteers may be available for hands-on support"
  ]
}

=== Parsed Plan Summary ===
Goal: Create a weekend community workshop that teaches children the basics of coding with Python

Task 1: Define workshop goals and age range
  Description: Clarify the learning outcomes, target age group, and skill level so the workshop content is appropriate and focused.
  Dependencies: []
  Estimated effort: low

Task 2: Choose venue, schedule, and capacity
  Description: Select a location, decide the workshop duration, and determine how many children can be supported safely and effectively.
  Dependencies: [1]
  Estimated effort: medium

Task 3: Design beginner-friendly lesson activities
  Description: Prepare simple coding exercises, demonstrations, and interactive tasks that are suitable for children learning Python for the first time.
  Dependencies: [1]
  Estimated effort: high

Task 4: Prepare technical setup instructions
  Description: Decide which laptops, browsers, or coding tools will be used and create simple setup guidance for helpers or participants.
  Dependencies: [2, 3]
  Estimated effort: medium

Task 5: Recruit volunteers or assistants
  Description: Identify people who can help supervise, answer questions, and support children during hands-on activities.
  Dependencies: [2]
  Estimated effort: medium

Task 6: Promote registration and collect participant information
  Description: Share the event with the community and gather sign-ups, age information, and any accessibility needs.
  Dependencies: [2, 3]
  Estimated effort: medium

Task 7: Run the workshop and gather feedback
  Description: Deliver the session, observe what worked well, and collect feedback from families and volunteers for improvement.
  Dependencies: [4, 5, 6]
  Estimated effort: high

Risks:
- The content may be too advanced for some children
- Technical setup problems could reduce workshop time
- Not enough volunteers may be available for hands-on support

Exercise Tasks

  1. Run the script
  2. Change the goal to something from your own work
  3. Add a validation step to ensure every dependency refers to an earlier task
  4. Identify any task titles that are too broad and refine the prompt
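
Even when the prompt forbids markdown fences, models occasionally wrap JSON in them anyway, which makes `json.loads` fail. A defensive parsing helper (an illustrative sketch, not an SDK feature) can strip fences before parsing:

```python
import json


def parse_json_plan(raw_text: str) -> dict:
    """Parse model output as JSON, tolerating optional markdown code fences."""
    text = raw_text.strip()
    if text.startswith("```"):
        lines = text.splitlines()
        # Drop the opening fence line (``` or ```json) and any closing fence.
        if lines[-1].strip() == "```":
            lines = lines[1:-1]
        else:
            lines = lines[1:]
        text = "\n".join(lines)
    return json.loads(text)


fenced = '```json\n{"goal": "Demo", "tasks": [], "risks": []}\n```'
plan = parse_json_plan(fenced)
print(plan["goal"])  # → Demo
```

Dropping this helper into Exercise 2 in place of the bare `json.loads` call makes the script more robust to formatting drift in model output.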

9. Validating a Plan Programmatically

A plan should not just look good. It should also pass simple checks.

Examples of Validation Rules

  • Task IDs should be unique
  • Dependencies should point to existing tasks
  • Dependencies should only point backward
  • There should be no empty task descriptions
  • The number of tasks should match the requested range

Hands-On Exercise 3: Validate the Generated Plan

Objective

Add plan validation logic in Python.

Code

import json
from openai import OpenAI

client = OpenAI()

prompt = """
You are a planning assistant.

Create a project plan for this goal:
Build a simple personal finance tracker in Python for beginners

Return ONLY valid JSON in this exact structure:
{
  "goal": "string",
  "tasks": [
    {
      "id": 1,
      "title": "string",
      "description": "string",
      "dependencies": [1, 2],
      "estimated_effort": "string"
    }
  ],
  "risks": ["string"]
}

Requirements:
- Include 5 to 7 tasks
- Tasks must be specific and ordered
- Dependencies must only reference earlier tasks
- Do not include markdown fences
""".strip()

response = client.responses.create(
    model="gpt-5.4-mini",
    input=prompt
)

# Parse the model's JSON output (raises json.JSONDecodeError if invalid).
plan = json.loads(response.output_text)


def validate_plan(plan_dict: dict) -> list[str]:
    """
    Validate the generated plan and return a list of issues.
    An empty list means the plan passed all checks.
    """
    issues = []

    tasks = plan_dict.get("tasks", [])
    task_ids = [task.get("id") for task in tasks]

    # Check that task IDs are unique.
    if len(task_ids) != len(set(task_ids)):
        issues.append("Task IDs are not unique.")

    valid_effort_values = {"low", "medium", "high"}

    for task in tasks:
        task_id = task.get("id")
        title = task.get("title", "").strip()
        description = task.get("description", "").strip()
        dependencies = task.get("dependencies", [])
        effort = task.get("estimated_effort", "").strip().lower()

        if not title:
            issues.append(f"Task {task_id} is missing a title.")

        if not description:
            issues.append(f"Task {task_id} is missing a description.")

        if effort not in valid_effort_values:
            issues.append(
                f"Task {task_id} has invalid estimated_effort: {effort!r}."
            )

        for dep in dependencies:
            if dep not in task_ids:
                issues.append(
                    f"Task {task_id} depends on non-existent task ID {dep}."
                )
            if dep >= task_id:
                issues.append(
                    f"Task {task_id} has invalid forward/self dependency on task ID {dep}."
                )

    if not (5 <= len(tasks) <= 7):
        issues.append(f"Expected 5 to 7 tasks, got {len(tasks)}.")

    return issues


print("=== Generated Plan ===")
print(json.dumps(plan, indent=2))

issues = validate_plan(plan)

print("\n=== Validation Result ===")
if not issues:
    print("Plan is valid.")
else:
    print("Plan has issues:")
    for issue in issues:
        print(f"- {issue}")

Example Output

=== Validation Result ===
Plan is valid.

Exercise Tasks

  1. Run the validator
  2. Intentionally edit one dependency to an invalid future ID
  3. Re-run the validator and inspect the error
  4. Add one more rule, such as minimum title length
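
Because valid dependencies only point backward, the generated tasks are already in an executable order. A small topological sort generalizes that rule and also detects cycles if you later relax the backward-only constraint. This is a generic sketch, not part of the exercise's required code:

```python
def execution_order(tasks: list[dict]) -> list[int]:
    """Return task IDs ordered so every dependency comes before its dependent.

    Raises ValueError if the dependency graph contains a cycle.
    """
    deps = {t["id"]: set(t["dependencies"]) for t in tasks}
    order: list[int] = []
    done: set[int] = set()
    while len(order) < len(deps):
        # A task is ready when all of its dependencies are already done.
        ready = [tid for tid, d in deps.items() if tid not in done and d <= done]
        if not ready:
            raise ValueError("Dependency cycle detected")
        for tid in sorted(ready):
            order.append(tid)
            done.add(tid)
    return order


tasks = [
    {"id": 1, "dependencies": []},
    {"id": 2, "dependencies": [1]},
    {"id": 3, "dependencies": [1, 2]},
]
print(execution_order(tasks))  # → [1, 2, 3]
```

If the validator from Exercise 3 passes, `execution_order` will always succeed; the cycle check only matters for plans with unconstrained dependency graphs.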

10. Refining Plans Iteratively

Planning is rarely perfect on the first attempt. Good agentic workflows often use an iterative loop:

  1. Generate plan
  2. Validate plan
  3. Revise weak parts
  4. Re-check plan
  5. Continue to execution

Common Refinement Prompts

  • "Make each task more specific"
  • "Reduce the plan to 5 essential steps"
  • "Add missing risks and constraints"
  • "Rewrite tasks so each begins with an action verb"
  • "Identify dependencies more clearly"

Example Refinement Prompt

You previously created a task plan.

Revise it to make each task:
- specific
- beginner-friendly
- no more than one sentence
- action-oriented

Return the improved version in the same JSON format.
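
The generate–validate–revise loop above can be sketched as a small driver function. Here `generate_fn` and `validate_fn` stand in for the API call and the validator from the earlier exercises; the stub functions below exist only to illustrate the control flow without calling an API:

```python
def refine_until_valid(generate_fn, validate_fn, max_attempts: int = 3):
    """Generate a plan, validate it, and retry with feedback until it passes."""
    feedback: list[str] = []
    for attempt in range(1, max_attempts + 1):
        plan = generate_fn(feedback)
        issues = validate_fn(plan)
        if not issues:
            return plan, attempt
        feedback = issues  # Feed the issues back into the next generation.
    raise RuntimeError(f"No valid plan after {max_attempts} attempts: {issues}")


# Stubs that mimic a model fixing its plan once it receives feedback.
def fake_generate(feedback):
    return {"tasks": [1, 2, 3]} if feedback else {"tasks": []}


def fake_validate(plan):
    return [] if plan["tasks"] else ["Plan has no tasks."]


plan, attempts = refine_until_valid(fake_generate, fake_validate)
print(attempts)  # → 2
```

In a real workflow, `generate_fn` would append the validation issues to the refinement prompt and call the Responses API again.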

11. Mini Challenge

Scenario

You are building a study assistant that helps users plan personal learning projects.

Challenge

Create a planning prompt and Python script that decomposes this goal:

"Learn enough Python in 8 weeks to automate simple spreadsheet tasks at work"

Requirements

Your output should include:

  • 6 to 8 tasks
  • dependencies
  • estimated effort
  • 3 likely risks

Suggested Stretch Goals

  • Add input from the command line using input()
  • Save the generated plan to a file called plan.json
  • Print a user-friendly summary after validation

12. Best Practices for LLM-Based Planning

  • Start with a clear goal statement
  • Include constraints explicitly
  • Ask for structured output when possible
  • Validate generated plans in code
  • Expect to refine prompts over time
  • Keep tasks concrete and observable
  • Treat the model as a planning assistant, not an unquestionable authority

13. Session Summary

In this session, you learned:

  • What planning and task decomposition mean in GenAI systems
  • Why agentic workflows need structured steps
  • How to prompt an LLM to create an ordered plan
  • How to use the OpenAI Responses API with gpt-5.4-mini
  • How to request JSON output and validate it in Python
  • How to improve plans through iteration

Planning is a foundational skill for agentic development. Once a system can break down goals into manageable subtasks, it becomes much easier to build workflows that are reliable, inspectable, and extendable.


Useful Resources

  • OpenAI Responses API migration guide: https://developers.openai.com/api/docs/guides/migrate-to-responses
  • OpenAI API docs: https://platform.openai.com/docs
  • OpenAI Python SDK: https://github.com/openai/openai-python
  • Python json module docs: https://docs.python.org/3/library/json.html

Suggested Homework

  1. Build a reusable Python function called generate_plan(goal: str) -> dict
  2. Add a second function called validate_plan(plan: dict) -> list[str]
  3. Test your planner on 3 different goals:
  4. learning project
  5. event planning project
  6. software project
  7. Compare which domains produce the best and worst decompositions
  8. Write down one prompt improvement that makes the plans more useful
