Session 2: Prompt Patterns for Common Tasks
Synopsis
Covers reusable prompt patterns for summarization, extraction, classification, transformation, brainstorming, and code generation. This session gives learners a practical toolkit they can immediately apply in Python applications.
Session Content
Session 2: Prompt Patterns for Common Tasks
Session Overview
In this session, learners will explore practical prompt patterns for common GenAI tasks using Python and the OpenAI Responses API. The focus is on building intuition for how prompt structure influences model behavior, and on applying reusable prompt patterns to real development scenarios.
By the end of this session, learners will be able to:
- Recognize common prompt patterns for typical LLM tasks
- Write clearer prompts for summarization, extraction, classification, transformation, and generation
- Use system and user instructions effectively
- Build small Python scripts using the OpenAI Responses API with gpt-5.4-mini
- Evaluate and refine prompts based on output quality
Learning Objectives
After this session, learners should be able to:
- Explain why prompt structure matters
- Use prompt patterns for:
- Summarization
- Information extraction
- Classification
- Transformation
- Structured generation
- Apply prompt constraints to improve reliability
- Compare weak prompts with stronger prompt designs
- Implement prompt-driven workflows in Python
Session Agenda (~45 minutes)
- 0–5 min: Introduction to prompt patterns
- 5–12 min: Core prompt anatomy
- 12–22 min: Prompt patterns for common tasks
- 22–38 min: Hands-on exercises in Python
- 38–45 min: Review, discussion, and prompt improvement checklist
1. Why Prompt Patterns Matter
A prompt pattern is a reusable way of asking a model to perform a task. Instead of inventing prompts from scratch each time, developers can rely on patterns that consistently produce better outputs.
Prompt patterns are useful because they help with:
- Clarity
- Consistency
- Reusability
- Better formatting
- Reduced ambiguity
For example, compare these two prompts:
Weak Prompt
Summarize this article.
Stronger Prompt
Summarize the following article in 3 bullet points.
Focus on the main argument, key evidence, and conclusion.
Use simple language for a beginner audience.
The second prompt gives the model:
- A task
- An output format
- A focus area
- A target audience
These details often improve quality significantly.
2. Core Prompt Anatomy
Most effective prompts contain some combination of the following parts:
2.1 Task Instruction
Clearly state what the model should do.
Examples:
- "Summarize the text"
- "Extract all dates and names"
- "Classify the sentiment"
- "Rewrite in formal tone"
2.2 Context
Provide the model with the input material or background needed to complete the task.
Example:
You are helping a customer support team analyze user feedback.
2.3 Constraints
Set boundaries to improve reliability.
Examples:
- "Use JSON"
- "Limit to 5 bullet points"
- "Do not invent missing information"
- "Return only one label"
2.4 Output Format
Specify exactly how the output should look.
Examples:
- "Return a JSON object"
- "Use a markdown table"
- "Answer in one sentence"
- "Use bullet points only"
2.5 Examples
If needed, include examples of desired behavior. This is often called few-shot prompting.
Example:
Input: The product arrived late and damaged.
Output: negative
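The five parts above can be assembled programmatically. The sketch below is illustrative (the `build_prompt` helper and its argument names are not part of any SDK); it simply shows how task, context, constraints, output format, and examples combine into one prompt string.

```python
# Illustrative helper: combine the prompt parts from section 2 into one string.
def build_prompt(task, context=None, constraints=None, output_format=None, examples=None):
    """Assemble a prompt, skipping any parts that are not provided."""
    parts = []
    if context:
        parts.append(context)
    parts.append(task)
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if output_format:
        parts.append(f"Output format: {output_format}")
    if examples:
        for ex_in, ex_out in examples:
            parts.append(f"Input: {ex_in}\nOutput: {ex_out}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Classify the sentiment of the user feedback.",
    context="You are helping a customer support team analyze user feedback.",
    constraints=["Return only one label", "Do not invent missing information"],
    output_format="one of: positive, neutral, negative",
    examples=[("The product arrived late and damaged.", "negative")],
)
print(prompt)
```

A helper like this keeps each part visible and makes prompts easier to refine one part at a time.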
3. Prompt Patterns for Common Tasks
3.1 Summarization Pattern
Use Case
Convert long text into shorter, more digestible content.
Pattern
Summarize the following text in [format].
Focus on [important aspects].
Keep the summary under [constraint].
Text:
[INPUT]
Example
Summarize the following meeting notes in 4 bullet points.
Focus on decisions, action items, blockers, and deadlines.
Keep each bullet under 20 words.
Text:
The team agreed to launch the beta on March 15...
Why It Works
- Defines format
- Defines focus
- Sets length constraints
3.2 Extraction Pattern
Use Case
Pull specific information from unstructured text.
Pattern
Extract the following fields from the text:
- [field1]
- [field2]
- [field3]
If a field is missing, return null.
Return the result as JSON only.
Text:
[INPUT]
Example
Extract the following fields from the text:
- customer_name
- order_id
- issue_type
- refund_requested
If a field is missing, return null.
Return the result as JSON only.
Text:
Hi, this is Priya. My order ORD-8821 arrived broken, and I'd like a refund.
Why It Works
- Makes the extraction targets explicit
- Reduces hallucination by specifying null
- Enforces structured output
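On the Python side, JSON `null` parses to `None`, which downstream code can branch on. The payload below is a hypothetical model output for a message where the order ID was not stated; real outputs will vary.

```python
import json

# Hypothetical model output for a message that omitted the order ID.
raw = '{"customer_name": "Priya", "order_id": null, "issue_type": "damaged item", "refund_requested": true}'

data = json.loads(raw)  # JSON null becomes Python None

# Branch explicitly on missing fields instead of guessing values.
for field in ("customer_name", "order_id", "issue_type", "refund_requested"):
    value = data.get(field)
    if value is None:
        print(f"{field}: MISSING")
    else:
        print(f"{field}: {value}")
```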
3.3 Classification Pattern
Use Case
Assign text into one of a fixed set of categories.
Pattern
Classify the following text into exactly one of these categories:
[categories]
Return only the category name.
Text:
[INPUT]
Example
Classify the following text into exactly one of these categories:
billing, technical_support, account_access, sales
Return only the category name.
Text:
I reset my password twice and still cannot log in.
Why It Works
- Restricts output space
- Improves consistency
- Easy to integrate into applications
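Because the output space is restricted, an application can validate the model's answer before using it. The helper below is a sketch (the function name and fallback value are illustrative): it normalizes the raw output and rejects anything outside the allowed set.

```python
# Illustrative validation for classification outputs.
ALLOWED_LABELS = {"billing", "technical_support", "account_access", "sales"}

def normalize_label(raw_output, fallback="unknown"):
    """Strip whitespace and trailing periods, then check against the allowed set."""
    label = raw_output.strip().strip(".").lower()
    return label if label in ALLOWED_LABELS else fallback

print(normalize_label("  account_access\n"))     # account_access
print(normalize_label("I think it is billing"))  # unknown
```

A fallback label lets the application route unexpected outputs to a human instead of failing silently.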
3.4 Transformation Pattern
Use Case
Rewrite content while preserving meaning.
Pattern
Rewrite the following text in [style/tone/format].
Preserve the original meaning.
Do not add new facts.
Text:
[INPUT]
Example
Rewrite the following text in a professional and friendly tone.
Preserve the original meaning.
Do not add new facts.
Text:
Your request is late and we can’t do anything until next week.
Why It Works
- Separates style change from content change
- Reduces unintended additions
3.5 Structured Generation Pattern
Use Case
Generate content with a predictable format.
Pattern
Generate a [type of content] about [topic].
Include:
- [requirement 1]
- [requirement 2]
- [requirement 3]
Return the result in [format].
Example
Generate a study plan about prompt engineering.
Include:
- 5 daily tasks
- one learning objective per task
- one practical exercise per task
Return the result as a markdown table.
Why It Works
- Guides the model toward useful completeness
- Encourages machine-readable or user-friendly output
3.6 Few-Shot Prompting Pattern
Use Case
Show the model examples of desired input-output behavior.
Pattern
Classify each text as positive, neutral, or negative.
Example 1:
Text: The app is easy to use and fast.
Label: positive
Example 2:
Text: It works fine, nothing special.
Label: neutral
Example 3:
Text: The latest update broke my workflow.
Label: negative
Now classify:
Text: [INPUT]
Label:
Why It Works
- Gives the model a concrete pattern to imitate
- Useful when labels or transformations are subtle
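Few-shot prompts follow a regular shape, so they can be generated from a list of (text, label) pairs. This is a sketch with an illustrative helper name; no SDK call is involved.

```python
# Illustrative helper: build a few-shot classification prompt from example pairs.
def build_few_shot_prompt(instruction, examples, new_text):
    lines = [instruction, ""]
    for i, (text, label) in enumerate(examples, start=1):
        lines += [f"Example {i}:", f"Text: {text}", f"Label: {label}", ""]
    lines += ["Now classify:", f"Text: {new_text}", "Label:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify each text as positive, neutral, or negative.",
    [("The app is easy to use and fast.", "positive"),
     ("It works fine, nothing special.", "neutral")],
    "The latest update broke my workflow.",
)
print(prompt)
```

Keeping the examples in data rather than in a hard-coded string makes it easy to add, remove, or swap examples during iteration.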
4. Best Practices for Prompt Design
4.1 Be Specific
Avoid vague instructions like "make it better." Instead say:
- "Rewrite for a non-technical audience"
- "Use 3 bullet points"
- "Return valid JSON"
4.2 Separate Instructions from Data
Use clear labels such as:
Instructions:
...
Text:
...
This reduces confusion.
4.3 Constrain the Output
Useful constraints include:
- length limits
- allowed labels
- JSON-only responses
- "If unknown, say null"
4.4 Ask for Structure
Structured outputs are easier to parse and evaluate.
Examples:
- JSON
- CSV
- markdown tables
- numbered lists
4.5 Reduce Hallucination Risk
Helpful instructions:
- "Do not invent missing details"
- "If the answer is not present, return null"
- "Base your answer only on the provided text"
4.6 Iterate
Prompting is often an engineering loop:
- Try a prompt
- Inspect output
- Adjust wording, constraints, or examples
- Test again
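The "inspect output" step can be partly automated with simple checks that tell you whether a prompt revision is needed. The checker below is a sketch; the constraint values mirror the summarization examples in this session and are otherwise illustrative.

```python
# Illustrative checks for a bullet-point summary against stated constraints.
def check_summary(output, expected_bullets=3, max_words_per_bullet=20):
    """Return a list of constraint violations; an empty list means the output passed."""
    bullets = [line for line in output.splitlines() if line.strip().startswith("-")]
    problems = []
    if len(bullets) != expected_bullets:
        problems.append(f"expected {expected_bullets} bullets, got {len(bullets)}")
    for b in bullets:
        if len(b.split()) > max_words_per_bullet:
            problems.append(f"bullet too long: {b[:40]}...")
    return problems

sample = "- Launch delayed.\n- Two bugs found.\n- QA reports tomorrow."
print(check_summary(sample))  # [] means the output met the constraints
```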
5. Python Setup for Hands-On Work
Install the OpenAI SDK if needed:
pip install openai
Set your API key:
export OPENAI_API_KEY="your_api_key_here"
For Windows PowerShell:
setx OPENAI_API_KEY "your_api_key_here"
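Before running the exercises, it helps to fail fast if the key is missing rather than hitting an SDK error mid-script. The preflight helper below is illustrative; it accepts an environment mapping so it can be demonstrated without touching the real environment.

```python
# Illustrative preflight check for the exercises below.
import os
import sys

def require_api_key(env=os.environ):
    """Return the API key, or exit with a clear message if it is not set."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        sys.exit("OPENAI_API_KEY is not set. See the setup commands above.")
    return key

# Demonstration with an injected environment dict (a placeholder key, not a real one):
print(require_api_key({"OPENAI_API_KEY": "sk-test"}))
```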
6. Hands-On Exercise 1: Summarization and Classification
Goal
Build a Python script that:
1. Summarizes customer feedback
2. Classifies it into a support category
Concepts Practiced
- Prompt structure
- Summarization pattern
- Classification pattern
- Responses API usage
Python Example
"""
Session 2 - Exercise 1
Summarization and classification with the OpenAI Responses API.
Requirements:
pip install openai
Environment:
export OPENAI_API_KEY="your_api_key_here"
"""
import os
from openai import OpenAI
# Create a client using the API key from the environment.
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
# Sample customer feedback for the exercise.
feedback_text = """
Hi team,
I was charged twice for my subscription this month.
Also, I tried to update my payment method, but the page kept failing to load.
Please help me resolve this as soon as possible.
"""
# -----------------------------
# Part 1: Summarization prompt
# -----------------------------
summary_prompt = f"""
Summarize the following customer feedback in 3 bullet points.
Focus on the customer's main issue, urgency, and requested resolution.
Use simple language.
Feedback:
{feedback_text}
"""
summary_response = client.responses.create(
    model="gpt-5.4-mini",
    input=summary_prompt,
)
# The SDK provides a convenience property for text output.
summary_text = summary_response.output_text
print("SUMMARY:")
print(summary_text)
print("-" * 60)
# -----------------------------
# Part 2: Classification prompt
# -----------------------------
classification_prompt = f"""
Classify the following customer feedback into exactly one of these categories:
billing, technical_support, account_access, sales
Return only the category name.
Feedback:
{feedback_text}
"""
classification_response = client.responses.create(
    model="gpt-5.4-mini",
    input=classification_prompt,
)
classification_text = classification_response.output_text.strip()
print("CLASSIFICATION:")
print(classification_text)
Example Output
SUMMARY:
- Customer was charged twice for the subscription this month.
- Customer could not update the payment method because the page failed to load.
- Customer wants the issue resolved quickly.
------------------------------------------------------------
CLASSIFICATION:
billing
Exercise Tasks
- Run the script as provided
- Change the feedback text to a login-related issue
- Re-run classification and observe category changes
- Modify the summarization prompt to return exactly 2 bullet points
- Add a sentence telling the model not to invent missing details
Reflection Questions
- Did the summary stay focused?
- Did the classifier always choose a valid label?
- What changed when the prompt became more specific?
7. Hands-On Exercise 2: Information Extraction into JSON
Goal
Extract structured fields from a support message.
Concepts Practiced
- Extraction pattern
- JSON formatting
- Handling missing values with null
Python Example
"""
Session 2 - Exercise 2
Information extraction into JSON using the OpenAI Responses API.
"""
import json
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
support_message = """
Hello, this is Marcus Lee.
My order ID is ORD-10452.
The package arrived on April 3, but one of the items was missing.
I would like a replacement, not a refund.
"""
extraction_prompt = f"""
Extract the following fields from the text:
- customer_name
- order_id
- delivery_date
- issue_type
- resolution_requested
If a field is missing, return null.
Base the answer only on the provided text.
Return the result as JSON only.
Text:
{support_message}
"""
response = client.responses.create(
    model="gpt-5.4-mini",
    input=extraction_prompt,
)
raw_output = response.output_text.strip()
print("RAW MODEL OUTPUT:")
print(raw_output)
print("-" * 60)
# Attempt to parse the model output as JSON.
# In production systems, validate the result carefully and handle parse errors.
try:
    data = json.loads(raw_output)
except json.JSONDecodeError:
    raise SystemExit("Model did not return valid JSON; inspect the raw output above.")
print("PARSED JSON:")
print(json.dumps(data, indent=2))
Example Output
RAW MODEL OUTPUT:
{
"customer_name": "Marcus Lee",
"order_id": "ORD-10452",
"delivery_date": "April 3",
"issue_type": "missing item",
"resolution_requested": "replacement"
}
------------------------------------------------------------
PARSED JSON:
{
"customer_name": "Marcus Lee",
"order_id": "ORD-10452",
"delivery_date": "April 3",
"issue_type": "missing item",
"resolution_requested": "replacement"
}
Exercise Tasks
- Run the script and inspect the JSON output
- Remove the order ID from the message and verify whether the model returns null
- Add an extra field such as refund_requested
- Update the prompt to normalize issue_type into one of:
  - damaged_item
  - missing_item
  - delayed_delivery
  - other
- Test multiple support messages
Discussion
Why is extraction often more reliable when:
- fields are listed explicitly?
- missing values are allowed as null?
- output format is constrained to JSON?
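Even with "Return JSON only", models sometimes wrap their answer in markdown code fences. The helper below is a sketch, not a complete validator: it strips such fences before parsing so that `json.loads` sees clean JSON.

```python
import json

# Illustrative helper: tolerate markdown code fences around JSON output.
def parse_json_output(raw):
    text = raw.strip()
    if text.startswith("```"):
        lines = text.splitlines()
        # Drop the opening fence (possibly "```json") and the closing fence.
        if lines[-1].strip() == "```":
            lines = lines[1:-1]
        else:
            lines = lines[1:]
        text = "\n".join(lines)
    return json.loads(text)

fenced = '```json\n{"order_id": "ORD-10452", "issue_type": "missing item"}\n```'
print(parse_json_output(fenced)["order_id"])  # ORD-10452
```

Plain JSON passes through unchanged, so the helper is safe to apply to every response.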
8. Hands-On Exercise 3: Prompt Comparison Lab
Goal
Compare weak and strong prompts for the same task.
Concepts Practiced
- Prompt iteration
- Constraint design
- Output quality evaluation
Python Example
"""
Session 2 - Exercise 3
Compare weak and strong prompt patterns with the same input.
"""
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
text = """
Our engineering team completed the payment integration, but testing uncovered
two major bugs in the checkout flow. The launch scheduled for Friday may need
to be delayed until the issues are fixed. The product manager asked QA to
provide an updated timeline by tomorrow afternoon.
"""
weak_prompt = f"""
Summarize this text:
{text}
"""
strong_prompt = f"""
Summarize the following project update in exactly 3 bullet points.
Focus on:
- completed work
- current risks
- next action
Keep each bullet under 18 words.
Do not invent details.
Text:
{text}
"""
weak_response = client.responses.create(
    model="gpt-5.4-mini",
    input=weak_prompt,
)
strong_response = client.responses.create(
    model="gpt-5.4-mini",
    input=strong_prompt,
)
print("WEAK PROMPT OUTPUT:")
print(weak_response.output_text)
print("-" * 60)
print("STRONG PROMPT OUTPUT:")
print(strong_response.output_text)
Example Output
WEAK PROMPT OUTPUT:
The engineering team finished the payment integration, but testing found two major checkout bugs. Because of these issues, the Friday launch might be delayed. QA has been asked to provide an updated timeline by tomorrow afternoon.
------------------------------------------------------------
STRONG PROMPT OUTPUT:
- Payment integration was completed by the engineering team.
- Two major checkout bugs may delay Friday’s planned launch.
- QA will provide an updated timeline by tomorrow afternoon.
Exercise Tasks
- Run the comparison script
- Compare:
- readability
- completeness
- structure
- consistency
- Change the strong prompt to request a JSON object instead of bullet points
- Add a target audience, such as "for executives" or "for engineers"
- Try a transformation task instead of summarization
Suggested Evaluation Checklist
Use these questions when comparing outputs:
- Did the model follow the requested format?
- Did it include the most important information?
- Did it avoid invented details?
- Is the output easy to use downstream?
- Would a teammate find the result reliable?
9. Mini Pattern Library
Below is a reusable cheat sheet for common tasks.
Summarization
Summarize the following text in 3 bullet points.
Focus on the key message, supporting detail, and outcome.
Do not invent details.
Text:
[INPUT]
Extraction
Extract these fields from the text:
- name
- date
- issue
If missing, return null.
Return JSON only.
Text:
[INPUT]
Classification
Classify the following text into exactly one category:
[category1, category2, category3]
Return only the category name.
Text:
[INPUT]
Transformation
Rewrite the following text in a professional tone.
Preserve the meaning.
Do not add new facts.
Text:
[INPUT]
Structured Generation
Generate a study guide on [topic].
Include:
- 3 key concepts
- 3 examples
- 3 practice questions
Return as markdown.
Few-Shot Classification
Classify text as urgent or non_urgent.
Example:
Text: Production is down. We need help now.
Label: urgent
Example:
Text: Can you explain how billing works?
Label: non_urgent
Now classify:
Text: [INPUT]
Label:
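The cheat sheet above can live in code as plain string templates filled with `str.format`. The dictionary below is an illustrative sketch showing two of the patterns; the same approach extends to the rest.

```python
# Illustrative pattern library: templates with named placeholders.
PATTERNS = {
    "summarization": (
        "Summarize the following text in 3 bullet points.\n"
        "Focus on the key message, supporting detail, and outcome.\n"
        "Do not invent details.\n\nText:\n{input}"
    ),
    "classification": (
        "Classify the following text into exactly one category:\n"
        "{categories}\nReturn only the category name.\n\nText:\n{input}"
    ),
}

prompt = PATTERNS["classification"].format(
    categories="billing, technical_support, account_access, sales",
    input="I reset my password twice and still cannot log in.",
)
print(prompt)
```

Keeping patterns in one place makes them easy to reuse across scripts and to version as they improve.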
10. Common Mistakes to Avoid
Too Vague
Bad:
Help with this text.
Better:
Summarize this text in 2 sentences for a non-technical audience.
Missing Output Constraints
Bad:
Extract order details.
Better:
Extract order_id, customer_name, and issue_type.
Return JSON only. Use null if missing.
Asking for Too Many Things at Once
Bad:
Summarize, classify, rewrite, and generate next steps.
Better:
Break into separate steps or API calls.
No Guard Against Hallucination
Better prompts often include:
Base the answer only on the provided text.
If information is missing, return null.
11. Quick Knowledge Check
Answer these questions as a group or individually:
- Why is "Return only the category name" useful in classification prompts?
- What problem does "If missing, return null" help solve?
- Why might a strong prompt outperform a weak prompt?
- When is few-shot prompting especially useful?
- Why is structured output valuable in applications?
12. Recap
In this session, learners practiced prompt patterns that appear frequently in real applications:
- Summarization
- Extraction
- Classification
- Transformation
- Structured generation
- Few-shot prompting
They also saw that good prompts usually include:
- a clear task
- useful context
- constraints
- output formatting requirements
- examples when necessary
Prompt engineering is less about magic wording and more about careful task design.
13. Useful Resources
- OpenAI Responses API Guide
- OpenAI API Reference
- OpenAI Python SDK
- Prompt Engineering Guide
- JSON Format Basics
14. Take-Home Practice
Before the next session, try the following:
- Build a prompt that classifies app reviews into:
- bug_report
- feature_request
- praise
  - complaint
- Create an extraction prompt for job postings that returns:
  - role
  - company
  - location
  - required_skills
- Rewrite a technical paragraph for:
  - a beginner audience
  - an executive audience
  - a customer-facing audience
- Compare a zero-shot prompt and a few-shot prompt for the same classification task
15. Instructor Notes
Suggested Delivery Tips
- Start with weak vs strong prompt examples
- Ask learners to predict outputs before running code
- Encourage iterative improvement, not perfection on the first try
- Discuss how prompt constraints make applications easier to build
Suggested Live Demo Flow
- Show a vague prompt
- Run it
- Improve it with:
- format
- constraints
- focus
- Run again
- Compare outputs
This helps learners see prompt engineering as a practical development skill rather than a purely theoretical topic.
End of Session
Key takeaway: Good prompt patterns make LLM behavior more predictable, usable, and easier to integrate into Python applications.