Session 4: Python's Role in GenAI Development
Synopsis
Introduces the Python ecosystem for GenAI and agentic development, including APIs, SDKs, notebooks, virtual environments, and common libraries. This session orients learners to the practical implementation environment used throughout the curriculum.
Session Content
Session Overview
Duration: ~45 minutes
Audience: Python developers with basic programming knowledge
Session Goal: Understand why Python is the dominant language for GenAI development, how it fits into modern LLM application workflows, and how to use Python with the OpenAI Responses API to build simple GenAI features.
Learning Outcomes
By the end of this session, learners will be able to:
- Explain why Python is widely used in GenAI and agentic development
- Identify key Python ecosystem tools relevant to GenAI workflows
- Understand common architectural patterns for Python-based GenAI applications
- Use the OpenAI Python SDK with the Responses API
- Build simple Python scripts for prompt-based generation and structured workflows
- Apply Python best practices when developing GenAI applications
1. Why Python Dominates GenAI Development
Python has become the default language for AI, ML, and GenAI work for several practical reasons.
1.1 Readability and Developer Productivity
Python is easy to read and write. This matters because GenAI development is often iterative:
- testing prompts
- refining outputs
- connecting APIs
- transforming data
- building small experiments quickly
Python enables rapid prototyping without a lot of boilerplate.
1.2 Rich Ecosystem
Python has a mature ecosystem for:
- data manipulation: pandas, numpy
- web apps and APIs: FastAPI, Flask, Django
- notebooks and experimentation: Jupyter
- ML/AI frameworks: PyTorch, TensorFlow, scikit-learn
- LLM and agent tooling: SDKs, orchestration frameworks, vector DB clients
This means a Python developer can build end-to-end GenAI systems in one language.
1.3 Strong Community and Learning Resources
Python’s community contributes:
- tutorials
- reusable libraries
- notebooks
- examples
- templates
- integrations with cloud services and AI platforms
This lowers the barrier to entry for GenAI development.
1.4 Interoperability
Python works well with:
- REST APIs
- databases
- cloud platforms
- message queues
- files and documents
- backend systems
This is critical because GenAI applications often combine LLMs with business systems and external tools.
2. Where Python Fits in a GenAI Application
A typical Python-powered GenAI application often includes several layers.
2.1 Common Workflow
1. Collect input
   - user message
   - uploaded text
   - document content
   - app state
2. Preprocess data
   - clean text
   - chunk documents
   - extract metadata
   - format prompts
3. Call the model
   - send instructions and input to an LLM
   - receive generated text or structured output
4. Post-process output
   - validate format
   - parse JSON
   - store results
   - display response
5. Integrate with application logic
   - save history
   - trigger workflows
   - call tools or APIs
   - monitor usage
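The five steps above can be sketched as plain Python functions. The model call is stubbed here so the shape of the pipeline is visible without network access; in a real application you would replace the stub with a call to client.responses.create().

```python
def preprocess(raw: str) -> str:
    """Clean and trim the user input before prompting."""
    return " ".join(raw.split())

def build_prompt(user_text: str) -> str:
    """Construct the prompt sent to the model."""
    return f"Answer briefly: {user_text}"

def call_model(prompt: str) -> str:
    """Stub for the model call; replace with client.responses.create(...)."""
    return f"[model answer to: {prompt}]"

def postprocess(output: str) -> str:
    """Validate and normalize the model output."""
    return output.strip()

def run_pipeline(raw_input: str) -> str:
    """Collect -> preprocess -> call -> post-process."""
    return postprocess(call_model(build_prompt(preprocess(raw_input))))

print(run_pipeline("  Why   use Python?  "))
```

The value of this structure is that each stage can be tested and swapped independently, which matters once prompts and post-processing grow more complex.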
2.2 Python’s Typical Responsibilities
Python is often used for:
- prompt construction
- API calls
- response parsing
- data transformation
- orchestration
- evaluation scripts
- backend application logic
2.3 Example Architecture
User Input
|
v
Python App (CLI / Notebook / API / Web Backend)
|
+--> Prompt formatting
+--> Validation
+--> Business logic
|
v
OpenAI Responses API
|
v
Model Output
|
+--> Parse
+--> Store
+--> Display / Route / Evaluate
3. Python Use Cases in GenAI
Python supports many practical GenAI tasks.
3.1 Prompt-Based Text Generation
Examples:
- summarize meeting notes
- draft emails
- generate product descriptions
- rewrite text for tone or clarity
3.2 Information Extraction
Examples:
- extract tasks from notes
- identify entities in support tickets
- convert free text into structured records
3.3 Classification
Examples:
- categorize feedback
- label user intent
- prioritize incidents
3.4 Retrieval-Augmented Workflows
Python can:
- load documents
- split them into chunks
- generate embeddings
- query vector stores
- construct context-rich prompts
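The document-splitting step can be sketched as a small function. The chunk size and overlap values below are illustrative assumptions, not recommended settings:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step forward by less than a full chunk so neighbors share context.
        start += chunk_size - overlap
    return chunks

doc = "word " * 100  # 500 characters of sample text
pieces = chunk_text(doc)
print(len(pieces), len(pieces[0]))
```

Real RAG pipelines usually chunk by tokens or sentence boundaries rather than raw characters, but the overlapping-window idea is the same.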
3.5 Agentic Systems
In agentic workflows, Python often acts as the control layer that:
- tracks state
- decides next steps
- calls tools
- manages retries
- applies guardrails
- logs execution
4. Best Practices for Python in GenAI Projects
4.1 Keep Prompts Separate from Core Logic
Avoid burying prompts deep inside application code. Instead:
- store prompts in constants
- use templates
- make prompt design easy to revise
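A minimal sketch of this separation, with the prompt text living in a constant and the logic only filling in values (the names here are illustrative):

```python
# prompts.py-style module: prompt text lives here, not inside business logic.
SUMMARIZE_PROMPT = (
    "Summarize the following notes in {max_sentences} sentences:\n\n{notes}"
)

def build_summarize_prompt(notes: str, max_sentences: int = 2) -> str:
    """Fill in the template; application code never edits the prompt text."""
    return SUMMARIZE_PROMPT.format(max_sentences=max_sentences, notes=notes)

print(build_summarize_prompt("Q3 revenue grew 12%.", max_sentences=1))
```

With this layout, a prompt revision is a one-line change in one place, and the template can be reviewed without reading any application code.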
4.2 Use Environment Variables for Secrets
Never hardcode API keys in source files.
Use:
- environment variables
- .env files for local development
- secret managers in production
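A minimal sketch of reading the key from the environment and failing fast when it is missing; the optional python-dotenv package can load a local .env file first:

```python
import os

def get_api_key() -> str:
    """Read the API key from the environment; never hardcode it."""
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Export it or add it to a local .env file."
        )
    return key

# For local development you can optionally load a .env file first:
# from dotenv import load_dotenv  # pip install python-dotenv
# load_dotenv()
```

Failing fast with a clear message is friendlier than letting the SDK raise an authentication error deep inside a request.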
4.3 Validate Model Outputs
LLM outputs may be:
- incomplete
- inconsistent
- verbose
- incorrectly formatted
Always validate outputs before using them in downstream systems.
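A minimal validation sketch for JSON-style outputs, assuming the task schema used in Exercise 3 later in this session:

```python
import json

def validate_tasks(raw_output: str) -> list[dict]:
    """Parse model output and verify each task has the expected fields."""
    data = json.loads(raw_output)  # raises an error on malformed JSON
    tasks = data.get("tasks")
    if not isinstance(tasks, list):
        raise ValueError("Expected a 'tasks' list in the model output.")
    for task in tasks:
        for field in ("task", "owner", "priority"):
            if field not in task:
                raise ValueError(f"Task missing required field: {field}")
        if task["priority"] not in ("low", "medium", "high"):
            raise ValueError(f"Invalid priority: {task['priority']}")
    return tasks

sample = '{"tasks": [{"task": "Update docs", "owner": null, "priority": "medium"}]}'
print(validate_tasks(sample))
```

Validation like this turns a silent downstream failure into an immediate, debuggable error at the model boundary.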
4.4 Log Inputs and Outputs Carefully
Logging is useful for debugging, but be careful with:
- personal data
- sensitive business information
- compliance requirements
4.5 Start Simple
Before building advanced agentic systems:
- begin with one task
- test prompts manually
- validate output quality
- then add orchestration and tools
5. Setting Up Python for OpenAI GenAI Development
5.1 Install the SDK
pip install openai
5.2 Set Your API Key
macOS / Linux
export OPENAI_API_KEY="your_api_key_here"
Windows PowerShell
setx OPENAI_API_KEY "your_api_key_here"
5.3 Basic Project Structure
genai-python-session/
├── app.py
├── utils.py
├── requirements.txt
└── .env # optional for local development, do not commit secrets
6. Using Python with the OpenAI Responses API
The OpenAI Python SDK supports the Responses API, which is the recommended interface for new development.
6.1 Minimal Example
from openai import OpenAI
# Create a client instance.
# The SDK automatically reads the OPENAI_API_KEY environment variable.
client = OpenAI()
# Send a simple request to the Responses API.
response = client.responses.create(
    model="gpt-5.4-mini",
    input="Write a one-sentence explanation of why Python is useful in GenAI."
)
# Print the model's text output.
print(response.output_text)
Example Output
Python is useful in GenAI because it combines simple syntax with a powerful ecosystem for data processing, model integration, and rapid experimentation.
6.2 Adding System-Style Instructions
A useful pattern is to provide role and behavior instructions clearly.
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
    model="gpt-5.4-mini",
    input=[
        {
            "role": "system",
            "content": [
                {
                    "type": "input_text",
                    "text": "You are a concise technical tutor for Python developers."
                }
            ],
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "Explain the relationship between Python and GenAI in 3 bullet points."
                }
            ],
        },
    ],
)
print(response.output_text)
Example Output
- Python is popular in GenAI because it is easy to learn and fast to prototype with.
- It has a strong ecosystem for APIs, data processing, machine learning, and web backends.
- It often serves as the orchestration layer connecting LLMs, tools, data sources, and applications.
7. Hands-On Exercise 1: Build a Simple Python GenAI Assistant
Objective
Create a Python script that sends a user question to the OpenAI Responses API and prints a concise answer.
What You’ll Practice
- importing the OpenAI SDK
- creating a client
- calling responses.create()
- reading response.output_text
Code
"""
exercise_1_assistant.py
A simple Python GenAI assistant using the OpenAI Responses API.
This script demonstrates the minimum workflow for sending a prompt
to the model and printing the result.
"""
from openai import OpenAI
def ask_question(question: str) -> str:
    """
    Send a question to the model and return the generated answer.

    Args:
        question: The user's input question.

    Returns:
        The model's text response as a string.
    """
    client = OpenAI()
    response = client.responses.create(
        model="gpt-5.4-mini",
        input=[
            {
                "role": "system",
                "content": [
                    {
                        "type": "input_text",
                        "text": (
                            "You are a helpful AI assistant for Python developers. "
                            "Answer clearly and concisely."
                        ),
                    }
                ],
            },
            {
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": question,
                    }
                ],
            },
        ],
    )
    return response.output_text

def main() -> None:
    """
    Run a sample interaction with the assistant.
    """
    question = "Why is Python so common in GenAI application development?"
    answer = ask_question(question)
    print("Question:")
    print(question)
    print("\nAnswer:")
    print(answer)

if __name__ == "__main__":
    main()
Example Output
Question:
Why is Python so common in GenAI application development?
Answer:
Python is common in GenAI development because it is easy to use, has excellent AI and data libraries, and works well for quickly building and integrating LLM-powered applications.
Suggested Learner Tasks
- Change the question
- Make the answer shorter
- Ask for a numbered list instead of a paragraph
- Adjust the system instruction to make the assistant more technical
8. Hands-On Exercise 2: Generate Structured Study Notes
Objective
Use Python to ask the model for structured learning notes about a GenAI topic.
What You’ll Practice
- prompt design
- formatting requests clearly
- generating predictable educational content
Code
"""
exercise_2_study_notes.py
Generate structured study notes on a GenAI topic using the OpenAI Responses API.
"""
from openai import OpenAI
def generate_study_notes(topic: str) -> str:
    """
    Generate structured study notes for a given topic.

    Args:
        topic: The topic to explain.

    Returns:
        Formatted study notes as plain text.
    """
    client = OpenAI()
    prompt = f"""
    Create study notes for the topic: {topic}

    Structure the response using these sections:
    1. Definition
    2. Why it matters
    3. Python's role
    4. One real-world example
    5. Three key takeaways

    Keep the explanation beginner-friendly.
    """
    response = client.responses.create(
        model="gpt-5.4-mini",
        input=[
            {
                "role": "system",
                "content": [
                    {
                        "type": "input_text",
                        "text": "You are an educational assistant for beginner Python developers learning GenAI.",
                    }
                ],
            },
            {
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": prompt,
                    }
                ],
            },
        ],
    )
    return response.output_text

def main() -> None:
    """
    Generate and print study notes for a sample topic.
    """
    topic = "Retrieval-augmented generation"
    notes = generate_study_notes(topic)
    print(f"Study Notes: {topic}\n")
    print(notes)

if __name__ == "__main__":
    main()
Example Output
Study Notes: Retrieval-augmented generation
1. Definition
Retrieval-augmented generation (RAG) is a pattern where an AI system first retrieves relevant information from a data source and then uses that information to generate a response.
2. Why it matters
It helps models produce more relevant and grounded answers, especially when working with private or changing information.
3. Python's role
Python is commonly used to load documents, process text, query search or vector systems, and assemble the final prompt sent to the model.
4. One real-world example
A company support bot retrieves help center articles before answering a customer question.
5. Three key takeaways
- RAG combines retrieval with generation.
- It improves relevance and grounding.
- Python often orchestrates the entire workflow.
Suggested Learner Tasks
- Change the topic to “prompt engineering” or “embeddings”
- Ask for a table instead of numbered sections
- Add a section called “Common mistakes”
9. Hands-On Exercise 3: Extract Structured Information from Text
Objective
Demonstrate how Python can turn unstructured text into structured information for downstream use.
Why This Matters
Many GenAI apps do more than chat. They extract:
- tasks
- names
- priorities
- categories
- summaries
Python is ideal for sending text to a model and then processing the result.
Code
"""
exercise_3_extract_tasks.py
Extract structured task information from meeting notes using the OpenAI Responses API.
The model is instructed to return JSON-like content that is easy to inspect.
"""
from openai import OpenAI
def extract_tasks(meeting_notes: str) -> str:
    """
    Extract action items from meeting notes.

    Args:
        meeting_notes: Raw meeting notes as a string.

    Returns:
        A text response containing structured task information.
    """
    client = OpenAI()
    prompt = f"""
    Extract action items from the following meeting notes.

    Return the result as JSON with this structure:
    {{
        "tasks": [
            {{
                "task": "string",
                "owner": "string or null",
                "priority": "low|medium|high"
            }}
        ]
    }}

    If no owner is mentioned, use null.
    If priority is unclear, use "medium".

    Meeting notes:
    {meeting_notes}
    """
    response = client.responses.create(
        model="gpt-5.4-mini",
        input=[
            {
                "role": "system",
                "content": [
                    {
                        "type": "input_text",
                        "text": (
                            "You extract structured project information from text. "
                            "Return only the requested JSON."
                        ),
                    }
                ],
            },
            {
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": prompt,
                    }
                ],
            },
        ],
    )
    return response.output_text

def main() -> None:
    """
    Run a sample extraction workflow.
    """
    notes = """
    Team sync summary:
    - Priya will update the onboarding guide by Friday.
    - Mark should investigate the login timeout bug. This is high priority.
    - Create a shortlist of vector database options for next week's review.
    """
    extracted = extract_tasks(notes)
    print("Meeting Notes:")
    print(notes)
    print("\nExtracted Tasks:")
    print(extracted)

if __name__ == "__main__":
    main()
Example Output
{
"tasks": [
{
"task": "Update the onboarding guide",
"owner": "Priya",
"priority": "medium"
},
{
"task": "Investigate the login timeout bug",
"owner": "Mark",
"priority": "high"
},
{
"task": "Create a shortlist of vector database options for next week's review",
"owner": null,
"priority": "medium"
}
]
}
Suggested Learner Tasks
- Add due date extraction
- Add task categories
- Parse the returned JSON using Python’s json module
- Validate that every task contains task, owner, and priority
10. Python Best Practices for GenAI Scripts
10.1 Wrap Logic in Functions
Functions improve:
- readability
- testing
- reuse
- maintainability
10.2 Add Docstrings and Comments
GenAI code often includes prompt logic that benefits from explanation.
10.3 Separate Configuration
Keep model names, prompts, and app settings in configurable variables.
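A small sketch of centralizing settings in one module; the environment-variable names and default values below are illustrative assumptions:

```python
import os

# config.py-style module: one place to change model and generation settings.
MODEL_NAME = os.getenv("GENAI_MODEL", "gpt-5.4-mini")
MAX_RETRIES = int(os.getenv("GENAI_MAX_RETRIES", "3"))
SYSTEM_PROMPT = "You are a concise technical tutor for Python developers."

print(MODEL_NAME, MAX_RETRIES)
```

Reading defaults from the environment means the same code can run against a different model in staging or production without edits.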
10.4 Handle Errors
In production code, handle:
- missing API keys
- network failures
- malformed outputs
- rate limits
- invalid user input
Example with Basic Error Handling
"""
basic_error_handling.py
A small example showing safer use of the OpenAI Responses API.
"""
import os
from openai import OpenAI
def generate_reply(user_text: str) -> str:
    """
    Generate a response for the given user text.

    Args:
        user_text: Input text from the user.

    Returns:
        The model output text.

    Raises:
        ValueError: If the API key is missing or input is empty.
    """
    if not user_text.strip():
        raise ValueError("Input text must not be empty.")
    if not os.getenv("OPENAI_API_KEY"):
        raise ValueError("OPENAI_API_KEY is not set.")

    client = OpenAI()
    response = client.responses.create(
        model="gpt-5.4-mini",
        input=f"Reply briefly to this message: {user_text}"
    )
    return response.output_text

def main() -> None:
    """
    Demonstrate error-aware generation.
    """
    try:
        result = generate_reply("Explain Python's value in one sentence.")
        print(result)
    except Exception as exc:
        print(f"Error: {exc}")

if __name__ == "__main__":
    main()
Example Output
Python is valuable because it makes it easy to build, connect, and experiment with GenAI applications quickly.
11. Mini Discussion: Python in Agentic Development
As applications become more agentic, Python’s role expands beyond simple prompt calls.
Python often manages:
- multi-step workflows
- state and memory
- tool invocation
- retries and fallback logic
- structured parsing
- evaluation and logging
Example Agentic Pattern
User asks a question
|
v
Python app decides:
- answer directly?
- search docs?
- call a calculator?
- ask a follow-up?
|
v
Python orchestrates API calls and tool results
|
v
Final response returned to user
Even when the model performs reasoning, Python remains the glue that coordinates the full application.
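The decision step in the diagram above can be sketched as a simple router. The keyword rules here are a deliberately crude stand-in for the model-driven routing a real agent would use:

```python
def route(question: str) -> str:
    """Decide which tool (if any) should handle the question."""
    q = question.lower()
    if any(op in q for op in ("+", "-", "*", "/", "calculate")):
        return "calculator"
    if "docs" in q or "documentation" in q:
        return "doc_search"
    return "answer_directly"

def handle(question: str) -> str:
    """Orchestrate: route, dispatch to the chosen tool, return the response."""
    tool = route(question)
    # In a real agent, each branch would call a tool or the model here.
    return f"[{tool}] handling: {question}"

print(handle("What is 2 + 2?"))
print(handle("Where are the API docs?"))
```

Even with an LLM making the routing decision, the dispatch, retries, and result assembly still live in Python code shaped like this.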
12. Recap
In this session, you learned that Python is central to GenAI development because it offers:
- simplicity and productivity
- strong AI and backend ecosystems
- flexibility for experimentation and integration
- a natural role as the orchestration layer in LLM applications
You also practiced using Python with the OpenAI Responses API to:
- generate answers
- create study notes
- extract structured information
These workflows form the foundation for more advanced GenAI and agentic systems.
13. Quick Knowledge Check
- Why is Python especially suitable for GenAI prototyping?
- What are three common responsibilities Python has in an LLM application?
- Why should API keys be stored in environment variables?
- What is the purpose of validating model outputs?
- How does Python support agentic workflows beyond basic text generation?
14. Useful Resources
- OpenAI Responses API migration guide: https://developers.openai.com/api/docs/guides/migrate-to-responses
- OpenAI API docs: https://platform.openai.com/docs
- OpenAI Python SDK: https://github.com/openai/openai-python
- Python official documentation: https://docs.python.org/3/
- FastAPI documentation: https://fastapi.tiangolo.com/
- Jupyter documentation: https://jupyter.org/
- PyTorch documentation: https://pytorch.org/docs/stable/index.html
15. Suggested Homework
Task 1
Modify Exercise 1 so the assistant accepts user input from the terminal using input().
Task 2
Extend Exercise 2 to save the generated study notes into a text file.
Task 3
Extend Exercise 3 to:
- parse the returned JSON
- pretty-print each task
- validate that priority is one of low, medium, or high
Task 4
Create a small Python module with:
- one file for prompts
- one file for API calls
- one file for running the app
This will help reinforce good project structure for GenAI development.