Session 2: Calling LLM APIs from Python
Synopsis
Introduces the mechanics of sending prompts and receiving responses using Python client libraries and HTTP APIs. Learners understand request structure, authentication, response parsing, and simple application loops.
Session Content
Session 2: Calling LLM APIs from Python
Session Overview
Duration: ~45 minutes
Audience: Python developers with basic programming knowledge, beginning their GenAI journey
Learning Objectives
By the end of this session, learners will be able to:
- Understand how Python applications call LLM APIs
- Set up the OpenAI Python SDK and authenticate securely
- Send prompts using the OpenAI Responses API
- Control model behavior with key parameters
- Parse and use model outputs in Python programs
- Build simple reusable helper functions for LLM calls
- Handle common API errors and debugging scenarios
1. Why Call LLM APIs from Python?
Python is one of the most common languages for building AI-enabled applications because it is:
- Easy to read and write
- Rich in data tooling and web frameworks
- Well-supported by AI SDKs and libraries
When calling an LLM API from Python, your application typically does the following (a minimal sketch of this flow appears after the list):
- Collects input from a user, file, or system
- Sends a request to an LLM endpoint
- Receives generated output
- Uses that output in code:
- display it in an app
- store it in a database
- transform data
- drive decisions in workflows
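A minimal sketch of that flow, using the SDK and model introduced later in this session (environment setup is covered in section 3):

```python
from openai import OpenAI

client = OpenAI()

# 1. Collect input from a user, file, or system (here: the user).
user_question = input("What would you like explained? ").strip()

# 2. Send a request to an LLM endpoint.
response = client.responses.create(
    model="gpt-5.4-mini",
    input=f"Explain this for a beginner: {user_question}"
)

# 3. Receive the generated output.
answer = response.output_text

# 4. Use that output in code (here: display it; it could also be stored or transformed).
print(answer)
```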
Common use cases
- Text summarization
- Draft generation
- Classification
- Information extraction
- Chat assistants
- Code explanation
- Workflow automation
2. Core Concepts Before Writing Code
2.1 API Calls in Simple Terms
An API call is a request your Python program makes to a remote service. In this session, that service is an LLM.
Your program sends:
- a model name
- an input
- optional instructions or parameters
The API returns:
- generated text
- sometimes structured output
- metadata about the response
2.2 The OpenAI Responses API
The Responses API is the modern interface for generating model outputs. It provides a flexible way to:
- send prompts
- handle text generation
- support future multimodal and tool-based workflows
- build applications in a consistent way
2.3 Secure Authentication
Never hardcode API keys directly in source code.
Use environment variables instead:
- safer for local development
- easier to deploy
- reduces accidental leaks in version control
3. Environment Setup
3.1 Install Python Package
pip install openai
3.2 Set the API Key
macOS/Linux
export OPENAI_API_KEY="your_api_key_here"
Windows PowerShell
setx OPENAI_API_KEY "your_api_key_here"
Note that `setx` only affects new sessions, so restart your terminal or IDE after setting the key on Windows. On macOS/Linux, `export` applies only to the current shell session; add it to your shell profile if you want it to persist.
3.3 Verify Your Setup
Create a file called check_env.py:
```python
import os

api_key = os.getenv("OPENAI_API_KEY")

if api_key:
    print("OPENAI_API_KEY is set.")
    print(f"Key starts with: {api_key[:8]}...")
else:
    print("OPENAI_API_KEY is NOT set.")
```
Example output
OPENAI_API_KEY is set.
Key starts with: sk-proj-...
4. Your First LLM API Call in Python
Create a file named first_response.py.
```python
from openai import OpenAI

# Create a client instance.
# The SDK automatically reads OPENAI_API_KEY from the environment.
client = OpenAI()

# Send a simple request using the Responses API.
response = client.responses.create(
    model="gpt-5.4-mini",
    input="Explain recursion in one short paragraph for a beginner programmer."
)

# Print the generated text output.
print(response.output_text)
```
What this does
- `OpenAI()` creates a client
- `client.responses.create(...)` sends a request
- `model="gpt-5.4-mini"` selects the model
- `input=...` is the prompt
- `response.output_text` gives the text result
Example output
Recursion is a programming technique where a function solves a problem by calling itself on a smaller version of the same problem. It keeps doing this until it reaches a simple stopping point, called a base case, which prevents it from running forever. Recursion is useful for problems that can be broken into repeating smaller steps, such as navigating folders or working with tree structures.
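Beyond `output_text`, the response object also exposes the metadata mentioned in section 2.1. A minimal sketch; the attribute names below (`model`, `usage`) are based on recent versions of the openai Python SDK and may differ in other versions:

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5.4-mini",
    input="Explain recursion in one short paragraph for a beginner programmer."
)

# The generated text, as before.
print(response.output_text)

# Response metadata (attribute names assume a recent openai SDK version).
print(response.model)   # the model that served the request
print(response.usage)   # token usage for the request and response
```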
5. Structuring Prompts Effectively
Even a basic API call benefits from clear prompting.
Weak prompt
Tell me about loops
Better prompt
Explain Python loops to a beginner. Cover for-loops and while-loops, and include one short example of each.
Prompting tips
- Be specific about the task
- Mention the audience
- Ask for format when useful
- Keep prompts clear and direct
Example
```python
from openai import OpenAI

client = OpenAI()

prompt = """
Explain Python loops to a beginner.
Cover:
1. What a for-loop does
2. What a while-loop does
3. One short code example of each
Keep the explanation under 200 words.
"""

response = client.responses.create(
    model="gpt-5.4-mini",
    input=prompt
)

print(response.output_text)
```
6. Adding Instructions for More Controlled Output
In many applications, you want responses in a predictable style.
Create instructed_response.py:
```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5.4-mini",
    instructions="You are a helpful Python tutor. Respond clearly, concisely, and use beginner-friendly language.",
    input="What is the difference between a list and a tuple in Python?"
)

print(response.output_text)
```
Why use instructions?
It helps define:
- role or persona
- tone
- level of detail
- output expectations
Example output
A list and a tuple are both used to store collections of items in Python, but the main difference is that lists are mutable and tuples are immutable. This means you can change a list after creating it, such as adding or removing items, while a tuple cannot be changed once it is created. Lists use square brackets `[]`, and tuples use parentheses `()`. Use a list when your data may change, and a tuple when you want fixed data.
7. Reusable Helper Function for API Calls
As your code grows, avoid repeating the same client logic everywhere.
Create llm_helper.py:
```python
from openai import OpenAI

# Create one shared client for the module.
client = OpenAI()

def ask_llm(prompt: str, instructions: str | None = None) -> str:
    """
    Send a prompt to the OpenAI Responses API and return plain text output.

    Args:
        prompt: The user prompt or task description.
        instructions: Optional behavior/style instructions.

    Returns:
        Generated text from the model.
    """
    response = client.responses.create(
        model="gpt-5.4-mini",
        instructions=instructions,
        input=prompt
    )
    return response.output_text

if __name__ == "__main__":
    answer = ask_llm(
        prompt="Give me three beginner-friendly tips for debugging Python code.",
        instructions="You are a concise programming mentor."
    )
    print(answer)
```
Example output
1. Read the error message carefully, because it often tells you the exact line and type of problem.
2. Use print statements or a debugger to inspect variable values and understand what your code is actually doing.
3. Test small parts of your program one at a time so you can isolate where the bug starts.
8. Handling Errors and Building Robust Code
Real applications must handle failures gracefully.
Common issues include:
- missing API key
- network problems
- invalid parameters
- rate limiting
- temporary service errors
Create safe_call.py:
```python
import os

from openai import OpenAI

def main() -> None:
    # Validate required environment configuration early.
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Please configure it before running this script."
        )

    client = OpenAI(api_key=api_key)

    try:
        response = client.responses.create(
            model="gpt-5.4-mini",
            input="Give me a two-sentence explanation of exception handling in Python."
        )
        print("Model response:\n")
        print(response.output_text)
    except Exception as exc:
        # In production, replace print with proper logging.
        print(f"An error occurred while calling the API: {exc}")

if __name__ == "__main__":
    main()
```
Best practices shown here
- validate config early
- isolate execution in `main()`
- catch exceptions around external calls
- display useful error messages
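Transient problems from the common-issues list above, such as rate limiting and temporary service errors, are often worth retrying. Below is a minimal retry-with-backoff sketch; it assumes the `RateLimitError` and `APIConnectionError` exception classes exported by recent versions of the openai SDK, so adjust the imports if your version differs:

```python
import time

from openai import OpenAI, APIConnectionError, RateLimitError

client = OpenAI()

def ask_with_retry(prompt: str, max_attempts: int = 3) -> str:
    """
    Call the Responses API, retrying transient failures with exponential backoff.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            response = client.responses.create(
                model="gpt-5.4-mini",
                input=prompt
            )
            return response.output_text
        except (RateLimitError, APIConnectionError) as exc:
            # Give up after the final attempt; otherwise wait and try again.
            if attempt == max_attempts:
                raise
            wait_seconds = 2 ** attempt
            print(f"Transient error ({type(exc).__name__}), retrying in {wait_seconds}s...")
            time.sleep(wait_seconds)
    return ""

if __name__ == "__main__":
    print(ask_with_retry("Give me one tip for writing readable Python code."))
```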
9. Working with Program Inputs
Most real applications build prompts dynamically.
Create dynamic_prompt.py:
```python
from openai import OpenAI

client = OpenAI()

def summarize_topic(topic: str, audience: str) -> str:
    """
    Generate a short explanation of a topic for a target audience.
    """
    prompt = f"Explain '{topic}' for {audience} in 4 bullet points."

    response = client.responses.create(
        model="gpt-5.4-mini",
        instructions="You are a clear educational assistant.",
        input=prompt
    )
    return response.output_text

if __name__ == "__main__":
    topic = "APIs"
    audience = "beginner Python developers"

    result = summarize_topic(topic, audience)
    print(result)
```
Example output
- An API is a way for one program to communicate with another.
- It lets your Python code request data or services without knowing the internal details.
- When you call an LLM API, your code sends text and receives generated output.
- APIs are useful because they let developers reuse powerful services instead of building everything from scratch.
10. Hands-On Exercise 1: First Prompt App
Goal
Write a Python script that asks the model to explain a programming concept.
Task
Create a script called exercise_1_prompt_app.py that:
- uses `gpt-5.4-mini`
- uses the Responses API
- asks the user to enter a programming topic
- asks the model to explain it in simple language
- prints the response
Starter Solution
```python
from openai import OpenAI

def main() -> None:
    # Create the client once.
    client = OpenAI()

    # Get a topic from the user.
    topic = input("Enter a programming topic: ").strip()
    if not topic:
        print("Please enter a non-empty topic.")
        return

    # Build a focused prompt.
    prompt = f"Explain the programming topic '{topic}' in simple language for a beginner."

    # Call the Responses API.
    response = client.responses.create(
        model="gpt-5.4-mini",
        instructions="You are a patient programming tutor.",
        input=prompt
    )

    # Display the generated explanation.
    print("\nExplanation:\n")
    print(response.output_text)

if __name__ == "__main__":
    main()
```
Example run
Enter a programming topic: variables
Explanation:
Variables are names used to store data in a program. They let you save values like numbers, text, or lists so you can use them later in your code. For example, if you write `age = 25`, the variable `age` stores the number 25. Variables make programs easier to read and update because you can work with meaningful names instead of repeating raw values.
Suggested learner extensions
- ask for explanation length
- ask for bullet points instead of paragraph format
- save the result to a text file (one approach is sketched below)
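One possible way to implement the file-saving extension, as a minimal sketch; it assumes the generated text from the starter solution is available in a variable (named `explanation` here for illustration):

```python
from pathlib import Path

# Assume this holds the text from response.output_text in the starter solution.
explanation = "Variables are names used to store data in a program."

# Write the explanation to a text file next to the script; adjust the path as needed.
output_path = Path("explanation.txt")
output_path.write_text(explanation, encoding="utf-8")
print(f"Saved explanation to {output_path.resolve()}")
```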
11. Hands-On Exercise 2: Build a Reusable Summarizer Function
Goal
Practice wrapping LLM calls inside Python functions.
Task
Create a program that summarizes any block of text in 3 bullet points.
Solution
```python
from openai import OpenAI

client = OpenAI()

def summarize_text(text: str) -> str:
    """
    Summarize the given text into 3 concise bullet points.

    Args:
        text: The source text to summarize.

    Returns:
        The generated summary as plain text.
    """
    response = client.responses.create(
        model="gpt-5.4-mini",
        instructions=(
            "You are a concise summarization assistant. "
            "Return exactly 3 bullet points using simple language."
        ),
        input=f"Summarize the following text:\n\n{text}"
    )
    return response.output_text

def main() -> None:
    sample_text = (
        "Python is a high-level programming language known for its readability "
        "and large ecosystem. It is widely used in web development, automation, "
        "data analysis, artificial intelligence, and education. Its simple syntax "
        "makes it especially popular with beginners, while its powerful libraries "
        "make it useful for professionals."
    )

    summary = summarize_text(sample_text)
    print("Summary:\n")
    print(summary)

if __name__ == "__main__":
    main()
```
Example output
- Python is a readable and beginner-friendly programming language.
- It is used in many areas, including web development, automation, data analysis, and AI.
- Python is popular because it combines simple syntax with powerful libraries.
Discussion points
- Why wrap API logic in a function?
- How would you reuse this in a web app or CLI tool? (a CLI sketch follows this list)
- What prompt changes improve summarization quality?
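As one answer to the CLI question above, here is a minimal sketch of a command-line wrapper around `summarize_text`. It assumes the function is saved in a module named `summarizer.py` (a hypothetical file name) and that the text to summarize is read from a file path passed on the command line:

```python
import sys
from pathlib import Path

from summarizer import summarize_text  # hypothetical module containing summarize_text

def main() -> None:
    if len(sys.argv) != 2:
        print("Usage: python summarize_file.py <path-to-text-file>")
        return

    source = Path(sys.argv[1])
    if not source.exists():
        print(f"File not found: {source}")
        return

    summary = summarize_text(source.read_text(encoding="utf-8"))
    print("Summary:\n")
    print(summary)

if __name__ == "__main__":
    main()
```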
12. Hands-On Exercise 3: Generate Structured Study Notes
Goal
Use prompting to get predictable formatting.
Task
Create a script that turns a concept into study notes with these sections:
- Definition
- Why it matters
- Example
Solution
```python
from openai import OpenAI

client = OpenAI()

def create_study_notes(concept: str) -> str:
    """
    Generate beginner-friendly study notes for a concept.
    """
    response = client.responses.create(
        model="gpt-5.4-mini",
        instructions=(
            "You are an educational assistant for beginner Python developers. "
            "Use the exact headings: Definition, Why it matters, Example."
        ),
        input=(
            f"Create short study notes for the concept '{concept}'. "
            "Use clear and simple language."
        )
    )
    return response.output_text

if __name__ == "__main__":
    notes = create_study_notes("function parameters")
    print(notes)
```
Example output
Definition
Function parameters are variables listed in a function definition that receive input values when the function is called.
Why it matters
They let you write flexible functions that can work with different data instead of hardcoded values.
Example
```python
def greet(name):
    print(f"Hello, {name}!")

greet("Ava")
```
13. Mini Challenge: Build a Command-Line Learning Assistant
Goal
Combine everything learned so far.
Requirements
Build a CLI app that:
- asks the user for a topic
- asks whether they want:
- explanation
- summary
- study notes
- calls the LLM appropriately
- prints the result
- handles empty input safely
Reference Implementation
```python
from openai import OpenAI

client = OpenAI()

def get_response(topic: str, mode: str) -> str:
    """
    Return an LLM-generated response based on the selected mode.
    """
    mode = mode.lower().strip()

    if mode == "explanation":
        instructions = "You are a beginner-friendly Python tutor."
        prompt = f"Explain '{topic}' in one short paragraph for a beginner."
    elif mode == "summary":
        instructions = "You are a concise educational assistant. Use 3 bullet points."
        prompt = f"Summarize the concept '{topic}' for a beginner programmer."
    elif mode == "study notes":
        instructions = (
            "You are an educational assistant. "
            "Use the exact headings: Definition, Why it matters, Example."
        )
        prompt = f"Create study notes for '{topic}' in simple language."
    else:
        return "Invalid mode selected."

    response = client.responses.create(
        model="gpt-5.4-mini",
        instructions=instructions,
        input=prompt
    )
    return response.output_text

def main() -> None:
    print("Welcome to the CLI Learning Assistant\n")

    topic = input("Enter a topic: ").strip()
    if not topic:
        print("Topic cannot be empty.")
        return

    print("\nChoose a mode:")
    print("1. explanation")
    print("2. summary")
    print("3. study notes")

    choice_map = {
        "1": "explanation",
        "2": "summary",
        "3": "study notes"
    }

    choice = input("\nEnter your choice (1/2/3): ").strip()
    mode = choice_map.get(choice)

    if not mode:
        print("Invalid choice.")
        return

    result = get_response(topic, mode)
    print("\nResult:\n")
    print(result)

if __name__ == "__main__":
    main()
```
14. Common Mistakes to Avoid
Hardcoding API keys
Bad:
client = OpenAI(api_key="my-secret-key")
Better:
client = OpenAI()
Sending vague prompts
Bad:
Explain stuff
Better:
Explain Python dictionaries for beginners with one example.
Repeating logic everywhere
Bad:
- creating client code in many files with copy-paste
- no helper functions
- inconsistent prompts
Better:
- centralize client usage
- build reusable functions
- standardize instructions
Ignoring errors
Bad:
- assuming requests always succeed
Better:
- validate environment
- catch exceptions
- print or log useful messages
15. Recap
In this session, you learned how to:
- install and configure the OpenAI Python SDK
- authenticate using environment variables
- call the Responses API
- use `gpt-5.4-mini` from Python
- improve outputs with better prompts and instructions
- wrap calls in reusable helper functions
- handle basic runtime issues safely
These skills are the foundation for everything that follows in GenAI application development.
16. Suggested Practice After Class
- Build a script that turns error messages into beginner-friendly explanations
- Create a function that generates quiz questions from a topic
- Add file input so your app can summarize `.txt` files
- Experiment with different prompt styles and compare output quality
Useful Resources
- OpenAI Responses API migration guide: https://developers.openai.com/api/docs/guides/migrate-to-responses
- OpenAI API docs: https://platform.openai.com/docs
- OpenAI Python SDK: https://github.com/openai/openai-python
- Python `os` module docs: https://docs.python.org/3/library/os.html
- Python virtual environments: https://docs.python.org/3/tutorial/venv.html
End-of-Session Checkpoint Questions
- What is the role of the `OpenAI()` client in Python?
- Why is `response.output_text` useful?
- Why should API keys be stored in environment variables?
- How do `instructions` differ from `input`?
- Why is it helpful to wrap LLM calls in functions?
Optional Instructor Notes
Suggested pacing
- 0–10 min: Concepts and setup
- 10–20 min: First API call and prompting basics
- 20–30 min: Reusable functions and error handling
- 30–40 min: Hands-on exercises
- 40–45 min: Challenge and recap
Suggested live demo order
1. check_env.py
2. first_response.py
3. instructed_response.py
4. llm_helper.py
5. exercise_1_prompt_app.py
Expected learner outcomes
Learners should leave with the confidence to:
- install the SDK
- make a successful LLM API call
- improve prompts for better results
- start building small Python tools powered by LLMs