Appendix A — Appendix
A.1 Create and Test Basic OpenAI SDK Agents
In this section, we look at creating simple starter AI agents using the OpenAI Agent SDK.
A.1.1 Create a Simple Agent Script
Let’s create an agent_sync.py file in a playground directory with the “hello world” example from the OpenAI Agent SDK documentation:
# agent_sync.py
from agents import Agent, Runner
question = "Write a haiku about recursion in programming."
agent = Agent(name="Assistant", instructions="You are a helpful assistant")
# Run the agent and get the result
result = Runner.run_sync(agent, question)
print(result.final_output)

Run this script from the terminal:
python playground/agent_sync.py

You should see a haiku output similar to:
Code within the code,
Functions calling themselves,
Infinite loop's dance.
You can also use uv run, which ensures the script runs in your virtual environment:
uv run playground/agent_sync.py

A.1.2 Different Ways to Run Agents
The OpenAI Agent SDK provides three main ways to run agents through the Runner class:
Synchronous Running with run_sync():

result = Runner.run_sync(agent, "What's the weather today?")
print(result.final_output)

This is the simplest approach for scripts and is what we used above. It blocks execution until the agent completes its response.
Asynchronous Running with run():

import asyncio

async def main():
    result = await Runner.run(agent, "What's the weather today?")
    print(result.final_output)

asyncio.run(main())

This is useful for more complex applications where you want to do other tasks while waiting for the agent to respond.
Streaming Results with run_streamed():

import asyncio
from openai.types.responses import ResponseTextDeltaEvent

async def main():
    # run_streamed() returns a streaming result; it is not awaited itself
    result = Runner.run_streamed(agent, "Tell me a long story")
    # This prints each piece of text as it arrives
    async for event in result.stream_events():
        if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="", flush=True)

asyncio.run(main())

This allows you to see the agent’s response as it’s being generated, piece by piece, rather than waiting for the complete response.
A.1.3 The Agent Loop: How Agents Process Requests
When you run an agent with any of these methods, it follows this loop:
- The LLM processes your input
- The LLM produces output, which can be:
- A final response (loop ends)
- A handoff to another agent (loop continues with new agent)
- Tool calls (tools are executed, results are added to context, loop continues)
- This continues until a final output is produced or the maximum number of turns is reached (a simplified sketch of this loop follows below)
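To make the loop concrete, here is a minimal, self-contained sketch of that flow in plain Python. It is an illustration only, not the SDK's internal code; FakeOutput, call_llm, and execute_tool are stand-ins invented for this example.

# A simplified sketch of the agent loop (illustration only, not the SDK's internal code).
# FakeOutput, call_llm, and execute_tool are invented stand-ins for this example.
from dataclasses import dataclass, field

@dataclass
class FakeOutput:
    text: str | None = None              # final response, if any
    tool_calls: list = field(default_factory=list)
    handoff: object = None               # another "agent" to hand off to, if any

def call_llm(agent, context):
    # Pretend the model answers immediately; a real run may request tools or a handoff first.
    return FakeOutput(text=f"(final answer based on {len(context)} context item(s))")

def execute_tool(agent, call):
    return f"result of {call}"

def run_agent_loop(agent, user_input, max_turns=10):
    context = [user_input]
    for _ in range(max_turns):
        output = call_llm(agent, context)          # the LLM processes the input
        if output.tool_calls:                      # tool calls: run them, add results, loop again
            context.extend(execute_tool(agent, c) for c in output.tool_calls)
            continue
        if output.handoff is not None:             # handoff: continue the loop with the new agent
            agent = output.handoff
            continue
        return output.text                         # final response: the loop ends
    raise RuntimeError("max turns reached")        # safety limit

print(run_agent_loop("Assistant", "Write a haiku about recursion."))

Running it prints the fake final answer immediately, but the structure mirrors the three outcomes listed above: tool calls, handoffs, and a final response, with max_turns as the safety limit.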
A.1.4 Creating an Async Agent Script
For more flexibility, let’s create a script that uses the async approach. Create a file called agent_async.py under the playground directory:
# agent_async.py
import asyncio
from agents import Agent, Runner
question = "Write a haiku about recursion in programming."
async def main():
    agent = Agent(name="Assistant", instructions="You are a helpful assistant")
    result = await Runner.run(agent, question)
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())

Run this script from the terminal:

python playground/agent_async.py

A.1.5 Creating a Streaming Agent Script
For longer responses, it can be helpful to see the results as they’re generated. Let’s create a script that uses the streaming approach:
# agent_streaming.py
import asyncio
from agents import Agent, Runner
from openai.types.responses import ResponseTextDeltaEvent

async def main():
    agent = Agent(name="Assistant", instructions="You are a helpful assistant")
    question = "Write a short poem about artificial intelligence."
    print(f"Question: {question}\n")
    print("Response:")
    # Get a streaming result (run_streamed is not awaited)
    result = Runner.run_streamed(agent, question)
    # Process each text delta as it arrives
    async for event in result.stream_events():
        if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="", flush=True)
    print("\n\nDone!")

if __name__ == "__main__":
    asyncio.run(main())

Run this script to see the response appear gradually:

python playground/agent_streaming.py

A.1.6 Best Practices for Using the Agent SDK
Based on our exploration:
- For simple scripts and quick tests:
  - Use Runner.run_sync() for its simplicity
- For more complex applications:
  - Use asyncio.run(main()) with await Runner.run() for better performance and flexibility
  - This approach lets you run multiple agents or other tasks concurrently
- For better user experience:
  - Use Runner.run_streamed() when you want users to see responses as they’re generated
  - This is especially valuable for longer responses where waiting for the complete answer might feel slow
Now that we understand how to create and run a basic agent, we’re ready to move on to building our financial assistant with custom tools.
For those interested in running agents in interactive environments like Jupyter notebooks or VS Code cells, please refer to the next section (A.2) for additional guidance.
A.2 Run Agents in Interactive Python Development
One of the best ways to learn programming concepts is through interactive experimentation. Being able to run small snippets of code and see the results immediately helps solidify your understanding. VSCode and compatible IDEs like Windsurf and Cursor provide convenient ways to do this.
But when running agents in interactive environments like Jupyter notebooks or VS Code’s interactive Python, you may encounter specific challenges with asynchronous code. This appendix explains how to address these issues.
A.2.1 Set up Interactive Python Development in IDE
You can follow the video tutorial that walks you through this process, which essentially does the following two steps:
First, we need the Python extension, which provides essential features for Python development. In VSCode:
- Press Ctrl+Shift+X (or Cmd+Shift+X on Mac) to open the Extensions panel
- Search for “Python”
- Click “Install” on the official Python extension by Microsoft
Second, we’ll add Jupyter support to enable interactive code cells. In VSCode:
- In the Extensions panel, search for “Jupyter”
- Install the “Jupyter” extension
- Restart your IDE when prompted
Once configured, you can run code interactively by opening .ipynb notebook files.
The setup will also allow you to run code cells directly in a Python .py file by creating code cells with # %%. Check out the Python Interactive window documentation for more details.
If this is the first time you run code interactively, you might be prompted to “Select a kernel”; click it to find and select the appropriate Python environment. Once you select the Python environment, it may further prompt you to install the ipykernel package with an “Install” button; in that case, simply click the button to install it. If that fails, you can install it directly from the terminal: make sure you’re in the Python environment (kernel) you selected for the interactive code, and then install the package with your package management command, such as pip install ipykernel or uv add ipykernel. We will see exactly how to do these steps during our tutorial.
A.2.2 The Event Loop Challenge for Running Agents in Interactive Python
Interactive environments like Jupyter notebooks already run an event loop to keep the interface responsive. This creates a conflict when using Runner.run_sync(), which tries to create its own event loop.
If you try to run this code in an interactive cell:
# agent_int_sync.py
# %%
from agents import Agent, Runner
question = "Write a haiku about recursion in programming."
agent = Agent(name="Assistant", instructions="You are a helpful assistant")
result = Runner.run_sync(agent, question) # This will fail in interactive mode
print(result.final_output)

You’ll get an error: RuntimeError: This event loop is already running
A.2.3 Solution: Use Direct Async Calls
In interactive environments, you need to use the async version directly:
# agent_int_async.py
# %%
from agents import Agent, Runner
question = "Write a haiku about recursion in programming."
async def main():
    agent = Agent(name="Assistant", instructions="You are a helpful assistant")
    result = await Runner.run(agent, question)
    print(result.final_output)

# %%
await main() # This works in interactive cells

Run both cells in sequence to see the result.
A.2.4 Understanding Event Loops in Python
To understand this error, we need to know a bit about how asynchronous programming works in Python:
An event loop is like a manager that keeps track of all the tasks that need to be done and decides which task to work on next. It coordinates asynchronous operations.
- Keeps track of all pending tasks
- Decides which task to run next
- Manages waiting for I/O operations to complete
When you use async and await in Python, you’re telling the event loop: “This task might take a while. While you’re waiting for it to finish, you can work on other tasks.”

Interactive environments like Jupyter notebooks and VS Code cells already start their own event loop when they run your code. This is what allows them to run code and still keep the interface responsive. It is also why we can’t create a new event loop with run_sync() but can use the existing one with await.

When we call Runner.run_sync(), it tries to create its own event loop using asyncio.run(). But Python doesn’t allow two event loops to run at the same time in the same thread, so we get the error.
Think of it like this: you can’t have two different managers trying to assign tasks to the same worker at the same time - they’d conflict with each other.
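You can see the conflict for yourself with a few lines of plain asyncio, no agents involved. This is a minimal sketch; the exact wording of the RuntimeError varies between Python versions.

# Minimal demonstration of the event loop conflict (plain asyncio, no SDK involved).
import asyncio

async def inner():
    return "hello"

async def outer():
    # We are already inside the event loop started by asyncio.run(outer()) below.
    coro = inner()
    try:
        asyncio.run(coro)  # tries to start a second loop -> RuntimeError
    except RuntimeError as e:
        print(f"RuntimeError: {e}")
        coro.close()       # clean up the coroutine that was never started
    # Inside a running loop, the correct pattern is simply to await:
    print(await inner())

asyncio.run(outer())

This is exactly the notebook situation: the cell is already running inside a loop, so await works but starting a second loop does not.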
A.3 Reuse Tools Implementation for Agent SDK
All three versions of our agent implementations use the same tool implementations in tools_yf.py. But the OpenAI Agent SDK version requires a decorator to mark a function as a tool. How do we make sure we can reuse the same function definitions in tools_yf.py without having to copy them? We already solved this in our Agent SDK version with the @wraps decorator. This section explains why.
Let’s create a file called playground/agent_tools_reuse.py.
# %%
from agents import Agent, function_tool, FunctionTool
import json
from tools_yf import (
get_stock_price as original_get_stock_price,
get_stock_history as original_get_stock_history,
get_company_info as original_get_company_info,
get_financial_metrics as original_get_financial_metrics
)
@function_tool
def get_stock_price(ticker: str) -> str:
return original_get_stock_price(ticker)
@function_tool
def get_stock_history(ticker: str, days: int) -> str:
return original_get_stock_history(ticker, days)
@function_tool
def get_company_info(ticker: str) -> str:
return original_get_company_info(ticker)
@function_tool
def get_financial_metrics(ticker: str) -> str:
return original_get_financial_metrics(ticker)
# %%
agent = Agent(
name="Assistant",
instructions="You are a helpful assistant",
tools=[
get_stock_price,
get_stock_history,
get_company_info,
get_financial_metrics]
)
# %%
for tool in agent.tools:
if isinstance(tool, FunctionTool):
print(tool.name)
print(tool.description)
print(json.dumps(tool.params_json_schema, indent=2))
print()

However, we notice that the function no longer contains the original docstring:
get_stock_price
{
"properties": {
"ticker": {
"title": "Ticker",
"type": "string"
}
},
"required": [
"ticker"
],
"title": "get_stock_price_args",
"type": "object",
"additionalProperties": false
}

Why? The Agents SDK by default uses the docstring of the function as the tool description, but it can only see the docstring directly attached to the function being decorated. The wrapper function get_stock_price does not automatically inherit the docstring from the original function. While the SDK still creates a tool based on the wrapper’s signature (ticker: str -> str), the tool lacks the helpful descriptions the agent needs to understand how to use it.
There are several ways to solve this problem: we can copy the original docstring over to the wrapper function, or we can provide a description directly to @function_tool to override its default behavior. But if we want to reuse the docstrings from the original functions and minimize code redundancy, we can use the @wraps decorator from the functools module, which makes the original function’s metadata, including the docstring, available on the wrapper function.
Let’s create an agent_tools_reuse_wraps.py file under the playground directory.
# %%
from agents import Agent, function_tool, FunctionTool
from functools import wraps
import json
from tools_yf import (
get_stock_price as original_get_stock_price,
get_stock_history as original_get_stock_history,
get_company_info as original_get_company_info,
get_financial_metrics as original_get_financial_metrics
)
# %%
@function_tool
@wraps(original_get_stock_price)
def get_stock_price(ticker: str) -> str:
return original_get_stock_price(ticker)
@function_tool
@wraps(original_get_stock_history)
def get_stock_history(ticker: str, days: int) -> str:
return original_get_stock_history(ticker, days)
@function_tool
@wraps(original_get_company_info)
def get_company_info(ticker: str) -> str:
return original_get_company_info(ticker)
@function_tool
@wraps(original_get_financial_metrics)
def get_financial_metrics(ticker: str) -> str:
return original_get_financial_metrics(ticker)
# %%
agent = Agent(
name="Assistant",
instructions="You are a helpful assistant",
tools=[
get_stock_price,
get_stock_history,
get_company_info,
get_financial_metrics]
)
# %%
for tool in agent.tools:
if isinstance(tool, FunctionTool):
print(tool.name)
print(tool.description)
print(json.dumps(tool.params_json_schema, indent=2))
print()

Now we are able to see the docstring in the tool again, even though it comes from a wrapped function:
get_stock_price
Get the current price of a stock.
{
"properties": {
"ticker": {
"description": "The stock ticker symbol (e.g., 'AAPL')",
"title": "Ticker",
"type": "string"
}
},
"required": [
"ticker"
],
"title": "get_stock_price_args",
"type": "object",
"additionalProperties": false
}

A.4 OpenAI Agent SDK’s Context Management System
The OpenAI Agent SDK provides a context management system that serves as the backbone for state management and dependency injection in your agent applications. Let’s explore what this system is, why it’s valuable, and how to use it effectively.
A.4.1 Understanding the Context System
In the Agent SDK, “context” refers to a structured way to share data and dependencies across different components of your agent. It’s essentially a container for any information that needs to be accessible throughout the agent’s lifecycle.
The SDK distinguishes between two types of context:
- Local Context: Data and dependencies available to your code components (tools, callbacks, etc.)
- LLM Context: Information that the language model sees when generating responses
The context system solves several critical challenges in agent development:
- State Management: Maintains state across multiple interactions
- Dependency Injection: Provides a clean way to share resources and helpers
- Type Safety: Enables proper type checking and IDE autocompletion
- Separation of Concerns: Keeps your agent components decoupled and focused
The SDK’s context system operates through a few key mechanisms:
- Context Object: You create a custom class (typically a dataclass) containing any data or methods your agent needs
- Context Wrapper: The SDK wraps your context in a RunContextWrapper class, which is passed to tools and other components
- Type Generics: The Agent is generic on the context type, ensuring type safety throughout your application
- Dependency Injection: The context is passed to the Runner and then automatically injected into all components
As noted in the SDK’s own docstring (shown below in Section A.4.3), the context object itself is never sent to the LLM. Instead, there are specific patterns for making information from your context available to the language model. Let’s use a standalone example to understand context management in the SDK in detail.
A.4.2 A standalone example
We can first look at a standalone example of how context can be passed and used without using the Agent SDK.
# agent_sdk_context_standalone.py
from dataclasses import dataclass
import asyncio
@dataclass
class UserContext:
uid: str
is_pro_user: bool
def personalized_greeting(context: UserContext) -> str:
if context.is_pro_user:
return f"Welcome back, Pro user {context.uid}!"
else:
return f"Hello, free user {context.uid}! Upgrade anytime."
class Agent:
def __init__(self, instructions=None, dynamic_instructions_fn=None, tools=None):
self.instructions = instructions
self.dynamic_instructions_fn = dynamic_instructions_fn
self.tools = tools or []
async def run(self, context):
# Dynamic instructions first
if self.dynamic_instructions_fn:
self.instructions = self.dynamic_instructions_fn(context, self)
print(f"Running agent with user id: {context.uid}")
print(f"Is pro user? {context.is_pro_user}")
print(f"Instructions: {self.instructions}")
# Now use tools
for tool in self.tools:
result = tool(context)
print(f"Tool result: {result}")
class Runner:
def __init__(self, agent):
self.agent = agent
async def run(self, context):
await self.agent.run(context)
def dynamic_instructions(context: UserContext, agent: Agent) -> str:
if context.is_pro_user:
return f"Welcome, valued user {context.uid}!"
else:
return f"Hello, user {context.uid}. Upgrade to Pro for more features!"
# Create a context
context = UserContext(uid="1234", is_pro_user=True)
# Create an agent with dynamic instructions and a tool
agent = Agent(
dynamic_instructions_fn=dynamic_instructions,
tools=[personalized_greeting]
)
# Create a runner
runner = Runner(agent)
# Run everything
asyncio.run(runner.run(context))

❯ python agent_sdk_context_standalone.py
Running agent with user id: 1234
Is pro user? True
Instructions: Welcome, valued user 1234!
Tool result: Welcome back, Pro user 1234!

In summary:
| Concept | How It Works |
|---|---|
| Context | Passed from Runner to Agent at runtime |
| Dynamic Instructions | Agent can change instructions based on context |
| Tools | Agent can use context to call tools during its operation |
Here is a Visual Flow:
Runner.run(context)
|
v
Agent.run(context)
|
v
dynamic_instructions(context, agent)
|
v
Update agent.instructions
|
v
For each tool:
tool(context)
|
v
Use tool result
A.4.3 Use Context in the OpenAI Agent SDK
Let’s explore how context works in the OpenAI Agent SDK through a concrete example using the actual SDK components.
In the OpenAI Agent SDK, context is passed through a wrapper class:
# From the SDK source code
@dataclass
class RunContextWrapper(Generic[TContext]):
"""This wraps the context object that you passed to `Runner.run()`. It also contains
information about the usage of the agent run so far.
NOTE: Contexts are not passed to the LLM. They're a way to pass dependencies and data to code
you implement, like tool functions, callbacks, hooks, etc.
"""
context: TContext
"""The context object (or None), passed by you to `Runner.run()`"""
usage: Usage = field(default_factory=Usage)
"""The usage of the agent run so far. For streamed responses, the usage will be stale until the
last chunk of the stream is processed.
"""This wrapper design allows the SDK to inject your context into tools and other components while also providing additional metadata like usage statistics.
A.4.3.1 A Complete Working Example
Let’s see how this works in practice:
#agent_sdk_context.py
import asyncio
from dataclasses import dataclass
from typing import List
from agents import Agent, Runner, RunContextWrapper, function_tool
# 1. Define our context class
@dataclass
class UserContext:
uid: str
is_pro_user: bool
preferred_city: str
# 2. Create tools that access this context
@function_tool
async def get_weather(wrapper: RunContextWrapper[UserContext]) -> str:
"""Get the current weather for the user's preferred city."""
context = wrapper.context # Access the UserContext inside the wrapper
return f"The weather in {context.preferred_city} is sunny with mild winds."
@function_tool
async def get_air_quality(wrapper: RunContextWrapper[UserContext]) -> str:
"""Get air quality information for the user's preferred city."""
context = wrapper.context # Access the UserContext inside the wrapper
return f"The air quality in {context.preferred_city} is moderate today."
# 3. Define dynamic instructions that use context
def dynamic_instructions(wrapper: RunContextWrapper[UserContext], agent=None) -> str:
"""Generate personalized instructions based on user context and available tools."""
context = wrapper.context # Access the UserContext inside the wrapper
# Get the names of all available tools
tool_names = [tool.name.replace('_', ' ').title() for tool in agent.tools]
tool_list_text = ", ".join(tool_names)
# Customize based on user type
user_type = "Pro user" if context.is_pro_user else "Free user"
return (
f"You are assisting {user_type} {context.uid}.\n"
f"Available tools you can use: {tool_list_text}.\n"
f"The user's preferred city is {context.preferred_city}.\n"
f"Always be friendly and helpful."
)
# 4. Main function that creates and runs the agent
async def main():
# Create a context for a Pro user
user_context = UserContext(
uid="abc123",
is_pro_user=True,
preferred_city="New York"
)
# Define tools based on user type
tools = [get_weather, get_air_quality] if user_context.is_pro_user else [get_weather]
# Create an agent with dynamic instructions
agent = Agent[UserContext](
name="Weather Assistant",
instructions=dynamic_instructions,
tools=tools
)
# Run the agent with our context
result = await Runner.run(
starting_agent=agent,
input="What's the weather and air quality like today?",
context=user_context # Pass the context here
)
print(f"Agent response: {result.final_output}")
if __name__ == "__main__":
asyncio.run(main())

We can run it:
❯ python agent_sdk_context.py
Agent response: The weather in New York is sunny with mild winds, and the air quality is moderate. Enjoy your day! 🌞

A.4.3.2 The Context Flow in Detail
Let’s break down exactly how context flows through this system:
Creation: We create a UserContext object with user-specific information:

user_context = UserContext(uid="abc123", is_pro_user=True, preferred_city="New York")

Type Declaration: We specify that our agent works with this context type:

agent = Agent[UserContext](...)

Injection: We pass the context to Runner.run():

result = await Runner.run(agent, "What's the weather?", context=user_context)

Wrapping: Internally, the SDK wraps our context in a RunContextWrapper:

# This happens inside the SDK
wrapped_context = RunContextWrapper(context=user_context)

Accessing in Tools: Tools receive this wrapper and access the context:

async def get_weather(wrapper: RunContextWrapper[UserContext]) -> str:
    context = wrapper.context  # This gives you back the original UserContext
    return f"Weather in {context.preferred_city} is sunny"

Accessing in Instructions: The dynamic instructions function does the same:

def dynamic_instructions(wrapper: RunContextWrapper[UserContext], agent=None) -> str:
    context = wrapper.context  # Use context to generate instructions

Internally, the SDK passes the wrapper to the agent’s get_system_prompt method, which will call the dynamic_instructions function with both the wrapper and the agent itself.
A.4.3.3 Why Use RunContextWrapper?
The wrapper approach provides several benefits:
Additional Metadata: The wrapper contains usage statistics and potentially other metadata:

print(f"Tokens used so far: {wrapper.usage.total_tokens}")

Consistent API: All components receive the same wrapper type, regardless of context type.

Type Safety: The generic parameter RunContextWrapper[UserContext] ensures type checking.

Documentation: The wrapper makes it explicit that contexts are not passed to the LLM.
A.4.3.4 The Context-LLM Boundary
It’s crucial to understand that the context object is never sent to the LLM. As the SDK documentation states:
NOTE: Contexts are not passed to the LLM. They’re a way to pass dependencies and data to code you implement, like tool functions, callbacks, hooks, etc.
Instead, we extract information from the context and format it as text:
In dynamic instructions:
return f"You are assisting {user_type} {context.uid}..."In tool responses:
return f"The weather in {context.preferred_city} is sunny..."
If you are interested in a deeper understanding of RunContextWrapper, you can refer to Section A.5 on generic types in Python.
A.5 Generic Types in Python
This chapter explains the basics of Generic and TypeVar, which help you understand code like the following:
from dataclasses import dataclass, field
from typing import Any, Generic
from typing_extensions import TypeVar
TContext = TypeVar("TContext", default=Any)
@dataclass
class RunContextWrapper(Generic[TContext]):
"""This wraps the context object that you passed to `Runner.run()`. It also contains
information about the usage of the agent run so far.
NOTE: Contexts are not passed to the LLM. They're a way to pass dependencies and data to code
you implement, like tool functions, callbacks, hooks, etc.
"""
context: TContext
"""The context object (or None), passed by you to `Runner.run()`"""
usage: Usage = field(default_factory=Usage)
"""The usage of the agent run so far. For streamed responses, the usage will be stale until the
last chunk of the stream is processed.
"""We also explain @dataclass in the other chapter.
A.5.1 Understanding Generic and TypeVar in Python
When we write code, we often want it to be flexible — able to work with different types without rewriting everything.
For example, instead of writing one class for ints, another for strs, and another for floats, wouldn’t it be better to write a single, reusable class that can work with any type?
That’s exactly what generics are for.
In Python, the tools we use to build generics are TypeVar and Generic.
In this section, we’ll carefully walk through what they are, how they work, and why they matter — following the same learning journey many developers experience: starting with confusion, then gradually achieving full clarity.
A.5.2 What is TypeVar?
TypeVar stands for Type Variable. It does not create a new type like int or str.
Instead, it creates a placeholder for a type, like an empty hole that will be filled later when the code is used.
Example:
from typing import TypeVar
T = TypeVar("T")Tis a Python variable you can now use in type annotations.- The string
"T"is just a label for tooling and error messages — it has no effect at runtime.
✅ Important:
- T is not a real type.
- It is a promise that says: “Later, I’ll be replaced with an actual type like int, str, float, etc.”
A.5.3 What is Generic?
Generic is a special base class that tells Python’s type system:
“This class (or function) is parameterized by a TypeVar.”
When you create a class that needs to be generic, you inherit from Generic[...], and specify which type variables it uses.
Example:
from typing import Generic, TypeVar
T = TypeVar("T")
class Box(Generic[T]):
    def __init__(self, thing: T):
        self.thing = thing

Explanation:
- Box inherits from Generic[T].
- This formally declares: “Box is a generic class over the type variable T.”
- thing is typed as T, meaning its type will be decided later.
When using Box, you fill in T with a real type:
box_of_ints = Box[int](thing=123) # T becomes int
box_of_strings = Box[str](thing="hello") # T becomes str

✅ The type placeholder T becomes a real type when you create an instance.
A.5.4 Why Inheriting from Generic[T] Matters
You might wonder: what happens if you use TypeVar but don’t inherit from Generic?
Example without proper inheritance:
class Box:
    thing: T

- Here, T is just floating around.
- Python’s type checkers (like mypy or Pyright) won’t know that this class is supposed to be generic.
- You won’t get correct type checking, no specialization like Box[int], and no autocompletion help.
✅ Only by inheriting from Generic[T] do you properly declare that your class uses a type placeholder.
A.5.5 Real World Flow: “Filling the TypeVar Hole”
When you define a class like Box(Generic[T]), you are creating a type hole.
When you instantiate it with Box[int], you are filling the hole.
Visualization:
Define class: Box(Generic[T]) --> T is empty (waiting)
Instantiate: Box[int](thing=...) --> T is filled with int
Thus: - Definition stage: declare placeholders. - Usage stage: specialize them.
A.5.6 Why does TypeVar have a string name?
You might notice that when you create a TypeVar, you provide a string label:
T = TypeVar("T")Why?
Because: - The left side (T) is the Python variable you use in your code. - The string inside ("T") is only for human readability — it shows up in type error messages.
You could even write:
Banana = TypeVar("🍌")- In your code, you must still use
Banana, not"🍌". - But error messages may refer to “🍌” — purely cosmetic.
✅ Key rule: always use the Python variable name (the one on the left), not the string inside.
A.5.7 Important Clarification: Box(Generic[T]) vs Box[int]
When learning about generics, it’s very common to feel confused about the different meanings of parentheses () and square brackets [].
Let’s break it down carefully:
A.5.7.1 Box(Generic[T]) — Inheritance (at class definition)
class Box(Generic[T]):
    ...

✅ This happens at class definition time.
- Box inherits from Generic[T].
- This formally tells Python’s type system: “Box is a generic class parameterized by a type variable T.”
- It’s exactly like normal inheritance, just with typing involved.
A.5.7.2 Box[int] — Specialization (at usage time)
box_of_ints = Box[int](thing=123)

✅ Here, two things happen:
- Box[int] specializes the generic class by filling the type variable T with int.
- Then (thing=123) instantiates an object from that specialized version.
✅ At this point, there is no inheritance happening.
✅ Instead, you are customizing the generic class to a concrete type.
If it feels confusing that Box[int](...) does two things at once, you can split it into two explicit steps:
First, specialize the generic class by filling the type variable:
SpecializedBox = Box[int]

Then, instantiate an object of the specialized class:
box_of_ints = SpecializedBox(thing=123)
So it becomes:
from typing import Generic, TypeVar
T = TypeVar("T")
class Box(Generic[T]):
def __init__(self, thing: T):
self.thing = thing
# Step 1: Specialize the generic type
SpecializedBox = Box[int]
# Step 2: Instantiate the specialized class
box_of_ints = SpecializedBox(thing=123)
print(box_of_ints.thing) # Output: 123

✅ This makes it extra clear:
- Box[int] means “Box where T is int.”
- SpecializedBox(thing=123) is a normal instantiation step.
You’ve already seen similar [ ] specialization when working with Python’s typing system, for example List[int] for a list of integers. But List[int] is normally used only in type hints, not instantiated directly. With your own generic classes (like Box[T]), Box[int] gives you a real, specialized class that you can instantiate with the normal () syntax as above.
✅ In all these cases, square brackets [ ] are used to fill type parameters — not to inherit.
✅ Same idea applies to Box[int].
A.5.8 What Actually Happens At Runtime?
Here’s something subtle but important:
- TypeVars and Generics are for static typing only.
- At runtime, Python ignores the type parameters (Box[int] vs Box[str] don’t behave differently at runtime).
- No extra runtime behavior is added.
- Only static type checkers (like mypy, pyright, and your IDE) understand and enforce generics.
✅ Thus, Generic[T] is a tool for better type checking and documentation, not for changing runtime execution.
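You can verify this yourself with the Box class from earlier. The quick check below uses only the standard library:

# Quick check: type parameters are erased at runtime (uses the Box class from earlier).
from typing import Generic, TypeVar

T = TypeVar("T")

class Box(Generic[T]):
    def __init__(self, thing: T):
        self.thing = thing

b1 = Box[int](thing=123)
b2 = Box[str](thing="hello")

print(type(b1) is type(b2) is Box)     # True: both are plain Box instances at runtime
print(Box[int](thing="oops").thing)    # runs fine; only a static checker would flag the wrong type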
A.5.9 Final Key Points
- TypeVar creates a placeholder for a type.
- Generic[...] declares that a class or function accepts type parameters.
- You must inherit from Generic[T] to make your type variables actually matter.
- The string name inside TypeVar("T") is only for error messages — use the variable in your code.
- When you instantiate a generic class, you fill the placeholders with real types.
- Python can often infer types automatically, but you can also be explicit.
- At runtime, type parameters are ignored — generics are purely a static typing feature.
A.5.10 Shortcut to Remember
TypeVar creates a hole. Generic marks your class as having that hole. Real types like int or str are what you put into the hole when you use the class.
A.6 Structured Data Models in Python
You will see @dataclass and pydantic used frequently in the codebase. This chapter will explain the basics of why we use them and how to use them.
A.6.1 Why Structured Models?
Imagine dealing with user data like this:
data = {"id": 123, "name": "Alice"}Without structure, you manually track fields, types, and defaults. Bugs sneak in easily:
if data["user_id"] == 123: # Oops, wrong key!
...We need structured models: clear shapes for our data, so we can avoid these easy mistakes.
A.6.2 Dataclasses: Python’s Simple Built-in Solution
Before @dataclass, writing a data class meant manually creating methods:
class User:
    def __init__(self, id: int, name: str):
        self.id = id
        self.name = name

    def __repr__(self):
        return f"User(id={self.id}, name='{self.name}')"

A lot of boilerplate, right?
With @dataclass, Python writes that for you automatically:
from dataclasses import dataclass
@dataclass
class User:
id: int
name: str
user = User(id=123, name="Alice")
print(user)

✅ Python automatically generates:
- __init__() constructor
- __repr__() for easy printing
- __eq__() for comparisons
- Easy assignment and updates: user.name = "Bob"
Result:
User(id=123, name='Alice')
You can easily modify fields too:
user.name = "Bob"
print(user)

Output:
User(id=123, name='Bob')
But there’s an important catch:
Dataclasses don’t enforce types at runtime. The types you annotate are hints only, not strict checks.
Watch this:
user = User(id="wrong", name=456)
print(user)

Output:
User(id='wrong', name=456)
Python still creates the object even though the types are wrong!
✅ Dataclasses help you structure data and make coding faster. ❌ But they can’t protect you from type mistakes at runtime.
A.6.3 Where Dataclasses Fall Short
Dataclasses are great if:
- You trust your inputs
- You’re writing internal tools or scripts

But with external data (APIs, user forms, etc.), wrong types sneak in easily. And Python won’t catch them for you!
The danger: you think your object is valid, but somewhere later your program crashes — and it’s much harder to trace the bug.
✅ Dataclasses give you structure and convenience. ❌ They don’t validate your data.
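Here is a small sketch of that failure mode, reusing the User dataclass from above; the payload dictionary stands in for data arriving from an API or a file:

# The failure mode: a wrong type from external data is accepted silently,
# and the crash happens later, far from the real bug.
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

payload = {"id": "123", "name": "Alice"}   # e.g. parsed from JSON; id arrived as a string
user = User(**payload)                     # no complaint here

# ... much later in the program ...
try:
    next_id = user.id + 1                  # TypeError, because user.id is actually a str
except TypeError as e:
    print(f"Crash far from the real bug: {e}")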
A.6.4 Pydantic: Smarter Data Models
Pydantic models (BaseModel) look like dataclasses, but validate data at runtime:
from pydantic import BaseModel
class User(BaseModel):
id: int
name: str
user = User(id=123, name="Alice")
print(user)

✅ Works normally:
id=123 name='Alice'
Now pass wrong types:
user = User(id="wrong", name=456)🚫 Raises a clear ValidationError immediately:
ValidationError: 2 validation errors for User
id
  Input should be a valid integer, unable to parse string as an integer [type=int_parsing]
name
  Input should be a valid string [type=string_type]
✅ Much safer when handling real-world messy data!
A.6.5 Type Checking vs Validation: What’s the Difference?
- Type Checking: Happens before running the program, using tools like mypy or your editor. It only checks your code, not real data.
- Validation: Happens at runtime, when your program is actually running and handling real data.
Example:
user = User(id="bad", name=123) # Static check might miss this!With a dataclass, Python still accepts it. With Pydantic, you get a runtime error immediately.
Important Note: Static type checkers like mypy might catch obviously wrong literals like
id="bad", but they cannot catch wrong values when your data comes dynamically (e.g., from an API, a file, a dictionary). That’s why runtime validation is crucial for real-world applications.
A.6.6 Serialization
Serialization means taking a Python object and turning it into a format that can be easily saved or transmitted. Usually:
Python object ➔ JSON string (or bytes, or dict)
A.6.6.1 With Dataclasses
Dataclasses don’t directly provide JSON serialization, but you can get a dictionary using asdict().
from dataclasses import dataclass, asdict
import json
@dataclass
class User:
id: int
name: str
user = User(id=1, name="Alice")
print(asdict(user))

Output:

{'id': 1, 'name': 'Alice'}

To serialize to JSON:

print(json.dumps(asdict(user)))

Output:

{"id": 1, "name": "Alice"}

A.6.6.2 With Pydantic 2.0+
Pydantic provides built-in serialization:
from pydantic import BaseModel
class User(BaseModel):
id: int
name: str
user = User(id=1, name="Alice")
print(user.model_dump()) # Pydantic 2.0+

Output:

{'id': 1, 'name': 'Alice'}

To serialize directly to JSON:

print(user.model_dump_json()) # Pydantic 2.0+

Output:

{"id":1,"name":"Alice"}

Because the fields were validated when the model was created, the serialized output is guaranteed to match the declared types.
A.6.7 Parsing
Parsing means taking some external data (like a JSON string or a dictionary) and creating a Python object from it.
A.6.7.1 With Dataclasses
If you have a dictionary and want a dataclass object, you manually unpack:
data = {"id": 1, "name": "Alice"}
user = User(**data)
print(user)

Output:

User(id=1, name='Alice')

⚠️ If types are wrong, dataclasses do not validate:

data = {"id": "wrong", "name": 123}
user = User(**data)
print(user)

Output:

User(id='wrong', name=123)

⚠️ No error even though id is supposed to be an int!
A.6.7.2 With Pydantic 2.0+
Pydantic models can validate and parse data safely:
data = {"id": "1", "name": "Alice"}
user = User.model_validate(data) # Pydantic 2.0+
print(user)

Output:

id=1 name='Alice'

✅ It automatically coerces the string "1" into the integer 1.

If data is invalid:

bad_data = {"id": {}, "name": []}
user = User.model_validate(bad_data)

Raises:

pydantic_core._pydantic_core.ValidationError: 2 validation errors...

✅ Pydantic clearly tells you which fields are wrong.
A.6.8 Dataclasses, Pydantic, and Generics
Both dataclasses and Pydantic models can also work naturally with Python generics using Generic and TypeVar.
This means you can create reusable, type-safe models without tying them to a specific type until usage.
Example with a dataclass:
from typing import Generic, TypeVar
from dataclasses import dataclass
T = TypeVar("T")
@dataclass
class Box(Generic[T]):
    thing: T

Example with a Pydantic model:
from pydantic import BaseModel
class Box(BaseModel, Generic[T]):
    thing: T

Later, you can specialize them:

box_of_ints = Box[int](thing=123)
box_of_strings = Box[str](thing="hello")

✅ In both cases, you get structured, reusable models.
✅ Pydantic will additionally validate the types at runtime!
A.6.9 Parsing Generic Models with Pydantic
What if you receive dynamic external data like this?
data = {"thing": "123"}Can you parse it into a generic model?
Let’s see!
With a dataclass, you can manually unpack the data:
box = Box[int](**data)
print(box)

Output:

Box(thing='123')

✅ Object is created.

⚠️ But there’s no type checking or coercion:
- thing is a string, even though we declared it should be an int.
- Python doesn’t complain.
- The type mismatch is silently accepted.
With Pydantic, you can parse it safely with:
box = Box[int].model_validate(data)
print(box)

Pydantic will automatically validate and even coerce thing to the correct type:

thing=123

✅ Pydantic automatically:
- Coerces "123" (string) into 123 (integer)
- Validates the type
- Ensures the object matches the declared structure
If parsing fails (bad data), Pydantic will raise a clear ValidationError instead of letting wrong data sneak in.
A.6.10 Differences Between Dataclasses and Pydantic with Generics
| Feature | Dataclass + Generic | Pydantic + Generic |
|---|---|---|
| Type safety (static checking) | ✅ | ✅ |
| Runtime validation of field types | ❌ No | ✅ Yes |
| Parsing from external data | ❌ Manual unpacking | ✅ Built-in validation and coercion |
| Best use case | Internal, trusted code | External, untrusted inputs |
✅ Dataclasses + Generic are great for trusted internal data. ✅ Pydantic + Generic shine when parsing and validating real-world external data.
A.7 Logging Basics
You may have noticed we’ve included logging in our implementation. Logging is one of the most important skills for any developer, and this section explains the basics of logging.
A.7.1 What is Logging and Why Use It?
Logging is a way to track what happens when your program runs. Instead of using print() statements that you later remove, logging lets you keep valuable debugging information in your code.
In our financial tools, logging helps us:
- Track when functions are called and with what parameters
- See detailed error information when something goes wrong
- Understand the flow of execution through our code
A.7.2 Basic Logging in Our Code
Here’s how we’ve set up logging in our tools:
Create a Logger:
logger = logging.getLogger(__name__)

This creates a logger named after the current module.

Set Log Level:

logger.setLevel(logging.DEBUG)

This determines which messages get recorded.

Add Log Messages:

logger.debug(f"Getting stock price for {ticker}")

These show what’s happening in our code.

Log Errors:

except Exception as e:
    logger.error(f"Error fetching stock price: {str(e)}", exc_info=True)

This captures detailed error information.
A.7.3 Test our Tools with Effective Logging
Now let’s test our tools and see how to make our logs useful. Create a test_tools_yf.py file:
# %%
import logging
import sys
from tools_yf import (
get_stock_price,
get_stock_history,
get_company_info,
get_financial_metrics
)
# %%
# Test the stock price function
ticker = "TSLA"
print("\n--- Testing get_stock_price ---")
result = get_stock_price(ticker)
print(f"Result: {result}")
# %%
# Test the stock history function
print("\n--- Testing get_stock_history ---")
result = get_stock_history(ticker, 7)
print(f"Result: {result}")
# %%
# Test the company info function
print("\n--- Testing get_company_info ---")
result = get_company_info(ticker)
print(f"Result: {result}")
# %%
# Test the financial metrics function
print("\n--- Testing get_financial_metrics ---")
result = get_financial_metrics(ticker)
print(f"Result: {result}")Let’s use the interactive environment to run the test script. We’ve add the # %% marker on top of the statements - as we mentioned when we set up interactive Python development in Section A.2, this creates a code cell that can be run interactively in VS Code with Jupyter integration.
To run each cell:
- Open the file in VS Code (or compatible IDE)
- Click the “Run Cell” button that appears above the # %% line, or use the shortcut Shift+Enter (Shift+Return on Mac)
- If prompted to select a kernel, choose the .venv environment we created earlier
If you see a prompt saying "Running cells requires the ipykernel package", click the prompted “Install” button. If that fails, you can install it directly from the terminal:
uv add ipykernel

After that you should be able to run the test script, and you’ll see output like:
--- Testing get_stock_price ---
Result: Tesla, Inc. (TSLA) is currently trading at $237.97, up 4.60% today.
So the functions correctly fetch the data from Yahoo Finance and format the results nicely.
However, remember that we added logging statements to our tools - so where are those log messages?
A.7.4 Why Configuring Logging Matters
An important lesson: just adding logger calls to your code isn’t enough. You need to configure the logging system to actually see the messages.
Let’s copy test_tools_yf.py into test_tools_yf_log.py under the playground directory. Now let’s add the following into the file:
# %%
# Configure logging to show in the console
logging.basicConfig(
level=logging.DEBUG,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.StreamHandler(sys.stdout) # Log to stdout to see in notebook/interactive output
]
)

Our logging.basicConfig() call does several important things:
- Sets which messages to show (level=logging.DEBUG)
- Formats messages with helpful information like timestamps
- Directs logs to the console so we can see them
Now when you run the test script, you’ll see output like:
--- Testing get_stock_price ---
2025-04-22 22:04:14,016 - tools_yf - DEBUG - Getting stock price for TSLA
2025-04-22 22:04:14,017 - yfinance - DEBUG - Using User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36
2025-04-22 22:04:14,018 - yfinance - DEBUG - get_raw_json(): https://query2.finance.yahoo.com/v10/finance/quoteSummary/TSLA
2025-04-22 22:04:14,018 - yfinance - DEBUG - Entering get()
2025-04-22 22:04:14,018 - yfinance - DEBUG - Entering _make_request()
2025-04-22 22:04:14,019 - yfinance - DEBUG - url=https://query2.finance.yahoo.com/v10/finance/quoteSummary/TSLA
2025-04-22 22:04:14,356 - yfinance - DEBUG - TSLA: OHLC after combining events: 2025-04-22 00:00:00-04:00 only
...
2025-04-22 22:04:14,358 - yfinance - DEBUG - TSLA: yfinance returning OHLC: 2025-04-22 00:00:00-04:00 only
2025-04-22 22:04:14,358 - yfinance - DEBUG - Exiting history()
2025-04-22 22:04:14,359 - yfinance - DEBUG - Exiting history()
Result: Tesla, Inc. (TSLA) is currently trading at $237.97, up 4.60% today.
This shows both our log message and the function’s return value.
A.7.5 Control Log Verbosity
If you find external libraries like yfinance too chatty with their DEBUG logs, you can control this with:
logging.getLogger('yfinance').setLevel(logging.WARNING)
logging.getLogger('urllib3').setLevel(logging.WARNING)
logging.getLogger('peewee').setLevel(logging.WARNING)

This tells Python: “Only show WARNING or higher messages from these libraries.” This keeps our output focused on what we care about.
Now when you run the test script, we will only see our DEBUG message from tools_yf.py and the actual return value.
--- Testing get_stock_price ---
2025-04-22 22:08:23,271 - tools_yf - DEBUG - Getting stock price for TSLA
Result: Tesla, Inc. (TSLA) is currently trading at $237.97, up 4.60% today.

As you build your agent, remember that good logging practices will make development and debugging much easier. We’ll continue to use logging throughout this tutorial as we build more complex functionality.
A.7.6 Understand Where to Configure Logging
An important question: where should we put the logging.basicConfig() call?
In Python’s logging system:
- The first call to logging.basicConfig() in a program configures the root logger
- Subsequent calls have no effect
This means:
- DO: Configure logging in your main script (the entry point of your program)
- DON’T: Configure logging in libraries or modules that might be imported
In our case, we put the configuration in test_agent.py because that’s our entry point. If we had put it in func_tools_yf.py or agent.py, it would only work if those modules were imported before any other logging configuration happened.
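As a quick check of the “first call wins” rule, here is a small self-contained script you can run on its own (it assumes a fresh interpreter, where the root logger has not been configured yet):

# Only the first basicConfig() call takes effect; the second is silently ignored
# because the root logger already has a handler (unless you pass force=True).
import logging

logging.basicConfig(level=logging.DEBUG, format="FIRST  | %(levelname)s | %(message)s")
logging.basicConfig(level=logging.WARNING, format="SECOND | %(levelname)s | %(message)s")  # no effect

logger = logging.getLogger(__name__)
logger.debug("still shown, still using the FIRST format")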
A.7.7 Key Takeaways About Logging
Through our testing, we’ve learned several important lessons about logging:
- Add Logging Statements: Include logger.debug(), logger.info(), and logger.error() in your code to track execution.
- Configure Logging Once: Put logging.basicConfig() in your main script, not in libraries/modules.
- Be Selective: Use logging.getLogger("module.name").setLevel(logging.DEBUG) to enable detailed logging only for specific parts of your code.
- Include Context: Log relevant variables and parameters to make debugging easier.
- Format Logs Clearly: Use a format that includes timestamps, module names, and log levels.