6  Building an Agent from Scratch with OpenAI Chat Completion API

6.1 Learning Objectives

  • Understand how the Chat Completion API works with function calling
  • Build your own tool management system for agent function calling
  • Implement conversation history management without SDK helpers
  • Create a complete agent that orchestrates all these components

Now that we’ve built our MarketMind financial assistant using the OpenAI Agent SDK, let’s go a level deeper and implement the same agent from scratch using the OpenAI Chat Completion API.

6.2 Understanding the Chat Completion API with Function Calling

The OpenAI Chat Completion API is the foundation for building conversational AI systems. Unlike the Agent SDK, which handles many details for us, building an agent from scratch with the Chat Completion API requires us to manage the conversation flow, tool execution, and state management ourselves.

At its core, the Chat Completion API takes a series of messages and returns a response. Each message has a role (system, user, assistant, or tool) and content.
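For example, a short conversation can be represented as a plain list of dictionaries, one per message (the content values here are illustrative):

```python
# Each message is a dictionary with a role and content.
# The content values below are purely illustrative.
messages = [
    {"role": "system", "content": "You are a helpful financial assistant."},
    {"role": "user", "content": "What's the current price of AAPL?"},
    {"role": "assistant", "content": "Let me look that up for you."},
]

roles = [m["role"] for m in messages]
# roles == ["system", "user", "assistant"]
```

The fourth role, tool, appears once function calling is involved: it carries a tool's result back to the model, as we'll see below.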

When we add function calling to the mix, we need to:

  1. Define our tools as JSON schemas
  2. Send these schemas along with our messages
  3. Process any tool calls that come back in the response
  4. Execute the tools and send the results back to the API
# Calling the Chat Completion API with function calling capability
response = self.client.chat.completions.create(
    model=self.model,
    messages=messages,
    tools=self.tool_schemas,
    tool_choice="auto"
)

This code sends our conversation messages to the API along with our tool schemas. The tool_choice="auto" parameter tells the model to decide when to use tools.
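Step 1 of that flow, defining a tool as a JSON schema, is worth seeing concretely. Here is what a schema for a hypothetical get_stock_price tool could look like; the tool name and its ticker parameter are our own illustration, while the surrounding structure ("type", "function", "parameters") is the format the Chat Completion API expects:

```python
# Illustrative schema for a hypothetical get_stock_price tool.
# The outer structure follows the Chat Completion function-calling format.
tool_schema = {
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Get the current price of a stock",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "The stock ticker symbol, e.g. AAPL",
                }
            },
            "required": ["ticker"],
        },
    },
}
```

A list of such schemas is what gets passed as the tools argument; in the next section we build a ToolManager that generates them automatically from Python functions.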

6.3 Create a Tool Manager

Let’s create a file for our tool manager:

touch src/agent_from_scratch/tool_manager.py

Now, let’s implement a tool manager that will handle tool registration, schema generation, and execution:

# src/agent_from_scratch/tool_manager.py
import json
import inspect
import logging
from typing import Any, Dict, List, Callable, Union, get_type_hints

# Configure logging
logger = logging.getLogger(__name__)

class ToolManager:
    """
    Manages the registration and execution of tools.
    """
    
    def __init__(self):
        self.tools = {}
        
    def _generate_parameter_schema(self, function: Callable) -> Dict[str, Any]:
        """
        Generate a JSON schema for the function parameters.
    
        Args:
            function: The function to generate a schema for
        
        Returns:
            A JSON schema for the function parameters
        """
        logger.debug(f"Generating parameter schema for function: {function.__name__}")
        signature = inspect.signature(function)
        type_hints = get_type_hints(function)

        logger.debug(f"Function signature: {signature}")
        logger.debug(f"Type hints: {type_hints}")

        properties = {}
        required = []

        for param_name, param in signature.parameters.items():
            # Get the parameter type from type hints, default to str if not specified
            param_type = type_hints.get(param_name, str)
        
            # Handle Optional types (Union[Type, None])
            if hasattr(param_type, "__origin__") and param_type.__origin__ is Union:
                # Check if this is Optional[Type] (Union[Type, None])
                args = param_type.__args__
                if len(args) == 2 and args[1] is type(None):  # noqa: E721
                    # This is Optional[Type], use the first type
                    param_type = args[0]
                    logger.debug(f"Detected Optional type for {param_name}, using {param_type}")
    
            # Handle both direct types and type annotations
            if hasattr(param_type, "__origin__"):
                # For annotations like List[int], Dict[str, int], etc.
                origin = param_type.__origin__
                if origin is list or origin is List:
                    json_type = "array"
                elif origin is dict or origin is Dict:
                    json_type = "object"
                else:
                    # Default to the name of the origin
                    json_type = origin.__name__.lower()
                    logger.debug(f"Using origin name for {param_name}: {json_type}")
            else:
                # For direct types like int, str, etc.
                param_type_name = param_type.__name__
            
                # Map Python types to JSON schema types
                type_map = {
                    "str": "string",
                    "int": "integer",
                    "float": "number",
                    "bool": "boolean",
                    "list": "array",
                    "dict": "object"
                }
            
                json_type = type_map.get(param_type_name, "string")
                logger.debug(f"Mapped {param_type_name} to {json_type} for {param_name}")
    
            # Extract parameter description from docstring if available
            param_desc = f"Parameter {param_name} for {function.__name__}"
            if function.__doc__:
                # Look for Args section in docstring
                doc_lines = function.__doc__.split("\n")
                in_args_section = False
                for line in doc_lines:
                    line = line.strip()
                    if line.startswith("Args:"):
                        in_args_section = True
                        continue
                    if in_args_section and line.startswith(param_name + ":"):
                        param_desc = line[len(param_name + ":"):].strip()
                        break
                    # If we hit a new section, stop looking
                    if in_args_section and line.endswith(":") and not line.startswith(param_name):
                        break
        
            properties[param_name] = {
                "type": json_type,
                "description": param_desc
            }
    
            # If the parameter has no default value, it's required
            if param.default is inspect.Parameter.empty:
                required.append(param_name)
                logger.debug(f"Parameter {param_name} is required")

        schema = {
            "type": "object",
            "properties": properties,
            "required": required
        }

        logger.debug(f"Generated schema: {json.dumps(schema, indent=2)}")
        return schema
        
    def register_tool(self, name, description, tool_function):
        """
        Register a new tool with the manager.
        
        Args:
            name: The unique name of the tool
            description: A description of what the tool does
            tool_function: The function that implements the tool
        """
        logger.debug(f"Registering tool: {name} - {description}")
        
        # Generate parameter schema at registration time
        parameter_schema = self._generate_parameter_schema(tool_function)
        
        self.tools[name] = {
            "description": description,
            "function": tool_function,
            "schema": parameter_schema
        }
        
        logger.debug(f"Tool registered successfully: {name}")
        return self  # Allow method chaining
        
    def get_tool(self, name):
        """Get a tool by name."""
        return self.tools.get(name, {}).get("function")
        
    def list_tools(self):
        """List all available tools with their descriptions."""
        return {name: info["description"] for name, info in self.tools.items()}
    
    def get_schema_for_tools(self):
        """
        Get all tools in the schema format expected by the tool-calling API.
        
        Returns:
            A list of tool schema definitions
        """
        logger.debug("Preparing tool schema definitions")
        tools = []
        
        for name, info in self.tools.items():
            tools.append({
                "type": "function",
                "function": {
                    "name": name,
                    "description": info["description"],
                    "parameters": info["schema"]
                }
            })
            
        logger.debug(f"Prepared {len(tools)} tool schemas")
        return tools
        
    def execute_tool(self, name, **kwargs):
        """
        Execute a tool by name with the provided arguments.
        
        Args:
            name: The name of the tool to execute
            **kwargs: Arguments to pass to the tool
            
        Returns:
            The result of the tool execution, or an error message if the tool doesn't exist
        """
        tool_function = self.get_tool(name)
        if not tool_function:
            error_msg = f"Error: Tool '{name}' not found"
            logger.error(error_msg)
            return error_msg
        
        try:
            logger.debug(f"Executing tool '{name}' with args: {kwargs}")
            result = tool_function(**kwargs)
            logger.debug(f"Tool '{name}' executed successfully with result: {result}")
            return result
        except Exception as e:
            error_msg = f"Error executing tool '{name}': {str(e)}"
            logger.error(error_msg, exc_info=True)
            return error_msg

The ToolManager class we’ve implemented is the foundation of our from-scratch agent. Let’s break down its key components and why they matter:

6.3.1 Dynamic Schema Generation

One of the most powerful aspects of our ToolManager is how it automatically generates JSON schemas from Python functions. This is significantly different from the OpenAI Agent SDK approach, where the @function_tool decorator handled this for us.

The _generate_parameter_schema method performs sophisticated introspection of our functions:

  1. Type Detection: It uses Python’s type hints system to determine the proper JSON schema types for each parameter. This is much more robust than simple string descriptions.

  2. Docstring Parsing: Unlike the SDK, which simply takes the entire docstring, our implementation parses docstrings to extract parameter-specific descriptions, making our tool definitions more precise.

  3. Handling Complex Types: The code handles advanced types like Optional, List, and Dict, converting them to the appropriate JSON schema formats.

This approach gives us complete control over schema generation, allowing us to customize how our functions are represented to the model. Compared to the SDK’s automatic approach, our implementation is more transparent and customizable, though it requires more code.
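To make the introspection concrete, here is a condensed, standalone sketch of the same technique. The get_stock_history signature is a simplified stand-in, and sketch_schema is a stripped-down illustration, not the class method itself:

```python
import inspect
from typing import Optional, get_type_hints

def get_stock_history(ticker: str, days: int = 30, interval: Optional[str] = None) -> dict:
    """Get historical price data for a stock (stand-in for the real tool)."""
    return {}

# Map Python types to their JSON schema equivalents
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def sketch_schema(fn):
    """Condensed version of the introspection in _generate_parameter_schema."""
    hints = get_type_hints(fn)
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        t = hints.get(name, str)
        # Unwrap Optional[X] (i.e. Union[X, None]) down to X
        args = getattr(t, "__args__", ())
        if len(args) == 2 and args[1] is type(None):
            t = args[0]
        props[name] = {"type": TYPE_MAP.get(t, "string")}
        # Parameters without a default value are required
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}

schema = sketch_schema(get_stock_history)
# ticker maps to "string" and is required; days and interval have defaults,
# so only ticker ends up in the "required" list
```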

6.3.2 Tool Registration and Execution

The register_tool and execute_tool methods form the core of our tool management system:

  1. Explicit Registration: Unlike the SDK where tools are simply passed to the Agent constructor, our approach requires explicit registration with names and descriptions. This gives us more control over how tools are presented to the model.

  2. Error Handling: Our execute_tool method includes robust error handling, ensuring that tool failures don’t crash the entire agent. This is similar to what the SDK does behind the scenes, but now we have visibility and control over it.

  3. Method Chaining: The return self pattern in register_tool allows for elegant method chaining when registering multiple tools, making our code more readable.

Compared to the SDK’s abstracted approach, our implementation gives us complete visibility into the tool execution process, making debugging and customization much easier.
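The error-handling behaviour is easiest to see in isolation. The following standalone sketch mirrors the pattern used by execute_tool; the divide tool is a throwaway example, not one of our financial tools:

```python
# Minimal registry mirroring ToolManager's execute_tool error handling:
# failures become error strings instead of exceptions that crash the agent.
tools = {}

def register_tool(name, description, fn):
    tools[name] = {"description": description, "function": fn}

def execute_tool(name, **kwargs):
    entry = tools.get(name)
    if entry is None:
        return f"Error: Tool '{name}' not found"
    try:
        return entry["function"](**kwargs)
    except Exception as e:
        return f"Error executing tool '{name}': {e}"

register_tool("divide", "Divide a by b", lambda a, b: a / b)

ok = execute_tool("divide", a=10, b=2)     # 5.0
failed = execute_tool("divide", a=1, b=0)  # "Error executing tool 'divide': division by zero"
missing = execute_tool("missing")          # "Error: Tool 'missing' not found"
```

Returning error strings rather than raising matters because the string goes back to the model as a tool result, giving it a chance to recover or explain the failure to the user.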

6.3.3 Build the Agent

Now, let’s create our Chat Completion API agent:

touch src/agent_from_scratch/agent_chat.py

Let’s implement the agent:

# src/agent_from_scratch/agent_chat.py
import json
import logging
import os
from openai import OpenAI
from src.agent_from_scratch.tool_manager import ToolManager
from src.common.config import DEFAULT_MODEL, SYSTEM_PROMPT, OPENAI_API_KEY, DEFAULT_MAX_ITERATIONS

DEFAULT_HISTORY_SIZE = 20

# Configure logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)  # Set to DEBUG for detailed logs

class MarketMindChatAgent:
    """
    An AI-powered financial assistant that can answer queries about stocks and markets.
    """
    
    def __init__(self, model=DEFAULT_MODEL):
        logger.debug(f"Initializing MarketMindChatAgent with model: {model}")
        self.tool_manager = ToolManager()
        self.model = model
        self.client = OpenAI(api_key=OPENAI_API_KEY)
        self.conversation_history = []
        
        # Initialize empty tool schemas
        self.tool_schemas = []
        logger.debug("Tool schemas initialized (empty)")
        
        logger.debug(f"MarketMindChatAgent initialized successfully with model: {model}")
        
    def register_tool(self, name, description, tool_function):
        """Register a new tool with the agent."""
        logger.debug(f"Registering tool: {name} - {description}")
        self.tool_manager.register_tool(name, description, tool_function)
        
        # Update tool schemas immediately
        self.tool_schemas = self.tool_manager.get_schema_for_tools()
        logger.debug(f"Tool schemas updated: now have {len(self.tool_schemas)} tools")
        
        logger.debug(f"Tool registered successfully: {name}")
        return self
    
    def _handle_tool_calls(self, message, messages):
        """
        Handle tool calls from the LLM response.
        
        Args:
            message: The message from the LLM containing tool calls
            messages: The conversation history to append tool results to
            
        Returns:
            True if tool calls were handled, False otherwise
        """
        if not message.tool_calls:
            logger.debug("No tool calls to handle")
            return False
            
        logger.debug(f"Processing {len(message.tool_calls)} tool calls")
        for i, tool_call in enumerate(message.tool_calls):
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)
            
            logger.debug(f"Tool call {i+1}: {function_name} with args: {function_args}")
            
            # Execute the tool
            logger.debug(f"Executing tool: {function_name}")
            tool_result = self.tool_manager.execute_tool(function_name, **function_args)
            logger.debug(f"Tool execution result: {tool_result}")
            
            # Add the tool result to messages
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": str(tool_result)
            })
            logger.debug(f"Added tool result to messages. Message count: {len(messages)}")
            
        return True
    
    def process_query(self, query):
        """
        Process a user query using the LLM model with function calling.
        Handles multiple tool calls in sequence.
        """
        logger.debug(f"Processing query: {query}")
        try:
            # Add the user query to the conversation history
            self.conversation_history.append({"role": "user", "content": query})
            logger.debug(f"Added query to conversation history. History length: {len(self.conversation_history)}")
            
            # System prompt that defines the agent's capabilities
            system_prompt = SYSTEM_PROMPT
            
            # Start with system message and conversation history
            messages = [{"role": "system", "content": system_prompt}] + self.conversation_history
            logger.debug(f"Prepared messages for API call. Message count: {len(messages)}")
            
            # Safety mechanism to prevent infinite loops
            iteration = 0

            # Continue the conversation until no more tool calls are needed or max iterations reached
            while iteration < DEFAULT_MAX_ITERATIONS:
                iteration += 1
                logger.debug(f"Starting iteration {iteration} of conversation loop (max: {DEFAULT_MAX_ITERATIONS})")
                
                # Call the model with function calling capability
                logger.debug(f"Calling API with model: {self.model}")
                response = self.client.chat.completions.create(
                    model=self.model,
                    messages=messages,
                    tools=self.tool_schemas,
                    tool_choice="auto"
                )
                
                # Extract the assistant's message
                message = response.choices[0].message
                logger.debug("Received response from API")
                logger.debug(
                    f"Response details: role: {message.role}, "
                    f"content: {message.content or '[no content]'}, "
                    f"tool_calls: {message.tool_calls}"
                )
                
                # Add the assistant's response to messages - handle null content
                assistant_message = {
                    "role": message.role,
                    "content": message.content or ""  # Use empty string instead of null
                }
                if message.tool_calls:
                    # Only include tool_calls when the model actually made them;
                    # an explicit null for this field can be rejected by the API
                    assistant_message["tool_calls"] = [
                        {
                            "id": tool_call.id,
                            "type": "function",
                            "function": {
                                "name": tool_call.function.name,
                                "arguments": tool_call.function.arguments
                            }
                        } for tool_call in message.tool_calls
                    ]
                messages.append(assistant_message)
                logger.debug(f"Added assistant's message to messages. Message count: {len(messages)}")
                
                # If there are no tool calls, we're done
                if not message.tool_calls:
                    logger.debug("No tool calls in response, finishing conversation")
                    self.conversation_history = messages[1:]  # Skip the system message
                    if len(self.conversation_history) > DEFAULT_HISTORY_SIZE:
                        logger.debug(f"Trimming conversation history from {len(self.conversation_history)} messages")
                        self.conversation_history = self.conversation_history[-DEFAULT_HISTORY_SIZE:]
                    return message.content or ""  # Return the final response
                
                # Handle tool calls
                self._handle_tool_calls(message, messages)
                
            # If we reached the maximum number of iterations, return a message about it
            logger.warning(f"Reached maximum number of iterations ({DEFAULT_MAX_ITERATIONS})")
            self.conversation_history = messages[1:]  # Skip the system message
            if len(self.conversation_history) > DEFAULT_HISTORY_SIZE:
                logger.debug(f"Trimming conversation history from {len(self.conversation_history)} messages")
                self.conversation_history = self.conversation_history[-DEFAULT_HISTORY_SIZE:]
            return (
                "I've made multiple attempts to process your query but couldn't reach a final answer. "
                "This might indicate a complex request or an issue with the available tools. "
                "Please try rephrasing your question or breaking it into smaller parts."
            )
            
        except Exception as e:
            logger.error(f"Error processing query: {str(e)}", exc_info=True)
            return f"An error occurred: {str(e)}"

Our MarketMindChatAgent class demonstrates how to orchestrate the conversation flow with the Chat Completion API. Let’s examine its key aspects:

6.3.4 Conversation Loop

The heart of our agent is the conversation loop in process_query:

  1. Iteration Management: Unlike the SDK which handles this internally, we explicitly manage a conversation loop with a maximum iteration count. This prevents infinite loops while allowing multiple rounds of tool calls.

  2. Message Array Management: We manually maintain and update the message array, adding user queries, assistant responses, and tool results. This is fundamentally different from the SDK’s abstracted approach and gives us complete control over the conversation structure.

  3. Tool Call Detection: We explicitly check for tool calls in the response and handle them accordingly. This direct approach provides more visibility into the model’s decision-making process than the SDK’s abstracted approach.

The loop structure reveals how the Chat Completion API works under the hood - it doesn’t inherently support multi-turn tool calling, so we need to implement this pattern ourselves. This is a significant difference from the Response API, covered in a later chapter, which has more built-in support for this workflow.
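The pattern can be exercised without calling the real API by scripting a stand-in client. Everything below (FakeCompletions, the scripted messages, the stubbed tool result) is test scaffolding of our own invention; only the loop body mirrors the logic of process_query:

```python
import json
from types import SimpleNamespace

# A scripted stand-in for client.chat.completions.create: the first call
# returns a tool call, the second returns the final answer.
class FakeCompletions:
    def __init__(self, scripted_messages):
        self.scripted = list(scripted_messages)

    def create(self, **kwargs):
        message = self.scripted.pop(0)
        return SimpleNamespace(choices=[SimpleNamespace(message=message)])

tool_call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(name="get_stock_price",
                             arguments=json.dumps({"ticker": "TSLA"})),
)
completions = FakeCompletions([
    SimpleNamespace(role="assistant", content=None, tool_calls=[tool_call]),
    SimpleNamespace(role="assistant", content="TSLA is trading at $250.", tool_calls=None),
])

def run_loop(messages, max_iterations=5):
    """Condensed version of the conversation loop in process_query."""
    for _ in range(max_iterations):
        message = completions.create(messages=messages).choices[0].message
        messages.append({"role": message.role, "content": message.content or ""})
        if not message.tool_calls:
            return message.content               # no tool calls: we are done
        for tc in message.tool_calls:            # execute tools, feed results back
            result = {"ticker": "TSLA", "price": 250.0}  # stubbed tool result
            messages.append({"role": "tool", "tool_call_id": tc.id,
                             "content": json.dumps(result)})
    return "Max iterations reached"

answer = run_loop([{"role": "user", "content": "What's Tesla trading at?"}])
# answer == "TSLA is trading at $250."
```

The first iteration consumes the tool call and appends its result; the second iteration sees a plain answer and exits the loop, exactly the two-phase rhythm the real agent follows.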

6.3.5 Memory Management

Our memory implementation is simpler than the SDK version:

  1. Direct Message Storage: We store the entire conversation history as a list of messages, rather than using a specialized memory class. This is more straightforward but less structured than our SDK implementation.

  2. Size Management: We implement a simple trimming mechanism to prevent the conversation history from growing too large. This is crucial for managing token usage in long conversations.

  3. No Context System: Unlike our SDK implementation, we don’t have a separate context object - everything is in the message array. This is simpler but less flexible for complex state management.

This approach illustrates the trade-off between simplicity and sophistication in memory management. The SDK’s context system offers more structure and type safety, while our direct approach is more transparent but requires manual management.
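The trimming itself is just a slice over the message list, as a quick sketch shows (DEFAULT_HISTORY_SIZE matches the constant defined in agent_chat.py):

```python
DEFAULT_HISTORY_SIZE = 20  # same value as in agent_chat.py

# Sliding-window trimming: keep only the most recent messages so the
# prompt does not grow without bound across turns.
history = [{"role": "user", "content": f"message {i}"} for i in range(30)]
if len(history) > DEFAULT_HISTORY_SIZE:
    history = history[-DEFAULT_HISTORY_SIZE:]

# The oldest surviving message is now "message 10".
```

One caveat with count-based trimming: if the cut falls between an assistant message containing tool_calls and its matching tool result, the API may reject the orphaned tool message, so more robust implementations trim at conversation-turn boundaries instead.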

6.4 Update our CLI

Now, let’s update our CLI to include the Chat Completion API implementation:

# Add this to src/cli/main.py after the openai_agent_sdk command
@cli.command()
@click.option('--model', default=DEFAULT_MODEL, help='The model to use for the agent')
@click.option('--debug', is_flag=True, help='Enable debug logging')
def chat_completion(model, debug):
    """Start MarketMind using Chat Completion API."""
    
    # Set up logging - always log to file if debug is enabled, never to console for CLI
    log_filename = setup_logging(
        debug=debug,
        module_loggers=DEFAULT_DEBUG_MODULES,
        log_to_file=debug,
        console_output=False  # Don't output logs to console for CLI apps
    )
    
    logger.info(f"Starting MarketMind Chat Completion Agent with model={model}")
    
    # Initialize the agent
    agent = MarketMindChatAgent(model=model)
    
    # Register all the tools
    agent.register_tool(
        "get_stock_price",
        "Get the current price of a stock",
        get_stock_price
    )
    
    agent.register_tool(
        "get_stock_history",
        "Get historical price data for a stock",
        get_stock_history
    )
    
    agent.register_tool(
        "get_company_info",
        "Get basic information about a company",
        get_company_info
    )
    
    agent.register_tool(
        "get_financial_metrics",
        "Get key financial metrics for a company",
        get_financial_metrics
    )
    
    click.echo(click.style("\n🤖 MarketMind Financial Assistant powered by Chat Completion API", fg='blue', bold=True))
    click.echo(click.style("Ask me about stocks, companies, or financial metrics. Type 'exit' to quit.\n", fg='blue'))
    
    if log_filename:
        click.echo(click.style(f"Log file: {log_filename}", fg='yellow'))
    
    # Main conversation loop
    while True:
        # Get user input
        user_input = click.prompt(click.style("You", fg='green', bold=True))
        
        # Check for exit command
        if user_input.lower() in ('exit', 'quit', 'q'):
            logger.info("User requested exit")
            click.echo(click.style("\nThank you for using MarketMind! Goodbye.", fg='blue'))
            break

        # Process the query
        click.echo(click.style("MarketMind", fg='blue', bold=True) + " is thinking...")
        
        click.echo(click.style("  🤔 Processing query and deciding on actions...", fg="yellow"))

        try:
            # Process the query
            response = agent.process_query(user_input)
            click.echo(click.style("  ✅ Analysis complete, generating response...", fg="green"))
            
            # Display the response
            click.echo(click.style("MarketMind", fg='blue', bold=True) + f": {response}\n")
        except Exception as e:
            logger.error(f"Error processing query: {str(e)}", exc_info=True)
            click.echo(click.style("  ❌ Error processing query", fg="red"))
            click.echo(click.style("MarketMind", fg='blue', bold=True) + 
                      ": I encountered an error while processing your request. Please try again.\n")

Our CLI implementation for the Chat Completion agent follows the same pattern as the SDK version, ensuring a consistent user experience across different implementations. Key differences include:

  1. New Imports:
    • Added imports for MarketMindChatAgent and the financial tool functions
    • Imported configuration from the agent_from_scratch module
  2. New Command:
    • Added a new chat_completion command to the CLI group
    • This command initializes and runs the Chat Completion API version of our agent
  3. Tool Registration:
    • Explicitly registered each financial tool with the agent
    • Each tool has a name, description, and function implementation
  4. Consistent UI:
    • Maintained the same user interface style as the SDK version
    • Used the same color coding and progress indicators

6.5 Key Differences from the Agent SDK

Our Chat Completion implementation reveals what’s happening “under the hood” of the SDK:

  1. More Code, More Control: We write significantly more code than with the SDK, but gain complete visibility and control over the process.

  2. Manual Orchestration: We handle the conversation loop, tool execution, and memory management ourselves, rather than relying on the SDK’s abstractions.

  3. Lower-Level API: We interact directly with the Chat Completion API, giving us more flexibility but requiring more code to implement common patterns.

Our from-scratch implementation gives us more control over the conversation flow, but requires more code than the SDK version, illustrating the trade-off between abstraction and flexibility.

6.6 Testing the Chat Completion API Implementation

You can now run the agent with the Chat Completion API implementation:

market-mind chat-completion

This should provide a similar experience to the SDK implementation, but using our custom code to manage the conversation flow and tool execution.

Now users can run the CLI with either implementation:

# Run with the OpenAI Agent SDK implementation
market-mind openai-agent-sdk

# Run with our Chat Completion API implementation
market-mind chat-completion

Both commands accept the same options:

  • --model: Specify which model to use
  • --debug: Enable detailed logging

Here’s what an example session with the Chat Completion implementation might look like:

 market-mind chat-completion

🤖 MarketMind Financial Assistant powered by Chat Completion API
Ask me about stocks, companies, or financial metrics. Type 'exit' to quit.

You: tell me about tesla
MarketMind is thinking...
  🤔 Processing query and deciding on actions...
  🏢 Getting company info for TSLA
  💰 Getting financial metrics for TSLA
  ✅ Analysis complete, generating response...
MarketMind: Tesla, Inc. (TSLA) is a leading company in the consumer cyclical sector, primarily engaged in designing, developing, manufacturing, leasing, and selling electric vehicles as well as energy generation and storage systems. The company operates globally with a significant presence in the United States and China. Tesla's operations are divided into two main segments: Automotive and Energy Generation and Storage.

Tesla's automotive segment offers electric vehicles, including sedans and SUVs, along with related services such as charging networks, vehicle insurance, and after-sales services. The energy segment focuses on solar energy products, energy storage solutions, and related services for residential, commercial, and industrial customers.

Financially, Tesla has a market capitalization of approximately $917.81 billion, with a revenue of about $95.72 billion. Its P/E ratio is around 163.76, indicating high market expectations for future growth. Tesla's profit margin is approximately 6.38%, and it has a return on equity of about 8.77%. The company's stock has experienced significant volatility over the past year, with a 52-week trading range between $167.41 and $488.54.

You: how about apple
MarketMind is thinking...
  🤔 Processing query and deciding on actions...
  🏢 Getting company info for AAPL
  💰 Getting financial metrics for AAPL
  ✅ Analysis complete, generating response...
MarketMind: Apple Inc. (AAPL) is a major player in the technology sector, specializing in consumer electronics, software, and digital services. Headquartered in Cupertino, California, Apple designs, manufactures, and markets a wide range of products including the iPhone smartphones, Mac personal computers, iPads, and wearables like the Apple Watch and AirPods. The company also offers various services such as Apple Music, Apple TV+, iCloud, and the App Store.

Financially, Apple boasts a market capitalization of approximately $3.14 trillion, making it one of the most valuable companies in the world. Its revenue reaches about $395.76 billion, with a profit margin of roughly 24.3%. The company's P/E ratio is around 33.22, and it has a very high return on equity at approximately 136.52%. Apple also provides a dividend yield of 48.00%, which is notably high, indicating significant dividend payouts relative to its stock price. Its stock has traded within a 52-week range of $169.11 to $260.10.

You: compare them for me
MarketMind is thinking...
  🤔 Processing query and deciding on actions...
  ✅ Analysis complete, generating response...
MarketMind: Here's a comparison of Tesla and Apple based on key financial and operational metrics:

| Aspect                       | Tesla (TSLA)                                        | Apple (AAPL)                                        |
|------------------------------|-----------------------------------------------------|-----------------------------------------------------|
| Sector                       | Consumer Cyclical                                   | Technology                                          |
| Industry                     | Auto Manufacturers                                  | Consumer Electronics                                |
| Market Capitalization        | ~$917.81 billion                                   | ~$3,143.83 billion                                |
| Revenue                      | ~$95.72 billion                                    | ~$395.76 billion                                  |
| P/E Ratio                    | 163.76                                              | 33.22                                              |
| Profit Margin                | 6.38%                                               | 24.30%                                             |
| Return on Equity             | 8.77%                                               | 136.52%                                            |
| Dividend Yield               | N/A                                                 | 48.00%                                            |
| 52-Week Range                | $167.41 - $488.54                                   | $169.11 - $260.10                                 |
| Employees                    | Approximately 125,665                                | Approximately 150,000                             |

**Summary:**
- **Market Cap & Revenue:** Apple is significantly larger in terms of market capitalization and revenue.
- **Valuation:** Tesla has a much higher P/E ratio, indicating higher growth expectations, but also higher valuation risk.
- **Profitability:** Apple is more profitable with a higher profit margin and return on equity.
- **Dividends:** Apple offers a substantial dividend yield, unlike Tesla.
- **Stock Range:** Tesla's stock has experienced more volatility over the past year compared to Apple.

Overall, Apple is a mature, highly profitable company with a strong dividend policy and large market valuation. Tesla, on the other hand, is a high-growth company with a focus on electric vehicles and energy solutions, reflected in its higher valuation multiples and more volatile stock price.

6.7 Key Takeaways

In this chapter, we’ve:

  • Implemented a tool manager that handles tool registration, schema generation, and execution
  • Built an agent that uses the Chat Completion API directly
  • Managed conversation history and tool calls manually
  • Added the Chat Completion implementation to our CLI

By building the agent from scratch, we’ve gained a deeper understanding of how the Chat Completion API works and how to manage the conversation flow and tool execution without relying on the SDK’s abstractions.