Build Your First AI Agent in Python with 50 Lines of Code (No Complex Frameworks)

Everyone in the tech world is talking about AI Agents. But what exactly are they, and how do you build one? If you are looking to build an AI agent with Python but feel overwhelmed by complex frameworks like LangChain, you are in the right place.

In this tutorial, we will strip away the complexity. We will write a simple, powerful Python script that allows an AI to actually do things—not just talk. By the end of this guide, you will understand the core logic behind LLM function calling and the ReAct pattern.

What is the Difference Between a Chatbot and an AI Agent?

Before we write code, it is crucial to understand the distinction:

  • 🗣️ Chatbots (like standard ChatGPT) generate text based on training data.
  • 🛠️ AI Agents use tools. They can execute Python functions, search the web, or query a database to get real answers.

For beginners, the best way to learn AI engineering is to build an agent from scratch without “black box” libraries.

The “ReAct” Loop: How AI Agents Think

Successful agents follow a loop called ReAct (Reason + Act). This is the secret sauce of Python automation with AI:

  1. Thought: The AI analyzes your request (“What is the weather?”).
  2. Decision: It looks at its available tools and selects the right Python function.
  3. Action: It executes the code.
  4. Observation: It reads the output of the function.
  5. Answer: It formulates a final response for you.
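Stripped to its bones, that loop is just a while loop. In the sketch below, `decide` is a stand-in for the LLM call we will add later, and `get_weather` is a mocked tool — both names are placeholders for this illustration:

```python
def get_weather(location):
    # Mocked tool: a real agent would call a live weather API here.
    return f"15C and cloudy in {location}"

TOOLS = {"get_weather": get_weather}

def decide(query, observation):
    # Thought + Decision: stand-in for the LLM. Pick a tool until we
    # have an observation, then produce the final answer.
    if observation is None:
        return ("call", "get_weather", "London")
    return ("answer", f"The weather report says: {observation}")

def react_loop(query):
    observation = None
    while True:
        step = decide(query, observation)
        if step[0] == "call":                    # Action
            _, tool_name, arg = step
            observation = TOOLS[tool_name](arg)  # Observation
        else:                                    # Final Answer
            return step[1]

print(react_loop("What is the weather in London?"))
```

Everything that follows in this tutorial is this same loop, with the hard-coded `decide` replaced by a real model call.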

Step 1: Install Required Libraries

To follow this Python AI tutorial, you need the OpenAI library. Open your terminal and run:

pip install openai

Note: You will need a valid OpenAI API key. This logic also works with local models (like Llama 3 via Ollama) if you are building an offline agent.
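For reference, here is one way to configure the client for either backend. The localhost URL reflects Ollama's default port, and the explicit key string is a placeholder — adapt both to your setup:

```python
from openai import OpenAI

# Cloud setup: pass your key explicitly, or set the OPENAI_API_KEY
# environment variable and simply call OpenAI().
client = OpenAI(api_key="sk-your-key-here")

# Local setup: Ollama exposes an OpenAI-compatible endpoint on port 11434,
# so the exact same client class (and agent code) works with a local model.
local_client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # required by the SDK but ignored by Ollama
)
```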

Step 2: Create Your First “Tool” 🛠️

Large Language Models (LLMs) cannot feel the wind or see the sun. They need a tool to fetch data. Let’s create a simple Python function that acts as our weather tool.


import json
from openai import OpenAI

# Initialize the client (Ensure your API key is set in environment variables)
client = OpenAI()

# This function is the "Tool" our Agent will use
def get_current_weather(location):
    """
    Mock function to return weather for a city.
    In a real AI agent, this would call a live Weather API.
    """
    print(f"🕵️ Agent is accessing tool: Weather Check for {location}...")
    
    # Simulating data for demonstration
    if "london" in location.lower():
        return json.dumps({"location": "London", "temperature": "15C", "condition": "Cloudy"})
    elif "new york" in location.lower():
        return json.dumps({"location": "New York", "temperature": "22C", "condition": "Sunny"})
    else:
        return json.dumps({"location": location, "temperature": "Unknown", "condition": "Unknown"})
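Before wiring this into the agent, it is worth calling the tool directly to confirm it returns valid JSON. This snippet repeats the mock logic so it runs on its own:

```python
import json

def get_current_weather(location):
    # Same mock tool as above, repeated so this snippet is standalone.
    if "london" in location.lower():
        return json.dumps({"location": "London", "temperature": "15C", "condition": "Cloudy"})
    elif "new york" in location.lower():
        return json.dumps({"location": "New York", "temperature": "22C", "condition": "Sunny"})
    return json.dumps({"location": location, "temperature": "Unknown", "condition": "Unknown"})

report = json.loads(get_current_weather("London"))
print(report["temperature"], report["condition"])  # 15C Cloudy
```

Returning JSON strings (rather than Python objects) matters: the tool output goes straight back into the model's message history as text.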

Step 3: Building the Agent Brain 🧠

Now we implement function calling. We describe our tool to the AI using a JSON schema. This allows the model to “understand” that it has a weather tool available.


def run_agent(user_query):
    # 1. Define available tools (The Agent's Toolkit)
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                    },
                    "required": ["location"],
                },
            },
        }
    ]

    # 2. Send query + tool definitions to GPT
    print(f"🤖 User asks: {user_query}")
    
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_query}],
        tools=tools,
        tool_choice="auto", 
    )

    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls

    # 3. Check if the AI wants to use a tool
    if tool_calls:
        # The AI decided to act!
        available_functions = {
            "get_current_weather": get_current_weather,
        }
        
        messages = [{"role": "user", "content": user_query}]
        messages.append(response_message)  # Add the assistant's "thought" to history

        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            
            # 4. EXECUTE THE PYTHON CODE
            function_response = function_to_call(
                location=function_args.get("location"),
            )
            
            # 5. Feed the result back to the AI
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )
            
        # 6. Final Answer Generation
        second_response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
        )
        return second_response.choices[0].message.content
    else:
        # No tools needed (regular conversation)
        return response_message.content

# Test your new Python AI Agent
print("Final Answer: " + run_agent("What is the weather like in New York today?"))

The Output

When you run this script, you will see the Agentic workflow in action:

🤖 User asks: What is the weather like in New York today?
🕵️ Agent is accessing tool: Weather Check for New York...
Final Answer: The weather in New York today is sunny with a temperature of 22°C.

Why This Matters for LogicPy Readers

You have just bridged the gap between logic and language. This script is the foundation for advanced AI applications.

From here, you can expand your agent’s capabilities. Imagine connecting it to your Pandas dataframes to analyze CSV files or giving it access to the web. The possibilities for Python AI development are endless.
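Adding a capability just means adding another entry to the agent's toolkit. As a hypothetical sketch, here is a `calculate` tool (a name invented for this example) that would give the agent exact arithmetic — you would also register a matching JSON schema in the `tools` list:

```python
import json

# Hypothetical second tool: exact arithmetic for the agent.
def calculate(expression):
    # eval() is fine for a demo; use a safe expression parser in production.
    return json.dumps({"expression": expression, "result": eval(expression)})

# The dispatch table from run_agent() simply grows with each new tool.
available_functions = {
    # "get_current_weather": get_current_weather,  # from earlier
    "calculate": calculate,
}

print(available_functions["calculate"]("2 + 2"))
```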

Want to learn more about handling data? Check out our guide on Polars vs Pandas to see how to process data faster for your agents.

External Links:

https://www.python.org

https://openai.com/api
