Tuesday, October 21, 2025

Build an AI Agent with Function Calling and GPT-5


AI Agents and Large Language Models (LLMs)

Large language models (LLMs) are advanced AI systems built on deep neural networks such as transformers and trained on vast amounts of text to generate human-like language. LLMs like ChatGPT, Claude, Gemini, and Grok can tackle many challenging tasks and are used across fields such as science, healthcare, education, and finance.

An AI agent extends the capabilities of LLMs to solve tasks that are beyond their pre-trained knowledge. An LLM can write a Python tutorial from what it learned during training. But if you ask it to book a flight, the task requires access to your calendar, web search, and the ability to take actions; these fall beyond the LLM’s pre-trained knowledge. Some of the common actions include:

  • Weather forecast: The LLM connects to a web search tool to fetch the latest weather forecast.
  • Booking agent: An AI agent that can check a user’s calendar, search the web to visit a booking site like Expedia to find available options for flights and hotels, present them to the user for confirmation, and complete the booking on behalf of the user.

How an AI Agent Works

AI agents form a system that uses a Large Language Model to plan, reason, and take steps to interact with its environment, using tools suggested by the model’s reasoning to solve a particular task.

Basic Structure of an AI Agent

Image Generated By Gemini
  • A Large Language Model (LLM): the LLM is the brain of an AI agent. It takes a user’s prompt, plans and reasons through the request, and breaks the problem into steps that determine which tools it should use to complete the task.
  • A tool is the framework the agent uses to perform an action based on the plan and reasoning from the Large Language Model. If you ask an LLM to book a table for you at a restaurant, possible tools include a calendar to check your availability and a web search tool to access the restaurant website and make a reservation for you.

Illustrated Decision Making of a Booking AI Agent

Image Generated By ChatGPT

AI agents can access different tools depending on the task. A tool can be a data store, such as a database. For example, a customer-support agent might access a customer’s account details and purchase history and decide when to retrieve that information to help resolve an issue.

AI agents are used to solve a wide range of tasks, and there are many powerful agents available. Coding agents, particularly agentic IDEs such as Cursor, Windsurf, and GitHub Copilot, help engineers write and debug code faster and build projects quickly. CLI coding agents like Claude Code and Codex CLI can interact with a user’s desktop and terminal to carry out coding tasks. ChatGPT supports agents that can perform actions such as booking reservations on a user’s behalf. Agents are also integrated into customer support workflows to communicate with customers and resolve their issues.

Function Calling

Function calling is a technique for connecting a large language model (LLM) to external tools such as APIs or databases. It is used in creating AI agents to connect LLMs to tools. In function calling, each tool is defined as a code function (for example, a weather API to fetch the latest forecast) together with a JSON Schema that specifies the function’s parameters and instructs the LLM on when and how to call the function for a given task.

The type of function defined depends on the task the agent is designed to perform. For example, for a customer support agent we can define a function that extracts information from unstructured data, such as PDFs containing details about a business’s products.
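
As an illustration, a hypothetical PDF-extraction tool for such a customer support agent could pair a plain Python function with a JSON schema in the same style used later in this post. The function name, parameters, and the pypdf dependency below are assumptions for this sketch, not part of the agent we build here.

from pypdf import PdfReader

def extract_product_info(pdf_path: str) -> str:
    """Hypothetical tool: return the raw text of a product PDF."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

# JSON schema describing the tool to the model (same structure as the web_search schema later on)
extract_product_info_schema = {
    "type": "function",
    "name": "extract_product_info",
    "description": "Extract text from a product PDF so the agent can answer questions about it.",
    "parameters": {
        "type": "object",
        "properties": {
            "pdf_path": {"type": "string", "description": "Path to the product PDF."},
        },
        "required": ["pdf_path"],
        "additionalProperties": False,
    },
}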

In this post I will demonstrate how to use function calling to build a simple web search agent using GPT-5 as the large language model.

Basic Structure of a Web Search Agent

Image Generated By Gemini

The main logic behind the web search agent:

  • Define a code function to handle the web search.
  • Define custom instructions that guide the large language model in determining when to call the web search function based on the query. For example, if the query asks about the current weather, the web search agent will recognize the need to search the internet to get the latest weather reports. However, if the query asks it to write a tutorial about a programming language like Python, something it can answer from its pre-trained knowledge, it will not call the web search function and will respond directly instead.

Prerequisites

Create an OpenAI account and generate an API key
1: Create an OpenAI Account if you don’t have one
2: Generate an API Key

Set Up and Activate the Environment

python3 -m venv env
source env/bin/activate

Export OpenAI API Key

export OPENAI_API_KEY="Your OpenAI API Key"

Set Up Tavily for Web Search
Tavily is a specialized web-search tool for AI agents. Create an account on Tavily.com, and once your profile is set up, an API key will be generated that you can copy into your environment. New accounts receive 1,000 free credits that can be used for up to 1,000 web searches.

Export TAVILY API Key

export TAVILY_API_KEY="Your Tavily API Key"

Install Packages

pip3 install openai
pip3 install tavily-python
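
Optionally, run a quick sanity check before building the agent. This is a minimal sketch that assumes both packages are installed and TAVILY_API_KEY is exported in the current shell; it simply prints the title of the first search result.

import os
from tavily import TavilyClient

# one-off test query to confirm the Tavily key works
tavily = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))
print(tavily.search(query="latest AI news", max_results=1)["results"][0]["title"])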

Building a Web Search Agent with Function Calling, Step by Step

Step 1: Create a Web Search Function with Tavily

A web search function is implemented using Tavily, serving as the tool for function calling in the web search agent.

from tavily import TavilyClient
import os

tavily = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))

def web_search(query: str, num_results: int = 10):
    try:
        result = tavily.search(
            query=query,
            search_depth="basic",
            max_results=num_results,
            include_answer=False,
            include_raw_content=False,
            include_images=False
        )

        results = result.get("results", [])

        return {
            "query": query,
            "results": results,
            "sources": [
                {"title": r.get("title", ""), "url": r.get("url", "")}
                for r in results
            ]
        }

    except Exception as e:
        return {
            "error": f"Search error: {e}",
            "query": query,
            "results": [],
            "sources": [],
        }

Web search function code breakdown

Tavily is initialized with its API key. Within the web_search function, the following steps are performed:

  • The Tavily search function is called to search the internet and retrieve the top 10 results.
  • The search results and their corresponding sources are returned.

This returned output will serve as relevant context for the web search agent, which we will define later in this article, to fetch up-to-date information for queries (prompts) that require real-time data such as weather forecasts.
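
Calling the function on its own is a convenient way to inspect the shape of the data the agent will receive. A quick example (the query string is arbitrary, and the actual results depend on what Tavily returns at the time):

response = web_search("weather in London today", num_results=3)

print(response["query"])            # the original query string
print(len(response["results"]))     # up to 3 result entries from Tavily
for source in response["sources"]:
    print(source["title"], "-", source["url"])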

Step 2: Create the Tool Schema

The tool schema defines custom instructions for an AI model on when it should call a tool, in this case the tool used by the web search function. It also specifies the conditions and actions to be taken when the model calls a tool. A JSON tool schema is defined below, based on the OpenAI tool schema structure.

tool_schema = [
    {
        "type": "function",
        "name": "web_search",

        "description": """Execute a web search to fetch up to date information. Synthesize a concise, 
        self-contained answer from the content of the results of the visited pages.
        Fetch pages, extract text, and provide the best available result while citing 1-3 sources (title + URL). 
        If sources conflict, surface the uncertainty and prefer the most recent evidence.
        """,

        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Query to be searched on the web.",
                },
            },
            "required": ["query"],
            "additionalProperties": False
        },
    },
]

Tool Schema Properties

  • type: Specifies that the type of tool is a function.
  • name: the name of the function that will be used for the tool call, which is web_search.
  • description: Describes what the AI model should do when calling the web search tool. It instructs the model to search the internet using the web_search function to fetch up-to-date information and extract relevant details to generate the best possible response.
  • strict: Set to true; this property ensures the model’s function-call arguments strictly follow the tool schema.
  • parameters: Defines the parameters that will be passed into the web_search function. In this case, there is only one parameter: query, which represents the search term to look up on the internet.
  • required: Instructs the LLM that query is a mandatory parameter for the web_search function.
  • additionalProperties: Set to false, meaning the tool’s arguments object cannot include any parameters other than those defined under parameters.properties.
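
When the model decides to use the tool, it emits the function name along with a JSON string of arguments that conforms to this schema. A small illustration of how those arguments are parsed and dispatched, with a hypothetical argument value; the agent in Step 3 does exactly this with json.loads:

import json

# example arguments string as the model might emit it
raw_arguments = '{"query": "weather in London today"}'

args = json.loads(raw_arguments)   # {'query': 'weather in London today'}
result = web_search(**args)        # dispatches to the web search function defined in Step 1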

Step 3: Create the Web Search Agent Using GPT-5 and Function Calling

Finally, I will build an agent that we can chat with and that can search the web when it needs up-to-date information. I will use GPT-5-mini, a fast and accurate model from OpenAI, together with function calling to invoke the tool schema and the web search function already defined.

from datetime import datetime, timezone
import json
from openai import OpenAI
import os

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# tracker for the last model response id to maintain conversation state
prev_response_id = None

# a list for storing tool results from the function call
tool_results = []

while True:
    # if the tool results list is empty, prompt the user for a message
    if len(tool_results) == 0:
        user_message = input("User: ")

        # commands for exiting the chat
        if isinstance(user_message, str) and user_message.strip().lower() in {"exit", "q"}:
            print("Exiting chat. Goodbye!")
            break

    else:
        user_message = tool_results.copy()

        # clear the tool results for the next call
        tool_results = []

    # obtain the current date, passed to the model as an instruction to assist decision making
    today_date = datetime.now(timezone.utc).date().isoformat()

    response = client.responses.create(
        model="gpt-5-mini",
        input=user_message,
        instructions=f"Current date is {today_date}.",
        tools=tool_schema,
        previous_response_id=prev_response_id,
        text={"verbosity": "low"},
        reasoning={
            "effort": "low",
        },
        store=True,
    )

    prev_response_id = response.id

    # Handle the model response output
    for output in response.output:

        if output.type == "reasoning":
            print("Assistant: ", "Reasoning ....")

            for reasoning_summary in output.summary:
                print("Assistant: ", reasoning_summary)

        elif output.type == "message":
            for item in output.content:
                print("Assistant: ", item.text)

        elif output.type == "function_call":
            # obtain the function by name
            function_name = globals().get(output.name)
            # load the function arguments
            args = json.loads(output.arguments)
            function_response = function_name(**args)
            tool_results.append(
                {
                    "type": "function_call_output",
                    "call_id": output.call_id,
                    "output": json.dumps(function_response)
                }
            )

Step by Step Code Breakdown

from openai import OpenAI
import os

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
prev_response_id = None
tool_results = []
  • Initializes the OpenAI client with an API key.
  • Initializes two variables, prev_response_id and tool_results. prev_response_id keeps track of the model’s response id to maintain conversation state, and tool_results is a list that stores outputs returned from the web_search function call.

The chat runs inside the loop. A user enters a message; the model, called with the tool schema, accepts the message, reasons over it, and decides whether to call the web search tool, and the tool’s output is then passed back to the model. The model generates a context-aware response. This continues until the user exits the chat.

Code Walkthrough of the Loop

if len(tool_results) == 0:
    user_message = input("User: ")
    if isinstance(user_message, str) and user_message.strip().lower() in {"exit", "q"}:
        print("Exiting chat. Goodbye!")
        break

else:
    user_message = tool_results.copy()
    tool_results = []

today_date = datetime.now(timezone.utc).date().isoformat()

response = client.responses.create(
    model="gpt-5-mini",
    input=user_message,
    instructions=f"Current date is {today_date}.",
    tools=tool_schema,
    previous_response_id=prev_response_id,
    text={"verbosity": "low"},
    reasoning={
        "effort": "low",
    },
    store=True,
)

prev_response_id = response.id
  • Checks whether tool_results is empty. If it is, the user is prompted to type a message, with an option to quit using exit or q.
  • If tool_results is not empty, user_message is set to the collected tool outputs to be sent to the model. tool_results is then cleared to avoid resending the same tool outputs on the next loop iteration.
  • The current date (today_date) is obtained so the model can make time-aware decisions.
  • Calls client.responses.create to generate the model’s response; it accepts the following parameters:
    • model: set to gpt-5-mini.
    • input: accepts the user’s message.
    • instructions: set to the current date (today_date).
    • tools: set to the tool schema defined earlier.
    • previous_response_id: set to the previous response’s id so the model can maintain conversation state.
    • text: verbosity is set to low to keep the model’s response concise.
    • reasoning: GPT-5-mini is a reasoning model; the reasoning effort is set to low for a faster response. For more complex tasks we can set it to high.
    • store: tells the API to store the current response so it can be retrieved later, which helps with conversation continuity.
  • prev_response_id is set to the current response’s id so the next call can thread onto the same conversation.
for output in response.output:
    if output.type == "reasoning":
        print("Assistant: ", "Reasoning ....")

        for reasoning_summary in output.summary:
            print("Assistant: ", reasoning_summary)

    elif output.type == "message":
        for item in output.content:
            print("Assistant: ", item.text)

    elif output.type == "function_call":
        # obtain the function by name
        function_name = globals().get(output.name)
        # load the function arguments
        args = json.loads(output.arguments)
        function_response = function_name(**args)
        # append the function call's id and the function's response to the tool results list
        tool_results.append(
            {
                "type": "function_call_output",
                "call_id": output.call_id,
                "output": json.dumps(function_response)
            }
        )

This processes the model’s response output and does the following:

  • If the output type is reasoning, print each item in the reasoning summary.
  • If the output type is message, iterate through the content and print each text item.
  • If the output type is a function call, obtain the function by name, parse its arguments, and pass them to the function (web_search) to generate a response. In this case, the web search response contains up-to-date information relevant to the user’s message. Finally, the function call’s id and the function’s response are appended to tool_results. This lets the next loop iteration send the tool result back to the model.
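
One design note: looking up the function with globals().get works for this single-tool demo, but it silently returns None for any unexpected name. A slightly safer pattern, shown here only as a sketch and not part of the original code, is an explicit registry of allowed tools:

import json

# explicit tool registry instead of a globals() lookup (sketch; assumes web_search from Step 1)
TOOLS = {"web_search": web_search}

def call_tool(name: str, arguments: str):
    func = TOOLS.get(name)
    if func is None:
        return {"error": f"Unknown tool: {name}"}
    return func(**json.loads(arguments))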

Full Code for the Web Search Agent

from datetime import datetime, timezone
import json
from openai import OpenAI
import os
from tavily import TavilyClient

tavily = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))

def web_search(query: str, num_results: int = 10):
    try:
        result = tavily.search(
            query=query,
            search_depth="basic",
            max_results=num_results,
            include_answer=False,
            include_raw_content=False,
            include_images=False
        )

        results = result.get("results", [])

        return {
            "query": query,
            "results": results,
            "sources": [
                {"title": r.get("title", ""), "url": r.get("url", "")}
                for r in results
            ]
        }

    except Exception as e:
        return {
            "error": f"Search error: {e}",
            "query": query,
            "results": [],
            "sources": [],
        }


tool_schema = [
    {
        "type": "function",
        "name": "web_search",
        "description": """Execute a web search to fetch up to date information. Synthesize a concise, 
        self-contained answer from the content of the results of the visited pages.
        Fetch pages, extract text, and provide the best available result while citing 1-3 sources (title + URL). 
        If sources conflict, surface the uncertainty and prefer the most recent evidence.
        """,
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Query to be searched on the web.",
                },
            },
            "required": ["query"],
            "additionalProperties": False
        },
    },
]

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# tracker for the last model response id to maintain conversation state
prev_response_id = None

# a list for storing tool results from the function call
tool_results = []

while True:
    # if the tool results list is empty, prompt the user for a message
    if len(tool_results) == 0:
        user_message = input("User: ")

        # commands for exiting the chat
        if isinstance(user_message, str) and user_message.strip().lower() in {"exit", "q"}:
            print("Exiting chat. Goodbye!")
            break

    else:
        # set the user message to the tool results to be sent to the model
        user_message = tool_results.copy()

        # clear the tool results for the next call
        tool_results = []

    # obtain the current date, passed to the model as an instruction to assist decision making
    today_date = datetime.now(timezone.utc).date().isoformat()

    response = client.responses.create(
        model="gpt-5-mini",
        input=user_message,
        instructions=f"Current date is {today_date}.",
        tools=tool_schema,
        previous_response_id=prev_response_id,
        text={"verbosity": "low"},
        reasoning={
            "effort": "low",
        },
        store=True,
    )

    prev_response_id = response.id


    # Handle the model response output
    for output in response.output:

        if output.type == "reasoning":
            print("Assistant: ", "Reasoning ....")

            for reasoning_summary in output.summary:
                print("Assistant: ", reasoning_summary)

        elif output.type == "message":
            for item in output.content:
                print("Assistant: ", item.text)

        # check if the output type is a function call and append the call's result to the tool results list
        elif output.type == "function_call":
            # obtain the function by name
            function_name = globals().get(output.name)
            # load the function arguments
            args = json.loads(output.arguments)
            function_response = function_name(**args)
            # append the function call's id and the function's response to the tool results list
            tool_results.append(
                {
                    "type": "function_call_output",
                    "call_id": output.call_id,
                    "output": json.dumps(function_response)
                }
            )

When you run the code, you can chat with the agent and ask questions that require the latest information, such as the current weather or the latest product releases. The agent responds with up-to-date information along with the corresponding sources from the internet. Below is a sample output from the terminal.

User: What is the weather like in London today?
Assistant:  Reasoning ....
Assistant:  Reasoning ....
Assistant:  Right now in London: overcast, about 18°C (64°F), humidity ~88%, light SW wind ~16 km/h, no precipitation reported. Source: WeatherAPI (current conditions) — https://www.weatherapi.com/

User: What is the latest iPhone model?
Assistant:  Reasoning ....
Assistant:  Reasoning ....
Assistant:  The latest iPhone models are the iPhone 17 lineup (including iPhone 17, iPhone 17 Pro, and iPhone 17 Pro Max) and the new iPhone Air, announced by Apple on Sept 9, 2025. Source: Apple Newsroom — https://www.apple.com/newsroom/2025/09/apple-debuts-iphone-17/

User: Multiply 500 by 12.
Assistant:  Reasoning ....
Assistant:  6000
User: exit
Exiting chat. Goodbye!

You can see the results with their corresponding web sources. When you ask the agent to perform a task that doesn’t require up-to-date information, such as math calculations or writing code, it responds directly without any web search.

Note: The web search agent is a simple, single-tool agent. Advanced agentic systems orchestrate multiple specialized tools and use efficient memory to maintain context, plan, and solve more complex tasks.
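
Extending this agent to multiple tools mostly means adding more schema entries and more functions for the dispatch step to find. A rough sketch is shown below; get_calendar_events is a hypothetical second tool that is not implemented in this post, and a matching Python function would also need to be defined.

# hypothetical second tool; the schema entry follows the same structure as web_search
calendar_schema = {
    "type": "function",
    "name": "get_calendar_events",
    "description": "Fetch the user's calendar events for a given date.",
    "parameters": {
        "type": "object",
        "properties": {
            "date": {"type": "string", "description": "Date in YYYY-MM-DD format."},
        },
        "required": ["date"],
        "additionalProperties": False,
    },
}

# register both tools; the function_call branch already dispatches by output.name
all_tools = tool_schema + [calendar_schema]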

Conclusion

In this post I explained how an AI agent works and how it extends the capabilities of a large language model to interact with its environment, perform actions, and solve tasks through the use of tools. I also explained function calling and how it enables LLMs to call tools. I demonstrated how to create a tool schema for function calling that defines when and how an LLM should call a tool to perform an action. I defined a web search function using Tavily to fetch information from the web and then showed step by step how to build a web search agent using function calling and GPT-5-mini as the LLM. In the end, we built a web search agent capable of retrieving up-to-date information from the internet to answer user queries.

Check out my GitHub repo, GenAI-Courses, where I have published more courses on various Generative AI topics. It also includes a guide on building an Agentic RAG using function calling.

Reach out to me via:

Email: [email protected]

LinkedIn: https://www.linkedin.com/in/ayoola-olafenwa-003b901a9/

References

https://platform.openai.com/docs/guides/function-calling?api-mode=responses

https://docs.tavily.com/documentation/api-reference/endpoint/search
