
Apr 3, 2025 - 12:17
Understanding LLM Tools in LangChain with OpenAI

Large Language Models (LLMs) are powerful AI tools capable of processing and generating human-like text. However, they often require external tools to extend their capabilities beyond text-based reasoning. In this article, we will explore how to integrate tools with an LLM using LangChain and OpenAI to provide dynamic responses based on real-world data.

Setting Up the LLM

To begin, we import the necessary module from LangChain and instantiate an OpenAI chat model:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

We can invoke the model with a simple query:

llm.invoke("How will the weather be in Lagos today?")

At this stage, the model can generate text-based responses, but it lacks access to real-world data such as weather updates or seating availability. To enhance its functionality, we introduce custom tools.

Defining Custom Tools

LangChain allows us to define custom tools, which act as external functions that the LLM can call when needed.

Weather Tool

The get_weather tool returns a mock weather report for a given location (a real implementation would query a weather API):

from langchain_core.tools import tool

@tool
def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() == "lagos":
        return "It's 15 degrees Celsius and cloudy."
    else:
        return "It's 32 degrees Celsius and sunny."

Seating Availability Tool

The check_seating_availability tool allows users to check if outdoor or indoor seating is available in a given location.

@tool
def check_seating_availability(location: str, seating_type: str):
    """Call to check seating availability."""
    if location.lower() == "lagos" and seating_type.lower() == "outdoor":
        return "Yes, we still have seats available outdoors."
    elif location.lower() == "lagos" and seating_type.lower() == "indoor":
        return "Yes, we have indoor seating available."
    else:
        return "Sorry, seating information for this location is unavailable."

Binding Tools to the LLM

To enable the LLM to use these tools, we bind them to the model:

tools = [get_weather, check_seating_availability]
llm_with_tools = llm.bind_tools(tools)

Now, when we ask a question that matches a tool’s capability, the model responds with a structured tool call (exposed on the response’s tool_calls attribute) instead of a plain text answer; executing the tool is still up to our code:

result = llm_with_tools.invoke("How will the weather be in Lagos today?")
print(result)

We can also ask multiple questions in one request:

result = llm_with_tools.invoke(
    "How will the weather be in Lagos today? Do you still have outdoor seats available?"
)
print(result)

Handling Multiple Tool Calls

Sometimes, the LLM needs to call multiple tools in response to a single user query. Each tool call in the response must be executed, and its output passed back to the model as a tool message, before the model can compose a final answer.

Conclusion

By binding tools to the LLM, we enhance its ability to interact with external data sources, making it more practical and useful for real-world applications. This approach allows for seamless integration of custom functions, enabling LLMs to handle dynamic and context-aware queries efficiently.

With LangChain and OpenAI, we can build intelligent, tool-augmented AI assistants capable of performing complex tasks beyond text generation. Whether checking weather updates, seating availability, or integrating with other APIs, tool-based LLMs offer a powerful way to bridge AI with real-world applications.