
Mar 25, 2025 - 09:12
Summary of LLM Function Calling


What I’m discussing here is how Wrtn Technologies is building open source software.

I’ll explain in several parts how our team is preparing for the AI era.

If you’re not a developer, you can safely skip over the code details without missing the main ideas.

If you’re curious about our technology, please check out our open source repository at https://github.com/wrtnlabs/agentica.

Introduction

‘AI’ is the hottest keyword right now.

We often hear stories of liberation from labor through AI, which can be both hopeful and daunting,

while others express a strong desire not to return to a time without AI.

However, discussions about AI tend to be so entangled with complex metrics that it becomes difficult to understand what is really being said.

So, from a pure backend developer’s perspective, let’s talk about how AI is transforming our lives, particularly the field of development.

I’ve tried to write this as simply as possible so that even those with no development background can grasp some of the insights.

The Past of Backend Development, and Function Calling

Backend development is essentially server development, and server development is all about designing contracts.

A server is a collection of promises like “if you give me A, I will give you B”, which is why I often liken it to a vending machine.

Imagine a vending machine built from a series of promises about how much money to insert, which button to press, and what drink will come out.

However, with the advent of AI, one element has changed: the entity responsible for calling APIs.

Until now, API calls were triggered by the pages developed by frontend developers or user actions.

A user would click a button, and just like pressing a button on a vending machine to get a drink, the process was straightforward.

But the emergence of AI has introduced a scenario where the function can be “called” without any user clicking a button or scrolling.

We call this concept “Function Calling”.

Defining Function Calling

Enable models to fetch data and take actions.

*Function calling provides a powerful and flexible way for OpenAI models to interface with your code or external services, and has two primary use cases:*

  • Fetching Data
  • Taking Action

According to OpenAI’s definition of “function calling,” it enables models to fetch data or perform actions.

After all, “fetching data” in web development is essentially an HTTP GET request, and “taking action” corresponds to methods like POST, so both use cases boil down to the same mechanism: making an API call.

This definition means that at the moment when needed, the model will make a GET or POST request, effectively “firing” an API call.

So, how exactly does the model autonomously “fire” an API call?

OpenAI SDK and the Principle of Function Calling

To explain this, let’s first discuss how we interact with APIs through LLMs.

Calling an API via an LLM gives the impression of having a “conversation.”

But for those who understand the underlying structure, it’s really just asking “What would you say in this situation?” at every moment.

Consider this pseudo-code:

  • The function’s name is “What would you say in this situation?”
  • The function’s argument is the entire conversation history:
    • This includes what the user said, what the AI said, and any additional contextual information.

That’s it.
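The idea can be sketched in a few lines of TypeScript. `whatWouldYouSay` here is a hypothetical stand-in for the real model call, used only to show that a “conversation” is one function invoked repeatedly with the ever-growing history as its argument:

```typescript
// Sketch: an LLM "conversation" is just one function,
// "What would you say in this situation?", whose only argument
// is the accumulated conversation history.
type Message = { role: "user" | "assistant"; content: string };

// Hypothetical stand-in for the real model: it just reports
// how much history it was given.
function whatWouldYouSay(history: Message[]): Message {
  return {
    role: "assistant",
    content: `(reply based on ${history.length} prior messages)`,
  };
}

// "Dialogue" is nothing more than appending messages and calling again.
const history: Message[] = [];
history.push({ role: "user", content: "What should I eat today?" });
history.push(whatWouldYouSay(history)); // model sees 1 prior message
history.push({ role: "user", content: "Something near Gangnam Station." });
history.push(whatWouldYouSay(history)); // model sees 3 prior messages

console.log(history.length); // 4
```

There is no hidden state on the model's side in this picture: everything the “AI” knows at each turn is whatever you put into that array.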

Thus, a conversation through an LLM simply accumulates the conversation history as parameters,

and the impression of dialogue is merely the result of continuously appending previous messages.

It might sound trivial, but this very mechanism allows us to talk about function calling.

Since you can manipulate the conversation history by asking “What would you say in this situation?”,

isn’t it possible for the API call result to simply be inserted into the conversation as if the AI had made the call?

For example:

  • User: “What should I eat today?”
  • LLM: “I received the result *** from your request.”
  • LLM: “Based on a search through a mapping app, the best restaurant around Gangnam Station is A!”
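In message terms, that exchange is just a history with the function's result spliced in as its own entry. A minimal sketch follows; the role names mirror the OpenAI chat format, and the restaurant payload is invented purely for illustration:

```typescript
// Sketch of the history after a function result has been inserted.
// The model never "called" anything; our code appended the result,
// and the model's next reply simply reads it from the history.
type HistoryEntry = { role: "user" | "assistant" | "tool"; content: string };

const history: HistoryEntry[] = [
  { role: "user", content: "What should I eat today?" },
  // Result of the map-app API call, appended by OUR code, not the model:
  { role: "tool", content: '{"best_restaurant": "A", "near": "Gangnam Station"}' },
  // The model grounds its answer in the tool message preceding it:
  {
    role: "assistant",
    content:
      "Based on a search through a mapping app, the best restaurant around Gangnam Station is A!",
  },
];

const toolResult = JSON.parse(history[1].content);
console.log(toolResult.best_restaurant); // "A"
```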

Function Calling at the Code Level

import { OpenAI } from "openai";

const openai = new OpenAI();

const tools = [{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get current temperature for a given location.",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "City and country e.g. Bogotá, Colombia"
        }
      },
      "required": [
        "location"
      ],
      "additionalProperties": false
    },
    "strict": true
  }
}];

const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is the weather like in Paris today?" }],
  tools, // When you pass tools along with the prompt, GPT can choose from these tools!
  store: true,
});

The LLM itself isn’t executing the API call.

So, while it may seem like the API is being fired, in reality, the LLM is simply choosing a tool and filling in the parameters for an API request.

Please pay close attention to the commented part in the code!
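To make “choosing a tool and filling in the parameters” concrete, here is a sketch of what your code does after a call like the one above returns. The `completion` object is mocked so this runs without an API key, but its shape matches a Chat Completions tool-call response; `getWeather` is a hypothetical local function standing in for a real weather API:

```typescript
// Mocked completion, shaped like a Chat Completions response in which
// the model picked the "get_weather" tool and filled in its parameters.
const completion = {
  choices: [{
    message: {
      role: "assistant",
      content: null,
      tool_calls: [{
        id: "call_123",
        type: "function",
        function: {
          name: "get_weather",
          arguments: '{"location": "Paris, France"}', // filled in by the model
        },
      }],
    },
  }],
};

// Hypothetical stand-in for a real weather API, executed by OUR code.
function getWeather(location: string): string {
  return `14°C and cloudy in ${location}`;
}

// The model only chose the tool and produced its arguments as a JSON
// string; parsing and executing the function is entirely our job.
const call = completion.choices[0].message.tool_calls[0];
const args = JSON.parse(call.function.arguments);
const result =
  call.function.name === "get_weather" ? getWeather(args.location) : "unknown tool";

console.log(result); // "14°C and cloudy in Paris, France"
```

In a real loop, you would append `result` back into the messages as a `{ role: "tool", tool_call_id: call.id, content: result }` entry and call the model once more, which produces the natural-language answer the user actually sees.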