A Beginner’s Guide to Getting Started with Messages in LangChain
If you've spent any time developing AI, whether it's a chatbot, a support agent, or a simple Q&A app, you've already come across messages. Even if you didn’t pay them much attention, they were there, quietly doing the heavy lifting behind every interaction.
So, why should you care about messages in the first place?
They are the foundation of how chat models communicate. Messages carry the what, who, and how of a conversation. Without understanding them, you're essentially flying blind. But once you do, you gain precise control over your model's behavior, clarity in structuring your prompts, and flexibility when building more advanced workflows.
Before we dive in, here’s something you’ll love:
We are currently working on Langcasts.com, a resource crafted specifically for AI engineers, whether you're just getting started or already deep in the game. We'll be sharing guides, tips, hands-on walkthroughs, and extensive classes to help you master every piece of the puzzle. If you’d like to be notified the moment new materials drop, you can subscribe here to get updates directly.
In LangChain, messages aren’t just random text blobs. They’re well-defined structures that capture the role of the speaker, the content being shared, and sometimes even extra metadata like tool calls or token usage. LangChain brings all of this together through a unified message format that works seamlessly across different chat model providers, so you don’t have to worry about the small differences each one introduces.
This article will guide you through LangChain messages. You'll become familiar with the various message types, understand the roles they play, see how content and metadata combine, and explore how LangChain handles streaming and special message scenarios.
What is a Message in LangChain?
At the heart of every AI-powered conversation lies a simple yet powerful concept: the message. Think of it as the basic unit of communication between you and the model, just like a text in a group chat, but with a bit more structure and purpose.
A message carries who is speaking (the role), what they’re saying (the content), and sometimes a bit of extra information (the metadata), like timestamps, token usage, or even the message ID.
The role tells the model how to interpret a message. It’s like assigning characters in a script. Here are the main roles you’ll encounter:
- User: This represents the person interacting with the AI. It’s the prompt, question, or command that starts the conversation.
- Assistant: The AI’s response. Whether it’s a helpful answer or a witty comeback, this is the voice of the model.
- System: A behind-the-scenes guide that helps steer the assistant’s behavior. It might set the tone, define goals, or clarify instructions. Not all models support it, but when they do, it’s powerful.
- Tool: Used when the model calls on an external tool or function and gets a response back. This allows the AI to do things like look up data or interact with external APIs.
- Function: A legacy role tied to OpenAI’s older function-calling system. Today, the tool role is preferred for this kind of interaction.
The content is what’s actually being said. Most of the time, it’s text, but it can also include images, audio, or other media when the model supports it. Content is what carries the heart of the conversation.
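When a provider supports multimodal input, the content of a single message can even be a list of typed blocks instead of a plain string. Here’s a minimal sketch using an OpenAI-style image block; the exact block shape and the URL are illustrative and vary by provider:
import { HumanMessage } from "@langchain/core/messages";
// Content as an array of blocks: text plus an image reference.
const multimodalMessage = new HumanMessage({
  content: [
    { type: "text", text: "What’s in this image?" },
    { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } },
  ],
});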
Finally, there’s Other Message Data. Depending on the chat model provider, messages can include other data such as:
- ID: An optional unique identifier for the message.
- Name: An optional name property that lets you differentiate between entities/speakers sharing the same role. Not all models support this!
- Metadata: Additional information about the message, such as timestamps, token usage, etc. (see the inspection sketch after this list).
- Tool Calls: A request made by the model to call one or more tools.
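To make this concrete, here’s a rough sketch of reading that extra data off a model reply. It assumes a LangChain chat model instance named model; which fields are actually populated depends on the provider:
const reply = await model.invoke([new HumanMessage("Hi there!")]);
console.log(reply.id); // optional message identifier
console.log(reply.usage_metadata); // token usage, when the provider reports it
console.log(reply.response_metadata); // provider-specific details
console.log(reply.tool_calls); // any tool calls the model requested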
Now, here’s where things get interesting: different model providers structure messages in different ways. Without a standardized format, switching between providers can get messy. That’s why LangChain offers a unified message structure: it smooths out the differences so you don’t have to think about what format OpenAI wants versus what Gemini prefers. You just send a message, and LangChain handles the rest.
Let’s take a quick look:
Raw message format (OpenAI-style):
{
  role: "user",
  content: "Tell me a joke."
}
LangChain message format:
import { HumanMessage } from "@langchain/core/messages";
new HumanMessage("Tell me a joke.");
Same message, cleaner structure, and much easier to build with, especially when your app scales or starts juggling different chat models.
Core LangChain Message Types
Now that we understand the anatomy of a message (role, content, metadata), let’s look at the main message types in LangChain. LangChain structures messages into different types, each serving a unique purpose in conversation flow. These message types ensure clarity in communication between users, the AI model, and external tools. Let’s break them down.
SystemMessage – Priming Model Behavior
A SystemMessage provides the AI with instructions on how to behave throughout a conversation. It can set the model’s persona, define interaction rules, or establish a specific tone. This is particularly useful when guiding AI responses within a controlled environment.
Example:
import { SystemMessage } from "@langchain/core/messages";
const message = new SystemMessage("You are a helpful travel assistant.");
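To see it steer behavior, put the SystemMessage at the front of the list you send to a chat model. A quick sketch, assuming a chat model instance named model like the ChatOpenAI one used later in this article:
import { SystemMessage, HumanMessage } from "@langchain/core/messages";
const reply = await model.invoke([
  new SystemMessage("You are a helpful travel assistant."),
  new HumanMessage("Suggest a weekend getaway."),
]);
// The system instruction shapes the tone and focus of the reply.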
HumanMessage – Capturing User Input
A HumanMessage represents input from the user. It can contain plain text or multimodal content like images, audio, or video, depending on the model’s capabilities.
Example:
import { HumanMessage } from "@langchain/core/messages";
const userMessage = new HumanMessage("What’s the weather like today?");
AIMessage – The Model’s Response
An AIMessage is the model’s reply to a user. It typically contains text but can also include tool calls or metadata related to the response.
Example:
import { AIMessage } from "@langchain/core/messages";
const aiResponse = new AIMessage("It’s sunny with a high of 25°C.");
AIMessageChunk – Streaming Responses
Instead of waiting for a full response, AIMessageChunk allows messages to be streamed in smaller parts. This improves user experience by displaying AI-generated responses as they are being processed.
Example:
for await (const chunk of model.stream([new HumanMessage("Tell me a story.")])) {
  console.log(chunk.content); // each chunk carries a fragment of the response
}
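Each chunk is itself an AIMessageChunk, and chunks can be merged with concat() to rebuild the complete message once streaming finishes. A minimal sketch, again assuming a chat model instance named model:
import { AIMessageChunk, HumanMessage } from "@langchain/core/messages";
let full: AIMessageChunk | undefined;
for await (const chunk of model.stream([new HumanMessage("Tell me a story.")])) {
  full = full === undefined ? chunk : full.concat(chunk);
}
console.log(full?.content); // the assembled response text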
ToolMessage – Handling Tool Call Results
When an AI model requests external data or processing, a ToolMessage is used to return those results. It contains tool invocation details, ensuring the AI has the necessary context.
Example:
import { ToolMessage } from "@langchain/core/messages";
const toolMessage = new ToolMessage({
  tool_call_id: "weather-api", // must match the id of the tool call the model issued
  content: "The current temperature is 22°C.",
});
Special & Utility Message Types
In addition to the core types used in everyday interactions, LangChain includes a few specialized messages designed for legacy support or complex workflows. These messages help manage conversations efficiently and ensure compatibility with external systems.
RemoveMessage – Managing Chat History in LangGraph
In complex workflows, especially those using LangGraph, there may be a need to remove messages dynamically. RemoveMessage allows you to prune chat history, keeping interactions concise and relevant.
This is useful when limiting memory usage or ensuring outdated information does not affect future responses.
Example:
import { RemoveMessage } from "@langchain/core/messages";
// RemoveMessage targets a message by its id, not by its content.
const removeMsg = new RemoveMessage({ id: "message-id-to-delete" });
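In practice, you usually return RemoveMessage objects from a LangGraph node so the messages reducer deletes them from state. A rough sketch, assuming a MessagesAnnotation-style state where every stored message carries an id:
import { RemoveMessage, BaseMessage } from "@langchain/core/messages";
// Keep only the two most recent messages; delete everything older.
const pruneHistory = (state: { messages: BaseMessage[] }) => ({
  messages: state.messages
    .slice(0, -2)
    .map((m) => new RemoveMessage({ id: m.id as string })),
});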
FunctionMessage – Legacy Support for OpenAI
FunctionMessage exists primarily for backward compatibility with OpenAI’s legacy function calling system. While newer models now support tool calls, this message type remains relevant for older implementations.
Unless you are working with older OpenAI APIs, ToolMessage is generally preferred over FunctionMessage.
Example:
import { FunctionMessage } from "@langchain/core/messages";
const functionMsg = new FunctionMessage({
  name: "get_weather",
  content: "It’s currently 20°C.",
});
When to Use These Types (or Not)
- Use RemoveMessage if you need to clean up chat history dynamically, especially in structured workflows.
- Use FunctionMessage only if you are working with legacy OpenAI integrations. Otherwise, stick to ToolMessage for handling tool calls in modern implementations.
These utility message types might not be needed in every project, but they provide valuable flexibility when working with structured workflows or legacy systems.
A Quick Summary Table
| Message Type | Role | Use Case |
|---|---|---|
| SystemMessage | system | Set AI tone, instructions |
| HumanMessage | user | Capture user input |
| AIMessage | assistant | AI’s reply |
| ToolMessage | tool | Tool call results |
| AIMessageChunk | assistant | Stream responses |
| RemoveMessage | special | Delete message history (LangGraph) |
| FunctionMessage | legacy | OpenAI legacy support |
Structuring Conversations with Messages
Every AI conversation is really just a structured list of messages. Each message holds a specific role, whether it is setting the scene, asking a question, giving a response, or passing along tool output. When sequenced correctly, they form a coherent, flowing dialogue the model can understand and build on.
How Conversations Are Built
A conversation starts with a SystemMessage to set expectations or define behavior. That’s followed by alternating HumanMessages and AIMessages that represent the back-and-forth exchange between the user and the model. If tools are involved, ToolMessages and sometimes FunctionMessages get added to the mix.
At every turn, LangChain relies on this sequence to maintain the conversation’s context and continuity.
Example Chat Sequence
Here’s a simple interaction structured with LangChain message objects:
import {
  SystemMessage,
  HumanMessage,
  AIMessage,
} from "@langchain/core/messages";
const messages = [
  new SystemMessage("You’re an assistant that gives cooking tips."),
  new HumanMessage("How do I make fluffy pancakes?"),
  new AIMessage("Use buttermilk, don’t overmix the batter, and cook on medium heat."),
  new HumanMessage("Thanks! What toppings go well with that?"),
  new AIMessage("Fresh berries, maple syrup, or a sprinkle of powdered sugar work great."),
];
Each message builds on the previous one, preserving context and helping the model stay on track.
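Continuing the thread is just a matter of appending to the list. A small sketch, assuming a chat model instance named model:
const reply = await model.invoke(messages);
messages.push(reply); // the history now includes the model’s latest turn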
Good Practices for Managing Chat History
- Start with a clear SystemMessage to guide model behavior.
- Keep message order intact, as the sequence matters.
- Trim old or irrelevant messages when conversations get long, especially in memory-constrained environments (see the trimming sketch after this list).
- Use metadata wisely if you need to tag messages with context like user ID, timestamp, or response scores.
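For the trimming point, @langchain/core ships a trimMessages helper. A minimal sketch, assuming a chat model instance named model that can count tokens; the token budget is illustrative:
import { trimMessages } from "@langchain/core/messages";
const trimmed = await trimMessages(messages, {
  maxTokens: 1000, // illustrative budget
  strategy: "last", // keep the most recent messages
  includeSystem: true, // always retain the SystemMessage
  tokenCounter: model, // let the chat model count tokens
});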
By structuring your conversation as a consistent chain of messages, you give the model all it needs to carry the thread naturally: no guesswork, no confusion, just smooth, flowing interactions.
Real-World Example: Invoking a Chat Model with Messages
Let’s put it all together with a practical example by actually sending messages to a chat model. Whether it’s for a lighthearted chatbot or a more complex agent, the structure stays the same: messages go in, a response comes out.
Basic Joke-Telling Interaction
Here’s a simple interaction where a user asks the model for a joke. We’ll use a HumanMessage to drive the exchange and read the AIMessage that comes back.
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
const model = new ChatOpenAI({ temperature: 0.7 });
const response = await model.invoke([
  new HumanMessage("Tell me a joke about cats.")
]);
console.log(response.content);
// Output: "Why was the cat sitting on the computer? It wanted to keep an eye on the mouse!"
Behind the scenes, the model reads the message, keeps track of the context, and generates a fitting response.
Streaming Responses with AIMessageChunk
For chatbots that feel more alive, streaming the output token by token creates a smoother experience. Here’s how to do that with AIMessageChunk
:
const stream = await model.stream([
  new HumanMessage("Tell me a joke about pizza.")
]);
for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
// Output (character by character): "Why did the pizza maker quit his job? He just couldn't make ends meat."
This makes the interaction feel more real-time—ideal for UI-driven applications.
Tool Invocation Roundtrip
If your AI assistant needs to call external tools (like a calculator or search engine), you can include a ToolMessage to pass results back to the model:
import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";
const messages = [
  new HumanMessage("What’s the capital of France?"),
  // The model’s request to call a tool, expressed via the tool_calls field.
  new AIMessage({
    content: "",
    tool_calls: [{
      id: "search_1",
      name: "search_tool",
      args: { query: "Capital of France" },
    }],
  }),
  // The tool’s result, linked back to the request by tool_call_id.
  new ToolMessage({
    tool_call_id: "search_1",
    content: "Paris",
  }),
];
const result = await model.invoke(messages);
console.log(result.content);
// Output: "The capital of France is Paris."
This flow enables your AI to act, retrieve, and respond seamlessly.
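In a real application you normally don’t hand-write the AIMessage with tool_calls; the model produces it after you bind tools to it. A rough sketch using the tool helper from @langchain/core/tools and a hypothetical search_tool; the names and zod schema are illustrative:
import { tool } from "@langchain/core/tools";
import { z } from "zod";
// Hypothetical search tool, defined only to illustrate binding.
const searchTool = tool(
  async ({ query }: { query: string }) => `Results for: ${query}`,
  {
    name: "search_tool",
    description: "Look up facts on the web.",
    schema: z.object({ query: z.string() }),
  }
);
const modelWithTools = model.bindTools([searchTool]);
const aiMsg = await modelWithTools.invoke([new HumanMessage("What’s the capital of France?")]);
console.log(aiMsg.tool_calls); // the model decides when to call search_tool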
By structuring your interactions with the right message types, you’re not just sending prompts; you’re giving your model a well-formed conversation it can reason over.
Messages are the structured, context-aware building blocks that power everything from basic chats to sophisticated, tool-using AI systems. Whether you're setting the tone with a system prompt, responding in real time, or juggling tools and user inputs, messages are what hold it all together.
Understanding them doesn’t just level up your knowledge, it helps you build smarter, clearer, and more responsive AI apps.
So, here’s the exciting part: We are currently working on Langcasts.com, a resource crafted especially for AI engineers, whether you're just getting started or already deep in the game. We'll be sharing guides, tips, and hands-on walkthroughs to help you master every piece of the puzzle.
We’ll update everyone here as new releases drop.
Want to stay in the loop? You can subscribe here to get updates directly.
Build with clarity. Build with confidence. Build seamlessly.