Golang - How to Use Dataloader

Introduction
In this article, we will explore the concept of a dataloader in Golang and how to use it effectively. Dataloader is a library that improves application performance by batching database queries together and caching their results.
What is Dataloader?
Dataloader is a library that provides a way to batch and cache database queries. It was originally designed for GraphQL applications, but it can be used in any Golang application that requires database queries.
Why Use Dataloader?
There are several reasons why you should use dataloader in your Golang application:
Improved Performance: Dataloader reduces the number of database queries by batching them together. This can significantly improve the performance of your application.
Reduced Database Load: By caching the results of database queries, dataloader reduces the load on your database.
Efficient Data Retrieval: Dataloader allows you to retrieve data in batches, which can be more efficient than retrieving data one by one.
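To make the batching benefit concrete, here is a small stdlib-only sketch of how N point lookups can collapse into a single IN query (the table and column names are invented for illustration, not from any specific schema):

```go
package main

import (
	"fmt"
	"strings"
)

// buildBatchQuery collapses N point lookups into one IN query,
// so the database is hit once instead of N times.
func buildBatchQuery(ids []int) string {
	parts := make([]string, len(ids))
	for i, id := range ids {
		parts[i] = fmt.Sprintf("%d", id)
	}
	return fmt.Sprintf("SELECT id, name FROM writers WHERE id IN (%s)", strings.Join(parts, ", "))
}

func main() {
	// One round trip instead of three separate SELECTs.
	fmt.Println(buildBatchQuery([]int{1, 2, 3}))
}
```

In a real application you would use placeholders rather than formatting IDs into the SQL string, but the shape of the optimization is the same.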
What is the N+1 Problem?
The N+1 problem occurs when a GraphQL server fetches a list of N objects and then, for each object, fetches additional data from the database: one query for the list plus N queries for the related records. This can lead to a large number of database queries, resulting in slow response times and performance issues.
Example of the N+1 Problem
Suppose we have a GraphQL server that fetches a list of blogs and, for each blog, the name of its writer. The query might look like this:
query {
  Blogs {
    name
    description
    writer {
      name
    }
  }
}
In this example, the server would first fetch the list of all blogs, and then fetch the writer object for each blog separately. This results in a large number of database queries: one for the list, plus one additional query per blog.
How Dataloader Solves the N+1 Problem
Dataloader is a library that provides a way to batch and cache database queries. It works by creating a request-scoped instance of the dataloader, which collects the IDs of the objects to be fetched and then makes a single query to fetch all the required data.
Code Example:
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/graph-gophers/dataloader"
)

// User represents a user.
type User struct {
	ID   int
	Name string
}

// UserRepository is a mock repository for users.
type UserRepository struct{}

// GetUser retrieves a user by ID.
func (r *UserRepository) GetUser(ctx context.Context, id int) (*User, error) {
	fmt.Println("calling for userid: ", id)
	// Simulate a database query.
	if id == 1 {
		return &User{ID: 1, Name: "John"}, nil
	}
	return nil, fmt.Errorf("user not found")
}

// BatchGetUsers retrieves multiple users by ID in a single call.
func (r *UserRepository) BatchGetUsers(ctx context.Context, ids []int) ([]*User, []error) {
	fmt.Println("calling for BatchGetUsers")
	users := make([]*User, len(ids))
	errors := make([]error, len(ids))
	for i, id := range ids {
		user, err := r.GetUser(ctx, id)
		users[i] = user
		errors[i] = err
	}
	return users, errors
}

// IntKey is a custom key type that implements dataloader.Key.
type IntKey struct {
	ID int
}

func (k *IntKey) Raw() interface{} {
	return k.ID
}

func (k *IntKey) String() string {
	return fmt.Sprintf("IntKey(%d)", k.ID)
}

func main() {
	// Create a dataloader for users.
	userLoader := dataloader.NewBatchedLoader(
		func(ctx context.Context, keys dataloader.Keys) []*dataloader.Result {
			ids := make([]int, len(keys))
			for i, key := range keys {
				ids[i] = key.(*IntKey).ID
			}
			users, errors := (&UserRepository{}).BatchGetUsers(ctx, ids)
			results := make([]*dataloader.Result, len(keys))
			for i, user := range users {
				results[i] = &dataloader.Result{Data: user, Error: errors[i]}
			}
			return results
		},
	)

	// Load returns a thunk; the batch function runs when a thunk is invoked.
	ctx := context.Background()
	user1 := userLoader.Load(ctx, &IntKey{ID: 1})
	user2 := userLoader.Load(ctx, &IntKey{ID: 2})

	data, err := user1()
	if err != nil {
		log.Default().Println(err)
	}
	fmt.Printf("User-1: %+v\n", data)

	data, err = user2()
	if err != nil {
		log.Default().Println(err)
	}
	fmt.Printf("User-2: %+v\n", data)

	// Additional loads: IDs 1 and 2 are served from the cache;
	// only the uncached ID 4 triggers a second batch call.
	data3, _ := userLoader.Load(ctx, &IntKey{ID: 1})()
	fmt.Printf("User-3: %+v\n", data3)
	userLoader.Load(ctx, &IntKey{ID: 1})()
	userLoader.Load(ctx, &IntKey{ID: 1})()
	userLoader.Load(ctx, &IntKey{ID: 1})()
	userLoader.Load(ctx, &IntKey{ID: 1})()
	userLoader.Load(ctx, &IntKey{ID: 2})()
	userLoader.Load(ctx, &IntKey{ID: 2})()
	userLoader.Load(ctx, &IntKey{ID: 4})()
}
Output:
calling for BatchGetUsers
calling for userid: 1
calling for userid: 2
User-1: &{ID:1 Name:John}
2025/03/13 09:59:28 user not found
User-2: <nil>
User-3: &{ID:1 Name:John}
calling for BatchGetUsers
calling for userid: 4
We can clearly see that the batch function is called only once for user ID 1; every later load of IDs 1 and 2 is served from the cache, and only the uncached ID 4 triggers a second batch call. This can save a lot of database round trips and improve application performance.