A Story of Building a Storage-Agnostic Message Queue

A year ago, I was knee-deep in Golang, trying to build a simple concurrent queue as a learning project. Coming from a Node.js background, where I’d spent years working with tools like BullMQ and RabbitMQ, Go’s concurrency model felt like a puzzle. My first attempt—a minimal queue with round-robin channel selection—was, well, buggy. Let’s just say it worked until it didn’t.
But that’s how learning goes, right?
The Spark of an Idea
In my professional work, I’ve used tools like BullMQ and RabbitMQ for event-driven solutions, and p-queue and p-limit for handling concurrency. Naturally, I began wondering whether similar tools existed in Go. I found packages like asynq, ants, and various worker pools—solid, battle-tested options. Then a thought struck me: what if I built something different? A package with zero dependencies, fine-grained concurrency control, and designed as a message queue rather than a function-submission pool?
With that spark, I started building my first Go package, released it, and named it Gocq (Go Concurrent Queue). The core API was straightforward, as you can see here:
// Create a queue with 2 concurrent workers
queue := gocq.NewQueue(2, func(data int) int {
    time.Sleep(500 * time.Millisecond)
    return data * 2
})
defer queue.Close()

// Add a single job
result := <-queue.Add(5)
fmt.Println(result) // Output: 10

// Add multiple jobs
results := queue.AddAll(1, 2, 3, 4, 5)
for result := range results {
    fmt.Println(result) // Output: 2, 4, 6, 8, 10 (unordered)
}
Riding that excitement, I posted it on Reddit. To my surprise, it got traction—upvotes, comments, and plenty of appreciation. Here’s the fun part: coming from the Node.js ecosystem, I totally messed up Go’s package system at first.
Within a week, I released the next version with a few major changes and shared it on Reddit again. More feedback rolled in, and one person asked for "persistence abstractions support".
The Missing Piece
That hit home—I’d felt this gap before: persistence. It’s the backbone of any reliable queue system; without it, the package wouldn’t be complete. But then a question arose: if I added persistence, would I have to tie it to a specific tool like Redis or another database?
I didn’t want to lock users into Redis, SQLite, or any specific storage. What if the queue could adapt to any database?
So I tore gocq apart.
I rewrote most of it, splitting the core into two parts: a worker pool and a queue interface. The worker would pull jobs from the queue without caring where those jobs lived.
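To make that split concrete, here is a minimal sketch of the idea (illustrative names only, not VarMQ’s actual API). The worker sees nothing but a tiny interface, and anything that can enqueue and dequeue payloads can sit behind it:
// Illustrative sketch (not VarMQ's real API): the worker pool
// depends only on this interface, never on a concrete storage.
type Queue interface {
    Enqueue(data []byte) error
    Dequeue() ([]byte, bool) // second value is false when empty
}

func work(q Queue, process func([]byte)) {
    for {
        data, ok := q.Dequeue()
        if !ok {
            time.Sleep(10 * time.Millisecond) // back off while empty
            continue
        }
        process(data)
    }
}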
The result? VarMQ, a queue system that doesn’t care if your storage is Redis, SQLite, or even in-memory.
How It Works Now
Imagine you need a simple, in-memory queue:
w := varmq.NewWorker(func(data any) (any, error) {
    return nil, nil // your job-processing logic goes here
}, 2)
q := w.BindQueue() // Done. No setup, no dependencies.
If you want persistence, just plug in an adapter. Let’s say SQLite:
import "github.com/goptics/sqliteq"
db := sqliteq.New("test.db")
pq, _ := db.NewQueue("orders")
q := w.WithPersistentQueue(pq) // Now your jobs survive restarts.
Or Redis for distributed workloads:
import "github.com/goptics/redisq"
rdb := redisq.New("redis://localhost:6379")
pq := rdb.NewDistributedQueue("transactions")
q := w.WithDistributedQueue(pq) // Scale across servers.
The magic? The worker doesn’t know—or care—what’s behind the queue. It just processes jobs.
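As an illustration of how small an adapter can be, here is an in-memory implementation of the sketched interface from earlier (again, illustrative rather than VarMQ’s real internals). A SQLite or Redis adapter simply implements the same two methods on top of its own storage:
// Illustrative in-memory adapter for the sketched Queue interface.
type memQueue struct {
    mu    sync.Mutex
    items [][]byte
}

func (q *memQueue) Enqueue(data []byte) error {
    q.mu.Lock()
    defer q.mu.Unlock()
    q.items = append(q.items, data)
    return nil
}

func (q *memQueue) Dequeue() ([]byte, bool) {
    q.mu.Lock()
    defer q.mu.Unlock()
    if len(q.items) == 0 {
        return nil, false
    }
    data := q.items[0]
    q.items = q.items[1:]
    return data, true
}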
Lessons from the Trenches
Building this taught me two big things:
- Simplicity is hard. Keeping the API small and dependency-free took more rewrites than any single feature.
- Feedback is gold. One Reddit comment about persistence reshaped the entire architecture.
Why This Matters
Message queues are everywhere—order processing, notifications, data pipelines. But not every project needs Redis. Sometimes you just want SQLite for simplicity, or to switch databases later without rewriting code.
With VarMQ, you’re not boxed in. Need persistence? Add it. Need scale? Swap adapters. It’s like LEGO for queues.
What’s Next?
The next step is to integrate the PostgreSQL adapter and a monitoring system.
If you’re curious, check out VarMQ on GitHub. Feel free to share your thoughts and opinions in the comments below, and let’s make it better together.