Mutex vs RWMutex in Golang: A Developer’s Guide

Hi there! I'm Shrijith Venkatrama, founder of Hexmos. Right now, I’m building LiveAPI, a tool that makes generating API docs from your code ridiculously easy.
Concurrency in Go is powerful with goroutines, but shared data can cause trouble.
The sync package offers Mutex and RWMutex to manage this. This post explains what they do, how they work, and when to use them, with examples and real-world insights.
What Problems They Solve
Concurrency issues arise when goroutines access shared data at the same time. This leads to race conditions—unpredictable results from overlapping operations. Here’s a quick demo of the problem:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var count int
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            count++
        }()
    }
    wg.Wait()
    fmt.Println("Count:", count) // Likely less than 100!
}
Run this with go run -race, and you'll see the race condition flagged. The final count varies because increments overlap.
Example output:
==================
WARNING: DATA RACE
Read at 0x00c000014188 by goroutine 9:
main.main.func1()
/home/shrsv/bin/goconcurrency/race1.go:16 +0x84
Previous write at 0x00c000014188 by goroutine 7:
main.main.func1()
/home/shrsv/bin/goconcurrency/race1.go:16 +0x96
Goroutine 9 (running) created at:
main.main()
/home/shrsv/bin/goconcurrency/race1.go:14 +0x78
Goroutine 7 (finished) created at:
main.main()
/home/shrsv/bin/goconcurrency/race1.go:14 +0x78
==================
==================
WARNING: DATA RACE
Write at 0x00c000014188 by goroutine 9:
main.main.func1()
/home/shrsv/bin/goconcurrency/race1.go:16 +0x96
Previous write at 0x00c000014188 by goroutine 8:
main.main.func1()
/home/shrsv/bin/goconcurrency/race1.go:16 +0x96
Goroutine 9 (running) created at:
main.main()
/home/shrsv/bin/goconcurrency/race1.go:14 +0x78
Goroutine 8 (finished) created at:
main.main()
/home/shrsv/bin/goconcurrency/race1.go:14 +0x78
==================
Count: 98
Found 2 data race(s)
exit status 66
Mutex fixes this by allowing only one goroutine to access the data at a time, making both reads and writes safe.
RWMutex handles a specific case: when reads are more common than writes. It allows multiple readers simultaneously but locks fully for writes, improving efficiency in read-heavy scenarios.
Mutex Basics with an Example
A Mutex ensures exclusive access to shared data. Here's a safe counter using it:
package main

import (
    "fmt"
    "sync"
)

type SafeCounter struct {
    mu    sync.Mutex
    count int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

func (c *SafeCounter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}

func main() {
    counter := SafeCounter{}
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter.Increment()
        }()
    }
    wg.Wait()
    fmt.Println("Final count:", counter.Value()) // Always 100
}
The Lock() and Unlock() pair ensures no overlap. It's simple and effective for any shared resource.
RWMutex Basics with an Example
RWMutex separates read and write locks. Here's a cache example where reads dominate:
package main

import (
    "fmt"
    "sync"
    "time"
)

type SafeCache struct {
    mu    sync.RWMutex      // RWMutex to manage concurrent read/write access
    cache map[string]string // Shared map to store key-value pairs
}

// Set adds or updates a key-value pair in the cache
func (c *SafeCache) Set(key, value string) {
    c.mu.Lock()         // Exclusive write lock: no readers or writers allowed
    defer c.mu.Unlock() // Unlock when done, using defer for safety
    time.Sleep(10 * time.Millisecond) // Simulate a slow write while holding the lock
    c.cache[key] = value // Write to the map
}

// Get retrieves a value by key from the cache
func (c *SafeCache) Get(key string) string {
    c.mu.RLock()         // Read lock: allows multiple readers, blocks writers
    defer c.mu.RUnlock() // Release read lock when done
    return c.cache[key]  // Read from the map (returns "" if key not set yet)
}

func main() {
    // Initialize the cache with an empty map
    cache := SafeCache{cache: make(map[string]string)}
    var wg sync.WaitGroup // WaitGroup to synchronize goroutines

    // Launch one writer goroutine; the write takes ~10ms
    wg.Add(1)
    go func() {
        defer wg.Done()
        cache.Set("key1", "value1")
    }()

    // Launch five reader goroutines
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // May print "" or "value1" depending on timing
            fmt.Println(cache.Get("key1"))
        }()
    }

    // Wait for all goroutines to finish
    wg.Wait()
    // Expected output: a mix of "" (empty lines) and "value1".
    // Readers that acquire the lock before the writer print "";
    // readers that arrive during the write block, then print "value1".
    // Order is unpredictable due to goroutine scheduling.
}
RLock() allows multiple readers, while Lock() is exclusive for writes. This reduces waiting in read-heavy cases. In this program the write takes about 10ms, so some readers may run before it completes and print an empty string; others see "value1" afterward. The output varies due to goroutine scheduling.
Methods and Examples Compared
Both types come from the sync package (docs). Here's a method breakdown:
| Type | Method | Purpose | Example Use |
|---|---|---|---|
| Mutex | Lock() | Exclusive access | Updating a counter |
| Mutex | Unlock() | Releases the lock | After the update |
| RWMutex | Lock() | Exclusive write access | Modifying a map |
| RWMutex | Unlock() | Releases write lock | After writing |
| RWMutex | RLock() | Read-only access (multiple OK) | Fetching a value |
| RWMutex | RUnlock() | Releases read lock | After reading |
Mutex Example: Bank Account
type Account struct {
    mu      sync.Mutex
    balance int
}

func (a *Account) Deposit(amount int) {
    a.mu.Lock()
    a.balance += amount
    a.mu.Unlock()
}
RWMutex Example: Config Reader
type Config struct {
    mu   sync.RWMutex
    data string
}

func (c *Config) Update(newData string) {
    c.mu.Lock()
    c.data = newData
    c.mu.Unlock()
}

func (c *Config) Read() string {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return c.data
}
Always match locks with unlocks to avoid deadlocks.
Real-World Use Cases
Mutex: Inventory System
For an online store, Mutex prevents overselling:
type Inventory struct {
    mu    sync.Mutex
    stock int
}

func (i *Inventory) Sell() bool {
    i.mu.Lock()
    defer i.mu.Unlock()
    if i.stock > 0 {
        i.stock--
        return true
    }
    return false
}
Writes dominate here, so Mutex fits.
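To see the guarantee in action, a quick harness (the numbers are arbitrary) can race ten buyers over five items; the lock ensures exactly five sales and never a negative stock:

```go
package main

import (
	"fmt"
	"sync"
)

type Inventory struct {
	mu    sync.Mutex
	stock int
}

func (i *Inventory) Sell() bool {
	i.mu.Lock()
	defer i.mu.Unlock()
	if i.stock > 0 {
		i.stock--
		return true
	}
	return false
}

func main() {
	inv := &Inventory{stock: 5}
	results := make(chan bool, 10)
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ { // ten buyers compete for five items
		wg.Add(1)
		go func() {
			defer wg.Done()
			results <- inv.Sell()
		}()
	}
	wg.Wait()
	close(results)
	sold := 0
	for ok := range results {
		if ok {
			sold++
		}
	}
	fmt.Println("sold:", sold, "remaining:", inv.stock) // sold: 5 remaining: 0
}
```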
RWMutex: Dashboard Stats
A live dashboard with frequent reads benefits from RWMutex:
type Stats struct {
    mu       sync.RWMutex
    visitors int
}

func (s *Stats) Update(newCount int) {
    s.mu.Lock()
    s.visitors = newCount
    s.mu.Unlock()
}

func (s *Stats) GetVisitors() int {
    s.mu.RLock()
    defer s.mu.RUnlock()
    return s.visitors
}
Readers access freely, writes lock fully.
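Exercising Stats under concurrency is a good sanity check. In this sketch (the counts are arbitrary) the writer stays on the main goroutine while twenty readers run concurrently, so the final value is deterministic; running it with go run -race should report no races:

```go
package main

import (
	"fmt"
	"sync"
)

type Stats struct {
	mu       sync.RWMutex
	visitors int
}

func (s *Stats) Update(n int) {
	s.mu.Lock()
	s.visitors = n
	s.mu.Unlock()
}

func (s *Stats) GetVisitors() int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.visitors
}

func main() {
	s := &Stats{}
	var wg sync.WaitGroup
	for i := 0; i < 20; i++ { // twenty concurrent readers
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = s.GetVisitors() // safe under RLock
		}()
	}
	for n := 1; n <= 100; n++ { // writer runs in the main goroutine
		s.Update(n)
	}
	wg.Wait()
	fmt.Println("final:", s.GetVisitors()) // final: 100
}
```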
Trade-Offs
| Scenario | Mutex | RWMutex |
|---|---|---|
| Mostly writes | Simple | Too complex |
| Mostly reads | Slow | Faster |
| Mixed | Test it | Test it |
Use go test -bench to measure.
Key Takeaways and Further Reading
Mutex is your go-to for simplicity and write-heavy tasks. RWMutex excels when reads outnumber writes. Test your workload to choose wisely.
For real-world examples, check these repos:
- Kubernetes: uses Mutex in scheduler code (e.g., pkg/scheduler).
- HashiCorp Vault: employs RWMutex for config management (e.g., vault/core.go).
Explore the sync docs or this GopherCon talk for more. Happy coding!