Why uber ditched postgres for mysql: What every developer can learn from it

From index bloat to schemaless systems: inside uber’s bold switch, and what it means for your next database decision
Introduction: when postgres couldn’t keep up with uber’s speed
Picture this: you’re cruising along on a smooth PostgreSQL-powered ride, then suddenly, BOOM, your app scales to millions of users and your database just can’t keep up. That’s not a plot twist from a DevOps horror story; it’s what happened at Uber.
Back in its early days, Uber embraced PostgreSQL, one of the most loved open-source relational databases. But as the platform grew from a scrappy startup to a global ride-hailing juggernaut, PostgreSQL started showing cracks under pressure: index bloat, replication issues, and painful upgrade paths. The engineers were spending more time babysitting the database than building features.
So, Uber made the controversial move: ditching PostgreSQL and switching to MySQL. Not just any vanilla MySQL setup, either: they built a custom Schemaless system on top of it to handle massive scale and evolving data needs.
Some developers cheered, others scratched their heads. And many, let’s be honest, just wanted to know: “Is MySQL really better than Postgres?”
This article breaks down:
- Why Uber made the switch
- What went wrong with PostgreSQL
- What went right with MySQL
- The lessons you can steal for your own stack
- And whether this kind of migration makes sense for you

The early days: uber’s initial tech stack
Rewind to when Uber was just a sleek little black car app making waves in San Francisco. Like any fast-moving startup, they picked tools that let them ship features quickly, and back then, PostgreSQL checked all the boxes.
Why postgres?
Postgres offered everything a growing engineering team could want:
- Full ACID transactions for handling sensitive ride and payment data
- A reliable relational structure that played well with business logic
- Support for JSON when flexibility was needed
- A deep, mature ecosystem of tools and extensions
For the early stages of Uber, where reliability and structure mattered more than infinite scale, Postgres was a solid pick.
Where things started to break
But Postgres wasn’t built to keep up with what Uber was becoming: a global, always-on, real-time logistics engine. As rides flooded in from thousands of cities, the cracks started to show:
- Write-heavy workloads began choking performance
- Replication lag made real-time data unreliable
- Indexes ballooned in size, slowing down queries
- Schema changes became a massive operational risk
What started as occasional hiccups turned into recurring fire drills. Engineering velocity slowed, not because of bad code, but because the database couldn’t keep pace with the app’s ambition.
The reality check
This wasn’t about Postgres being “bad”; it just wasn’t the right fit anymore. Uber needed a data layer that was easier to scale horizontally, more forgiving of schema changes, and less prone to bloat when write volumes exploded.
That realization led to one of the most high-profile database migrations in tech. Coming up next: what exactly went wrong under the hood, and why the switch to MySQL made more sense.
Section 3: where postgres started to burn out
When you’re handling thousands of ride requests per second across multiple continents, your database isn’t just a backend component; it’s the beating heart of your platform. And Postgres? Well, it started having chest pains.
Here’s a breakdown of where things went sideways for Uber:
1. Index bloat: the silent performance killer
Postgres uses a technique called Multi-Version Concurrency Control (MVCC) to handle concurrent transactions. It’s great in theory (readers don’t block writers) but in practice, it came with a cost: index bloat.
Every update creates a new row version while the old one hangs around until vacuumed. And when your tables get really big? Your indexes do too.
- Queries got slower
- Disk usage exploded
- Vacuums couldn’t keep up
- Devs spent way too much time manually tuning autovacuum settings like database gardeners with flamethrowers
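To make the mechanism concrete, here is a toy model of MVCC-style row versioning: every update appends a new physical version while the old one lingers as a “dead tuple” until a vacuum pass reclaims it. This is an illustrative sketch of the idea, not real Postgres internals.

```python
# Toy MVCC model: UPDATE writes a new row version; the old version
# stays on disk as a dead tuple until a vacuum reclaims it.

class MVCCTable:
    def __init__(self):
        self.versions = []  # every row version ever written
        self.live = {}      # row_id -> index of the current version

    def insert(self, row_id, value):
        self.versions.append((row_id, value))
        self.live[row_id] = len(self.versions) - 1

    def update(self, row_id, value):
        # A new version is appended; the old one becomes a dead tuple.
        self.insert(row_id, value)

    def dead_tuples(self):
        return len(self.versions) - len(self.live)

    def vacuum(self):
        # Rewrite storage keeping only live versions (VACUUM FULL-ish).
        kept = [(rid, self.versions[i][1]) for rid, i in self.live.items()]
        self.versions = kept
        self.live = {rid: i for i, (rid, _) in enumerate(self.versions)}

t = MVCCTable()
t.insert("ride-1", "requested")
for state in ["matched", "en_route", "completed"]:
    t.update("ride-1", state)
# One logical row, four physical versions: three of them dead.
```

Scale that pattern across billions of writes and you can see why tables (and the indexes pointing into them) balloon faster than vacuuming can keep up.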
2. Replication lag: data that’s always late to the party
Uber needed real-time consistency across data centers, but Postgres’ native replication is:
- Synchronous (slower, but safe), or
- Asynchronous (faster, but riskier)
Neither was ideal at Uber’s scale. Write-heavy workloads led to replication lag, meaning the standby replicas were always a few seconds behind. That delay wasn’t just annoying; it broke user experiences.
Imagine booking a ride and not seeing it appear across the system instantly. Ouch.
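The dynamic is easy to see in a minimal simulation: with asynchronous replication, the primary acknowledges writes immediately while the replica replays them later, and lag is the gap between the two positions. This is a sketch of the concept; real systems track WAL or binlog byte positions, not entry counts.

```python
# Minimal sketch of asynchronous replication lag.

class Primary:
    def __init__(self):
        self.log = []           # ordered write-ahead log

    def write(self, entry):
        self.log.append(entry)  # ack to client right away (async mode)
        return len(self.log)    # "log sequence number"

class Replica:
    def __init__(self, primary):
        self.primary = primary
        self.applied = 0        # how far we've replayed the log

    def catch_up(self, max_entries):
        # Apply at most max_entries per cycle (simulates limited I/O).
        self.applied = min(len(self.primary.log),
                           self.applied + max_entries)

    def lag(self):
        return len(self.primary.log) - self.applied

p = Primary()
r = Replica(p)
for i in range(100):        # a write-heavy burst
    p.write(f"ride-{i}")
r.catch_up(max_entries=60)  # the replica can't keep up in one cycle
# r.lag() is now 40: reads on the replica miss the latest 40 rides.
```

When the burst never ends, the replica never catches up, which is exactly the “always a few seconds behind” failure mode described above.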
3. Upgrades were like boss fights
Upgrading PostgreSQL in production felt like rolling dice in traffic. You couldn’t just hit a version-up button. It often required:
- Full backups
- App downtime
- A sacrificial offering to the database gods (optional but recommended)
At Uber’s size, even minor upgrades became project-level events, a huge red flag for any system that’s supposed to move fast.
4. Vertical scaling limits
Uber ran into the ceiling of what vertical scaling could offer. There’s only so much RAM and CPU you can throw at a single Postgres node before you’re maxed out. Scaling horizontally (aka sharding) was possible but extremely complex in Postgres.
And if you mess up sharding logic at scale? It’s like cutting a pizza with a chainsaw: messy, uneven, and someone’s gonna get hurt.
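Here is one concrete way sharding bites: with naive modulo sharding, merely adding a shard remaps most keys and forces a massive data migration. The user IDs below are hypothetical; this is not Uber’s actual sharding scheme.

```python
# Why naive modulo sharding is risky: growing the shard count
# relocates the majority of rows.

def shard_for(user_id, num_shards):
    return user_id % num_shards

user_ids = range(10_000)
moved = sum(1 for u in user_ids
            if shard_for(u, 4) != shard_for(u, 5))  # grow 4 -> 5 shards
# moved == 8000: 80% of rows land on a different shard and must be
# migrated. Schemes like consistent hashing exist to avoid this.
```

That 80% figure is why mature sharding layers go to great lengths (consistent hashing, directory-based routing) to keep reshards cheap, and why bolting sharding onto a database ad hoc is so dangerous.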
The big takeaway
Postgres didn’t fail. It just wasn’t designed for Uber-level madness. What Uber needed was:
- Easier horizontal scaling
- Better write performance
- Simpler replication
- Operational sanity
Section 4: why mysql made more sense for uber
So, Uber needed a new database engine: one that didn’t crumble under massive writes, could scale horizontally without black magic, and wouldn’t require a dedicated team just to babysit replication. Their answer? MySQL. But not the plain-old, out-of-the-box flavor: this was MySQL with a serious twist.
Let’s break down why the switch happened and what Uber actually did.
1. Replication that kept up with real life
Unlike Postgres, which streams physical WAL changes, MySQL replicates logical row-level changes, a model Uber found simpler and more predictable at scale. While MySQL supports both synchronous and asynchronous modes, Uber leaned into asynchronous replication with a twist: custom tooling to keep lag in check and ensure replicas weren’t miles behind the primary.
Why it worked:
- Simpler architecture
- Easier to monitor
- More consistent replica behavior
- Less overhead on the write path
Real-time ride tracking and matching? Happier drivers and riders.
2. Schemaless on top of mysql: uber’s custom layer
Here’s where things get spicy: Uber didn’t just plug MySQL in and call it a day. They built a powerful abstraction layer called Schemaless (yes, that’s a real thing).
What is it?
A custom datastore service that sits on top of MySQL and:
- Stores data in a flexible key-value structure
- Lets engineers skip traditional schema changes
- Supports dynamic fields and versioned writes
- Scales horizontally with minimal friction
Imagine having the flexibility of NoSQL, but backed by the maturity and reliability of MySQL. That’s what Schemaless gave them.
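A minimal sketch conveys the core idea: store each cell as an immutable JSON blob keyed by (row, column, version), so adding a field never requires an ALTER TABLE. This is loosely inspired by the published design, not Uber’s implementation; sqlite3 stands in for MySQL here.

```python
import json
import sqlite3

# Toy "schemaless" layer: append-only, versioned JSON cells on top of
# a relational store. New fields need no schema migration.

class Schemaless:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE cells (row_key TEXT, col_key TEXT, "
            "version INTEGER, body TEXT, "
            "PRIMARY KEY (row_key, col_key, version))"
        )

    def put(self, row_key, col_key, data):
        # Append-only: writes never overwrite, they add a new version.
        (ver,) = self.db.execute(
            "SELECT COALESCE(MAX(version), 0) + 1 FROM cells "
            "WHERE row_key = ? AND col_key = ?", (row_key, col_key)
        ).fetchone()
        self.db.execute(
            "INSERT INTO cells VALUES (?, ?, ?, ?)",
            (row_key, col_key, ver, json.dumps(data)),
        )
        return ver

    def get(self, row_key, col_key):
        # Read the latest version of the cell.
        row = self.db.execute(
            "SELECT body FROM cells WHERE row_key = ? AND col_key = ? "
            "ORDER BY version DESC LIMIT 1", (row_key, col_key)
        ).fetchone()
        return json.loads(row[0]) if row else None

store = Schemaless()
store.put("trip-42", "BASE", {"rider": "r1", "city": "SF"})
# Later, a team adds a field with no schema migration at all:
store.put("trip-42", "BASE", {"rider": "r1", "city": "SF", "surge": 1.4})
```

Because writes are append-only and versioned, old readers keep working while new fields roll out, which is precisely the schema-change flexibility the list above describes.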
3. Operational simplicity is underrated
With MySQL, upgrades became easier. Replication was less flaky. And tooling, thanks to MySQL’s long life in web-scale infra, was rich and battle-tested.
MySQL also played nice with Uber’s containerized services. Spinning up new instances, handling failovers, and automating backups became routine rather than risky.
In other words:
Less time spent fighting fires = more time building features.
4. Better ecosystem support
Let’s be honest: MySQL has been around forever. And with that comes:
- More cloud-native tools
- Mature monitoring plugins
- A massive community
- Tons of docs, war stories, and hacks
That matters when you’re scaling fast and can’t afford to reinvent wheels every week.
5. It wasn’t “mysql vs. postgres.” It was “What solves our problem?”
This wasn’t about religion. Uber didn’t move because they hated Postgres. They moved because:
- MySQL fit their use case better
- They could extend it with custom tooling
- It reduced developer pain across teams
This wasn’t a downgrade. It was a redesign.
Section 5: dev world reactions, from cheers to jeers to reality checks
Uber’s switch from PostgreSQL to MySQL didn’t exactly go unnoticed. When they published their engineering blog outlining the reasons, the internet reacted the way it always does when someone touches a beloved tech stack: chaos, think pieces, and memes.
Here’s how the dev community responded, and what we can learn from the noise.
Some devs said, “Finally, someone said it out loud.”
Plenty of engineers who’ve wrestled with Postgres at scale were relieved to see Uber validate their own pain:
- “Yes, index bloat is real and terrifying.”
- “We had to tune autovacuum for months, and it still wasn’t enough.”
- “Postgres is great… until it isn’t.”
There was a sense of, “Thank you for confirming I’m not crazy.”
Especially in the startup-to-scaleup phase, many teams saw this as permission to rethink their own tech choices and stop blindly picking Postgres just because it’s popular on Hacker News.
Others shouted, “Blasphemy!”
Of course, the other side of the aisle, Postgres purists and old-school DBAs, had strong feelings:
- “Postgres scales fine. You just didn’t know how to use it.”
- “Why didn’t you shard? Or switch to Citus? Or try logical replication?”
- “Throwing hardware at the problem isn’t a solution.”
And they weren’t entirely wrong. Many of Postgres’ problems can be mitigated with proper expertise, tuning, and third-party extensions. But that’s also part of the issue: Uber didn’t want a PhD in PostgreSQL internals just to ship code faster.
The balanced take: it’s about trade-offs, not tech loyalty
At the end of the day, this isn’t a Marvel-vs-DC situation. Both Postgres and MySQL are fantastic tools, but like any tool, they shine in specific contexts.
Uber had:
- Ultra-high write throughput
- Rapidly evolving data models
- Global latency constraints
- A need for operational sanity
And in that world, MySQL + custom tooling was the better fit.
What devs can take away from the debate
- Don’t pick tech like it’s a popularity contest. Choose based on your system’s actual needs, not Twitter polls.
- Know when “good enough” isn’t enough. What works at 10k users might collapse at 10 million.
- Tech migrations are never simple. Uber’s move involved custom layers, internal tools, and deep expertise. It’s not just “switch DB, press play.”
Section 6: lessons for developers and architects
Uber’s migration wasn’t just a headline-grabbing tech decision; it was a masterclass in scaling pragmatism. Whether you’re building the next ride-hailing empire or just wrangling a side project, there’s gold here for any developer or systems architect.
Let’s unpack the lessons that matter in the real world, where deadlines loom, bugs creep, and your database might just throw a tantrum at 3 a.m.
1. No database is perfect, only more suitable
Forget fanboy wars. The truth is, every database has trade-offs:
- Postgres is flexible, strict, and powerful, but sensitive under pressure.
- MySQL is fast, simple, and widely supported, but less fancy with features.
Uber didn’t switch because MySQL was “better.” They switched because it was a better fit for their evolving needs. That’s the mindset: always ask what works best for the problem, not what’s trending.
2. Scale kills elegance: prioritize resilience
In the early days, you optimize for readability, speed of development, and clean design. But once you start handling tens of thousands of writes per second?
You start optimizing for survival.
At scale, priorities shift:
- Performance > feature completeness
- Operability > elegance
- Stability > “purity” of architecture
Uber didn’t chase the prettiest stack. They chased the least painful one to operate at scale.
3. Observability is your best friend
One of the reasons Uber’s team spotted their bottlenecks was their obsession with monitoring and metrics. They didn’t rely on gut feelings; they used data to prove:
- Where replication was lagging
- How index bloat was affecting read speeds
- When schema changes were causing downtime
If you’re not tracking key database metrics like query performance, index size, and replication health, you’re flying blind.
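The core discipline is simple enough to sketch: compare measured values against explicit thresholds and alert on breaches. The metric names and thresholds below are hypothetical examples, not Uber’s actual alerting config.

```python
# Minimal sketch of metric-driven alerting for database health.

def check_db_health(metrics, thresholds):
    """Return the names of metrics that breached their threshold."""
    return [name for name, value in metrics.items()
            if value > thresholds.get(name, float("inf"))]

metrics = {
    "replication_lag_seconds": 8.2,
    "p95_query_ms": 140.0,
    "index_size_gb": 310.0,
}
thresholds = {
    "replication_lag_seconds": 5.0,   # replicas falling behind
    "p95_query_ms": 200.0,            # reads still healthy
    "index_size_gb": 250.0,           # bloat creeping up
}
alerts = check_db_health(metrics, thresholds)
# alerts == ["replication_lag_seconds", "index_size_gb"]
```

Even a crude version of this, wired to real lag and index-size queries, turns “the database feels slow” into a specific, actionable page.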
4. Abstract complexity with internal tools
Uber didn’t just swap Postgres for MySQL; they built Schemaless to abstract the pain. That internal tool:
- Let teams evolve schemas without fear
- Simplified data writes across services
- Reduced mental load for developers
Even at smaller scales, building simple internal tooling (e.g. wrappers, dashboards, validators) can multiply productivity and reduce human error.
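For a taste of what “simple internal tooling” can look like, here is a tiny write wrapper that validates required fields and fills defaults before data reaches the database, so every service writes consistently. The field names and defaults are hypothetical.

```python
# Sketch of a small internal tool: validate and normalize payloads
# before they hit the database.

REQUIRED = {"trip_id", "rider_id"}
DEFAULTS = {"status": "requested", "surge": 1.0}

def prepare_write(payload):
    missing = REQUIRED - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {**DEFAULTS, **payload}   # caller values win over defaults

row = prepare_write({"trip_id": "t-9", "rider_id": "r-3", "surge": 1.4})
# row carries the caller's surge (1.4) plus the default status.
```

Twenty lines like this, shared across services, catch malformed writes at the door instead of letting them become 3 a.m. data-cleanup incidents.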
5. Think future-proof, not just MVP
It’s tempting to duct tape a working system and move on. But if you have growth in your sights, think ahead:
- Can your DB handle 10x traffic?
- How painful are version upgrades?
- Is your replication strategy battle-tested?
Uber didn’t plan everything perfectly, but they adapted. And that’s the mindset that scales: build fast, observe early, refactor relentlessly.
Section 7: conclusion, and what we learned from uber’s database pivot
Uber’s decision to switch from PostgreSQL to MySQL wasn’t about chasing trends or abandoning ship. It was a calculated move born from real pain, massive scale, and the need for relentless reliability. And while you might not be running a global ride-sharing empire (yet), their journey holds valuable lessons for any team building at speed.
key takeaways
- Tools are context-dependent. Postgres wasn’t wrong. It just wasn’t right anymore for them.
- Scaling issues aren’t hypothetical. At some point, your early choices will get stress-tested. Be ready to adapt.
- Simplicity wins at scale. Operational burden can drag down the fastest tech. Uber chose tooling that was stable, maintainable, and battle-tested.
- Custom solutions work when you own the problem. Schemaless wasn’t a generic fix. It was Uber’s response to Uber’s challenges.
real talk for your own stack
Ask yourself:
- Is your current database choice serving your actual traffic patterns? Or just your ideal ones?
- Do you have the tooling and visibility to catch early signs of database pain?
- Are you thinking about scaling now or waiting for things to break?
If you’re building for growth, these aren’t just nice-to-ask questions; they’re survival checklists.
Want to go deeper?
Here are some great resources to keep exploring:
- Uber’s engineering post on Schemaless: how they layered flexibility over MySQL
- High Scalability’s take on Uber’s architecture
- MySQL vs. PostgreSQL: a balanced, technical comparison
- The Database Reliability Engineering book: essential reading if you’re scaling anything serious
Let’s keep the convo going
If this breakdown gave you a few “aha” moments, drop a comment.
Got a horror story from scaling your own database? Even better. Share it below.
Also, don’t forget to like, share with your team, and subscribe for more real dev content, minus the buzzwords.
Section 8: helpful resources, tooling, and real-world inspiration
To help you dig deeper into everything covered in this piece, and maybe even plan your own graceful escape from database hell, here’s a curated set of tools, reads, and frameworks from the frontlines of scale.
Must-read engineering blogs & case studies
- Uber Engineering on moving from Postgres to MySQL: their official deep dive into why the shift happened, and how they built a custom abstraction layer on top of MySQL.
- Airbnb’s MySQL scale tips: a great breakdown of how Airbnb handles MySQL at scale, and where things can get tricky.
- Pinterest on migrating to Vitess: another example of scaling MySQL with powerful, custom tooling.
Tools & libraries for DB sanity
- Vitess: MySQL sharding at hyperscale. Used by YouTube, Slack, and more. Basically MySQL on steroids for distributed environments.
- Citus: a PostgreSQL extension for horizontal scaling. If you still want to try Postgres at scale, this is your sword.
- Percona Monitoring & Management (PMM): excellent open-source monitoring for both MySQL and Postgres. Helps you track query performance, replication lag, and more.
- pgBadger: a solid PostgreSQL log analyzer for deep performance insights.
Frameworks & books for architectural thinking
- Designing Data-Intensive Applications, by Martin Kleppmann: the holy book of data architecture. Covers distributed systems, replication, consistency models, and real-world trade-offs.
- Database Reliability Engineering, by Charity Majors & Laine Campbell: a pragmatic, dev-friendly guide to keeping databases alive in production.
- Awesome Database Tools: an ongoing GitHub list of cool tools for DB monitoring, analysis, backup, and migration.
Enjoyed this story?
If you liked it, please leave a comment sharing what topic you’d like to see next!
Feel free to like, share it with your friends, and subscribe to get updates when new posts go live.