Why we broke up with kubernetes and found happiness in simplicity
Introduction: when complex gets too complex
Kubernetes was supposed to be our golden ticket: the container orchestration king that would solve deployment, scaling, recovery, and world peace. We thought we were joining the elite club of DevOps wizards by adopting it. We even flexed our .yaml files like they were arcane spells from some sysadmin grimoire.
But here’s the punchline: most of us don’t need Kubernetes. At least, we didn’t.
What we did need was stability, simplicity, and sleep: the kind where you’re not jolted awake by a 3 a.m. PagerDuty alert because a sidecar proxy decided to ghost its containerized roommate.
For a long time, we thought we were doing things “the right way.” But over time, it felt like our infra was built more for the tools than the team. Debugging became an Olympic sport. Every new hire had to take a crash course in “Kubernetes 101: How Not To Cry When Your Cluster Fails.”
Eventually, we stopped. Cold turkey.
And weirdly enough? We’ve never been happier.
Section 2: our original dream with kubernetes
Let’s rewind.
We were scaling. Traffic was growing, deployments were getting more frequent, and someone said the magic words:
“Let’s use Kubernetes.”
It sounded smart: powerful autoscaling, rolling updates, self-healing nodes, container orchestration so smooth it’d make Docker blush. The blogs made it sound like we’d be deploying like Netflix in no time.
So we dove in.
We containerized everything. Wrapped our services in layers of Deployment, Service, Ingress, and ConfigMap. Set up Helm charts. Even spun up a staging cluster because, you know, professionalism. We were ready to conquer the world, one pod at a time.
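For a sense of what “layers” meant in practice, here’s a trimmed-down sketch of the kind of manifest stack a single service needed. It’s illustrative only: the name web, the image tag, and the ports are placeholders, and the real thing also dragged along an Ingress, a ConfigMap, and a Helm chart wrapping all of it.

```yaml
# Illustrative manifests for one hypothetical service ("web"); names, images, and ports are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: web-config   # the ConfigMap itself is yet another manifest, not shown
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Multiply that by every service, add the Ingress rules and Helm templating on top, and the YAML adds up fast.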
At first, it was… kinda cool.
There was a certain DevOps swagger that came with kubectl. Infrastructure felt programmable, dynamic, modern. We felt like we were doing it right.
But that shine wore off fast.
Because the truth is, Kubernetes doesn’t just run on your cloud. It runs on your time, your brain cells, and your team’s sanity. And unless your app is a sprawling, multi-region, microservice behemoth, it’s probably overkill.
We learned that the hard way.
Section 3: the breaking point
It started with small things: a rollout getting stuck because a liveness probe was misconfigured. Then came mysterious networking issues that only occurred in one of three clusters. Then the dreaded moment: a production outage caused by… an improperly indented YAML line.
No joke. A single space broke prod.
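If you’ve never been bitten by this, here’s an illustration of the failure mode, using a hypothetical Helm values file rather than our actual incident: move one key a single level left and, depending on how the chart reads it, the settings you thought you set simply vanish from the rendered manifests.

```yaml
# What you meant: limits nested under resources, picked up by the chart
web:
  resources:
    limits:
      memory: 512Mi
---
# What one wayward indent gives you: web.resources is now empty and web.limits
# is a key nothing reads, so the chart quietly renders a pod with no limits at all
web:
  resources:
  limits:
    memory: 512Mi
```

No syntax error, no failed apply, just behavior you didn’t ask for.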
At one point, our on-call DevOps lead said:
“I’m not debugging microservices anymore. I’m debugging Kubernetes itself.”
And he wasn’t wrong. Our job titles hadn’t changed, but our day-to-day looked like we were SREs for a cloud provider we didn’t own.
Things escalated.
- Our dashboards were redder than a failed Git merge.
- kubectl describe became a daily ritual.
- We had incidents where restarting a pod required 3 Slack threads, 2 Notion docs, and 1 existential crisis.
We were managing Kubernetes, not our app. We had effectively built a second system just to run the first.
Section 4: we asked what do we actually need?
After we rage-quit our Kubernetes setup, we did something wild:
We grabbed a whiteboard, sat down as a team, and asked a simple question:
“If we were starting fresh today, what infrastructure would we actually need?”
And the answers were shockingly reasonable:
- Autoscaling?
- Blue-green or canary deploys?
- Metrics and observability? Absolutely.
- Mesh networks, service discovery layers, and three tiers of controllers? Not so much.
We realized we didn’t need half the stuff Kubernetes was offering, or rather requiring us to learn. We were shipping a monolith + a few background services. We weren’t Netflix, we weren’t Shopify, and we weren’t managing thousands of containers.
We needed:
- Fast deploys
- Simple rollbacks
- High availability
- Dev environments that didn’t require a PhD in kubectl
And most importantly:
We needed less to go wrong. Because every extra abstraction was another point of failure.
So we stopped trying to be cloud-native superstars. We stopped architecting for a scale we hadn’t reached. We re-optimized around what our team needed to ship confidently, not what Hacker News said was cool.
Section 5: what we moved to instead (and why)
Once we admitted Kubernetes wasn’t for us, the obvious question became: So what now?
We explored a few options, each with pros and trade-offs:
Option 1: ECS (Amazon Elastic Container Service)
Honestly? A solid step down in complexity from EKS.
- We already lived in AWS, so it felt native.
- Fargate handled the provisioning.
- We stopped worrying about nodes and started focusing on services (a rough sketch follows below).
- Simple deployments
- Built-in auto-scaling
- Still some vendor lock-in and clunky AWS UI moments
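To make that concrete, here’s roughly what a Fargate-backed service can look like, sketched as CloudFormation. Everything here is a placeholder: the cluster, subnets, security group, and IAM role are assumed to already exist, and this isn’t our literal template.

```yaml
# Illustrative CloudFormation sketch; account IDs, subnets, and names are placeholders
Resources:
  WebTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: web
      Cpu: "512"
      Memory: "1024"
      NetworkMode: awsvpc
      RequiresCompatibilities: [FARGATE]
      ExecutionRoleArn: arn:aws:iam::123456789012:role/ecsTaskExecutionRole
      ContainerDefinitions:
        - Name: web
          Image: registry.example.com/web:1.4.2
          PortMappings:
            - ContainerPort: 8080

  WebService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: our-cluster               # assumed to exist
      LaunchType: FARGATE
      DesiredCount: 3
      TaskDefinition: !Ref WebTaskDefinition
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          Subnets: [subnet-aaaa1111, subnet-bbbb2222]
          SecurityGroups: [sg-cccc3333]
```

Still YAML, but it’s one task definition and one service per app, and AWS runs the control plane instead of us.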
Option 2: Fly.io
This one was surprisingly dev-friendly.
- Launch a container with a simple fly deploy
- Built-in metrics, scaling, and even global edge deployment
- Minimal config
- Developer experience was top-tier
- Debugging could get tricky when things broke (but at least they broke less)
Option 3: Just… Docker + CI/CD
Yep, back to basics.
- Dockerized app
- GitHub Actions → push → deploy to VPS (a rough sketch of the workflow follows below)
- Health checks, failovers, logging: all handled by things we actually understood
- Predictable
- Cheap
- No YAML black magic
- No cool dashboards, but we slept better
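Here’s a sketch of that workflow. The registry, host, and service names are placeholders, and it assumes registry login and the deploy SSH key are already wired up as secrets on the runner.

```yaml
# .github/workflows/deploy.yml (illustrative; registry, host, and names are placeholders)
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push the image
        # assumes docker login for registry.example.com is handled via repo secrets
        run: |
          docker build -t registry.example.com/app:${{ github.sha }} .
          docker push registry.example.com/app:${{ github.sha }}

      - name: Restart the container on the VPS
        # assumes the runner has SSH access to deploy@vps.example.com
        run: |
          ssh deploy@vps.example.com \
            "docker pull registry.example.com/app:${{ github.sha }} \
             && (docker rm -f app || true) \
             && docker run -d --name app --restart unless-stopped -p 80:8080 \
                  registry.example.com/app:${{ github.sha }}"
```

Not exactly blue-green, but a deploy is a git push, and a rollback is re-running the job for an older commit.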
The key takeaway? We traded complexity for clarity.
Instead of asking “how do we make this work with Kubernetes?”, we started asking:
“What’s the fastest, most stable way to ship?”
And guess what: shipping got faster. Downtime dropped. And our on-call DevOps team stopped threatening to quit every other sprint.
Section 6: benefits we didn’t expect
We thought moving off Kubernetes would just simplify infra.
What we didn’t expect? It transformed our entire engineering culture.
1. Our DevOps team stopped burning out
No more 2-hour war rooms over failing liveness probes. No more surprise service crashes because of a rogue init container. On-call became actually bearable. People started taking weekends off again — and nothing exploded.
“I no longer fear deployments,” one of our engineers actually said in a retro.
2. New devs ramped up in days, not weeks
Before: onboarding meant deep-diving into Kubernetes docs, Helm chart spaghetti, and kubectl rituals.
After:
- “Here’s the Dockerfile”
- “Here’s the deploy command”
- “Go break staging”
Confidence soared. So did productivity.
3. Incidents dropped by over 60%
With fewer moving parts, there was less room for things to go sideways. Monitoring became simpler. Rollbacks went from “wait, which deployment is this?” to one-click redeploys.
4. Our codebase got cleaner
Since we weren’t stuffing Kubernetes-specific configs everywhere, our repos looked less like DevOps crime scenes and more like actual application logic.
5. We focused more on product, less on platform
Every hour saved from debugging kube was an hour spent building features. We finally shipped the dashboard redesign that had been sitting in Jira since the Jurassic period.
Section 7: but what about scaling?
Ah yes, the classic pushback:
“But won’t you hit scaling issues without Kubernetes?”
Maybe. Eventually. But let’s be honest…
Most teams don’t need hyperscale solutions.
We weren’t pushing millions of requests per second. We weren’t dynamically spinning up hundreds of pods per region. We were serving steady, predictable traffic with the occasional spike during launches or outages (ironic, I know).
Here’s what we realized:
- Autoscaling works without Kubernetes. Services like AWS Fargate, Fly.io, or even basic load balancers + auto-scaling groups handled it just fine (see the sketch after this list).
- Caching + smart query handling gave us more mileage than infrastructure gymnastics ever did.
- Vertical scaling was still a thing. And honestly, easier to manage for our workload.
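On the first point, to give a concrete example: target-tracking autoscaling for an ECS service is two resources, sketched here in CloudFormation with placeholder cluster, service, and role names.

```yaml
# Illustrative: scale a hypothetical ECS service on average CPU, no Kubernetes involved
Resources:
  WebScalableTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: ecs
      ScalableDimension: ecs:service:DesiredCount
      ResourceId: service/our-cluster/web          # placeholder cluster/service
      MinCapacity: 2
      MaxCapacity: 10
      RoleARN: arn:aws:iam::123456789012:role/ecsAutoscaleRole   # placeholder

  WebCpuPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: web-cpu-target-tracking
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref WebScalableTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
        TargetValue: 70
```

The same idea works with load balancers plus auto-scaling groups on plain VMs, or with Fly.io’s built-in scaling.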
The myth of “Kubernetes is inevitable”
A lot of infra choices are driven by fear. Fear of future scale. Fear of not being “cloud-native” enough. But building for a level of scale you don’t yet have is like pre-buying furniture for a house you might build in 5 years in a city you haven’t moved to.
“Premature optimization is the root of all evil.” (every sane developer ever)
We decided to grow into complexity, not start with it.
When we actually hit scaling bottlenecks, we’d revisit our architecture. Until then? Simple wins. Every time.
Section 8: the cultural shift: complexity isn’t cool anymore
Kubernetes made us feel smart. For a while.
We’d talk about custom controllers, sidecars, and ingress layers like we were building the next Google. But over time, we realized something kinda painful:
We were optimizing for prestige, not practicality.
Simplicity became our new north star
Once we ditched Kubernetes, a weird shift happened in how we made decisions.
- We stopped reaching for “the most powerful tool” and started asking “what’s the least painful way to solve this?”
- Meetings became shorter because infra was simpler.
- Devs had fewer blockers and shipped more often.
- People got comfortable saying, “I don’t understand this; can we do it simpler?” without shame.
Complexity ≠ innovation
Somewhere along the way, we started associating complex infra with doing something “innovative.” But honestly? Most users don’t care about your infra flex. They care about fast load times and working features.
No one ever said:
“Wow, this product must use a service mesh! What a beautiful experience!”
Dev quote that hit different:
“I’d rather ship a feature than debug a Helm chart.”
That’s where we landed. Not because we gave up. But because we finally admitted:
Simplicity takes courage, and Kubernetes wasn’t it.
Section 9: when kubernetes does make sense
Let’s be clear: this isn’t a “Kubernetes bad” rant.
K8s isn’t evil. It’s just… not for everyone. And definitely not for every stage.
There are real use cases where Kubernetes absolutely shines:
1. You’re at FAANG scale
If you’ve got a dozen teams pushing hundreds of microservices, global traffic, and multi-region failover needs, then yeah, Kubernetes was probably made for you. You’ve got a dedicated platform team? Perfect. Let them kube.
2. You’re doing wild infra things
Running ML pipelines that autoscale GPU nodes?
Custom event-driven systems across hybrid cloud setups?
Managing multi-tenant SaaS platforms with isolation needs?
Okay. K8s is your friend now. YAML away.
3. You need ultimate portability
Vendor-agnostic infra between AWS, GCP, and on-prem?
Your clients demand it. Your lawyers demand it. Your CTO probably also demands it.
That’s Kubernetes’ sweet spot. It’s just… not everyone’s spot.
The rule of thumb we go by now:
If you need a platform team to manage your platform,
maybe you’re overengineering your stack.
Start boring. Stay boring as long as you can.
Only get fancy when you’re forced to scale, not when Medium told you to.
Section 10: we chose boring and it rocks
Stepping away from Kubernetes wasn’t a step back.
It was the exact move we needed to move forward.
What we gained:
- Confidence in deployments
- Clarity in architecture
- Peace of mind on weekends
- A DevOps team that doesn’t hate life
We stopped treating infrastructure like a status symbol.
We started treating it like what it actually is: a tool to help us ship.
Final thoughts:
- Kubernetes is powerful but so is knowing when not to use it.
- Start with tools that match your team’s mental model, not just your scaling fantasies.
- The best infra is the kind you barely notice, because it just works.
Helpful resources:
- Kubernetes alternatives for smaller teams
- Why simplicity scales better than complexity
- Fly.io: Deploy apps globally without Kubernetes
- ECS vs EKS: When to skip Kubernetes on AWS
- 12-Factor App (still relevant!)
