Stop using Docker like it’s your first dev job

A real-world guide for devs to ditch outdated container habits and build like it’s the 2020s
Introduction
Let’s get this out of the way: Docker is not dying. But if your workflow still looks like a Frankenstein script of docker run, scattered Dockerfiles, and copied docker-compose.yml from Stack Overflow... you might be.
It’s 2025. We’ve got distroless images, BuildKit, rootless containers, VSCode devcontainers, and even AI pair programmers, and yet so many devs are still treating Docker like a toy they just unwrapped at their first bootcamp.
This article isn’t here to roast you (well, maybe just a little). It’s here to help you upgrade not just your toolset, but your mindset. We’ll go through the worst Docker sins developers still commit, show you real-world examples of how the pros do it, and drop some tools and best practices you can start using today to containerize smarter.
Whether you’re a junior trying to look senior, or a senior secretly copy-pasting from outdated blog posts, this one’s for you.
Section 1: docker has grown up. Have you?
Back in the early days of Docker, just getting a container to run felt like wizardry. You’d docker run -it ubuntu bash your way into glory, install Node or Python inside the container like it was your personal playground, and call it a day.
But here’s the thing: Docker has evolved. It’s not just a local dev tool anymore; it’s now a critical part of production pipelines, CI/CD workflows, edge deployments, and even Kubernetes clusters. So if you’re still using Docker like it’s just a fancy replacement for your terminal… you’re using a supercar to go grocery shopping.
Here’s what’s new (and what you’re probably not using yet):
- Docker Compose v2 is now a plugin, fully integrated with the Docker CLI. You no longer need to install it separately or treat it like an afterthought.
- BuildKit is the default backend for building images. It’s faster, supports advanced caching, and can handle secrets, SSH forwarding, and parallel builds. (Still running plain docker build . without using any of those features? You’re missing out; see the sketch after this list.)
- Docker Extensions have opened the door to a plugin ecosystem that gives you GUI-based insights, logs, and metrics. Stuff you had to glue together with shell scripts is now a click away.
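For example, here’s a minimal sketch of a BuildKit cache mount that keeps npm’s download cache warm across rebuilds (the base image and paths are illustrative, not prescriptive):
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# BuildKit-only: the cache mount persists npm's cache between builds
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .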
And yet… so many devs are stuck running Docker like they did when Pokémon Go launched.
If you’re not evolving your Docker usage, you’re not just being inefficient; you’re making life harder for your future self, your team, and even your CI/CD budget (bloated images = bloated costs).
Section 2: stop hand-writing Dockerfiles like it’s a novel
Let’s be real: most beginner Dockerfiles look like someone tried to write a short story with RUN commands.
You’ve seen it before — maybe you’ve written one. It starts with FROM node:latest, then proceeds to install 50 packages, manually copy over files, run a build, install curl for no reason, and ends with some CMD ["npm", "start"] glued on at the bottom like an afterthought.
What’s wrong with this?
- No caching awareness: one code change and the entire image rebuilds.
- Massive image sizes, because you left the .git folder, test files, and node_modules in there.
- Debug-only tools in production, like leaving vim, nano, or curl in your container for that “just in case” moment.
- Missing multi-stage builds, meaning your final image includes build tools it never actually uses at runtime.
Here’s how pros write Dockerfiles today
Multi-stage builds are the MVP. You use one stage for compiling, bundling, or building your app, and another lean stage for running it.
Dockerfile:
# Stage 1 - Build
FROM node:20-alpine AS builder
WORKDIR /app
# Copy manifests first so the dependency layer caches across code changes
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2 - Serve
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
RUN npm install -g serve
CMD ["serve", "dist"]
Boom. Smaller image, faster build, no leftovers, and works like a charm in CI.
You don’t need to reinvent the wheel; just stop treating the Dockerfile like a place to “figure things out.” Think of it as your production recipe, and no one wants spaghetti in production.
If you’re still writing Dockerfiles like a diary entry, it’s time to move on.
Section 3: the copy-paste docker-compose.yml curse
Let’s talk about the Great Copy-Paste Epidemic — specifically the docker-compose.yml files floating around GitHub, Stack Overflow, and random blog posts from 2017. You know the ones:
version: "3"
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
It works… until it doesn’t. Then you spend 4 hours debugging why your container can’t connect to Postgres or why hot-reloading suddenly died on a Thursday.
Common sins in paste-first docker-compose setups:
- Using default bridge networks and wondering why services can’t talk to each other.
- Mounting everything (.:/app) and destroying performance (especially on macOS).
- Hardcoding secrets right into the YAML.
- No environment separation — running dev and prod with the same docker-compose.yml file like a chaos gremlin.
Smarter ways to use Compose today:
- Use .env files properly. Don’t shove all your secrets into the Compose file. Let your .env handle it:
POSTGRES_PASSWORD=supersecret
Then in Compose:
environment:
  - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- Separate dev and prod configs. Use docker-compose.override.yml or even different files:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
- Use named volumes and networks. Define them instead of relying on implicit behavior:
volumes:
  db-data:
networks:
  backend:
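A service then opts in explicitly; for example (the postgres image tag here is just illustrative):
services:
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data  # named volume survives container recreation
    networks:
      - backend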
- Understand service healthchecks. Don’t wait 30 seconds for your app to fail silently when Postgres isn’t ready. Use:
depends_on:
  db:
    condition: service_healthy
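Note that condition: service_healthy only works if the db service actually defines a healthcheck. A minimal sketch for Postgres (the user and timing values are illustrative):
db:
  image: postgres:16
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 5s
    retries: 5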
Bottom line: docker-compose is not a script, it’s your dev environment’s blueprint. Stop using random blueprints and expecting a stable house.
Section 4: the real dev magic is in Docker + Make or Taskfiles
Here’s a dirty little secret of productive dev teams: nobody actually types docker compose up repeatedly. And if they do, they're one bad typo away from a broken workflow.
Serious devs script their commands. Not in bash, but in Makefile or Taskfile form. Why? Because typing long Docker commands every time is like writing HTML without a framework: technically fine, but painfully inefficient.
You might be doing this:
docker compose -f docker-compose.dev.yml up --build --remove-orphans
Cool. Now try typing that 5 times a day on 3 projects. Enjoy your carpal tunnel.
Instead, do this:
Makefile example:
.PHONY: up down logs
up:
	docker compose up --build
down:
	docker compose down
logs:
	docker compose logs -f
Or even better, use Taskfile, which is YAML-based and cross-platform, with better output and support for task dependencies:
version: '3'
tasks:
  dev:
    cmds:
      - docker compose up --build
  stop:
    cmds:
      - docker compose down
Now your team just runs:
task dev
Why this matters:
- Your junior devs don’t need to memorize Docker incantations.
- Onboarding becomes a one-command setup.
- Your scripts are versioned, shared, and extendable.
- You can bundle in testing, linting, DB setup, and teardown too (see the sketch below).
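For instance, a couple of extra Makefile targets might look like this (the npm scripts and the web service name are assumptions about your project):
test:
	docker compose run --rm web npm test
lint:
	docker compose run --rm web npm run lint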
This isn’t “just Docker best practices”; this is how real engineering teams move fast without breaking stuff.
Section 5: stop shelling into containers to debug
Ah yes, the classic move:
docker exec -it my-app-container bash
Then you’re inside the matrix, manually inspecting logs, tweaking environment variables, maybe even installing curl to poke something. For a second, you feel like a hacker. Until your team asks, “Did you just hotfix that inside the running container?”
Yeah… don’t be that dev.
Why this is a bad habit:
- You’re making changes that disappear on container restart.
- It’s untracked, so nobody knows what you just did.
- You might be debugging a problem that only exists because you’re inside the container.
Better debugging techniques for 2025:
1. Use logs like a normal person
docker logs -f my-app-container
Or if you’re using Compose:
docker compose logs -f web
2. Mount your code instead of baking it in
In development, mount your project with a volume so changes reflect live, no rebuild needed.
volumes:
  - .:/app
3. Use healthchecks to catch silent failures
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000"]
  interval: 30s
  timeout: 10s
  retries: 5
4. Use devcontainers (VSCode or JetBrains)
No more debugging through the CLI. Open the container as a full-featured dev environment with autocomplete, breakpoints, and everything.
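If you haven’t tried it, a minimal .devcontainer/devcontainer.json for a Compose-based project can be as small as this (the service and folder names are placeholders for your own):
{
  "name": "my-app",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "web",
  "workspaceFolder": "/app"
}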
5. When you must go inside, use sh and be quick
docker exec -it my-container sh
But treat it like a read-only visit, not a place to live.
Section 6: image bloat is real. Trim it down
Let’s talk about the silent killer of CI/CD pipelines, developer machines, and your team’s sanity: bloated Docker images.
You built a “simple app” and ended up with a 2.6GB image. Why? Because you threw everything but the kitchen sink into it: build tools, test files, node_modules, .git, your hopes and dreams.
And now, every time someone runs docker pull, they can hear their laptop fan whisper: “I hate you.”
Common causes of Docker bloat:
- Using full ubuntu or debian images when a slim base would do.
- Not using multi-stage builds (see Section 2, you rebel).
- Including unnecessary files via COPY . . (yes, your .env, .git, and node_modules are in the image now).
- Forgetting .dockerignore even exists.
Your cleanup toolkit:
1. Use lean base images
- Go from node:20 ➝ node:20-alpine
- Python? Use python:3.12-slim
- Want extreme? Try distroless images from Google (sketch below).
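As a rough sketch, swapping the final stage of the earlier multi-stage build for a distroless base could look like this, assuming a Node server app whose build outputs a server.js (the image tag and entry file are assumptions; distroless Node images run node as the entrypoint, so CMD is just the script):
# Final stage only - builder stage as before
FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app/dist /app
WORKDIR /app
CMD ["server.js"]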
2. Add a .dockerignore file
You wouldn’t commit node_modules, so don’t bake it into your image either.
.git
node_modules
tests
.env
Dockerfile
3. Explore with Dive
Want to see what’s taking up space in your image? Run:
dive your-image-name
You’ll get a visual breakdown of each layer and maybe cry a little.
4. Don’t install dev tools in prod images
Use multi-stage builds to separate build dependencies (TypeScript, pip, compilers) from the final runtime image.
Section 7: modern tools to stop doing Docker the hard way
If your Docker workflow still revolves around just docker build and docker run, it’s like playing Elden Ring with a broken sword — unnecessarily painful and wildly inefficient.
The Docker ecosystem has grown. There are tools that make your life easier, safer, and 10x faster. But most devs aren’t using them… because they’re too busy debugging YAML indentation errors.