At some point, Docker Desktop on my MacBook became the loudest thing in the room.
The fan would spin up the moment I started a compose stack. Activity Monitor showed Docker using 6GB of RAM, and it would just sit there — containers idle, nothing actively running — hogging memory that my other apps needed.
I complained about it to a colleague. He sent me a screenshot of his Docker settings. They looked nothing like mine.
Here’s everything I changed, and why each one helped.
The memory cap (most impactful change)
Docker Desktop on macOS and Windows runs inside a Linux VM. By default, it can use up to half your system RAM — or more, depending on your version.
On my 16GB MacBook, Docker was claiming up to 8GB for itself. I never needed that. My typical compose stack was two services and a database.
Fix:
Open Docker Desktop → Settings → Resources → Memory
Set it to something reasonable. I use 4GB. For most development work (a Node.js app, a database, maybe Redis), 4GB is plenty. You'll know you need more when the kernel inside the VM starts OOM-killing your containers.
Also set CPUs to a lower number. I set mine to 4 (I have 8 cores). Docker doesn’t need all your cores during normal development.
After this change: the fan stopped. Idle memory usage dropped from 6GB to about 2GB.
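You can confirm what the VM actually got from the CLI. A quick check, assuming Docker Desktop is running (`MemTotal` and `NCPU` are fields exposed by `docker info`):

```shell
# Show the VM's total memory and CPU count as Docker sees them
docker info --format 'Mem: {{.MemTotal}} bytes, CPUs: {{.NCPU}}'

# Human-readable memory in GB (awk does the byte conversion)
docker info --format '{{.MemTotal}}' | awk '{printf "%.1f GB\n", $1 / 1024^3}'
```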
The Rosetta layer on Apple Silicon
If you’re on an M-series Mac and pulling x86_64 images, Docker is running them through Rosetta emulation. That’s CPU-intensive.
Check what architecture your images are:
docker inspect --format='{{.Architecture}}' nginx
# amd64 means it's being emulated on Apple Silicon
# arm64 means it's native
Pull ARM-native images where available:
# Pull the ARM version explicitly
docker pull --platform linux/arm64 postgres:16-alpine
# Or let Docker pick the right one for your machine
docker pull postgres:16-alpine
# (modern images support multi-arch — Docker picks the right one)
Most major images (postgres, redis, node, nginx) have ARM64 versions. If the image you need doesn’t, that’s when emulation is unavoidable.
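You can also pin the platform per service in compose, so a service fails fast instead of silently falling back to emulation. A sketch, assuming a compose file with a `db` service:

```yaml
services:
  db:
    image: postgres:16-alpine
    platform: linux/arm64   # error out if no arm64 variant exists, rather than emulate amd64
```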
Dev vs prod images — stop running prod images locally
Early on I was running the same Dockerfile in dev that I used for production. That meant multi-stage builds compiling TypeScript, no hot reload, and rebuilding the entire image every time I changed a file.
For local development, I use a much lighter setup:
# docker-compose.yml (development)
services:
  app:
    image: node:20-alpine
    working_dir: /app
    volumes:
      - .:/app
      - /app/node_modules   # anonymous volume so the bind mount doesn't hide the container's node_modules
    command: npm run dev
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
No Dockerfile. Just mount the code directly and run the dev server. Hot reload works because the source is volume-mounted. The image stays small because it’s just the base Node image.
The production Dockerfile with multi-stage builds only runs in CI.
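For reference, that CI-only production Dockerfile looks roughly like this. The build script names and output directory are assumptions about a typical TypeScript project, not the exact file from this setup:

```dockerfile
# Stage 1: install all deps and compile TypeScript
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build             # assumes "build" compiles to dist/

# Stage 2: ship only what production needs
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev         # production deps only
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
```

The compiled output and production dependencies are all that reach the final image; the TypeScript toolchain stays in the discarded build stage.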
Layer caching — why rebuilds were slow
Every time I ran docker compose up --build, it took 3 minutes. The culprit was copy order in my Dockerfile:
# Bad: copies everything first
COPY . .
RUN npm install # runs every time ANY file changes
Docker invalidates the cache when a layer’s inputs change. If COPY . . comes before npm install, changing a single .ts file triggers a full npm install.
The fix is the standard two-step copy:
# Good: dependencies cached separately
COPY package*.json ./
RUN npm ci # only re-runs when package.json or package-lock.json changes
COPY . . # source code copied after
After this change, builds went from 3 minutes to 15 seconds for a typical code change.
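A .dockerignore helps the same cache. Without one, COPY . . sweeps in node_modules and .git, which bloats the build context and invalidates layers more often than necessary. A typical starting point:

```
node_modules
.git
dist
*.log
.env
```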
Prune regularly — disk was filling up silently
Docker accumulates junk. Stopped containers, dangling images, unused volumes. I had 40GB of Docker data on my machine and had no idea.
# See how much space Docker is using
docker system df
# TYPE            TOTAL   ACTIVE   SIZE     RECLAIMABLE
# Images          47      3        12.3GB   11.1GB (94%)
# Containers      12      2        234MB    189MB (81%)
# Local Volumes   8       3        2.1GB    1.4GB (67%)
# Build Cache     -       -        4.3GB    4.3GB
# Reclaim everything unused
docker system prune -a
I set up a monthly reminder to run this. You can also automate it:
# Weekly cleanup cron
0 0 * * 0 docker system prune -f
The -a flag removes all unused images, not just dangling ones, so anything you still need will be re-pulled on next use. It freed up 18GB on my first run.
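If a blanket prune feels too aggressive, the prune commands accept an until filter so only objects older than a cutoff are removed. A sketch using a one-week cutoff:

```shell
# Remove only unused images older than a week (168 hours)
docker image prune -a --filter "until=168h" -f

# Same cutoff for build cache
docker builder prune --filter "until=168h" -f
```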
Compose healthchecks — containers starting before they’re ready
My app was crashing on startup because it tried to connect to the database before the database was ready. I was using depends_on, but that only waits for the container to start — not for the service inside to be ready.
Fix — add a healthcheck and condition: service_healthy:
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: mypassword
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy # wait until db passes healthcheck
    environment:
      DATABASE_URL: postgresql://postgres:mypassword@db:5432/mydb
Now the app container doesn’t start until Postgres is actually accepting connections.
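You can watch the healthcheck flip over yourself with docker inspect. The container name here is an assumption; compose usually names it after the project directory:

```shell
# Reports: starting -> healthy (or unhealthy once retries are exhausted)
docker inspect --format '{{.State.Health.Status}}' myproject-db-1

# See recent probe output when debugging a check that never goes healthy
docker inspect --format '{{json .State.Health.Log}}' myproject-db-1
```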
The settings summary
Docker Desktop → Resources:
- Memory: 4GB (adjust based on your typical workload)
- CPUs: half your cores
- Disk image size: set a limit, then prune regularly
Dockerfile:
- Copy package*.json before source code
- Use multi-stage builds for production
- Use volume mounts + base image for local dev (no Dockerfile needed)
Compose:
- Use healthcheck + condition: service_healthy for service dependencies
- Use platform: linux/arm64 on Apple Silicon where supported
Maintenance:
- docker system prune -a monthly
- docker system df to monitor usage
The fan barely runs now. Idle memory is around 1.5GB when containers are stopped, 3GB when a typical dev stack is running. Builds that took 3 minutes take 15 seconds.
None of these were hard changes. They were just things I didn’t know to look for until I was annoyed enough to go looking.
Related guides: Docker Cheat Sheet | How to Write a Production Dockerfile for Node.js