My Docker Debugging Toolkit: Lessons Learned from Real-World Issues

After hours debugging containerized applications in development, I’ve developed a systematic toolkit of Docker commands that consistently help me diagnose and resolve deployment issues. This guide shares the exact commands and patterns I use when troubleshooting full-stack applications, particularly those involving React frontends, Rails backends, nginx proxies, and PostgreSQL databases.

The Philosophy Behind My Debugging Approach

When I encounter a development issue, I follow a methodical pattern that starts with the broadest possible view and progressively narrows down to specific components. Think of it like a funnel—I begin by checking if containers are running, then verify they can communicate, examine their configurations, and finally test specific functionality. This approach has saved me from countless rabbit holes and helps me identify root causes quickly.

Container Status: My Starting Point

Understanding What's Actually Running

The first thing I always check is the overall health of my containers. I've learned that Docker's status messages often hide crucial details, so I use formatted output to get exactly what I need:

# My go-to command for a quick system overview
docker-compose ps --format "table {{.Name}}\t{{.Status}}\t{{.Ports}}"

This formatted output immediately shows me three critical pieces of information: which containers are running, their health status, and most importantly, the port mappings. I can't count how many times port mapping issues have been the root cause of connectivity problems.

When I need more detail about health checks specifically, I use:

# Check if health checks are passing
docker-compose ps

The health status tells me whether the application inside the container is actually functional, not just whether the container process is running. This distinction is crucial—a container can be "Up" but still failing its health checks, indicating application-level problems.
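For that health status to appear at all, the service needs a healthcheck defined in the compose file. A minimal sketch of what that looks like, assuming a `/up` endpoint like the one tested later in this guide (the interval, timeout, and retry values here are illustrative, not taken from my actual setup):

```yaml
# docker-compose.yml excerpt -- illustrative healthcheck for the backend service
services:
  backend:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/up"]
      interval: 30s    # how often Docker probes the endpoint
      timeout: 5s      # how long a single probe may take
      retries: 3       # consecutive failures before status flips to "unhealthy"
      start_period: 15s  # grace period so slow boots aren't counted as failures
```

With this in place, `docker-compose ps` reports "(healthy)" or "(unhealthy)" next to "Up", which is exactly the distinction described above.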

Investigating Service Logs

Once I know which containers might be problematic, I dive into the logs. I've developed a pattern for log investigation that helps me quickly identify issues without getting overwhelmed by noise:

# Start with recent logs to see current issues
docker-compose logs backend --tail=20
docker-compose logs frontend --tail=10

# If I need to watch issues develop in real-time
docker-compose logs -f

# For a broad overview when the problem service is unknown
docker-compose logs --tail=50

I always start with a small number of recent log lines rather than dumping everything. This prevents me from getting lost in startup logs when the actual problem is happening now. The --tail flag has saved me countless hours of scrolling through irrelevant historical data.
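When even a short tail is noisy, I reduce it to a severity summary before reading individual lines. A sketch of that triage step, run here on an inline sample file since the real input would be piped from `docker-compose logs`:

```shell
# Count log lines by severity to see at a glance what dominates.
# sample.log stands in for real output of: docker-compose logs backend --tail=100
printf '%s\n' \
  "INFO  Booted in 2.1s" \
  "ERROR PG::ConnectionBad: could not connect" \
  "ERROR PG::ConnectionBad: could not connect" \
  "WARN  Slow query (850ms)" > sample.log

# Extract just the severity tokens and count them, most frequent first
grep -oE 'ERROR|WARN|INFO' sample.log | sort | uniq -c | sort -rn
```

If ERROR dominates the counts, I know to grep for those lines specifically rather than scrolling through the full tail.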

Network Connectivity: The Heart of Microservices

Testing External API Endpoints

Network issues are perhaps the most common problems I encounter in containerized applications. I've developed a systematic approach to testing connectivity at different layers of the stack:

# First, I test the backend API directly from the host
curl -X POST http://localhost:3000/api/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"admin@pcvn.com","password":"password123"}' \
  -w "\nHTTP Status: %{http_code}\n"

The -w "\nHTTP Status: %{http_code}\n" flag is crucial—it explicitly shows me the HTTP status code even when the response body is empty or malformed. This has helped me distinguish between 401 (authentication issues), 502 (proxy issues), and 503 (service unavailable) errors that might otherwise look identical.

Next, I test through the nginx proxy to ensure the full request path works:

# Testing through the nginx proxy layer
curl -X POST http://localhost:8080/api/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"admin@pcvn.com","password":"password123"}' \
  -w "\nHTTP Status: %{http_code}\n"

Comparing these two responses immediately tells me whether the problem is in the backend service itself or in the proxy layer.

Verifying Internal Container Communication

Container-to-container networking operates differently than host-to-container networking. I use these commands to verify internal communication:

# Test if frontend can reach backend internally
docker-compose exec frontend wget -qO- http://backend:80/up

# Verify DNS resolution is working
docker-compose exec frontend nslookup backend

# Check actual network connectivity
docker-compose exec frontend ping -c 3 backend

The key insight here is that containers communicate using service names (like backend) and internal ports, not the mapped external ports. This distinction has been the source of many configuration errors I've encountered.
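The mapping behind that distinction lives in docker-compose.yml. In a sketch like the following (the specific ports are illustrative), the host reaches the backend at localhost:3000, while other containers must use backend:80:

```yaml
# docker-compose.yml excerpt -- illustrative port mapping
services:
  backend:
    ports:
      - "3000:80"   # host port 3000 -> container port 80
    # Other services on the same Docker network reach this service at
    # http://backend:80 (service name + internal port), never at
    # http://localhost:3000 -- localhost inside a container is the
    # container itself.
```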

Configuration Inspection: Finding What's Actually Deployed

Analyzing Frontend Configuration

One of the most frustrating debugging experiences involves configuration mismatches between what I think is deployed and what actually is. For frontend applications, especially those built with Vite or webpack, I need to inspect the compiled JavaScript:

# Find what API URL is actually compiled into the frontend
docker-compose exec frontend sh -c "grep -o 'VITE_API_URL:[^,}]*' /usr/share/nginx/html/assets/index-*.js"

# Search for any hardcoded localhost references that shouldn't be there
docker-compose exec frontend grep -r "localhost:3000" /usr/share/nginx/html/

# Extract configuration from minified JavaScript
docker-compose exec frontend sh -c "strings /usr/share/nginx/html/assets/index-*.js | grep -E 'localhost|API_URL'"

The strings command is particularly useful here: although it was designed to pull readable text out of binary files, it works just as well for surfacing configuration values buried in minified or obfuscated JavaScript bundles.

Examining Nginx Configuration

Nginx misconfigurations are a common source of proxy errors. I've developed a routine for quickly identifying nginx issues:

# Check the actual proxy configuration
docker-compose exec frontend grep -A 5 "location /api" /etc/nginx/nginx.conf

# Verify nginx configuration is valid
docker-compose exec frontend nginx -t

# Check nginx error logs for rate limiting or proxy errors
docker-compose exec frontend tail -20 /var/log/nginx/error.log

The nginx -t command is invaluable—it validates the configuration without restarting the service, preventing me from accidentally breaking a running system with invalid configuration changes.

Environment Variable Verification

Environment variables are often the source of configuration issues, especially when there's confusion between build-time and runtime variables:

# Check frontend environment
docker-compose exec frontend printenv | grep -i api

# Check backend environment
docker-compose exec backend printenv | grep -i rails

# Get a sorted list of all environment variables for comparison
docker-compose exec backend env | sort

Sorting environment variables helps me quickly spot missing or incorrectly named variables when comparing against documentation or working environments.
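To make that comparison mechanical rather than visual, I diff the sorted dump against a known-good reference with `comm`. A sketch using hypothetical capture files (in practice, reference.env would be an `env | sort` dump from a working environment and backend.env from the broken one):

```shell
# Find variables present in the reference but missing from the container.
# Both files are hypothetical captures of `env | sort` output.
printf 'RAILS_ENV=production\n' > backend.env
printf 'DATABASE_URL=postgres://db:5432/app\nRAILS_ENV=production\n' > reference.env

# comm requires sorted input
sort -o backend.env backend.env
sort -o reference.env reference.env

# -13 suppresses lines unique to the first file and lines common to both,
# leaving only lines unique to the reference: the missing variables
comm -13 backend.env reference.env
```

Anything this prints is a variable the broken environment is missing, which is usually the fastest possible answer to "what's different?".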

Runtime Modifications: Fixing Without Rebuilding

Updating Configurations on the Fly

When I've identified a configuration issue, I often need to test fixes quickly without going through a full rebuild cycle. These techniques allow me to validate solutions rapidly:

# Copy a fixed configuration file into a running container
docker cp nginx-fixed.conf pcvn-erp-frontend:/etc/nginx/nginx.conf

# Reload nginx without losing connections
docker-compose exec frontend nginx -s reload

# Make in-place configuration changes for testing
docker-compose exec frontend sed -i 's/backend:3000/backend:80/g' /etc/nginx/nginx.conf

The ability to reload nginx without restarting the container is crucial for development systems—it applies configuration changes without dropping active connections.

Database Operations

For Rails applications, database issues often require immediate intervention:

# Check migration status
docker-compose exec backend /rails/bin/rails db:migrate:status

# Run pending migrations
docker-compose exec backend /rails/bin/rails db:migrate

# Access Rails console for debugging
docker-compose exec backend /rails/bin/rails console

The Rails console within the container gives me direct access to the application environment, allowing me to test database connections, check model configurations, and verify that the application can actually communicate with its dependencies.

Advanced Debugging Patterns

Systematic Configuration Diagnosis

When facing configuration issues, I follow this three-step pattern that has proven highly effective:

# Step 1: Check what's compiled into the frontend
docker-compose exec frontend sh -c "grep -o 'VITE_API_URL:[^,}]*' /usr/share/nginx/html/assets/index-*.js"

# Step 2: Verify nginx proxy settings match
docker-compose exec frontend grep "proxy_pass" /etc/nginx/nginx.conf

# Step 3: Test the complete request path
curl -X POST http://localhost:8080/api/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email":"test@example.com","password":"password"}'

This progression from compiled code to configuration to actual testing ensures I understand exactly where the configuration breaks down.

Port Mapping Investigation

Port mapping confusion between internal and external ports is incredibly common. I use this systematic approach to clarify port configurations:

# Check the actual port mappings
docker-compose ps --format "table {{.Name}}\t{{.Ports}}"

# Test which internal port actually responds
for port in 80 3000 8080; do
  echo "Testing port $port:"
  docker-compose exec frontend wget -O- http://backend:$port/up 2>&1 | head -1
done

This loop quickly identifies which internal port the service is actually listening on, eliminating guesswork.

Rate Limiting Diagnosis

When I encounter intermittent 503 errors, rate limiting is often the culprit. Here's how I diagnose it:

# Check for rate limiting messages in logs
docker-compose exec frontend grep "limiting requests" /var/log/nginx/error.log | tail -5

# View current rate limit configuration
docker-compose exec frontend sh -c "nginx -T 2>/dev/null | grep -A 2 limit_req"

# Test with multiple rapid requests
for i in {1..20}; do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/api/health
done | sort | uniq -c

The request loop with counting reveals patterns in response codes—if I see a mix of 200s and 503s, rate limiting is confirmed.
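Once rate limiting is confirmed, the setting to adjust usually lives in an nginx block like this sketch (the zone name, rate, and burst values are illustrative, not my production settings):

```nginx
# nginx.conf excerpt -- illustrative rate limit definition
http {
    # Allow 10 requests/second per client IP, tracked in a 10 MB shared zone
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        location /api/ {
            # Queue short bursts of up to 20 requests instead of rejecting
            # them immediately; beyond that, nginx returns 503 by default
            limit_req zone=api_limit burst=20 nodelay;
            proxy_pass http://backend:80;
        }
    }
}
```

The `nginx -T | grep -A 2 limit_req` command above is how I find which zone and rate are actually in effect before touching anything.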

My Power Commands: One-Liners for Quick Diagnosis

Over time, I've developed several compound commands that give me comprehensive system status in seconds:

Complete Health Check

docker-compose ps && echo "---" && \
curl -s http://localhost:8080/api/health && echo "---" && \
docker-compose exec backend /rails/bin/rails runner "puts 'Backend OK'" && \
docker-compose exec frontend nginx -t 2>&1 | grep successful

This single command verifies container status, API availability, backend functionality, and nginx configuration validity—essentially a complete system health check in one line.

Comprehensive Diagnosis

echo "=== Container Status ===" && docker-compose ps && \
echo -e "\n=== API Test ===" && \
curl -s http://localhost:8080/api/v1/auth/login -X POST \
  -H "Content-Type: application/json" \
  -d '{"email":"test","password":"test"}' -w "\nStatus: %{http_code}\n" && \
echo -e "\n=== Recent Errors ===" && \
docker-compose logs --tail=5 2>&1 | grep -i error

This gives me an instant snapshot of system status, API functionality, and recent errors—perfect for initial diagnosis when I'm called to investigate an issue.

Configuration Backup

Before making any changes, I always backup current configurations:

docker-compose exec -T frontend cat /etc/nginx/nginx.conf > nginx.backup.conf && \
docker-compose exec -T backend sh -c "env | sort" > backend.env.backup && \
echo "Configurations backed up"

This simple practice has saved me countless times when a "quick fix" unexpectedly makes things worse.

Building and Deployment: Getting It Right

Building with Proper Arguments

Understanding build arguments versus runtime environment variables is crucial for successful deployments:

# Build with specific environment variables
docker-compose build frontend --build-arg VITE_API_URL=/api/v1

# Force rebuild when debugging build issues
docker-compose build --no-cache

# Rebuild and recreate in one command
docker-compose up -d --build --force-recreate

The --build-arg flag is essential for build-time configuration like Vite environment variables, while runtime variables are set in docker-compose.yml or .env files.
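On the Dockerfile side, that build arg has to be declared before it can influence the bundle. A minimal sketch of the pattern (the base image, stage name, and paths are illustrative):

```dockerfile
# Dockerfile excerpt -- illustrative build-time configuration for a Vite app
FROM node:20 AS build

# Declare the build arg (docker-compose build --build-arg overrides the default)
ARG VITE_API_URL=/api/v1
# Vite reads VITE_-prefixed variables from the environment at build time,
# so the ARG must be promoted to an ENV before the build runs
ENV VITE_API_URL=$VITE_API_URL

WORKDIR /app
COPY . .
RUN npm ci && npm run build
```

This is also why changing the value in docker-compose.yml alone does nothing for the frontend: the value was baked into the bundle when the image was built, which is exactly what the `grep` inspection of the compiled assets earlier in this guide verifies.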

Lessons Learned from Development Debugging

Through years of debugging containerized applications, I've learned several critical principles:

Start broad, then narrow: Always begin with basic health checks before diving into specific component debugging. This prevents wasting time on complex investigations when the problem might be a simple crashed container.

Verify, don't assume: Never assume configurations are correct—always verify what's actually deployed. The number of times I've found discrepancies between what should be deployed and what actually is has taught me this lesson well.

Understand internal vs external networking: Container-to-container communication uses different ports and hostnames than external access. This fundamental understanding resolves a huge percentage of networking issues.

Keep commands composable: Building a toolkit of simple, focused commands that can be combined gives me flexibility to investigate any issue without memorizing hundreds of complex commands.

Document while debugging: The commands in this guide came from my debugging notes. Documenting what works during crisis situations builds an invaluable reference for future issues.

Conclusion

This toolkit reflects the hours I’ve spent debugging containerized applications in development. Every command listed here has proven its value in real troubleshooting situations. For me, effective Docker debugging isn’t about memorizing commands — it’s about understanding a systematic approach: verifying assumptions, checking connectivity at each layer, and knowing where configuration values actually live within a containerized environment.

It’s easy to let container abstractions obscure the real problems. These commands help me peel back those layers step by step, so I can see what’s really going on. I recommend starting with these commands, adapting them to your specific stack, and building your own toolkit based on the issues you encounter. The next time I run into trouble in development or staging, I’ll be ready with a systematic approach and the right commands to troubleshoot and resolve the problem quickly.


Technical Stack: Docker, Docker Compose, nginx, React, Rails, PostgreSQL, Redis

Key Tools: curl, wget, grep, sed, docker-compose CLI


If you enjoyed this article, you can also find it published on LinkedIn and Medium.