Troubleshooting a Complex Docker Deployment: From TypeScript Errors to Production Success
Introduction
Deploying a full-stack application with Docker can present numerous challenges, especially when dealing with a production-ready ERP system built with Rails 8 and React. In this article, I'll walk through the systematic debugging process I used to resolve a cascade of issues that prevented a dockerized application from running successfully.
The Application Stack
The project consisted of the following components (a docker-compose.yml sketch follows the list):
- Backend: Rails 8 API with PostgreSQL and Redis
- Frontend: React application built with Vite and served by nginx
- Database: PostgreSQL 15 with Rails 8's new Solid* adapters
- Cache: Redis 7 for caching and real-time features
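For orientation, here is a minimal docker-compose.yml sketch of such a stack. The service names, images, credentials, and port mappings are illustrative assumptions (the backend service name and the 8080:80 mapping adopted later in the article are kept consistent); this is not the project's actual file:
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: password  # placeholder credential
  redis:
    image: redis:7
  backend:
    build: ./backend
    env_file: .env
    depends_on:
      - db
      - redis
    ports:
      - "3000:3000"
  frontend:
    build: ./erp-frontend
    depends_on:
      - backend
    ports:
      - "8080:80"  # nginx serving the Vite build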
Issue 1: TypeScript Compilation Errors
The Problem
The frontend build failed during npm run build:prod with multiple TypeScript errors:
Type 'Date' is not assignable to type 'string'
Type 'string' is not assignable to type 'Date'
Property 'mockReturnValue' does not exist on type AsyncThunk
Discovery Commands
# View the specific error details
docker compose logs frontend --tail 50
# Examine the TypeScript configuration
cat erp-frontend/tsconfig.json
# Check the mock data types
grep -n "Date\|date" src/test-utils/mock-data.ts
Root Cause
The Rails API sends date fields as ISO 8601 strings, but parts of the frontend (type definitions and mock data) still modeled them as Date objects, producing mismatches in both directions. Additionally, test files were incorrectly mocking Redux async thunks.
Solution
I updated the mock data to use strings instead of Date objects and fixed the test mocking approach:
# Fix date fields in mock data
sed -i 's/orderDate: new Date(\([^)]*\))/orderDate: \1/g' src/test-utils/mock-data.ts
# Fix async thunk mocking
sed -i 's/mockReturnValue/mockImplementation/g' src/pages/production/__tests__/ProductionOrders.test.tsx
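For context, the resulting change looks roughly like the sketch below; the interface and field names are hypothetical, not taken from the project:
// The Rails API serializes dates as ISO 8601 strings, so the field type is string
interface ProductionOrder {
  id: number;
  orderDate: string;
}

// Before: orderDate: new Date('2024-03-01') -> "Type 'Date' is not assignable to type 'string'"
// After: the mock uses the same string representation the API returns
const mockOrder: ProductionOrder = {
  id: 1,
  orderDate: '2024-03-01T00:00:00Z',
};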
Issue 2: Environment Variable Processing Failure
The Problem
Rails couldn't connect to PostgreSQL, attempting to use Unix sockets instead of network connections:
PG::ConnectionBad: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed
Discovery Commands
# Check if environment variables are being passed
docker compose run --rm backend printenv | grep DATABASE_URL
# Verify Docker Compose configuration
docker compose config | grep -A10 "backend:"
# Check for hidden characters in .env file
cat -A .env | grep -E "DATABASE_URL|RAILS_ENV"
Root Cause
The .env file had Windows line endings (CRLF) instead of Unix line endings (LF), which prevented Docker Compose from parsing the environment variables correctly.
Solution
# Install and use dos2unix to fix line endings
sudo apt install dos2unix
dos2unix .env
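If installing dos2unix isn't an option, sed can do the same job; this is a generic alternative, not the command used in the original fix:
# Strip the trailing carriage return from every line
sed -i 's/\r$//' .env
# file should no longer report "with CRLF line terminators"
file .env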
Issue 3: Rails Database Configuration Mismatch
The Problem
Even with correct environment variables, Rails wasn't using the DATABASE_URL and continued trying socket connections.
Discovery Commands
# Examine Rails database configuration
docker compose run --rm backend cat /rails/config/database.yml | grep -A10 "production:"
# Check what database names Rails expects
docker compose run --rm backend head -30 /rails/config/database.yml
Root Cause
The production database configuration was hardcoded with different database names and credentials than what our PostgreSQL container provided.
Solution
I created a custom database configuration that uses DATABASE_URL:
# Create override configuration
cat > backend/config/database.yml.docker << 'EOF'
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

production:
  <<: *default
  url: <%= ENV['DATABASE_URL'] %>
EOF
# Mount it in docker-compose.yml
# Added to backend service:
volumes:
  - ./backend/config/database.yml.docker:/rails/config/database.yml:ro
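For this to work, DATABASE_URL must be a standard postgres:// URL whose hostname matches the PostgreSQL service name in docker-compose.yml. The entry below is an illustrative placeholder, not the project's actual credentials:
# Example .env entry (placeholder values)
DATABASE_URL=postgres://postgres:password@db:5432/app_production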
Issue 4: Zeitwerk Autoloading Conflicts
The Problem
Rails failed to start with a Zeitwerk::NameError:
expected file production_stages_controller_old.rb to define constant Api::V1::ProductionStagesControllerOld
Discovery Commands
# Find problematic files
docker compose run --rm backend find /rails/app/controllers -name "*_old.rb"
# Examine the class definition
docker compose run --rm backend head -20 /rails/app/controllers/api/v1/production_stages_controller_old.rb
Root Cause
A backup file, production_stages_controller_old.rb, contained a class named ProductionStagesController instead of ProductionStagesControllerOld, violating Rails' naming conventions.
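Renaming the class would also have satisfied the loader. Zeitwerk derives the expected constant from the file path, so the file would need to look something like this sketch (the superclass and empty body are assumptions):
# app/controllers/api/v1/production_stages_controller_old.rb
module Api
  module V1
    class ProductionStagesControllerOld < ApplicationController
    end
  end
end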
Solution
# Exclude the file from Docker build
echo "**/production_stages_controller_old.rb" >> backend/.dockerignore
# Rebuild the image
docker compose up -d --build
Issue 5: Rails 8 SolidCable Configuration
The Problem
The `cable` database is not configured for the `production` environment
Discovery Commands
# Check available database configurations
docker compose logs backend --tail 50 | grep "Available database configurations"
Root Cause
Rails 8's SolidCable adapter requires a separate database configuration for WebSocket data, which wasn't present in our database.yml.
Solution
I updated the database configuration to include all Solid* adapter databases:
# Added to database.yml.docker
cable:
  <<: *default
  url: <%= ENV['DATABASE_URL'] %>
  database: <%= URI.parse(ENV['DATABASE_URL']).path.sub('/', '') %>_cable
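The other Solid* adapters follow the same pattern. If Solid Queue and Solid Cache are also backed by separate databases, analogous entries would be needed; the sketch below extrapolates from the cable entry and is not a verbatim copy of the project's file:
queue:
  <<: *default
  url: <%= ENV['DATABASE_URL'] %>
  database: <%= URI.parse(ENV['DATABASE_URL']).path.sub('/', '') %>_queue

cache:
  <<: *default
  url: <%= ENV['DATABASE_URL'] %>
  database: <%= URI.parse(ENV['DATABASE_URL']).path.sub('/', '') %>_cache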
Issue 6: Port Conflicts
The Problem
Error: ports are not available: exposing port TCP 0.0.0.0:80 -> 127.0.0.1:0
Discovery Commands
# Check container status
docker compose ps -a
# Verify which ports are mapped
docker compose ps | grep -E "PORTS|80"
Root Cause
Port 80 was already in use by a Windows service (likely IIS or another web server).
Solution
# Update docker-compose.yml to use port 8080
sed -i 's/"80:80"/"8080:80"/g' docker-compose.yml
Key Debugging Strategies
Throughout this debugging process, I employed several effective strategies:
- Incremental Progress: Each fix revealed the next issue, showing steady progress through the startup sequence.
- Log Analysis: Using docker compose logs with --tail and piping through head helped capture both the beginning and end of error messages.
- Direct Container Inspection: Running commands inside containers with docker compose run --rm provided direct access to file contents and configurations.
- Environment Verification: Always verifying that environment variables were properly passed and formatted before assuming configuration issues.
- Clean Rebuilds: When facing cache corruption, using docker system prune and docker builder prune ensured a clean slate (see the example after this list).
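As a concrete sketch of that last strategy, a clean rebuild looks roughly like this (note that adding --volumes to docker system prune would also delete named volumes, including database data):
# Remove stopped containers, unused networks, and dangling images
docker system prune -f
# Clear the Docker build cache
docker builder prune -f
# Rebuild images without cached layers and restart the stack
docker compose build --no-cache
docker compose up -d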
Conclusion
Successfully deploying a complex containerized application often requires systematically working through multiple interconnected issues. By maintaining a methodical approach, using appropriate debugging commands, and understanding the underlying systems, I transformed a failing deployment into a healthy, running application stack. The key is patience, systematic investigation, and understanding that each error message is a clue pointing toward the solution.
Final Verification
Once all issues were resolved, these commands confirmed the successful deployment:
# Check all container statuses
docker compose ps
# Verify backend health
curl -I http://localhost:3000/up
# Verify frontend accessibility
curl -I http://localhost:8080
The application is now accessible at http://localhost:8080 (frontend) and http://localhost:3000 (API), with all services running in perfect harmony.
If you enjoyed this article, you can also find it published on LinkedIn and Medium.