Logs & Debugging¶
Analyze logs and debug issues effectively in the New Hires Reporting System.
Viewing Logs¶
Basic Commands¶
# All services
docker-compose -f docker-compose.prod.yml logs -f
# Backend only
docker-compose -f docker-compose.prod.yml logs -f backend
# Workers only (most useful for AI troubleshooting)
docker-compose -f docker-compose.prod.yml logs -f workers
# Frontend only
docker-compose -f docker-compose.prod.yml logs -f frontend
# Last 100 lines
docker-compose -f docker-compose.prod.yml logs --tail=100 workers
# With timestamps
docker-compose -f docker-compose.prod.yml logs -f -t backend
# Since specific time
docker-compose -f docker-compose.prod.yml logs --since "2025-01-15T10:00:00" workers
Log Indicators¶
The system uses emoji indicators for quick scanning:
| Emoji | Type | Example |
|---|---|---|
| ✅ | Success | ✅ File validation completed |
| ❌ | Error | ❌ 15 errors found in file |
| 🤖 | AI Activity | 🤖 AWS Bedrock: Processing correction job |
| 🔍 | Search | 🔍 Searching for employer data |
| 📝 | Correction | 📝 Applied 10 corrections |
| ⚠️ | Warning | ⚠️ API rate limit approaching |
| 🔄 | Processing | 🔄 Worker polling for jobs... |
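Because the indicators are plain UTF-8 characters, they work with ordinary `grep`. A quick sketch of tallying them, run here against a saved sample file for illustration (in practice, pipe `docker logs newhires-workers` instead):

```shell
# Sample log for illustration; replace with real captured output.
cat > /tmp/sample.log <<'EOF'
✅ File uploaded: sample.txt (50 records)
🔍 Validation complete: 10 errors found
🤖 Calling AWS Bedrock API...
❌ 15 errors found in file
✅ Correction job completed
EOF

# Production equivalent: docker logs newhires-workers | grep -c "❌"
errors=$(grep -c "❌" /tmp/sample.log)
successes=$(grep -c "✅" /tmp/sample.log)
echo "errors=$errors successes=$successes"   # errors=1 successes=2
```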
Common Log Patterns¶
Successful Validation & Correction¶
✅ File uploaded: sample.txt (50 records)
🔍 Validation complete: 10 errors found
🔄 Worker picked up correction job
🤖 Calling AWS Bedrock API...
🤖 Bedrock tokens used: input=1250, output=480
📝 Generated 8 auto-correctable suggestions
✅ Correction job completed
AWS Bedrock Errors¶
❌ AWS Bedrock API error: AccessDeniedException
⚠️ AWS credentials invalid or missing
❌ Bedrock timeout after 60 seconds
❌ Model not found: Check model access in AWS Console
⚠️ Throttling: Rate exceeded for InvokeModel
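When handling these failures in worker code, it is more robust to branch on the botocore error code than to string-match messages. A minimal sketch: the log lines mirror the examples above, but the code-to-message mapping and the retry policy are illustrative assumptions, not the system's actual implementation.

```python
# Map Bedrock error codes to the log lines shown above and decide
# whether the failed job is worth retrying. ThrottlingException and
# timeout-style errors are transient; access/permission errors are not.
RETRYABLE = {"ThrottlingException", "ModelTimeoutException", "ServiceUnavailableException"}

def classify_bedrock_error(error_code: str) -> tuple[str, bool]:
    """Return (log line, retryable?) for a botocore error code."""
    messages = {
        "AccessDeniedException": "❌ AWS Bedrock API error: AccessDeniedException",
        "ThrottlingException": "⚠️ Throttling: Rate exceeded for InvokeModel",
        "ResourceNotFoundException": "❌ Model not found: Check model access in AWS Console",
    }
    line = messages.get(error_code, f"❌ AWS Bedrock API error: {error_code}")
    return line, error_code in RETRYABLE
```

In a real worker, the code would come from `botocore.exceptions.ClientError`, via `err.response["Error"]["Code"]`.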
Network Issues¶
❌ Connection refused: backend:8000
⚠️ Frontend can't reach backend
❌ Database connection failed
⚠️ Worker can't connect to database
Filtering Logs¶
Find Errors Only¶
docker-compose -f docker-compose.prod.yml logs backend | grep -i error
docker-compose -f docker-compose.prod.yml logs workers | grep -i error
Find AWS Bedrock Activity¶
# AI activity indicator
docker logs newhires-workers | grep "🤖"
# Bedrock API calls
docker logs newhires-workers | grep -i bedrock
# Token usage
docker logs newhires-workers | grep "tokens used"
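The token-usage lines can be aggregated to estimate Bedrock consumption. A hedged sketch that parses the `tokens used: input=…, output=…` format shown in the sample logs on this page (feed it captured `docker logs newhires-workers` output in practice):

```python
import re

# Sum input/output token counts from worker log lines of the form:
#   🤖 Bedrock tokens used: input=1250, output=480
PATTERN = re.compile(r"tokens used: input=(\d+), output=(\d+)")

def total_tokens(log_text: str) -> tuple[int, int]:
    total_in = total_out = 0
    for m in PATTERN.finditer(log_text):
        total_in += int(m.group(1))
        total_out += int(m.group(2))
    return total_in, total_out

sample = """\
🤖 Bedrock tokens used: input=1250, output=480
🤖 Bedrock tokens used: input=900, output=310
"""
print(total_tokens(sample))  # (2150, 790)
```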
Find Validation Events¶
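These filters are presumably symmetric with the Bedrock filters below, using the 🔍 and ✅ indicators from the table above. A sketch against sample lines (the production commands are shown in the comments):

```shell
# Production equivalents:
#   docker logs newhires-backend | grep "🔍"
#   docker-compose -f docker-compose.prod.yml logs backend | grep -i "validation"
printf '%s\n' \
  "🔍 Validation complete: 10 errors found" \
  "✅ File validation completed" \
  "🔄 Worker polling for jobs..." > /tmp/validation-sample.log

grep -Ei "🔍|validation" /tmp/validation-sample.log
```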
Search for Specific Job or File¶
# Find specific job ID
docker logs newhires-workers | grep "job_123"
# Find specific file
docker-compose -f docker-compose.prod.yml logs backend | grep "filename.txt"
Debugging Techniques¶
1. Check Service Status¶
# View all services
docker-compose -f docker-compose.prod.yml ps
# Check if specific service is running
docker ps | grep newhires-workers
docker ps | grep newhires-backend
2. Check Health Endpoints¶
# Backend health
curl http://localhost:8000/health | jq
# Frontend health
curl -I http://localhost:8080
# Database health
docker exec newhires-db pg_isready -U newhires
3. Monitor Resources¶
# Real-time stats
docker stats
# Check memory usage
docker stats --no-stream --format "{{.Container}}: {{.MemPerc}}"
# Check CPU usage
docker stats --no-stream --format "{{.Container}}: {{.CPUPerc}}"
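The formatted `docker stats` output above is easy to post-process. A sketch that flags containers above a memory threshold; the 80% limit and the idea of wiring this into an alert are assumptions for illustration:

```python
# Flag containers whose memory usage crosses a threshold, given the
# output of: docker stats --no-stream --format "{{.Container}}: {{.MemPerc}}"
def over_threshold(stats_output: str, limit_pct: float = 80.0) -> list[str]:
    flagged = []
    for line in stats_output.strip().splitlines():
        name, _, pct = line.partition(": ")
        if pct.endswith("%") and float(pct[:-1]) > limit_pct:
            flagged.append(name)
    return flagged

sample = """\
newhires-backend: 41.20%
newhires-workers: 85.32%
newhires-db: 12.05%
"""
print(over_threshold(sample))  # ['newhires-workers']
```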
4. Inspect Containers¶
# Full container details
docker inspect newhires-backend | jq
# Environment variables (check AWS credentials)
docker exec newhires-workers env | grep AWS
docker exec newhires-backend env | grep POSTGRES
# Network settings
docker inspect newhires-backend | jq '.[].NetworkSettings'
5. Shell Into Container¶
# Backend shell
docker exec -it newhires-backend bash
# Worker shell
docker exec -it newhires-workers bash
# Then inside container:
ls -la
env | grep AWS
curl http://backend:8000/health
python3 -c "import boto3; print(boto3.client('sts').get_caller_identity())"
Common Debugging Scenarios¶
Validation Not Working¶
# 1. Check backend is running
docker-compose -f docker-compose.prod.yml ps
# 2. Check backend health
curl http://localhost:8000/health
# 3. View recent logs
docker-compose -f docker-compose.prod.yml logs --tail=50 backend
# 4. Look for errors
docker-compose -f docker-compose.prod.yml logs backend | grep -i error
# 5. Check database connection
docker exec newhires-backend env | grep DATABASE_URL
Workers Not Processing Jobs¶
# 1. Check worker is running
docker-compose -f docker-compose.prod.yml ps workers
# 2. Check worker logs
docker logs newhires-workers --tail=50
# 3. Look for AWS Bedrock errors
docker logs newhires-workers | grep -i "error\|exception"
# 4. Verify AWS credentials
docker exec newhires-workers env | grep AWS
# 5. Test Bedrock access
docker exec newhires-workers python3 -c "
import boto3
client = boto3.client('bedrock-runtime', region_name='us-east-1')
print('Bedrock client created successfully')
"
# 6. Check job queue
docker exec newhires-db psql -U newhires -d newhires -c \
"SELECT status, COUNT(*) FROM correction_jobs GROUP BY status;"
AWS Bedrock Errors¶
# 1. Check worker logs for Bedrock errors
docker logs newhires-workers | grep -i "bedrock\|aws"
# 2. Verify credentials
docker exec newhires-workers env | grep -E "AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY|AWS_REGION"
# 3. Test AWS credentials
docker exec newhires-workers python3 -c "
import boto3
print(boto3.client('sts').get_caller_identity())
"
# 4. Check model access in AWS Console
# https://console.aws.amazon.com/bedrock/home?region=us-east-1#/modelaccess
# 5. Review IAM permissions
# Ensure policy includes bedrock:InvokeModel
# See: ../deployment/aws-bedrock.md for complete setup
Slow Performance¶
# 1. Check resource usage
docker stats
# 2. View worker processing times
docker logs newhires-workers | grep "response time"
# 3. Check for timeouts
docker-compose -f docker-compose.prod.yml logs workers | grep -i timeout
# 4. Monitor Bedrock latency
docker logs newhires-workers | grep "tokens used"
# 5. Check concurrent calls setting
docker exec newhires-workers env | grep MAX_CONCURRENT_BEDROCK_CALLS
# 6. Consider scaling workers
docker-compose -f docker-compose.prod.yml up -d --scale workers=3
Database Issues¶
# 1. Check database is running
docker ps | grep newhires-db
# 2. Check database health
docker exec newhires-db pg_isready -U newhires
# 3. View database logs
docker logs newhires-db --tail=50
# 4. Check connections
docker exec newhires-db psql -U newhires -d newhires -c \
"SELECT count(*) FROM pg_stat_activity;"
# 5. Check disk space
docker exec newhires-db psql -U newhires -d newhires -c \
"SELECT pg_size_pretty(pg_database_size('newhires'));"
Log Export¶
Save Logs to File¶
# All logs
docker-compose -f docker-compose.prod.yml logs > all-logs.txt
# Backend only
docker-compose -f docker-compose.prod.yml logs backend > backend-logs.txt
# Workers only (most useful for debugging)
docker logs newhires-workers > workers-logs.txt
# Last 24 hours
docker-compose -f docker-compose.prod.yml logs --since 24h > logs-24h.txt
# Errors only
docker logs newhires-workers 2>&1 | grep -i error > errors.txt
# Bedrock activity only
docker logs newhires-workers | grep -i bedrock > bedrock-activity.txt
Compress Logs¶
# Compress for sharing
docker logs newhires-workers | gzip > workers-logs.gz
docker logs newhires-backend | gzip > backend-logs.gz
# Extract later
gunzip -c workers-logs.gz | less
Production Logging¶
Persistent Logs¶
Configure log rotation in docker-compose.prod.yml:
services:
  backend:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
        compress: "true"
  workers:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
        compress: "true"
Send to Syslog¶
services:
  backend:
    logging:
      driver: "syslog"
      options:
        tag: "newhires-backend"
  workers:
    logging:
      driver: "syslog"
      options:
        tag: "newhires-workers"
View the tagged entries on the Docker host with `journalctl -t newhires-backend -f` and `journalctl -t newhires-workers -f` (or by grepping `/var/log/syslog`, depending on the host's syslog setup).
Quick Reference¶
# View logs
docker-compose -f docker-compose.prod.yml logs -f backend # Live backend logs
docker-compose -f docker-compose.prod.yml logs -f workers # Live worker logs
docker-compose -f docker-compose.prod.yml logs --tail=100 workers # Last 100 lines
docker-compose -f docker-compose.prod.yml logs -f -t workers # With timestamps
# Search logs
docker logs newhires-workers | grep ERROR
docker logs newhires-workers | grep "🤖"
docker logs newhires-workers | grep -i bedrock
docker logs newhires-workers | grep "tokens used"
# Export logs
docker-compose -f docker-compose.prod.yml logs > logs.txt
docker logs newhires-workers | gzip > workers.gz
# Debug
docker-compose -f docker-compose.prod.yml ps # Service status
docker stats # Resource usage
docker exec -it newhires-workers bash # Shell into worker
docker exec -it newhires-backend bash # Shell into backend
curl http://localhost:8000/health | jq # Backend health
docker exec newhires-db pg_isready -U newhires # Database health
Debugging Checklist¶
When troubleshooting issues, follow this order:
1. Check service status: `docker-compose -f docker-compose.prod.yml ps`
2. Check health endpoints: `curl http://localhost:8000/health`
3. View recent logs: `docker logs newhires-workers --tail=50`
4. Look for errors: `docker logs newhires-workers | grep -i error`
5. Check AWS credentials: `docker exec newhires-workers env | grep AWS`
6. Verify environment: `docker exec newhires-workers env`
7. Test connectivity: shell into the container and test connections
8. Review resource usage: `docker stats`
9. Check disk space: `docker system df`
10. Export logs for analysis: save logs to files for detailed review
Advanced Debugging¶
Enable Verbose Logging¶
Edit `.env` and raise the log verbosity (the exact variable name depends on the application's configuration; `LOG_LEVEL=DEBUG` is a common convention).
Then restart the services so the change takes effect: `docker-compose -f docker-compose.prod.yml restart`
Provides:

- Verbose error messages
- Full stack traces
- Detailed API logs
- AWS Bedrock request/response details
- Database query logs
Watch Logs in Real-Time¶
# Watch all services
docker-compose -f docker-compose.prod.yml logs -f
# Watch workers only (split terminal)
watch -n 1 'docker logs newhires-workers --tail=20'
# Follow errors only
docker logs newhires-workers -f 2>&1 | grep -i error
Database Query Logging¶
Enable PostgreSQL query logging:
# Edit postgresql.conf in container
docker exec -it newhires-db bash
echo "log_statement = 'all'" >> /var/lib/postgresql/data/postgresql.conf
# Restart database
docker-compose -f docker-compose.prod.yml restart db
# View query logs
docker logs newhires-db | grep "LOG:"
Next Steps¶
- Common Issues - Solutions to frequent problems
- AWS Bedrock Errors - Detailed Bedrock troubleshooting
- Docker Problems - Docker-specific issues
- Health Monitoring - Production monitoring setup