A production-ready, scalable backend platform built with modern microservices architecture, featuring event-driven communication, robust authentication, and intelligent rate limiting.
Features • Quick Start • Architecture • API Documentation • Contributing
- Overview
- Project Flow
- Features
- Architecture
- Tech Stack
- Quick Start
- Configuration
- API Documentation
- Development
- Project Structure
- Contributing
- License
The Unified Async Workflow & Notification Platform is a battle-tested, enterprise-grade backend solution that demonstrates best practices in modern microservices architecture. Built with scalability, maintainability, and developer experience in mind, this platform provides a solid foundation for building complex distributed systems.
- Security First: JWT-based authentication with IP-based rate limiting
- Event-Driven: Asynchronous communication via RabbitMQ ensures loose coupling
- Microservices Ready: Independently deployable services with clear domain boundaries
- Container Native: Full Docker support for consistent development and production environments
- High Performance: Redis-backed caching and intelligent request routing via Nginx
- Observable: Comprehensive health checks and monitoring endpoints
The diagram above illustrates the complete request lifecycle through the platform, from client request to event-driven notification delivery.
| Feature | Description |
|---|---|
| API Gateway | Centralized entry point with JWT validation and Redis-backed rate limiting |
| User Management | Complete identity and access management with Redis caching |
| Task Orchestration | Full CRUD operations with event publishing |
| Real-time Notifications | Event-driven notification system with reliable delivery |
| Rate Limiting | Redis-backed IP-based throttling (100 req/12 min) to prevent abuse |
| Response Caching | Redis caching for user profiles with automatic TTL expiration |
| Event Streaming | RabbitMQ-powered asynchronous messaging with confirm channels |
| Health Monitoring | Built-in health endpoints for all services |
| Containerization | Docker & Docker Compose for reproducible environments |
- Scalable Architecture: Horizontal scaling support for all services
- Database per Service: MongoDB Atlas with isolated databases
- Redis Caching: User profile caching with automatic cache invalidation (1-hour TTL)
- Intelligent Rate Limiting: Redis-backed IP-based rate limiting (100 requests per 12 minutes)
- Reverse Proxy: Nginx for load balancing and SSL termination
- Message Reliability: Confirmed RabbitMQ channels for guaranteed delivery
- Clean Architecture: Service-Controller pattern for better separation of concerns
- Environment-based Config: Secure configuration management via environment variables
- API Documentation: Postman collections for easy exploration
┌─────────────┐
│ Client │
└──────┬──────┘
│
▼
┌─────────────────────────────────────────┐
│ Nginx Reverse Proxy │
│ (Port 80/443) │
└──────────────────┬──────────────────────┘
│
▼
┌──────────────────────────────────────────┐
│ API Gateway (Port 3000) │
│ • JWT Validation │
│ • Rate Limiting (Redis) │
│ └─> 100 requests / 12 minutes │
│ • Request Routing │
└───┬──────────────┬──────────────┬────────┘
│ │ │
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌──────────────┐
│ User │ │ Task │ │ Notification │
│ Service │ │ Service │ │ Service │
│ (3001) │ │ (3002) │ │ (3003) │
│ │ │ │ │ │
│ • Redis │ │ RabbitMQ│ │ RabbitMQ │
│ Cache │ │ Publish │ │ Consumer │
│ • MVC │ │ │ │ │
│ │ │ │ │ │
│ MongoDB │ │ MongoDB │ │ MongoDB │
└────┬────┘ └────┬────┘ └──────▲───────┘
│ │ │
▼ │ RabbitMQ │
┌─────────┐ └────────────────┘
│ Redis │ (task_created,
│ Cache │ task_updated,
└─────────┘ task_deleted queues)
- Client Request → Nginx forwards to API Gateway
- Rate Limiting → Redis checks request quotas (100 requests per 12 minutes per IP)
- Authentication → Gateway validates JWT token
- Service Routing → Request forwarded to appropriate microservice
- Cache Check (User Service) → Redis cache lookup for user profiles
- Database Query → MongoDB query on cache miss, result cached for 1 hour
- Event Publishing (Task Service) → Publishes events to RabbitMQ
- Event Consumption (Notification Service) → Processes events asynchronously
For detailed architecture diagrams and low-level design, see information/Architecture.md.
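The publish/consume hop in the flow above can be sketched with an in-memory stand-in for RabbitMQ. The real services use amqplib confirm channels; the `Broker` class, queue name, and task fields below are illustrative only:

```javascript
// Minimal in-memory stand-in for the RabbitMQ hop between the Task and
// Notification services. Real code uses amqplib's createConfirmChannel();
// this Broker class exists only to make the event flow concrete.
class Broker {
  constructor() { this.handlers = new Map(); }
  consume(queue, handler) { this.handlers.set(queue, handler); }
  publish(queue, message) {
    const handler = this.handlers.get(queue);
    if (handler) handler(message);
    return true; // a confirm channel resolves once the broker acks the publish
  }
}

const broker = new Broker();
const delivered = [];

// Notification Service side: consume task events and persist a notification.
broker.consume('task_created', (event) => {
  delivered.push({ type: 'TASK_CREATED', taskId: event.id });
});

// Task Service side: publish only after the MongoDB write succeeds.
function createTask(task) {
  const saved = { id: 'task-1', ...task }; // stand-in for the Mongo insert
  broker.publish('task_created', saved);   // step 7: event publishing
  return saved;
}

createTask({ title: 'Write docs' });       // step 8 runs synchronously here
```

In the real platform the consumer runs in a separate process and delivery is asynchronous; the confirm channel is what lets the Task Service know the broker has taken responsibility for the message.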
The User Service implements intelligent caching for optimal performance:
Cache Flow:
- Cache Hit: User profile request → Check Redis → Return cached data (fast)
- Cache Miss: User profile request → Check Redis → Query MongoDB → Cache result (TTL: 1 hour) → Return data
Benefits:
- Reduced Database Load: Frequent profile requests served from memory
- Improved Response Time: Redis responds in microseconds vs MongoDB milliseconds
- Automatic Expiration: 1-hour TTL ensures data freshness
- Graceful Degradation: Falls back to database if Redis is unavailable
Implementation: See services/user-service/src/services/user.service.js
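The cache-aside flow above can be condensed into one function. This sketch uses in-memory stand-ins: `redis` is a Map (real calls are async Redis GET/SETEX) and `db` simulates the MongoDB lookup; all names are illustrative:

```javascript
// Cache-aside lookup of the kind the User Service performs, with in-memory
// stand-ins for Redis and MongoDB. Real Redis/Mongo calls are asynchronous.
const ONE_HOUR_MS = 60 * 60 * 1000;
const redis = new Map(); // key -> { value, expiresAt }
const db = new Map([['u1', { id: 'u1', name: 'Ada' }]]);

let dbQueries = 0; // counts cache misses that reach the database

function getUserProfile(userId, now = Date.now()) {
  const key = `user:${userId}`;
  const cached = redis.get(key);
  if (cached && cached.expiresAt > now) {
    return cached.value;               // cache hit: served from memory
  }
  dbQueries += 1;
  const user = db.get(userId) || null; // cache miss: query the database
  if (user) {
    // TTL of 1 hour keeps the data fresh without manual invalidation
    redis.set(key, { value: user, expiresAt: now + ONE_HOUR_MS });
  }
  return user;
}

getUserProfile('u1'); // miss -> queries the database, caches the result
getUserProfile('u1'); // hit  -> no additional database query
```

Graceful degradation falls out of the same shape: if the Redis read throws, skip straight to the database query.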
The API Gateway implements distributed rate limiting using Redis:
Rate Limiter Configuration:
{
windowMs: 12 * 60 * 1000, // 12 minute window
max: 100, // 100 requests per window
store: RedisStore, // Shared across gateway instances
standardHeaders: true // RateLimit-* headers
}
How it Works:
- Client makes request → Gateway intercepts
- Redis stores request count per IP address
- Counter increments with each request
- If limit exceeded → Return 429 (Too Many Requests)
- Counter resets after 12 minutes
Benefits:
- DDoS Protection: Prevents request flooding
- Fair Resource Allocation: Equal limits for all clients
- Distributed: Works across multiple gateway instances
- Transparent: Rate limit headers inform clients of remaining quota
Implementation: See services/api-gateway/src/middlewares/rateLimiter.js
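The fixed-window logic behind that configuration can be sketched in plain JavaScript, with a Map standing in for the shared Redis store (the gateway itself uses express-rate-limit with rate-limit-redis; this is an illustration, not the gateway's code):

```javascript
// Fixed-window rate limiter equivalent to the gateway's configuration:
// 100 requests per 12-minute window, counted per IP. A Map stands in for
// the Redis store that real deployments share across gateway instances.
const WINDOW_MS = 12 * 60 * 1000;
const MAX_REQUESTS = 100;
const counters = new Map(); // ip -> { count, windowStart }

function checkRateLimit(ip, now = Date.now()) {
  let entry = counters.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    entry = { count: 0, windowStart: now }; // window expired: reset counter
    counters.set(ip, entry);
  }
  entry.count += 1;
  const allowed = entry.count <= MAX_REQUESTS;
  return {
    allowed,                                             // false -> HTTP 429
    remaining: Math.max(0, MAX_REQUESTS - entry.count),  // RateLimit-Remaining
    resetAt: entry.windowStart + WINDOW_MS,              // RateLimit-Reset
  };
}
```

The return values map directly onto the standard `RateLimit-*` headers the gateway sends, so clients can back off before hitting a 429.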
The User Service follows clean architecture principles with a three-layer design:
Layer Separation:
- Routes (user.routes.js): HTTP endpoint definitions
- Controllers (user.controller.js): Request/response handling and validation
- Services (user.service.js): Business logic, database operations, caching
Benefits:
- Separation of Concerns: Each layer has a single responsibility
- Testability: Business logic isolated from HTTP concerns
- Reusability: Service methods can be called from multiple controllers
- Maintainability: Clear structure makes code easy to understand and modify
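The three layers can be condensed into one file to show the boundary between them. The model stub and data below are illustrative; function names simply mirror the layout described above:

```javascript
// Three-layer split used by the User Service, condensed into one sketch.
// UserModel is a stand-in for the Mongoose model.
const UserModel = { findById: (id) => ({ id, name: 'Ada' }) };

// Service layer (user.service.js): business logic only, no HTTP concerns.
const userService = {
  getProfile(userId) {
    if (!userId) throw new Error('userId is required');
    return UserModel.findById(userId);
  },
};

// Controller layer (user.controller.js): translates HTTP <-> service calls.
const userController = {
  getProfile(req, res) {
    try {
      const user = userService.getProfile(req.userId);
      res.status(200).json(user);
    } catch (err) {
      res.status(400).json({ error: err.message });
    }
  },
};

// Route layer (user.routes.js) would then bind:
//   router.get('/profile', userController.getProfile)
```

Because `userService.getProfile` never touches `req` or `res`, it can be unit-tested and reused without spinning up an HTTP server.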
- Runtime: Node.js 20+ (Express.js framework)
- Language: JavaScript (ES6+)
- Authentication: JSON Web Tokens (JWT)
- Password Hashing: bcrypt
- Primary Database: MongoDB Atlas (cloud-hosted, database-per-service pattern)
- Cache Layer: Redis 7.x (user profile caching, rate limiting)
- Message Queue: RabbitMQ 3.x (event-driven communication with confirm channels)
- Reverse Proxy: Nginx (load balancing, SSL termination)
- Containerization: Docker & Docker Compose
- Container Orchestration: Docker Compose
- express: Web framework
- express-rate-limit: Rate limiting middleware
- rate-limit-redis: Redis store for distributed rate limiting
- redis: Redis client for Node.js
- jsonwebtoken: JWT authentication
- bcrypt: Password hashing
- mongoose: MongoDB ODM
- amqplib: RabbitMQ client
- API Testing: Postman (3 collection files provided)
- Version Control: Git
- Scripts: PowerShell (Windows) & Bash (Linux/macOS)
Before you begin, ensure you have the following installed:
- Docker Desktop (Windows/macOS) or Docker Engine (Linux) - Install Docker
- Git - Install Git
- Node.js 20+ (optional, for local development) - Install Node.js
- Clone the repository
git clone https://github.com/yourusername/unified-async-platform.git
cd unified-async-platform
- Configure environment variables
Create a .env file in the project root:
cp .env.example .env
Edit .env with your configuration (see the Configuration section).
- Start the platform
Windows (PowerShell):
.\scripts\windows\dev.ps1
macOS/Linux (Bash):
bash scripts/posix/dev.sh
Or use Docker Compose directly:
docker-compose up --build -d
- Verify the installation
# Check API Gateway health
curl http://localhost/health
# Check RabbitMQ Management UI
open http://localhost:15672
Success! Your platform is now running.
All services are configured via environment variables in the root .env file.
API_GATEWAY_PORT=3000
JWT_SECRET=your-super-secret-jwt-key-change-this-in-production
REDIS_URL=redis://redis:6379
USER_SERVICE_PORT=3001
MONGO_URI_USER_SERVICE=mongodb+srv://user:pass@cluster.mongodb.net/
DB_NAME_USER_SERVICE=users_db
REDIS_URL=redis://redis:6379
JWT_SECRET=your-super-secret-jwt-key-change-this-in-production
TASK_SERVICE_PORT=3002
MONGO_URI_TASK_SERVICE=mongodb+srv://user:pass@cluster.mongodb.net/
DB_NAME_TASK_SERVICE=tasks_db
NOTIFICATION_SERVICE_PORT=3003
MONGO_URI_NOTIFICATION_SERVICE=mongodb+srv://user:pass@cluster.mongodb.net/
DB_NAME_NOTIFICATION_SERVICE=notifications_db
REDIS_HOST=redis
REDIS_PORT=6379
RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672
Important Security Notes:
- Never commit .env files to version control
- Use strong, unique secrets for JWT_SECRET
- Rotate credentials regularly
- Use secret management solutions in production (AWS Secrets Manager, HashiCorp Vault, etc.)
- Implement database access controls and network policies
http://localhost/api
Most endpoints require JWT authentication. Include the token in the Authorization header:
Authorization: Bearer <your_jwt_token>
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/users/register | Create a new user account |
| POST | /api/users/login | Authenticate and receive a JWT token |
User Service
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/users/profile | Get the current user's profile |
Task Service
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/tasks | List all user tasks |
| POST | /api/tasks/upload | Create a new task |
| PUT | /api/tasks/:id | Update an existing task |
| DELETE | /api/tasks/:id | Delete a task |
Notification Service
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/notifications/get-all | Retrieve all notifications |
| GET | /api/notifications/latest | Get the most recent notifications |
Register a new user:
curl -X POST http://localhost/api/users/register \
-H "Content-Type: application/json" \
-d '{
"email": "user@example.com",
"password": "SecurePassword123",
"name": "John Doe"
}'
Login and get a JWT:
curl -X POST http://localhost/api/users/login \
-H "Content-Type: application/json" \
-d '{
"email": "user@example.com",
"password": "SecurePassword123"
}'
Create a task (with JWT):
curl -X POST http://localhost/api/tasks/upload \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_JWT_TOKEN" \
-d '{
"title": "Complete project documentation",
"description": "Write comprehensive README",
"priority": "high"
}'
Pre-configured Postman collections are available in the postman collection files/ directory. Import them for instant API exploration with:
- Pre-populated request examples
- Environment variables
- Automated authentication flow
Start all services:
docker-compose up --build
Start specific services:
docker-compose up api-gateway user-service
View logs:
# All services
docker-compose logs -f
# Specific service
docker-compose logs -f task-service
- Make your code changes
- Rebuild the affected service:
docker-compose up --build <service-name>
- Test your changes
- Commit and push
# Run tests (when implemented)
npm test
# Run linting
npm run lint
Windows (PowerShell):
.\scripts\windows\clean-docker.ps1
macOS/Linux (Bash):
bash scripts/posix/clean-docker.sh
Manual cleanup:
# Stop all services
docker-compose down
# Remove volumes (WARNING: deletes all data)
docker-compose down -v
# Remove all containers, networks, and images
docker system prune -a
- Update JWT_SECRET to a strong, unique value
- Configure production MongoDB connection strings
- Set up SSL/TLS certificates for Nginx
- Configure proper CORS settings
- Enable production logging and monitoring
- Set up automated backups for databases
- Configure firewall rules and security groups
- Enable HTTPS and redirect HTTP traffic
- Set up CI/CD pipelines
- Configure resource limits in Docker Compose
Docker Swarm:
docker stack deploy -c docker-compose.yml unified-platform
Kubernetes: (Helm charts coming soon)
Cloud Providers:
- AWS ECS/EKS
- Google Cloud Run/GKE
- Azure Container Instances/AKS
Each service exposes health endpoints for monitoring:
| Service | Health Endpoint |
|---|---|
| API Gateway | http://localhost/health |
| User Service | http://localhost/api/users/profile |
| Task Service | http://localhost/api/tasks/home |
| Notification Service | http://localhost/home/notifications/health |
- RabbitMQ Management UI: http://localhost:15672
  - Default credentials: guest/guest (change in production!)
  - Monitor queues, connections, and message rates
- Redis: connect via the Redis CLI
docker-compose exec redis redis-cli
Default configuration:
- Limit: 100 requests per 12 minutes
- Scope: Per IP address
- Storage: Redis (distributed rate limiting)
- Response: HTTP 429 (Too Many Requests)
- Headers: Standard RateLimit-* headers included in responses
Rate limiting is applied globally at the API Gateway level, protecting all downstream services from abuse and DDoS attacks.
To adjust rate limits, modify services/api-gateway/src/middlewares/rateLimiter.js.
Docker is not running
Solution: Start Docker Desktop and ensure the Docker daemon is running.
# Check Docker status
docker info
Redis connection errors
Solution: Verify Redis configuration in .env:
REDIS_URL=redis://redis:6379
(use the container name, not localhost)
Check Redis logs:
docker-compose logs redis
Test the Redis connection:
# Connect to Redis CLI
docker-compose exec redis redis-cli
# Test ping
127.0.0.1:6379> PING
# Should return: PONG
# Check cached keys
127.0.0.1:6379> KEYS *
If Redis fails, the application will continue to work, but:
- Rate limiting may not function correctly
- User profile caching will be bypassed (direct database queries)
RabbitMQ not ready
Solution: Services will automatically retry connection. Check RabbitMQ status:
docker-compose logs rabbitmq
Access the management UI at http://localhost:15672 to verify operation.
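The connection retry the services perform can be sketched as exponential backoff. `connectWithRetry` and its parameters are illustrative, not the services' actual code, and the delays are shortened for the sketch:

```javascript
// Exponential-backoff retry of the kind a service uses while RabbitMQ is
// still starting up. The `connect` function is injected so this works for
// any flaky connection attempt.
async function connectWithRetry(connect, { retries = 5, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await connect();
    } catch (err) {
      if (attempt === retries) throw err;       // retries exhausted: give up
      const delay = baseDelayMs * 2 ** attempt; // 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

In production you would typically also cap the delay and add jitter so restarting services do not reconnect in lockstep.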
MongoDB authentication issues
Solution:
- Verify connection strings in .env
- Ensure the MongoDB Atlas IP whitelist includes your IP
- Check database user permissions
- Never commit credentials to Git
# Test connection
docker-compose exec user-service node -e "console.log(process.env.MONGO_URI_USER_SERVICE)"
Rate limit 429 responses
Solution:
- Current limit: 100 requests per 12 minutes per IP
- Reduce request frequency or wait for window to reset
- Check current rate limit status in response headers:
  - RateLimit-Limit: maximum requests allowed
  - RateLimit-Remaining: requests remaining in the current window
  - RateLimit-Reset: time when the limit resets
To adjust rate limits for development, modify services/api-gateway/src/middlewares/rateLimiter.js:
windowMs: 12 * 60 * 1000, // Change time window
max: 100,                 // Change request limit
Note: Rate limiting is essential in production to prevent abuse. Don't disable it unless absolutely necessary.
Port already in use
Solution:
# Find process using port 80
lsof -i :80 # macOS/Linux
netstat -ano | findstr :80 # Windows
# Kill the process or change the port mapping in docker-compose.yml
- Check the LLD.md for detailed architecture
- Open an issue
- Start a discussion
unified-async-platform/
│
├── infra/ # Infrastructure components
│ ├── nginx/ # Nginx reverse proxy configuration
│ │ ├── Dockerfile
│ │ └── nginx.conf
│ ├── redis/ # Redis cache configuration
│ │ ├── Dockerfile
│ │ └── redis.conf
│ └── rabbitmq/ # RabbitMQ message broker configuration
│ ├── Dockerfile
│ ├── rabbitmq.conf
│ └── definitions.json
│
├── services/ # Microservices
│ ├── api-gateway/ # API Gateway service
│ │ ├── src/
│ │ │ ├── config/
│ │ │ │ ├── redis.js # Redis client configuration
│ │ │ │ └── services.js # Service routing configuration
│ │ │ ├── middlewares/
│ │ │ │ ├── authenticate.middleware.js
│ │ │ │ ├── authorize.middleware.js
│ │ │ │ └── rateLimiter.js # Redis-backed rate limiter
│ │ │ ├── routes/
│ │ │ │ └── index.js # Dynamic route generation
│ │ │ ├── utils/
│ │ │ │ └── proxy.js # Service proxy utility
│ │ │ ├── app.js # Express app with global rate limiting
│ │ │ └── server.js
│ │ ├── Dockerfile
│ │ └── package.json
│ │
│ ├── user-service/ # User management service
│ │ ├── src/
│ │ │ ├── config/
│ │ │ │ ├── db.js # MongoDB connection
│ │ │ │ └── redis.js # Redis cache client
│ │ │ ├── controllers/
│ │ │ │ └── user.controller.js # Request handlers
│ │ │ ├── models/
│ │ │ │ └── User.model.js
│ │ │ ├── routes/
│ │ │ │ └── user.routes.js
│ │ │ ├── services/
│ │ │ │ └── user.service.js # Business logic with Redis caching
│ │ │ └── server.js # Server initialization with Redis
│ │ ├── tests/
│ │ │ └── user.test.js
│ │ ├── Dockerfile
│ │ └── package.json
│ │
│ ├── task-service/ # Task orchestration service
│ │ ├── src/
│ │ │ ├── config/
│ │ │ │ └── db.js
│ │ │ ├── models/
│ │ │ │ └── Task.model.js
│ │ │ ├── app.js
│ │ │ └── server.js # RabbitMQ confirm channel setup
│ │ ├── tests/
│ │ │ └── task.test.js
│ │ ├── Dockerfile
│ │ └── package.json
│ │
│ └── notification-service/ # Notification handling service
│ ├── src/
│ │ ├── config/
│ │ │ └── db.js
│ │ ├── models/
│ │ │ └── Notification.model.js
│ │ ├── app.js
│ │ └── server.js
│ ├── tests/
│ │ └── notification.test.js
│ ├── Dockerfile
│ └── package.json
│
├── scripts/ # Automation scripts
│ ├── windows/ # PowerShell scripts for Windows
│ │ ├── dev.ps1
│ │ └── clean-docker.ps1
│ └── posix/ # Bash scripts for macOS/Linux
│ ├── dev.sh
│ └── clean-docker.sh
│
├── postman collection files/ # API testing collections
│ ├── Production-Backend-Nginx Project-Improved_Version.postman_collection.json
│ ├── Production-Backend-Nginx Project.postman_collection.json
│ └── Production-Backend-Project.postman_collection.json
│
├── information/ # Documentation assets
│ ├── Architecture.md
│ └── processflow.png
│
├── docker-compose.yml # Docker Compose orchestration
├── .env.example # Environment variables template
├── .gitignore
├── LICENSE
└── Readme.md                      # This file
We welcome contributions from the community! Here's how you can help:
- Fork the repository
- Create a feature branch
git checkout -b feature/amazing-feature
- Make your changes
- Follow existing code style
- Add tests for new features
- Update documentation as needed
- Commit your changes
git commit -m 'Add amazing feature'
- Push to your branch
git push origin feature/amazing-feature
- Open a Pull Request
- Keep changes focused and scoped to a single concern
- Write clear, descriptive commit messages
- Update documentation for new features
- Add tests for bug fixes and new functionality
- Ensure all tests pass before submitting PR
- Never commit secrets or sensitive configuration
- Avoid large, monolithic PRs
Please be respectful and constructive in all interactions. We're building this together!
This project is licensed under the MIT License - see the LICENSE file for details.
Built with modern open-source technologies:
