
Unified Async Workflow & Notification Platform

Node.js Express.js MongoDB Redis RabbitMQ Nginx Docker Postman

A production-ready, scalable backend platform built with modern microservices architecture, featuring event-driven communication, robust authentication, and intelligent rate limiting.

Features · Quick Start · Architecture · API Documentation · Contributing




Overview

The Unified Async Workflow & Notification Platform is a battle-tested, enterprise-grade backend solution that demonstrates best practices in modern microservices architecture. Built with scalability, maintainability, and developer experience in mind, this platform provides a solid foundation for building complex distributed systems.

What Makes This Platform Special?

  • Security First: JWT-based authentication with IP-based rate limiting
  • Event-Driven: Asynchronous communication via RabbitMQ ensures loose coupling
  • Microservices Ready: Independently deployable services with clear domain boundaries
  • Container Native: Full Docker support for consistent development and production environments
  • High Performance: Redis-backed caching and intelligent request routing via Nginx
  • Observable: Comprehensive health checks and monitoring endpoints

Project Flow

Process flow diagram: information/processflow.png

The diagram illustrates the complete request lifecycle through the platform, from client request to event-driven notification delivery.


Features

Core Capabilities

| Feature | Description |
| --- | --- |
| API Gateway | Centralized entry point with JWT validation and Redis-backed rate limiting |
| User Management | Complete identity and access management with Redis caching |
| Task Orchestration | Full CRUD operations with event publishing |
| Real-time Notifications | Event-driven notification system with reliable delivery |
| Rate Limiting | Redis-backed IP-based throttling (100 req/12 min) to prevent abuse |
| Response Caching | Redis caching for user profiles with automatic TTL expiration |
| Event Streaming | RabbitMQ-powered asynchronous messaging with confirm channels |
| Health Monitoring | Built-in health endpoints for all services |
| Containerization | Docker & Docker Compose for reproducible environments |

Technical Highlights

  • Scalable Architecture: Horizontal scaling support for all services
  • Database per Service: MongoDB Atlas with isolated databases
  • Redis Caching: User profile caching with automatic cache invalidation (1-hour TTL)
  • Intelligent Rate Limiting: Redis-backed IP-based rate limiting (100 requests per 12 minutes)
  • Reverse Proxy: Nginx for load balancing and SSL termination
  • Message Reliability: Confirmed RabbitMQ channels for guaranteed delivery
  • Clean Architecture: Service-Controller pattern for better separation of concerns
  • Environment-based Config: Secure configuration management via environment variables
  • API Documentation: Postman collections for easy exploration

Architecture

High-Level Architecture

┌─────────────┐
│   Client    │
└──────┬──────┘
       │
       ▼
┌─────────────────────────────────────────┐
│         Nginx Reverse Proxy             │
│         (Port 80/443)                   │
└──────────────────┬──────────────────────┘
                   │
                   ▼
┌──────────────────────────────────────────┐
│         API Gateway (Port 3000)          │
│  • JWT Validation                        │
│  • Rate Limiting (Redis)                 │
│    └─> 100 requests / 12 minutes         │
│  • Request Routing                       │
└───┬──────────────┬──────────────┬────────┘
    │              │              │
    ▼              ▼              ▼
┌─────────┐  ┌─────────┐  ┌──────────────┐
│  User   │  │  Task   │  │ Notification │
│ Service │  │ Service │  │   Service    │
│ (3001)  │  │ (3002)  │  │   (3003)     │
│         │  │         │  │              │
│ • Redis │  │ RabbitMQ│  │   RabbitMQ   │
│  Cache  │  │ Publish │  │   Consumer   │
│ • MVC   │  │         │  │              │
│         │  │         │  │              │
│ MongoDB │  │ MongoDB │  │   MongoDB    │
└────┬────┘  └────┬────┘  └──────▲───────┘
     │            │                │
     ▼            │   RabbitMQ     │
┌─────────┐       └────────────────┘
│  Redis  │       (task_created,
│  Cache  │        task_updated,
└─────────┘        task_deleted queues)

Data Flow

  1. Client Request → Nginx forwards to API Gateway
  2. Rate Limiting → Redis checks request quotas (100 requests per 12 minutes per IP)
  3. Authentication → Gateway validates JWT token
  4. Service Routing → Request forwarded to appropriate microservice
  5. Cache Check (User Service) → Redis cache lookup for user profiles
  6. Database Query → MongoDB query on cache miss, result cached for 1 hour
  7. Event Publishing (Task Service) → Publishes events to RabbitMQ
  8. Event Consumption (Notification Service) → Processes events asynchronously
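Steps 7 and 8 above can be sketched with amqplib-style channel helpers. This is an illustrative sketch, not the repository's actual code; the channel objects are injected so the helpers work with real amqplib channels or test doubles.

```javascript
// Publish on a confirm channel: the broker confirms (or nacks) every publish,
// which is what gives the platform its delivery guarantee.
function publishEvent(channel, queue, event) {
  return new Promise((resolve, reject) => {
    const payload = Buffer.from(JSON.stringify(event));
    channel.sendToQueue(queue, payload, { persistent: true }, (err) =>
      err ? reject(err) : resolve()
    );
  });
}

// Consume with explicit acks: ack only after the handler succeeds,
// so failed messages are redelivered.
function consumeEvents(channel, queue, handler) {
  return channel.consume(queue, async (msg) => {
    if (msg === null) return; // consumer cancelled by the broker
    try {
      await handler(JSON.parse(msg.content.toString()));
      channel.ack(msg);
    } catch (err) {
      channel.nack(msg, false, true); // requeue for another attempt
    }
  });
}

// Wiring with amqplib might look like:
// const conn = await require('amqplib').connect(process.env.RABBITMQ_URL);
// const channel = await conn.createConfirmChannel();
// await channel.assertQueue('task_created', { durable: true });
// await publishEvent(channel, 'task_created', { taskId: '123' });
```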

For detailed architecture diagrams and low-level design, see information/Architecture.md.


Key Implementation Details

Redis Caching Strategy

The User Service implements intelligent caching for optimal performance:

Cache Flow:

  1. Cache Hit: User profile request → Check Redis → Return cached data (fast)
  2. Cache Miss: User profile request → Check Redis → Query MongoDB → Cache result (TTL: 1 hour) → Return data

Benefits:

  • Reduced Database Load: Frequent profile requests served from memory
  • Improved Response Time: Redis responds in microseconds vs MongoDB milliseconds
  • Automatic Expiration: 1-hour TTL ensures data freshness
  • Graceful Degradation: Falls back to database if Redis is unavailable

Implementation: See services/user-service/src/services/user.service.js
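The cache-aside flow above can be sketched as follows. Names here are hypothetical (the real logic lives in services/user-service/src/services/user.service.js); the Redis and database clients are injected, which is also what makes graceful degradation straightforward.

```javascript
const CACHE_TTL_SECONDS = 60 * 60; // 1-hour TTL, matching the platform default

async function getUserProfile(userId, { redis, db }) {
  const cacheKey = `user:profile:${userId}`;

  // 1. Try the cache first; treat Redis errors as a miss (graceful degradation).
  try {
    const cached = await redis.get(cacheKey);
    if (cached) return JSON.parse(cached); // cache hit
  } catch (_) { /* Redis unavailable: fall through to the database */ }

  // 2. Cache miss: query the primary store.
  const profile = await db.findUserById(userId);

  // 3. Populate the cache with a TTL so stale entries expire automatically.
  try {
    await redis.set(cacheKey, JSON.stringify(profile), { EX: CACHE_TTL_SECONDS });
  } catch (_) { /* caching is best-effort */ }

  return profile;
}
```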

Rate Limiting Architecture

The API Gateway implements distributed rate limiting using Redis:

Rate Limiter Configuration:

{
  windowMs: 12 * 60 * 1000,  // 12 minute window
  max: 100,                   // 100 requests per window
  store: RedisStore,          // Shared across gateway instances
  standardHeaders: true       // RateLimit-* headers
}

How it Works:

  1. Client makes request → Gateway intercepts
  2. Redis stores request count per IP address
  3. Counter increments with each request
  4. If limit exceeded → Return 429 (Too Many Requests)
  5. Counter resets after 12 minutes
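The steps above amount to a fixed-window counter per IP. A minimal sketch (not the gateway's actual middleware, which uses express-rate-limit with a Redis store) might look like this; `store.increment` is an assumed async counter-with-TTL, e.g. Redis INCR plus EXPIRE:

```javascript
function createRateLimiter({ windowMs, max, store }) {
  return async function rateLimiter(req, res, next) {
    const key = `ratelimit:${req.ip}`; // one counter per IP address
    const count = await store.increment(key, windowMs); // increments; resets after the window
    res.setHeader('RateLimit-Limit', String(max));
    res.setHeader('RateLimit-Remaining', String(Math.max(0, max - count)));
    if (count > max) {
      res.statusCode = 429; // Too Many Requests
      return res.end('Too Many Requests');
    }
    next(); // within quota: continue to routing
  };
}
```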

Benefits:

  • DDoS Protection: Prevents request flooding
  • Fair Resource Allocation: Equal limits for all clients
  • Distributed: Works across multiple gateway instances
  • Transparent: Rate limit headers inform clients of remaining quota

Implementation: See services/api-gateway/src/middlewares/rateLimiter.js

Service Layer Pattern

The User Service follows clean architecture principles with a three-layer design:

Layer Separation:

Benefits:

  • Separation of Concerns: Each layer has a single responsibility
  • Testability: Business logic isolated from HTTP concerns
  • Reusability: Service methods can be called from multiple controllers
  • Maintainability: Clear structure makes code easy to understand and modify
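The three layers can be sketched as below. These handlers are hypothetical (the repository's real versions live under services/user-service/src/); the point is that each layer depends only on the one below it, which keeps business logic testable without HTTP.

```javascript
// Service layer: pure business logic, no req/res objects.
const userService = {
  async getProfile(userId, deps) {
    return deps.db.findUserById(userId);
  },
};

// Controller layer: translates HTTP requests into service calls
// and service results into HTTP responses.
function makeProfileController(service, deps) {
  return async function getProfileHandler(req, res) {
    const profile = await service.getProfile(req.userId, deps);
    res.json(profile);
  };
}

// Route layer: binds a URL and middleware chain to a controller, e.g.
// router.get('/profile', authenticate, makeProfileController(userService, deps));
```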

Tech Stack

Backend Services

  • Runtime: Node.js 20+ (Express.js framework)
  • Language: JavaScript (ES6+)
  • Authentication: JSON Web Tokens (JWT)
  • Password Hashing: bcrypt

Data Layer

  • Primary Database: MongoDB Atlas (cloud-hosted, database-per-service pattern)
  • Cache Layer: Redis 7.x (user profile caching, rate limiting)
  • Message Queue: RabbitMQ 3.x (event-driven communication with confirm channels)

Infrastructure & Networking

  • Reverse Proxy: Nginx (load balancing, SSL termination)
  • Containerization & Orchestration: Docker & Docker Compose

Key Node.js Packages

  • express: Web framework
  • express-rate-limit: Rate limiting middleware
  • rate-limit-redis: Redis store for distributed rate limiting
  • redis: Redis client for Node.js
  • jsonwebtoken: JWT authentication
  • bcrypt: Password hashing
  • mongoose: MongoDB ODM
  • amqplib: RabbitMQ client

DevOps & Tooling

  • API Testing: Postman (3 collection files provided)
  • Version Control: Git
  • Scripts: PowerShell (Windows) & Bash (Linux/macOS)

Quick Start

Prerequisites

Before you begin, ensure you have the following installed:

  • Docker & Docker Compose
  • Git

Installation

  1. Clone the repository

git clone https://github.com/yourusername/unified-async-platform.git
cd unified-async-platform

  2. Configure environment variables

Create a .env file in the project root:

cp .env.example .env

Edit .env with your configuration (see Configuration section).

  3. Start the platform

Windows (PowerShell):

.\scripts\windows\dev.ps1

macOS/Linux (Bash):

bash scripts/posix/dev.sh

Or use Docker Compose directly:

docker-compose up --build -d

  4. Verify the installation

# Check API Gateway health
curl http://localhost/health

# Check RabbitMQ Management UI
open http://localhost:15672

Success! Your platform is now running.


Configuration

All services are configured via environment variables in the root .env file.

Required Environment Variables

API Gateway

API_GATEWAY_PORT=3000
JWT_SECRET=your-super-secret-jwt-key-change-this-in-production
REDIS_URL=redis://redis:6379

User Service

USER_SERVICE_PORT=3001
MONGO_URI_USER_SERVICE=mongodb+srv://user:pass@cluster.mongodb.net/
DB_NAME_USER_SERVICE=users_db
REDIS_URL=redis://redis:6379
JWT_SECRET=your-super-secret-jwt-key-change-this-in-production

Task Service

TASK_SERVICE_PORT=3002
MONGO_URI_TASK_SERVICE=mongodb+srv://user:pass@cluster.mongodb.net/
DB_NAME_TASK_SERVICE=tasks_db

Notification Service

NOTIFICATION_SERVICE_PORT=3003
MONGO_URI_NOTIFICATION_SERVICE=mongodb+srv://user:pass@cluster.mongodb.net/
DB_NAME_NOTIFICATION_SERVICE=notifications_db

Infrastructure

REDIS_HOST=redis
REDIS_PORT=6379
RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672

Security Best Practices

Important Security Notes:

  • Never commit .env files to version control
  • Use strong, unique secrets for JWT_SECRET
  • Rotate credentials regularly
  • Use secret management solutions in production (AWS Secrets Manager, HashiCorp Vault, etc.)
  • Implement database access controls and network policies

API Documentation

Base URL

http://localhost/api

Authentication

Most endpoints require JWT authentication. Include the token in the Authorization header:

Authorization: Bearer <your_jwt_token>
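Gateway-side validation of this header can be sketched as below. This is illustrative, not the repository's authenticate.middleware.js; `verify` is injected, and in the platform it would be `jwt.verify` from the jsonwebtoken package.

```javascript
function makeAuthenticate(verify, secret) {
  return (req, res, next) => {
    const header = req.headers.authorization || '';
    const [scheme, token] = header.split(' ');
    if (scheme !== 'Bearer' || !token) {
      res.statusCode = 401;
      return res.end('Missing or malformed Authorization header');
    }
    try {
      req.user = verify(token, secret); // throws if invalid or expired
      next();
    } catch (_) {
      res.statusCode = 401;
      res.end('Invalid or expired token');
    }
  };
}
```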

Endpoints Overview

Public Endpoints (No Authentication Required)

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/users/register | Create a new user account |
| POST | /api/users/login | Authenticate and receive a JWT token |

Protected Endpoints (JWT Required)

User Service

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/users/profile | Get current user profile |

Task Service

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/tasks | List all user tasks |
| POST | /api/tasks/upload | Create a new task |
| PUT | /api/tasks/:id | Update an existing task |
| DELETE | /api/tasks/:id | Delete a task |

Notification Service

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/notifications/get-all | Retrieve all notifications |
| GET | /api/notifications/latest | Get most recent notifications |

Example Requests

Register a new user:

curl -X POST http://localhost/api/users/register \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "password": "SecurePassword123",
    "name": "John Doe"
  }'

Login and get JWT:

curl -X POST http://localhost/api/users/login \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "password": "SecurePassword123"
  }'

Create a task (with JWT):

curl -X POST http://localhost/api/tasks/upload \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -d '{
    "title": "Complete project documentation",
    "description": "Write comprehensive README",
    "priority": "high"
  }'

Postman Collections

Pre-configured Postman collections are available in the postman collection files/ directory. Import these collections for instant API exploration with:

  • Pre-populated request examples
  • Environment variables
  • Automated authentication flow

Development

Running Locally

Start all services:

docker-compose up --build

Start specific services:

docker-compose up api-gateway user-service

View logs:

# All services
docker-compose logs -f

# Specific service
docker-compose logs -f task-service

Development Workflow

  1. Make your code changes
  2. Rebuild the affected service:
    docker-compose up --build <service-name>
  3. Test your changes
  4. Commit and push

Testing

# Run tests (when implemented)
npm test

# Run linting
npm run lint

Cleaning Up

Windows (PowerShell):

.\scripts\windows\clean-docker.ps1

macOS/Linux (Bash):

bash scripts/posix/clean-docker.sh

Manual cleanup:

# Stop all services
docker-compose down

# Remove volumes (WARNING: deletes all data)
docker-compose down -v

# Remove all containers, networks, and images
docker system prune -a

Deployment

Production Checklist

  • Update JWT_SECRET to a strong, unique value
  • Configure production MongoDB connection strings
  • Set up SSL/TLS certificates for Nginx
  • Configure proper CORS settings
  • Enable production logging and monitoring
  • Set up automated backups for databases
  • Configure firewall rules and security groups
  • Enable HTTPS and redirect HTTP traffic
  • Set up CI/CD pipelines
  • Configure resource limits in Docker Compose

Deployment Options

Docker Swarm:

docker stack deploy -c docker-compose.yml unified-platform

Kubernetes: (Helm charts coming soon)

Cloud Providers:

  • AWS ECS/EKS
  • Google Cloud Run/GKE
  • Azure Container Instances/AKS

Monitoring & Health Checks

Health Endpoints

Each service exposes health endpoints for monitoring:

| Service | Health Endpoint |
| --- | --- |
| API Gateway | http://localhost/health |
| User Service | http://localhost/api/users/profile |
| Task Service | http://localhost/api/tasks/home |
| Notification Service | http://localhost/home/notifications/health |
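A health endpoint of the kind listed above can be as simple as the handler below. This is a sketch, not the services' actual code; `SERVICE_NAME` is an assumed environment variable for illustration.

```javascript
function healthHandler(req, res) {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'application/json');
  res.end(
    JSON.stringify({
      status: 'ok',
      service: process.env.SERVICE_NAME || 'unknown', // hypothetical env var
      uptime: process.uptime(), // seconds since the process started
    })
  );
}
// With Express: app.get('/health', healthHandler);
```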

Infrastructure Monitoring

  • RabbitMQ Management UI: http://localhost:15672

    • Default credentials: guest/guest (change in production!)
    • Monitor queues, connections, and message rates
  • Redis: Connect via Redis CLI

    docker-compose exec redis redis-cli

Rate Limiting

Default configuration:

  • Limit: 100 requests per 12 minutes
  • Scope: Per IP address
  • Storage: Redis (distributed rate limiting)
  • Response: HTTP 429 (Too Many Requests)
  • Headers: Standard RateLimit-* headers included in responses

Rate limiting is applied globally at the API Gateway level, protecting all downstream services from abuse and DDoS attacks.

To adjust rate limits, modify services/api-gateway/src/middlewares/rateLimiter.js.


Troubleshooting

Common Issues and Solutions

Docker is not running

Solution: Start Docker Desktop and ensure the Docker daemon is running.

# Check Docker status
docker info

Redis connection errors

Solution: Verify Redis configuration in .env:

  • REDIS_URL=redis://redis:6379 (use container name, not localhost)

Check Redis logs:

docker-compose logs redis

Test Redis connection:

# Connect to Redis CLI
docker-compose exec redis redis-cli

# Test ping
127.0.0.1:6379> PING
# Should return: PONG

# Check cached keys
127.0.0.1:6379> KEYS *

If Redis fails, the application will continue to work but:

  • Rate limiting may not function correctly
  • User profile caching will be bypassed (direct database queries)

RabbitMQ not ready

Solution: Services will automatically retry connection. Check RabbitMQ status:

docker-compose logs rabbitmq

Access management UI at http://localhost:15672 to verify operation.
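A retry loop of the kind the services can use while RabbitMQ starts up might look like this (a hypothetical helper, not the repository's exact code; `connect` is the injected connection function, e.g. amqplib's `connect`):

```javascript
async function connectWithRetry(connect, { retries = 5, delayMs = 1000 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await connect(); // success: hand back the connection
    } catch (err) {
      if (attempt >= retries) throw err; // give up after the last attempt
      // Linear backoff between attempts while the broker boots.
      await new Promise((r) => setTimeout(r, delayMs * attempt));
    }
  }
}
```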

MongoDB authentication issues

Solution:

  • Verify connection strings in .env
  • Ensure MongoDB Atlas IP whitelist includes your IP
  • Check database user permissions
  • Never commit credentials to Git

# Test connection
docker-compose exec user-service node -e "console.log(process.env.MONGO_URI_USER_SERVICE)"

Rate limit 429 responses

Solution:

  • Current limit: 100 requests per 12 minutes per IP
  • Reduce request frequency or wait for window to reset
  • Check current rate limit status in response headers:
    • RateLimit-Limit: Maximum requests allowed
    • RateLimit-Remaining: Requests remaining in current window
    • RateLimit-Reset: Time when the limit resets

To adjust rate limits for development, modify services/api-gateway/src/middlewares/rateLimiter.js:

windowMs: 12 * 60 * 1000,  // Change time window
max: 100,                   // Change request limit

Note: Rate limiting is essential for production to prevent abuse. Don't disable it unless absolutely necessary.

Port already in use

Solution:

# Find process using port 80
lsof -i :80  # macOS/Linux
netstat -ano | findstr :80  # Windows

# Kill the process or change port mapping in docker-compose.yml

Getting Help

If you run into a problem not covered above, please open an issue on the repository with logs and reproduction steps.


Project Structure

unified-async-platform/
│
├── infra/                          # Infrastructure components
│   ├── nginx/                      # Nginx reverse proxy configuration
│   │   ├── Dockerfile
│   │   └── nginx.conf
│   ├── redis/                      # Redis cache configuration
│   │   ├── Dockerfile
│   │   └── redis.conf
│   └── rabbitmq/                   # RabbitMQ message broker configuration
│       ├── Dockerfile
│       ├── rabbitmq.conf
│       └── definitions.json
│
├── services/                       # Microservices
│   ├── api-gateway/                # API Gateway service
│   │   ├── src/
│   │   │   ├── config/
│   │   │   │   ├── redis.js       # Redis client configuration
│   │   │   │   └── services.js    # Service routing configuration
│   │   │   ├── middlewares/
│   │   │   │   ├── authenticate.middleware.js
│   │   │   │   ├── authorize.middleware.js
│   │   │   │   └── rateLimiter.js # Redis-backed rate limiter
│   │   │   ├── routes/
│   │   │   │   └── index.js       # Dynamic route generation
│   │   │   ├── utils/
│   │   │   │   └── proxy.js       # Service proxy utility
│   │   │   ├── app.js             # Express app with global rate limiting
│   │   │   └── server.js
│   │   ├── Dockerfile
│   │   └── package.json
│   │
│   ├── user-service/               # User management service
│   │   ├── src/
│   │   │   ├── config/
│   │   │   │   ├── db.js          # MongoDB connection
│   │   │   │   └── redis.js       # Redis cache client
│   │   │   ├── controllers/
│   │   │   │   └── user.controller.js  # Request handlers
│   │   │   ├── models/
│   │   │   │   └── User.model.js
│   │   │   ├── routes/
│   │   │   │   └── user.routes.js
│   │   │   ├── services/
│   │   │   │   └── user.service.js     # Business logic with Redis caching
│   │   │   └── server.js          # Server initialization with Redis
│   │   ├── tests/
│   │   │   └── user.test.js
│   │   ├── Dockerfile
│   │   └── package.json
│   │
│   ├── task-service/               # Task orchestration service
│   │   ├── src/
│   │   │   ├── config/
│   │   │   │   └── db.js
│   │   │   ├── models/
│   │   │   │   └── Task.model.js
│   │   │   ├── app.js
│   │   │   └── server.js          # RabbitMQ confirm channel setup
│   │   ├── tests/
│   │   │   └── task.test.js
│   │   ├── Dockerfile
│   │   └── package.json
│   │
│   └── notification-service/       # Notification handling service
│       ├── src/
│       │   ├── config/
│       │   │   └── db.js
│       │   ├── models/
│       │   │   └── Notification.model.js
│       │   ├── app.js
│       │   └── server.js
│       ├── tests/
│       │   └── notification.test.js
│       ├── Dockerfile
│       └── package.json
│
├── scripts/                        # Automation scripts
│   ├── windows/                    # PowerShell scripts for Windows
│   │   ├── dev.ps1
│   │   └── clean-docker.ps1
│   └── posix/                      # Bash scripts for macOS/Linux
│       ├── dev.sh
│       └── clean-docker.sh
│
├── postman collection files/       # API testing collections
│   ├── Production-Backend-Nginx Project-Improved_Version.postman_collection.json
│   ├── Production-Backend-Nginx Project.postman_collection.json
│   └── Production-Backend-Project.postman_collection.json
│
├── information/                    # Documentation assets
│   ├── Architecture.md
│   └── processflow.png
│
├── docker-compose.yml              # Docker Compose orchestration
├── .env.example                    # Environment variables template
├── .gitignore
├── LICENSE
└── Readme.md                       # This file

Contributing

We welcome contributions from the community! Here's how you can help:

How to Contribute

  1. Fork the repository
  2. Create a feature branch
    git checkout -b feature/amazing-feature
  3. Make your changes
    • Follow existing code style
    • Add tests for new features
    • Update documentation as needed
  4. Commit your changes
    git commit -m 'Add amazing feature'
  5. Push to your branch
    git push origin feature/amazing-feature
  6. Open a Pull Request

Contribution Guidelines

  • Keep changes focused and scoped to a single concern
  • Write clear, descriptive commit messages
  • Update documentation for new features
  • Add tests for bug fixes and new functionality
  • Ensure all tests pass before submitting PR
  • Never commit secrets or sensitive configuration
  • Avoid large, monolithic PRs

Code of Conduct

Please be respectful and constructive in all interactions. We're building this together!


License

This project is licensed under the MIT License - see the LICENSE file for details.


Acknowledgments

Built with modern open-source technologies: Node.js, Express.js, MongoDB, Redis, RabbitMQ, Nginx, and Docker.


Back to Top

Made with code and coffee

If this project helped you, please star the repository!
