A distributed, event-driven microservices architecture built with Rust (Axum), NATS for message passing, PostgreSQL for persistence, Redis for caching, and a modern CI/CD + observability stack.
Note: This is a portfolio project. GitHub Actions workflows are present in
`.github/workflows/` but disabled (`if: false`) to prevent execution, since there is no deployment target.
Ganaka follows a microservices architecture with the following components:
- Auth Service (Port 3001) - User authentication and authorization
- User Service (Port 3002) - User profile management
- Expense Service (Port 3003) - Expense CRUD operations
- Category Service (Port 3004) - Expense categorization
- Report Service (Port 3005) - Financial reporting and analytics
- Notification Service (Port 3006) - Email/SMS notifications
- Dashboard Service (Port 3007) - Real-time dashboard with WebSocket
- PostgreSQL - Primary database with partitioned tables
- Redis - Caching and session management
- NATS - Message broker for event-driven communication
- Prometheus - Metrics collection
- Grafana - Monitoring dashboards
- Docker and Docker Compose
- Rust 1.75+ (for local development)
- Git
- Clone the repository

  ```bash
  git clone <repository-url>
  cd ganaka
  ```

- Choose your setup

  Option A: Full Containerized Environment (Recommended for new users)

  ```bash
  # Runs all services (PostgreSQL, Redis, NATS, all microservices) in containers
  docker-compose up -d
  ```

  Option B: Development Environment (Recommended)

  ```bash
  # Runs infrastructure (PostgreSQL, Redis, NATS) in containers
  # Microservices run locally for development
  docker-compose -f docker-compose.dev.yml up -d
  # Then run services locally (see Local Development section below)
  ```

  Option C: Development with Hot Reloading

  ```bash
  # Runs all services with live code reloading using cargo watch
  docker-compose -f docker-compose.yml -f docker-compose.override.yml up
  ```

- Check service health

  ```bash
  docker-compose ps
  # or for the dev setup:
  docker-compose -f docker-compose.dev.yml ps
  ```

- View logs

  ```bash
  docker-compose logs -f
  # or for the dev setup:
  docker-compose -f docker-compose.dev.yml logs -f
  ```
Ganaka provides three docker-compose configurations:
- `docker-compose.yml` - Production setup with all services containerized
  - Includes: PostgreSQL, Redis, NATS, all microservices
  - Best for: New users, demos, CI/CD, production-like environments
- `docker-compose.dev.yml` - Development setup with hybrid infrastructure
  - Includes: PostgreSQL, all microservices
  - Excludes: Redis, NATS (expects them running locally)
  - Best for: Advanced developers with local Redis/NATS installations
- `docker-compose.override.yml` - Development overrides for hot reloading
  - Enables live code reloading with `cargo watch`
  - Mounts source code volumes for instant updates
  - Use with: `docker-compose -f docker-compose.yml -f docker-compose.override.yml up`
- API Gateway: http://localhost:8080 (if configured)
- Grafana: http://localhost:3000 (admin/admin)
- Prometheus: http://localhost:9090
- NATS Monitoring: http://localhost:8222
- Start infrastructure in containers

  ```bash
  docker-compose -f docker-compose.dev.yml up -d
  ```

- Run services locally

  Option A: Run all services with the helper script

  ```bash
  ./scripts/run-dev.sh
  ```

  Option B: Run services manually (in separate terminals)

  ```bash
  # Terminal 1 - Auth Service
  cd services/auth-service && cargo run
  # Terminal 2 - User Service
  cd services/user-service && cargo run
  # Terminal 3 - Expense Service
  cd services/expense-service && cargo run
  # And so on for the other services...
  ```

- Stop infrastructure

  ```bash
  docker-compose -f docker-compose.dev.yml down
  ```
Create a `.env` file in each service directory (when using `docker-compose.dev.yml`):

```bash
# Database (connects to containerized PostgreSQL)
DATABASE_URL=postgres://ganaka_user:ganaka_password@localhost:5432/ganaka

# Redis (connects to containerized Redis)
REDIS_URL=redis://localhost:6379

# NATS (connects to containerized NATS)
NATS_URL=nats://localhost:4222

# JWT Secret (change in production)
JWT_SECRET=your-super-secret-jwt-key-change-in-production

# Logging
RUST_LOG=debug
```

Email/SMS credentials (used by the Notification Service):

```bash
JWT_SECRET=your-super-secret-jwt-key-change-in-production
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=your-email@gmail.com
SMTP_PASSWORD=your-app-password
TWILIO_ACCOUNT_SID=your-twilio-sid
TWILIO_AUTH_TOKEN=your-twilio-token
TWILIO_FROM_NUMBER=+1234567890
RUST_LOG=info
```
## API Documentation
### Authentication Endpoints

```bash
POST /auth/register
POST /auth/login
POST /auth/refresh
POST /auth/logout
GET  /auth/me
```

### User Endpoints

```bash
GET    /users/{id}
PATCH  /users/{id}
DELETE /users/{id}
```

### Expense Endpoints

```bash
POST   /expenses
GET    /expenses?user_id={id}
GET    /expenses/{id}
PATCH  /expenses/{id}
DELETE /expenses/{id}
```

### Category Endpoints

```bash
POST   /categories
GET    /categories?user_id={id}
PATCH  /categories/{id}
DELETE /categories/{id}
```

### Report Endpoints

```bash
GET /reports/{user_id}/monthly
GET /reports/{user_id}/weekly
GET /reports/{user_id}/category
```

### Notification Endpoints

```bash
POST /notifications/send
GET  /notifications?user_id={id}
```

### Dashboard Endpoints

```bash
GET /dashboard/{user_id}/summary
GET /dashboard/{user_id}/live   # WebSocket endpoint
```

The system uses PostgreSQL with the following key tables:
- `users` - User accounts and profiles
- `user_passwords` - Secure password storage
- `categories` - Expense categories
- `expenses` - Expense records (partitioned by user_id)
- `notifications` - Notification logs
The `expenses` table is partitioned by `user_id` using hash partitioning for optimal performance:

```sql
CREATE TABLE expenses (
    -- columns...
) PARTITION BY HASH (user_id);

-- Creates 16 partitions
CREATE TABLE expenses_p0 PARTITION OF expenses FOR VALUES WITH (MODULUS 16, REMAINDER 0);
-- ... up to expenses_p15
```

Services communicate through NATS with the following event streams:
- `user.registered`, `user.logged_in`, `user.logged_out`, `user.profile_updated`
- `expense.added`, `expense.updated`, `expense.deleted`
- `category.created`, `expense.categorized`
- `report.updated`
- `notification.sent`
- Service response times
- Error rates
- Database query performance
- Message broker throughput
- Structured JSON logs
- Centralized log aggregation
- Error tracking and alerting
- Distributed request tracing
- Performance bottleneck identification
- Service dependency mapping
Unit tests per service:

```bash
cargo test --package SERVICE_NAME
```

Integration tests:

```bash
docker-compose -f docker-compose.test.yml up -d
cargo test --test integration
```

End-to-end tests:

```bash
# Start all services
docker-compose up -d

# Run E2E tests
npm test   # or your preferred test runner
```
- Build Docker images

  ```bash
  docker-compose build
  ```

- Deploy to Kubernetes

  Kubernetes deployment manifests are not implemented. See `docs/deployment/README.md` for deployment architecture and planning.
Note: CI/CD workflows are configured but disabled for this portfolio project.
For production, update the following:
- Database connection strings
- Redis cluster configuration
- NATS cluster setup
- JWT secrets
- SMTP/Twilio credentials
- Monitoring endpoints
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
- Follow Rust best practices
- Write comprehensive tests
- Update documentation
- Use conventional commit messages
This project is licensed under the MIT License - see the LICENSE file for details.
See Future Enhancements for planned features including:
- Advanced logging with Loki + Promtail
- Distributed tracing with OpenTelemetry
- Kubernetes production deployment
- Machine learning integration
- And more...
- Built with Axum web framework
- Message passing with NATS
- Database operations with SQLx
- Authentication with JWT
For more detailed documentation, see the /docs directory.