Ganaka is a distributed, event-driven expense tracking system built with Rust using a microservices architecture. It provides comprehensive expense management with real-time analytics, intelligent categorization, and multi-channel notifications.
- Track personal/business expenses with intelligent categorization
- Provide real-time analytics and reporting
- Enable multi-channel notifications for expense alerts
- Support real-time dashboard updates via WebSocket
- Ensure high performance with database partitioning and caching
- Event-Driven: Services communicate via NATS messaging
- Domain-Driven Design: Each service owns specific business domains
- Scalability: Horizontal scaling with database partitioning
- Reliability: Fault-tolerant with circuit breakers and retries
- Observability: Comprehensive logging, metrics, and tracing
- Language: Rust 1.75+
- Web Framework: Axum 0.7 (async HTTP server)
- Database: PostgreSQL 15+ with partitioning
- Cache: Redis 7+ for sessions and caching
- Message Broker: NATS 2.9+ for event-driven communication
- ORM: SQLx 0.8 (compile-time verified queries)
- JWT: JSON Web Tokens for API authentication
- Password Hashing: Argon2 for secure password storage
- Session Management: Redis-based refresh token storage
- CORS: Configurable cross-origin resource sharing
- Containerization: Docker with multi-stage builds
- Orchestration: Docker Compose for development/production
- Monitoring: Prometheus + Grafana stack
- Logging: Loki + Promtail for centralized logging
- CI/CD: GitHub Actions workflows (disabled for portfolio project)
- Unit Tests: Rust built-in testing framework
- Integration Tests: Full service communication testing
- E2E Tests: Playwright for browser automation
- Performance Tests: k6 for load testing
- Code Quality: Clippy linting, security auditing
```
┌──────────────────┐       ┌──────────────────┐
│   API Gateway    │       │   Auth Service   │
│     (Future)     │──────▶│   (Port 3001)    │
└──────────────────┘       └──────────────────┘
          │                          │
          ▼                          ▼
┌──────────────────┐       ┌──────────────────┐
│   User Service   │       │ Expense Service  │
│   (Port 3002)    │──────▶│   (Port 3003)    │
└──────────────────┘       └──────────────────┘
          │                          │
          ▼                          ▼
┌──────────────────┐       ┌──────────────────┐
│ Category Service │       │  Report Service  │
│   (Port 3004)    │──────▶│   (Port 3005)    │
└──────────────────┘       └──────────────────┘
          │                          │
          ▼                          ▼
┌──────────────────┐       ┌──────────────────┐
│ Notification Svc │       │Dashboard Service │
│   (Port 3006)    │──────▶│   (Port 3007)    │
└──────────────────┘       └──────────────────┘
```
- Auth Service receives registration request
- Creates user in database with hashed password
- Publishes `user.registered` event to NATS
- User Service consumes event and performs initialization
- Category Service creates default categories for new user
- Notification Service sends welcome message
- Expense Service receives expense creation request
- Validates and stores expense in partitioned table
- Publishes `expense.added` event
- Category Service attempts auto-categorization
- Report Service updates analytics and publishes `report.updated`
- Dashboard Service broadcasts real-time updates via WebSocket
- Notification Service sends alerts for high-value expenses
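The fan-out in the flow above can be illustrated in miniature with in-process channels standing in for NATS subjects. This is only a sketch: the real services publish serialized JSON to NATS, and the `ExpenseAdded` fields below are illustrative, not the actual wire format.

```rust
use std::sync::mpsc;

/// Simplified event payload; the real service serializes JSON onto NATS.
#[derive(Clone, Debug)]
struct ExpenseAdded {
    user_id: u64,
    amount_cents: i64,
    description: String,
}

/// Fan one `expense.added` event out to several consumers,
/// the way independent NATS subscriptions each receive a copy.
fn publish_expense_added(event: ExpenseAdded, subscribers: Vec<mpsc::Sender<ExpenseAdded>>) {
    for sub in subscribers {
        let _ = sub.send(event.clone());
    }
}

fn main() {
    let (cat_tx, cat_rx) = mpsc::channel();
    let (rep_tx, rep_rx) = mpsc::channel();

    publish_expense_added(
        ExpenseAdded { user_id: 42, amount_cents: 12999, description: "Groceries".into() },
        vec![cat_tx, rep_tx],
    );

    // Category Service consumer: attempts auto-categorization.
    let cat = cat_rx.recv().unwrap();
    println!("categorizer saw: {}", cat.description);

    // Report Service consumer: updates analytics.
    let rep = rep_rx.recv().unwrap();
    println!("reporter saw {} cents for user {}", rep.amount_cents, rep.user_id);
}
```

Because every subscriber gets its own copy of the event, new consumers (e.g. a future fraud-detection service) can be added without touching the publisher.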
Purpose: User authentication and session management

Responsibilities:
- User registration and login
- JWT token generation and validation
- Refresh token management via Redis
- Password hashing with Argon2
- Authentication middleware for protected routes
Key Endpoints:
- `POST /auth/register` - User registration
- `POST /auth/login` - User authentication
- `POST /auth/refresh` - Token refresh
- `POST /auth/logout` - Session termination
- `GET /auth/me` - Current user info (protected)
Purpose: User profile management

Responsibilities:
- User CRUD operations
- Profile updates and preferences
- User data validation
- Event consumption for user lifecycle
Key Endpoints:
- `GET /users/{id}` - Get user profile
- `PUT /users/{id}` - Update user profile
- `DELETE /users/{id}` - Delete user account
Purpose: Core expense management

Responsibilities:
- Expense CRUD operations
- Database partitioning by user_id
- Expense validation and business rules
- Event publishing for expense lifecycle
Key Endpoints:
- `GET /expenses` - List user expenses (paginated)
- `POST /expenses` - Create new expense
- `GET /expenses/{id}` - Get specific expense
- `PUT /expenses/{id}` - Update expense
- `DELETE /expenses/{id}` - Delete expense
Purpose: Expense categorization and organization

Responsibilities:
- Category CRUD operations
- Auto-categorization based on keywords
- Default category creation for new users
- Category-based analytics
Key Endpoints:
- `GET /categories` - List user categories
- `POST /categories` - Create category
- `PUT /categories/{id}` - Update category
- `DELETE /categories/{id}` - Delete category
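The keyword-based auto-categorization described above can be sketched as a simple lookup. The keywords and category names below are illustrative stand-ins for the per-user rules the service actually stores:

```rust
/// Minimal keyword-based auto-categorization sketch.
/// Returns the first category whose keyword appears in the description,
/// or None when the expense must be categorized manually.
fn auto_categorize(description: &str) -> Option<&'static str> {
    // Illustrative (keyword, category) pairs; the real service keeps
    // these per user in PostgreSQL.
    const RULES: &[(&str, &str)] = &[
        ("uber", "Transport"),
        ("grocery", "Food"),
        ("netflix", "Entertainment"),
    ];
    let lower = description.to_lowercase();
    RULES
        .iter()
        .find(|&&(keyword, _)| lower.contains(keyword))
        .map(|&(_, category)| category)
}

fn main() {
    assert_eq!(auto_categorize("Uber ride home"), Some("Transport"));
    assert_eq!(auto_categorize("Unknown merchant"), None); // falls back to manual
    println!("auto-categorization sketch OK");
}
```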
Purpose: Analytics and reporting

Responsibilities:
- Monthly expense summaries
- Category spending analysis
- Materialized views for performance
- Real-time report generation
Key Endpoints:
- `GET /reports/monthly/{year}/{month}` - Monthly report
- `GET /reports/categories/{year}/{month}` - Category analysis
- `GET /reports/summary` - Overall spending summary
Purpose: Multi-channel notifications

Responsibilities:
- Email and SMS notifications
- Welcome messages for new users
- Expense alerts for high-value transactions
- Monthly report summaries
Key Features:
- SMTP email integration
- SMS provider integration (configurable)
- Template-based message generation
- Notification history tracking
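Template-based message generation reduces to placeholder substitution; the template text and variable names below are illustrative, not the service's real templates:

```rust
use std::collections::HashMap;

/// Fill `{placeholder}` slots in a message template.
/// A sketch of template-based generation; real templates would live
/// alongside the Notification Service, not inline like this.
fn render_template(template: &str, vars: &HashMap<&str, String>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        // Replace every occurrence of "{key}" with its value.
        out = out.replace(&format!("{{{}}}", key), value);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("name", "Asha".to_string());
    vars.insert("amount", "250.00".to_string());
    let msg = render_template("Hi {name}, an expense of {amount} was recorded.", &vars);
    println!("{msg}");
}
```

Keeping templates as data (rather than hard-coded strings per channel) is what lets the same message flow out over both email and SMS.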
Purpose: Real-time dashboard and WebSocket updates

Responsibilities:
- Live expense tracking
- WebSocket connections for real-time updates
- Dashboard data aggregation
- Cache management for performance
Key Features:
- WebSocket endpoint for live updates
- Dashboard summary generation
- Real-time expense notifications
- Cache invalidation and updates
```sql
-- Users with preferences
CREATE TABLE users (
  id UUID PRIMARY KEY,
  email VARCHAR UNIQUE,
  name VARCHAR,
  role user_role,
  preferences JSONB,
  created_at TIMESTAMP,
  updated_at TIMESTAMP
);

-- Secure password storage
CREATE TABLE user_passwords (
  user_id UUID PRIMARY KEY,
  password_hash VARCHAR
);

-- Partitioned expenses for scalability
CREATE TABLE expenses (
  id UUID,
  user_id UUID,
  amount DECIMAL(12,2),
  description TEXT,
  category_id UUID,
  timestamp TIMESTAMP,
  created_at TIMESTAMP,
  updated_at TIMESTAMP,
  -- The primary key must include the partition key on partitioned tables
  PRIMARY KEY (id, user_id)
) PARTITION BY HASH (user_id);

-- User-defined categories
CREATE TABLE categories (
  id UUID PRIMARY KEY,
  user_id UUID,
  name VARCHAR,
  description TEXT,
  UNIQUE(user_id, name)
);

-- Notification history
CREATE TABLE notifications (
  id UUID PRIMARY KEY,
  user_id UUID,
  channel notification_channel,
  message TEXT,
  sent_at TIMESTAMP
);
```

- Hash Partitioning: Expenses table partitioned by user_id (16 partitions)
- Materialized Views: Pre-computed monthly and category summaries
- Indexes: Optimized for common query patterns
- Connection Pooling: Configured for high concurrency
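Hash partitioning routes each user's rows to a fixed partition. The sketch below shows the routing idea with Rust's `DefaultHasher`; PostgreSQL uses its own internal hash function, so the computed partition here will not match Postgres's actual choice:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Number of hash partitions on the expenses table.
const PARTITIONS: u64 = 16;

/// Which partition a user's expenses land in. Deterministic per user,
/// so all of one user's rows stay in a single partition.
fn partition_for(user_id: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    user_id.hash(&mut hasher);
    hasher.finish() % PARTITIONS
}

fn main() {
    let p = partition_for("3b2e7a9c-0000-0000-0000-000000000000");
    // Same user always routes to the same partition.
    assert_eq!(p, partition_for("3b2e7a9c-0000-0000-0000-000000000000"));
    assert!(p < PARTITIONS);
    println!("user routes to partition {p}");
}
```

Because every query on expenses filters by `user_id`, Postgres can prune to a single partition, which is what keeps per-user queries fast as the table grows.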
- Redis: Session storage, refresh tokens, dashboard cache
- TTL: 24 hours for dashboard data, 30 days for sessions
- Invalidation: Event-driven cache updates
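The Redis TTL behaviour can be modelled with a tiny in-process cache. This stand-in only illustrates the expire-on-read idea, not Redis itself:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// In-process stand-in for the Redis TTL cache described above.
struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    fn put(&mut self, key: &str, value: &str) {
        self.entries.insert(key.to_string(), (Instant::now(), value.to_string()));
    }

    /// Returns the value only while it is younger than the TTL.
    fn get(&self, key: &str) -> Option<&str> {
        self.entries
            .get(key)
            .and_then(|(stored_at, v)| (stored_at.elapsed() < self.ttl).then_some(v.as_str()))
    }
}

fn main() {
    // Dashboard data uses a 24h TTL in the real system; shortened here.
    let mut cache = TtlCache::new(Duration::from_secs(60));
    cache.put("dashboard:42", "{\"total\": 123}");
    assert!(cache.get("dashboard:42").is_some());
    assert!(cache.get("missing").is_none());
    println!("TTL cache sketch OK");
}
```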
- `user.registered` - New user registration
- `user.logged_in` - User authentication
- `user.logged_out` - User session termination
- `expense.added` - New expense created
- `expense.updated` - Expense modified
- `expense.deleted` - Expense removed
- `expense.categorized` - Expense category assigned
- `report.updated` - Report data refreshed
- `category.created` - New category added
- `notification.sent` - Notification delivered
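Consumers typically subscribe to families of subjects with NATS-style wildcards such as `expense.*`. A minimal single-token matcher shows the idea; the real matching is done by the NATS server, so this is only an illustration of the `*` wildcard semantics:

```rust
// Subjects from the catalog above, as typed constants.
const EXPENSE_ADDED: &str = "expense.added";
const EXPENSE_DELETED: &str = "expense.deleted";

/// Minimal NATS-style subject match: `*` matches exactly one
/// dot-separated token (multi-token `>` is omitted for brevity).
fn subject_matches(pattern: &str, subject: &str) -> bool {
    let p: Vec<&str> = pattern.split('.').collect();
    let s: Vec<&str> = subject.split('.').collect();
    p.len() == s.len() && p.iter().zip(&s).all(|(pt, st)| *pt == "*" || pt == st)
}

fn main() {
    assert!(subject_matches("expense.*", EXPENSE_ADDED));
    assert!(subject_matches("expense.*", EXPENSE_DELETED));
    assert!(!subject_matches("expense.*", "user.registered"));
    println!("subject matcher sketch OK");
}
```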
- Fire-and-Forget: Most events are processed asynchronously
- At-Least-Once Delivery: Guaranteed event processing
- Event Sourcing: Audit trail via event logs
- Circuit Breaker: Fault tolerance for service communication
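The circuit-breaker pattern listed above can be sketched as a failure counter that trips open; production implementations add reset timeouts and a half-open state, which this sketch omits:

```rust
/// Minimal circuit breaker: after `threshold` consecutive failures
/// the circuit opens and further calls are rejected outright.
struct CircuitBreaker {
    threshold: u32,
    failures: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        Self { threshold, failures: 0 }
    }

    fn is_open(&self) -> bool {
        self.failures >= self.threshold
    }

    /// Run `f` unless the circuit is open; None means "rejected".
    fn call<T, E>(&mut self, f: impl FnOnce() -> Result<T, E>) -> Option<Result<T, E>> {
        if self.is_open() {
            return None; // fail fast instead of hammering a sick service
        }
        let result = f();
        match &result {
            Ok(_) => self.failures = 0, // success closes the loop again
            Err(_) => self.failures += 1,
        }
        Some(result)
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(3);
    for _ in 0..3 {
        let _ = cb.call(|| Err::<(), _>("downstream timeout"));
    }
    assert!(cb.is_open());
    assert!(cb.call(|| Ok::<_, &str>(42)).is_none()); // rejected while open
    println!("circuit breaker sketch OK");
}
```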
- Registration: Password hashed with Argon2, stored separately
- Login: Password verification, JWT + refresh token generation
- Session Management: Refresh tokens stored in Redis with TTL
- Middleware: JWT validation on protected routes
- Password Security: Argon2 hashing with salt
- Token Security: Short-lived access tokens (15min), longer refresh tokens (30 days)
- Input Validation: Comprehensive validation on all endpoints
- CORS Configuration: Restrictive cross-origin policies
- SQL Injection Prevention: SQLx compile-time query verification
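The token-lifetime policy above is just arithmetic on issue times; actual validation happens via JWT `exp` claims, so the sketch below only illustrates the 15-minute / 30-day windows:

```rust
use std::time::{Duration, SystemTime};

// Lifetimes from the policy above.
const ACCESS_TTL: Duration = Duration::from_secs(15 * 60);         // 15 minutes
const REFRESH_TTL: Duration = Duration::from_secs(30 * 24 * 3600); // 30 days

/// Has a token issued at `issued_at` with lifetime `ttl` expired?
/// If the clock went backwards, treat the token as expired.
fn is_expired(issued_at: SystemTime, ttl: Duration) -> bool {
    issued_at.elapsed().map(|age| age > ttl).unwrap_or(true)
}

fn main() {
    let now = SystemTime::now();
    assert!(!is_expired(now, ACCESS_TTL));

    // A token issued 16 minutes ago is past the 15-minute access TTL
    // but still well inside the 30-day refresh TTL.
    let old = now - Duration::from_secs(16 * 60);
    assert!(is_expired(old, ACCESS_TTL));
    assert!(!is_expired(old, REFRESH_TTL));
    println!("token TTL sketch OK");
}
```

The asymmetry is the point: a stolen access token is useful for minutes, while the refresh token sits in Redis where it can be revoked at logout.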
- Rust: 1.75+ with Cargo
- Docker: 24+ with Docker Compose
- PostgreSQL: 15+ (via Docker)
- Redis: 7+ (via Docker)
- NATS: 2.9+ (via Docker)
```bash
# Clone repository
git clone <repository-url>
cd ganaka

# Start infrastructure
docker-compose -f docker-compose.dev.yml up -d

# Run all services
./scripts/run-dev.sh

# Or run individual services
cargo run --bin auth-service
cargo run --bin user-service
cargo run --bin docs-aggregator
# ... etc
```

```bash
# Database
DATABASE_URL=postgres://user:password@localhost:5432/ganaka

# Cache
REDIS_URL=redis://localhost:6379

# Messaging
NATS_URL=nats://localhost:4222

# Security
JWT_SECRET=your-256-bit-secret-key

# Email (optional)
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=your-email@gmail.com
SMTP_PASS=your-app-password
```

- Unit Tests: Individual functions and modules (80%)
- Integration Tests: Service-to-service communication (15%)
- E2E Tests: Full user workflows (5%)
- API Tests: REST endpoint validation
- Database Tests: Data persistence and queries
- Event Tests: Message publishing and consumption
- Performance Tests: Load testing with k6
- Security Tests: Authentication and authorization
```bash
# Unit tests
cargo test --workspace

# Integration tests
cargo test --test integration_test

# E2E tests
cd tests/e2e && npm test

# Performance tests
k6 run tests/performance/k6-script.js
```

- Docker Compose: Single-command infrastructure setup
- Hot Reload: Cargo watch for service development
- Local Services: All dependencies run in containers
- Containerized: Each service in optimized Docker image
- Orchestration: Kubernetes manifests (planned)
- Load Balancing: Nginx or service mesh
- Monitoring: Prometheus + Grafana + Loki stack
- Lint & Test: Code quality and unit tests
- Build: Docker images for all services
- Security Scan: Dependency vulnerability checks
- Deploy: Environment-specific deployments
- Monitor: Health checks and alerting
Note: Workflows are present but disabled (`if: false`), as this is a portfolio project with no deployment target.
- Database Partitioning: Horizontal scaling by user
- Connection Pooling: Optimized database connections
- Caching Layers: Redis for frequently accessed data
- Async Processing: Non-blocking I/O throughout
- API Response Time: <100ms for 95% of requests
- Concurrent Users: 10,000+ simultaneous connections
- Database Queries: <10ms average response time
- Event Processing: <1 second end-to-end latency
- Service Health: Uptime and response times
- Database Performance: Query execution times
- Event Throughput: Messages per second
- Error Rates: 4xx/5xx response percentages
- API Gateway: Centralized request routing and authentication
- Kubernetes Deployment: Production container orchestration
- Advanced Observability: Distributed tracing with OpenTelemetry
- Machine Learning: AI-powered expense categorization
- Mobile App: React Native companion application
- Multi-Tenancy: Organization and team support
- Service Mesh: Istio integration for advanced networking
- Event Sourcing: Complete audit trail and replay capabilities
- CQRS Pattern: Separate read/write models for complex queries
- GraphQL API: Flexible query interface for advanced clients
- Rust Edition: 2021 with strict Clippy rules
- Error Handling: Custom error types with thiserror
- Logging: Structured logging with tracing
- Documentation: Comprehensive docs.rs documentation
- Conventional Commits: `feat:`, `fix:`, `docs:`, `refactor:`
- Atomic Commits: Single responsibility per commit
- Descriptive Messages: Clear explanation of changes
- main: Production-ready code
- develop: Integration branch
- feature/*: Feature development branches
- hotfix/*: Critical bug fixes
- Performance: Zero-cost abstractions, memory safety
- Reliability: Compile-time guarantees, no runtime panics
- Scalability: Async runtime optimized for concurrency
- Ecosystem: Rich crate ecosystem for production use
- Domain Isolation: Clear boundaries between business concerns
- Independent Scaling: Scale services based on load patterns
- Technology Diversity: Choose best tool for each domain
- Fault Isolation: Service failures don't cascade
- Loose Coupling: Services communicate via events, not direct calls
- Scalability: Asynchronous processing handles load spikes
- Auditability: Complete event history for debugging
- Extensibility: New services can consume existing events
- Performance: Faster queries on partitioned data
- Scalability: Distribute data across multiple disks/nodes
- Maintenance: Easier backup and archiving of old data
- Cost Efficiency: Optimize storage based on access patterns
- Check Environment: Verify all required environment variables
- Check Dependencies: Ensure PostgreSQL, Redis, NATS are running
- Check Ports: Verify no port conflicts (3001-3007)
- Check Logs: Use `RUST_LOG=debug` for detailed logging
- Verify Connection String: Check DATABASE_URL format
- Check PostgreSQL: Ensure database exists and is accessible
- Check Migrations: Run database migrations if needed
- Check NATS: Verify NATS server is running on port 4222
- Check Subjects: Verify correct event subjects are being used
- Check Serialization: Ensure event payloads can be deserialized
- Check Database Indexes: Verify proper indexing on frequently queried columns
- Check Connection Pooling: Ensure adequate connection pool size
- Check Caching: Verify Redis is being used effectively
- OpenAPI Specs: Each service exposes an `/api-docs` endpoint for live OpenAPI JSON specs
- Docs Aggregator: The `docs-aggregator` service (port 3008) provides a `/services` endpoint listing all service docs URLs
- Architecture Guide: `/docs/architecture/` - System design decisions
- Developer Guide: `/docs/developer-guide/` - Development setup and guidelines
- Scripts: `/scripts/` - Development helper scripts
- Tests: `/tests/` - Comprehensive test suites
- Infrastructure: `/infrastructure/` - Deployment configurations
- Rust Documentation: https://doc.rust-lang.org/
- Axum Framework: https://docs.rs/axum/
- SQLx Documentation: https://docs.rs/sqlx/
- NATS Documentation: https://docs.nats.io/
This context file provides a complete overview of the Ganaka expense tracker: an AI assistant or developer should be able to understand the system architecture, implementation details, and development processes from this single document.