# Ganaka - Expense Tracker

An expense tracker built on a distributed, event-driven microservices architecture: Rust (Axum) services, NATS for message passing, PostgreSQL for persistence, Redis for caching, and a modern CI/CD and observability stack.

> **Note:** This is a portfolio project. GitHub Actions workflows are present in .github/workflows/ but disabled (via `if: false`) to prevent execution, since there is no deployment target.

๐Ÿ—๏ธ Architecture Overview

Ganaka follows a microservices architecture with the following components:

Core Services

  • Auth Service (Port 3001) - User authentication and authorization
  • User Service (Port 3002) - User profile management
  • Expense Service (Port 3003) - Expense CRUD operations
  • Category Service (Port 3004) - Expense categorization
  • Report Service (Port 3005) - Financial reporting and analytics
  • Notification Service (Port 3006) - Email/SMS notifications
  • Dashboard Service (Port 3007) - Real-time dashboard with WebSocket

Infrastructure

  • PostgreSQL - Primary database with partitioned tables
  • Redis - Caching and session management
  • NATS - Message broker for event-driven communication
  • Prometheus - Metrics collection
  • Grafana - Monitoring dashboards

## 🚀 Quick Start

### Prerequisites

- Docker and Docker Compose
- Rust 1.75+ (for local development)
- Git

### Running with Docker Compose

1. **Clone the repository**

   ```bash
   git clone <repository-url>
   cd ganaka
   ```

2. **Choose your setup**

   **Option A: Full Containerized Environment** (recommended for new users)

   ```bash
   # Runs all services (PostgreSQL, Redis, NATS, all microservices) in containers
   docker-compose up -d
   ```

   **Option B: Development Environment** (recommended for active development)

   ```bash
   # Runs infrastructure (PostgreSQL, Redis, NATS) in containers;
   # microservices run locally (see the Local Development section below)
   docker-compose -f docker-compose.dev.yml up -d
   ```

   **Option C: Development with Hot Reloading**

   ```bash
   # Runs all services with live code reloading using cargo-watch
   docker-compose -f docker-compose.yml -f docker-compose.override.yml up
   ```

3. **Check service health**

   ```bash
   docker-compose ps
   # or for the dev setup:
   docker-compose -f docker-compose.dev.yml ps
   ```

4. **View logs**

   ```bash
   docker-compose logs -f
   # or for the dev setup:
   docker-compose -f docker-compose.dev.yml logs -f
   ```

### Docker Compose Files

Ganaka provides three docker-compose configurations:

- **docker-compose.yml** - Production setup with all services containerized
  - Includes: PostgreSQL, Redis, NATS, all microservices
  - Best for: new users, demos, CI/CD, production-like environments

- **docker-compose.dev.yml** - Development setup with infrastructure only
  - Includes: PostgreSQL, Redis, NATS
  - Excludes: the microservices (run them locally with `cargo run`)
  - Best for: day-to-day development against containerized infrastructure

- **docker-compose.override.yml** - Development overrides for hot reloading
  - Enables live code reloading with cargo-watch
  - Mounts source code volumes for instant updates
  - Use with: `docker-compose -f docker-compose.yml -f docker-compose.override.yml up`

### Access Points

## 🔧 Local Development

### Running Services Locally (Recommended Development Setup)

1. **Start infrastructure in containers**

   ```bash
   docker-compose -f docker-compose.dev.yml up -d
   ```

2. **Run services locally**

   **Option A: Run all services with the helper script**

   ```bash
   ./scripts/run-dev.sh
   ```

   **Option B: Run services manually (in separate terminals)**

   ```bash
   # Terminal 1 - Auth Service
   cd services/auth-service && cargo run

   # Terminal 2 - User Service
   cd services/user-service && cargo run

   # Terminal 3 - Expense Service
   cd services/expense-service && cargo run

   # And so on for the other services...
   ```

3. **Stop infrastructure**

   ```bash
   docker-compose -f docker-compose.dev.yml down
   ```

### Environment Variables

Create a `.env` file in each service directory (when using docker-compose.dev.yml):

```bash
# Database (connects to containerized PostgreSQL)
DATABASE_URL=postgres://ganaka_user:ganaka_password@localhost:5432/ganaka

# Redis (connects to containerized Redis)
REDIS_URL=redis://localhost:6379

# NATS (connects to containerized NATS)
NATS_URL=nats://localhost:4222

# JWT Secret (change in production)
JWT_SECRET=your-super-secret-jwt-key-change-in-production

# Logging
RUST_LOG=debug
```

#### Auth Service

```bash
JWT_SECRET=your-super-secret-jwt-key-change-in-production
```

#### Notification Service

```bash
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=your-email@gmail.com
SMTP_PASSWORD=your-app-password
TWILIO_ACCOUNT_SID=your-twilio-sid
TWILIO_AUTH_TOKEN=your-twilio-token
TWILIO_FROM_NUMBER=+1234567890
```

#### Logging

```bash
RUST_LOG=info
```
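At startup each service presumably reads these variables from its environment; the sketch below uses only `std::env` with the local-development defaults shown above. The `Config` struct and `from_env` helper are illustrative assumptions, not the services' actual code (which may use a crate such as dotenvy):

```rust
use std::env;

/// Connection settings shared by the Ganaka services (illustrative sketch).
#[derive(Debug)]
struct Config {
    database_url: String,
    redis_url: String,
    nats_url: String,
}

impl Config {
    /// Read each setting from the environment, falling back to the
    /// local-development defaults from the .env example above.
    fn from_env() -> Config {
        let get = |key: &str, default: &str| {
            env::var(key).unwrap_or_else(|_| default.to_string())
        };
        Config {
            database_url: get(
                "DATABASE_URL",
                "postgres://ganaka_user:ganaka_password@localhost:5432/ganaka",
            ),
            redis_url: get("REDIS_URL", "redis://localhost:6379"),
            nats_url: get("NATS_URL", "nats://localhost:4222"),
        }
    }
}

fn main() {
    let config = Config::from_env();
    println!("{config:?}");
}
```

Falling back to defaults keeps local runs working without a `.env` file, while containerized deployments override each value through the environment.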


## 📡 API Documentation

### Authentication Endpoints

```bash
POST /auth/register
POST /auth/login
POST /auth/refresh
POST /auth/logout
GET  /auth/me
```

### User Management

```bash
GET    /users/{id}
PATCH  /users/{id}
DELETE /users/{id}
```

### Expense Management

```bash
POST   /expenses
GET    /expenses?user_id={id}
GET    /expenses/{id}
PATCH  /expenses/{id}
DELETE /expenses/{id}
```

### Category Management

```bash
POST   /categories
GET    /categories?user_id={id}
PATCH  /categories/{id}
DELETE /categories/{id}
```

### Reporting

```bash
GET /reports/{user_id}/monthly
GET /reports/{user_id}/weekly
GET /reports/{user_id}/category
```

### Notifications

```bash
POST /notifications/send
GET  /notifications?user_id={id}
```

### Dashboard

```bash
GET  /dashboard/{user_id}/summary
GET  /dashboard/{user_id}/live  # WebSocket endpoint
```

## 🗄️ Database Schema

The system uses PostgreSQL with the following key tables:

- **users** - User accounts and profiles
- **user_passwords** - Secure password storage
- **categories** - Expense categories
- **expenses** - Expense records (partitioned by user_id)
- **notifications** - Notification logs

### Partitioning Strategy

The expenses table is partitioned by user_id using hash partitioning for optimal performance:

```sql
CREATE TABLE expenses (
    -- columns...
) PARTITION BY HASH (user_id);

-- Creates 16 partitions
CREATE TABLE expenses_p0 PARTITION OF expenses FOR VALUES WITH (MODULUS 16, REMAINDER 0);
-- ... up to expenses_p15
```
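To see how hash partitioning routes rows, the sketch below maps a `user_id` to one of the 16 partitions. Note that PostgreSQL uses its own internal hash function, so the numbers produced by Rust's `DefaultHasher` here will not match Postgres's actual placement; only the MODULUS/REMAINDER routing idea carries over:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Number of hash partitions created for the expenses table.
const PARTITIONS: u64 = 16;

/// Illustrate which partition a user's expense rows land in:
/// hash the partition key, then take the remainder modulo the
/// partition count (the MODULUS/REMAINDER pair in the DDL above).
fn partition_for(user_id: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    user_id.hash(&mut hasher);
    hasher.finish() % PARTITIONS
}

fn main() {
    for user_id in [1u64, 2, 42] {
        println!("user {user_id} -> expenses_p{}", partition_for(user_id));
    }
}
```

Because all of one user's rows hash to the same partition, queries filtered by `user_id` can be pruned to a single partition.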

## 📨 Event-Driven Architecture

Services communicate through NATS with the following event streams:

### User Events

- `user.registered`
- `user.logged_in`
- `user.logged_out`
- `user.profile_updated`

### Expense Events

- `expense.added`
- `expense.updated`
- `expense.deleted`

### Category Events

- `category.created`
- `expense.categorized`

### Report Events

- `report.updated`

### Notification Events

- `notification.sent`
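NATS subjects like these are dot-separated tokens, and subscribers can match them with the single-token `*` wildcard (e.g. `expense.*` covers all three expense events) or the multi-token `>` wildcard. The matcher below is a self-contained illustration of those subject semantics, not code from the services:

```rust
/// Return true if a NATS-style subject matches a subscription pattern.
/// `*` matches exactly one token; a trailing `>` matches one or more
/// remaining tokens, mirroring NATS subject wildcard semantics.
fn subject_matches(pattern: &str, subject: &str) -> bool {
    let mut pat = pattern.split('.');
    let mut sub = subject.split('.');
    loop {
        match (pat.next(), sub.next()) {
            // `>` consumes the rest of the subject (at least one token).
            (Some(">"), Some(_)) => return true,
            // `*` consumes exactly one token.
            (Some("*"), Some(_)) => continue,
            // Literal tokens must be equal.
            (Some(p), Some(s)) if p == s => continue,
            // Both exhausted at the same time: full match.
            (None, None) => return true,
            // Length mismatch or differing token: no match.
            _ => return false,
        }
    }
}

fn main() {
    assert!(subject_matches("expense.*", "expense.added"));
    assert!(!subject_matches("expense.*", "user.registered"));
    println!("all subject checks passed");
}
```

A report service could, for example, subscribe to `expense.*` to recompute aggregates on any expense change, rather than subscribing to each event individually.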

## 🔍 Monitoring & Observability

### Metrics

- Service response times
- Error rates
- Database query performance
- Message broker throughput

### Logging

- Structured JSON logs
- Centralized log aggregation
- Error tracking and alerting

### Tracing

- Distributed request tracing
- Performance bottleneck identification
- Service dependency mapping

## 🧪 Testing

### Unit Tests

```bash
cargo test --package SERVICE_NAME
```

### Integration Tests

```bash
docker-compose -f docker-compose.test.yml up -d
cargo test --test integration
```

### E2E Tests

```bash
# Start all services
docker-compose up -d

# Run E2E tests
npm test  # or your preferred test runner
```

## 🚀 Deployment

### Production Deployment

1. **Build Docker images**

   ```bash
   docker-compose build
   ```

2. **Deploy to Kubernetes**

   Kubernetes deployment manifests are not implemented. See docs/deployment/README.md for the deployment architecture and planning.

> **Note:** CI/CD workflows are configured but disabled for this portfolio project.

### Environment Configuration

For production, update the following:

- Database connection strings
- Redis cluster configuration
- NATS cluster setup
- JWT secrets
- SMTP/Twilio credentials
- Monitoring endpoints

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request

### Development Guidelines

- Follow Rust best practices
- Write comprehensive tests
- Update documentation
- Use conventional commit messages

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🚀 Roadmap & Future Plans

See Future Enhancements for planned features, including:

- Advanced logging with Loki + Promtail
- Distributed tracing with OpenTelemetry
- Kubernetes production deployment
- Machine learning integration
- And more...

## 🙏 Acknowledgments

- Built with the Axum web framework
- Message passing with NATS
- Database operations with SQLx
- Authentication with JWT

For more detailed documentation, see the /docs directory.
