# LeetCode Analyzer

LeetCode Analyzer is a full-stack analytics platform that analyzes any public LeetCode profile and returns practical, data-driven insights.
It combines:
- profile analysis from LeetCode GraphQL data
- historical tracking and trend analytics
- productivity pattern detection
- recommendation generation for learning focus
- optional ML-based prediction (risk, growth, decline)
## Table of Contents

- Features
- Tech Stack
- Architecture
- Project Structure
- Getting Started
- Environment Variables
- API Reference
- Example API Calls
- Testing
- Troubleshooting
- Deployment Notes
- Roadmap
- License
## Features

- Analyze by username or LeetCode profile URL
- Difficulty and topic-based performance breakdown
- Consistency score, growth rate, and topic mastery metrics
- Historical snapshots for trend lines
- Submission event history with topic/difficulty enrichment
- Productivity patterns from activity history
- Goal forecast endpoint (target-based projection)
- CSV export for historical analytics
- Optional ML prediction service integration
- Graceful fallback behavior if MongoDB or Redis is unavailable
## Tech Stack

### Frontend

- React 18
- TypeScript
- Vite
- Tailwind CSS
- Recharts
- Framer Motion
### Backend

- Node.js
- Express
- TypeScript
- Axios
- Mongoose
- Redis client
### Data & Services

- MongoDB
- Redis
- Python FastAPI ML microservice
- Docker + Docker Compose
## Architecture

```
Frontend (React/Vite)
        |
        | HTTP /api/*
        v
Backend API (Express + TypeScript)
        |\
        | \-- Redis (response cache)
        | \-- MongoDB (analysis + snapshots + events)
        | \-- ML Service (FastAPI /predict)
        \
         \-- LeetCode GraphQL (upstream profile data)
```
## Project Structure

```
LeetCode Analysis/
├── docker-compose.yml
├── README.md
├── backend/
│   ├── Dockerfile
│   ├── package.json
│   ├── data/problems.json
│   └── src/
│       ├── config/
│       ├── models/
│       ├── routes/
│       ├── services/
│       └── utils/
├── frontend/
│   ├── Dockerfile
│   ├── package.json
│   └── src/
│       ├── api/
│       ├── components/
│       ├── hooks/
│       └── types/
└── ml_service/
    ├── Dockerfile
    ├── main.py
    ├── requirements.txt
    └── test_predict.py
```
## Getting Started

### Prerequisites

- Docker Desktop (for Docker mode)
- Or local runtimes:
  - Node.js 18+
  - npm 9+
  - Python 3.10+
  - MongoDB (optional but recommended)
  - Redis (optional but recommended)
### Run with Docker

From the repository root:

```bash
docker compose up --build -d
```

Services:
- Frontend: http://localhost:3000
- Backend API: http://localhost:5001
- Backend Health: http://localhost:5001/health
- ML Service: http://localhost:8000
- MongoDB: localhost:27017
- Redis: localhost:6379
Useful commands:

```bash
docker compose ps
docker compose logs -f backend
docker compose logs -f frontend
docker compose logs -f ml-service
docker compose down
```

### Run Locally

Run each service in a separate terminal.
#### Backend

```bash
cd backend
npm install
npm run dev
```

Backend default URL: http://localhost:5000
#### Frontend

```bash
cd frontend
npm install
npm run dev
```

Frontend default URL: http://localhost:5173
#### ML Service

```bash
cd ml_service
pip install -r requirements.txt
uvicorn main:app --host 0.0.0.0 --port 8000
```

ML service URL: http://localhost:8000
## Environment Variables

### Backend

| Variable | Default | Description |
|---|---|---|
| PORT | 5000 | Backend server port |
| NODE_ENV | development | Runtime mode |
| MONGODB_URI | mongodb://localhost:27017/leetcode-analyzer | MongoDB connection string |
| REDIS_URL | redis://localhost:6379 | Redis connection string |
| CACHE_TTL_SECONDS | 3600 | Cache TTL (seconds) |
| FRONTEND_URL | http://localhost:5173 | Allowed CORS origins (comma-separated) |
| ML_SERVICE_URL | http://localhost:8000 | Base URL for ML service |
| ML_TIMEOUT_MS | 5000 | Timeout for ML requests |
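The backend presumably reads these variables once at startup. A minimal sketch of such a loader, using the defaults from the table above (the `loadConfig` function and `BackendConfig` names are hypothetical, not the project's actual code):

```typescript
// Hypothetical config loader illustrating the defaults in the table above.
interface BackendConfig {
  port: number;
  mongodbUri: string;
  redisUrl: string;
  cacheTtlSeconds: number;
  frontendOrigins: string[]; // FRONTEND_URL may be comma-separated
  mlServiceUrl: string;
  mlTimeoutMs: number;
}

function loadConfig(env: Record<string, string | undefined>): BackendConfig {
  return {
    port: Number(env.PORT ?? 5000),
    mongodbUri:
      env.MONGODB_URI ?? "mongodb://localhost:27017/leetcode-analyzer",
    redisUrl: env.REDIS_URL ?? "redis://localhost:6379",
    cacheTtlSeconds: Number(env.CACHE_TTL_SECONDS ?? 3600),
    frontendOrigins: (env.FRONTEND_URL ?? "http://localhost:5173")
      .split(",")
      .map((origin) => origin.trim()),
    mlServiceUrl: env.ML_SERVICE_URL ?? "http://localhost:8000",
    mlTimeoutMs: Number(env.ML_TIMEOUT_MS ?? 5000),
  };
}
```

Splitting `FRONTEND_URL` on commas matches the table's note that the CORS origin list may be comma-separated.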
### Frontend

| Variable | Scope | Default | Description |
|---|---|---|---|
| VITE_BACKEND_URL | dev server proxy (vite.config.ts) | http://localhost:5000 | Proxy target used in local dev |
| VITE_API_URL | build/runtime API base | empty string | If set, Axios uses this full base URL |
Notes:

- In local development, keeping `VITE_API_URL` empty allows relative `/api/*` calls through the Vite proxy.
- In the Docker frontend build, an nginx config proxies `/api/*` to the backend container.
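The base-URL selection described above can be pictured as a small helper (a sketch; `apiBase` is a hypothetical name, not the frontend's actual code):

```typescript
// Hypothetical helper mirroring the behavior described above:
// an explicit VITE_API_URL wins; otherwise requests stay relative
// so the Vite dev proxy (or nginx in Docker) handles /api/*.
function apiBase(viteApiUrl: string | undefined): string {
  if (viteApiUrl && viteApiUrl.length > 0) {
    return viteApiUrl.replace(/\/$/, ""); // drop a trailing slash
  }
  return ""; // empty base keeps calls relative, e.g. "/api/analyze"
}

const analyzeUrl = (base: string) => `${base}/api/analyze`;
```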
### ML Service

| Variable | Default | Description |
|---|---|---|
| ALLOWED_ORIGINS | http://localhost:5000 | CORS allow-list for ML API |
## API Reference

Base URLs:

- Local backend: `http://localhost:5000`
- Docker backend: `http://localhost:5001`
### `GET /health`

Returns backend status and timestamp.
### `POST /api/analyze`

Request body:

```json
{
  "username": "leetcode_user"
}
```

You can also pass a full LeetCode profile URL:

```json
{
  "username": "https://leetcode.com/u/leetcode_user/"
}
```

Response includes:

- core analysis (solved counts, streak, topics, activity)
- computed analytics (consistency, growth, mastery)
- recommendations / learning output
- `mlPrediction` (when the ML service is reachable)
- `cached` flag
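Roughly, a client could type the response like this. This is an illustrative shape only: apart from `mlPrediction` and `cached`, which the list above names, every field name here is a guess based on the description, not the actual schema:

```typescript
// Illustrative response shape; only mlPrediction and cached are
// confirmed field names, the rest are guesses from the docs above.
interface AnalyzeResponse {
  username: string;
  totalSolved: number;              // core analysis
  streak: number;
  topics: Record<string, number>;   // topic -> solved count
  consistencyScore: number;         // computed analytics
  growthRate: number;
  recommendations: string[];        // learning output
  mlPrediction?: unknown;           // present when the ML service is reachable
  cached: boolean;                  // true when served from Redis
}

const sample: AnalyzeResponse = {
  username: "leetcode_user",
  totalSolved: 320,
  streak: 7,
  topics: { "dynamic-programming": 45, graphs: 30 },
  consistencyScore: 0.72,
  growthRate: 1.4,
  recommendations: ["practice graph traversal"],
  cached: false,
};
```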
### `GET /api/history/:username?days=30`

Returns time-series points for analytics trends.

### `GET /api/events/:username?limit=100`

Returns recent accepted submissions with enriched metadata.

### `GET /api/patterns/:username`

Returns productivity insights derived from historical activity.
### `GET /api/forecast/:username?target=500`

Returns target-based progression and estimated completion metrics.
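The projection behind such a forecast is likely simple rate arithmetic; a sketch under that assumption (not the service's actual formula):

```typescript
// Estimate days until a solved-count target is reached, given the
// recent solve rate. Returns null when there is no rate to extrapolate.
function estimateDaysToTarget(
  solved: number,
  target: number,
  solvesPerDay: number
): number | null {
  if (target <= solved) return 0;     // target already met
  if (solvesPerDay <= 0) return null; // no progress to project from
  return Math.ceil((target - solved) / solvesPerDay);
}
```

For example, a user with 300 solved problems averaging 4 accepted solutions per day would reach a 500-problem target in about 50 days.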
### `GET /api/export/:username`

Returns historical analytics as downloadable CSV.
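Serializing snapshot rows to CSV is straightforward; a minimal sketch, where the `SnapshotRow` fields are assumptions about the export's columns, not the real schema:

```typescript
// Hypothetical snapshot row; the real export's columns may differ.
interface SnapshotRow {
  date: string; // ISO date of the snapshot
  totalSolved: number;
  consistencyScore: number;
}

// Join a header line and one comma-separated line per snapshot.
function toCsv(rows: SnapshotRow[]): string {
  const header = "date,totalSolved,consistencyScore";
  const lines = rows.map(
    (r) => `${r.date},${r.totalSolved},${r.consistencyScore}`
  );
  return [header, ...lines].join("\n");
}
```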
## Example API Calls

Using the Docker backend (5001):

```bash
curl -X POST http://localhost:5001/api/analyze \
  -H "Content-Type: application/json" \
  -d '{"username":"leetcode_user"}'

curl "http://localhost:5001/api/history/leetcode_user?days=60"

curl "http://localhost:5001/api/events/leetcode_user?limit=120"

curl "http://localhost:5001/api/forecast/leetcode_user?target=700"
```

## Testing

Start the ML service first, then:

```bash
cd ml_service
python test_predict.py
```

This runs health and prediction checks against http://localhost:8000.
## Troubleshooting

### Frontend cannot reach the backend

Symptoms:

- network errors in the UI
- `/api/*` calls fail

Checks:

- local dev: verify the backend is on `http://localhost:5000`
- Docker: verify the backend is on `http://localhost:5001`
- confirm `VITE_BACKEND_URL` / `VITE_API_URL` usage matches your mode
### History or events return empty data

Possible causes:

- historical snapshots have not been persisted yet
- MongoDB is unavailable
- limited activity for the target profile

Quick checks:

```bash
curl "http://localhost:5001/api/history/<username>?days=30"
curl "http://localhost:5001/api/events/<username>?limit=50"
```

### Redis is unavailable

Behavior:

- the app still works
- response caching is disabled automatically
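This kind of graceful degradation can be pictured as a cache wrapper that treats a missing or failing client as a permanent cache miss. A sketch only: the `KvClient` interface and `FallbackCache` class are hypothetical, not the project's actual code:

```typescript
// Sketch of graceful degradation: when the cache client is absent or
// failing, reads miss and writes are dropped, so analysis still works
// uncached. Interface and class names here are hypothetical.
interface KvClient {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

class FallbackCache {
  constructor(private client: KvClient | null) {}

  async get(key: string): Promise<string | null> {
    if (!this.client) return null; // caching disabled: always miss
    try {
      return await this.client.get(key);
    } catch {
      return null; // treat client errors as a miss
    }
  }

  async set(key: string, value: string, ttlSeconds: number): Promise<void> {
    if (!this.client) return; // silently skip writes
    try {
      await this.client.set(key, value, ttlSeconds);
    } catch {
      /* ignore: caching is best-effort */
    }
  }
}
```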
### MongoDB is unavailable

Behavior:

- live analysis still works
- historical persistence/export features are reduced or unavailable
### Docker build issues

Use plain progress and no cache to inspect failures:

```bash
docker compose build --no-cache --progress=plain
```

## Deployment Notes

### Frontend

Any static host is suitable (Vercel, Netlify, nginx). Set `VITE_API_URL` to the deployed backend URL if not using a same-origin proxy.
### Backend

Any Node.js host is suitable (Render, Railway, Fly.io, Azure App Service, etc.).

Build command:

```bash
npm install && npm run build
```

Start command:

```bash
node dist/index.js
```

Required external services for full functionality:

- MongoDB instance
- Redis instance
- ML service endpoint (optional but recommended)
## Roadmap

- User authentication and saved dashboards
- Historical range filtering and comparisons
- Deeper topic-level recommendation engine
- Scheduled weekly progress digests
- More robust automated test coverage
## License

This project is intended for educational and portfolio use.