A unified meta-server application that enables users to manage and switch between multiple Large Language Model (LLM) providers through a single interface.
Large Language Models (LLMs) come in two main categories: free/open-source and commercial. Depending on the task, a free LLM may produce satisfactory results, while other tasks require a commercial LLM service to meet quality expectations.
This application serves as an LLM API Call Meta-server that:
- Allows users to register API keys for multiple LLM providers
- Provides a unified API interface for interacting with different LLMs
- Enables seamless switching between LLM providers based on task requirements
- Supports both free and commercial LLM services
The following LLM providers are supported:

- OpenAI - Commercial LLM service (GPT models: GPT-4o, GPT-4o Mini, GPT-4 Turbo, GPT-3.5 Turbo)
- Anthropic - Commercial LLM service (Claude models: Claude Sonnet 4.5, Claude Haiku 4.5, Claude Opus 4.1, Claude Sonnet 3.7, Claude 3.5 Haiku, Claude 3 Haiku)
- Google - Commercial LLM service (Gemini models: Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash)
- Ollama - Free/open-source local LLM runtime (no API key required)
- Ruby 3.4.7 (or a compatible version, as specified in `.ruby-version`)
This application is built with Rails 8.0 and uses the following key dependencies:
- Database: SQLite 3 (>= 2.1)
- Web Server: Puma
- Authentication: Devise with OAuth support (Google OAuth2)
- Asset Pipeline: Propshaft
- Frontend:
  - Hotwire (Turbo & Stimulus)
  - Tailwind CSS
  - Import maps for JavaScript
- LLM Interface: llm.rb gem
- Encryption: AWS KMS for API key encryption
- HTTP Client: HTTParty for external API calls
- Token Verification: Google Auth library for ID token verification
- CORS: Rack::Cors for cross-origin resource sharing
No external middleware services (Redis, PostgreSQL, etc.) are required for basic operation.
- Install Ruby 3.4.7 (or use a Ruby version manager like rbenv or rvm)
- Install SQLite 3
- Install Node.js (for asset compilation)
- Clone the repository

  ```bash
  git clone <repository-url>
  cd llm_meta_server
  ```
- Install dependencies

  ```bash
  bundle install
  ```
- Set up environment variables

  Create a `.env` file in the root directory with the following required variables:

  ```bash
  # AWS Configuration (required for API key encryption)
  AWS_ACCESS_KEY_ID=your_aws_access_key_id
  AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
  AWS_REGION=your_aws_region

  # AWS KMS Key for encryption (required)
  # Use Key ID format (recommended): 1234abcd-12ab-34cd-56ef-1234567890ab
  # or Alias format: alias/llm-api-meta-server-key
  KMS_KEY_ID=your_kms_key_id

  # Google OAuth2 Configuration (required for user authentication)
  GOOGLE_CLIENT_ID=your_google_client_id
  GOOGLE_CLIENT_SECRET=your_google_client_secret

  # Allowed Google Client IDs (comma-separated, required)
  # Include all Google client IDs of external services authorized to use this LLM Meta Server
  ALLOWED_GOOGLE_CLIENT_IDS=external_service_1_client_id,external_service_2_client_id

  # Application Host (required)
  # The base URL where your application is hosted
  APP_HOST=http://localhost:3000
  ```
Environment Variable Descriptions:

| Variable | Required | Description |
|----------|----------|-------------|
| `AWS_ACCESS_KEY_ID` | Yes | AWS access key for KMS encryption |
| `AWS_SECRET_ACCESS_KEY` | Yes | AWS secret key for KMS encryption |
| `AWS_REGION` | Yes | AWS region where your KMS key is located |
| `KMS_KEY_ID` | Yes | AWS KMS key ID or alias for encrypting API keys |
| `GOOGLE_CLIENT_ID` | Yes | Google OAuth2 client ID for user authentication |
| `GOOGLE_CLIENT_SECRET` | Yes | Google OAuth2 client secret |
| `ALLOWED_GOOGLE_CLIENT_IDS` | Yes | Comma-separated list of Google client IDs for external services authorized to use this LLM Meta Server |
| `APP_HOST` | Yes | Base URL of your application |

To obtain the required Google OAuth2 credentials:
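Before booting the app, it can be useful to confirm that every required variable above is actually set. The following is a hypothetical pre-flight script (not part of the application) that reports any missing variables:

```ruby
# Hypothetical pre-flight check (not part of the app): verify that the
# required variables listed above are present before booting Rails.
REQUIRED_VARS = %w[
  AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_REGION KMS_KEY_ID
  GOOGLE_CLIENT_ID GOOGLE_CLIENT_SECRET ALLOWED_GOOGLE_CLIENT_IDS APP_HOST
].freeze

# Returns the names of required variables that are unset or blank.
def missing_env_vars(env = ENV)
  REQUIRED_VARS.select { |name| env[name].to_s.strip.empty? }
end

missing = missing_env_vars
if missing.empty?
  puts "All required environment variables are set."
else
  warn "Missing environment variables: #{missing.join(', ')}"
end
```

Run it with `ruby check_env.rb` after creating your `.env` file (note that plain `ruby` does not load `.env` automatically; use `dotenv` or export the variables in your shell first).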
- Create a Google Cloud Project (if you don't have one):
  - Go to the Google Cloud Console
  - Create a new project or select an existing one
- Enable the Google People API (the legacy Google+ API has been shut down):
  - Navigate to "APIs & Services" > "Library"
  - Search for "Google People API" and enable it
- Create OAuth 2.0 Credentials:
  - Go to "APIs & Services" > "Credentials"
  - Click "Create Credentials" > "OAuth 2.0 Client IDs"
  - Choose "Web application" as the application type
- Configure Authorized Redirect URIs:

  Add the following redirect URIs to your OAuth client configuration:

  For Development (localhost):

  ```
  http://localhost:3000/users/auth/google_oauth2/callback
  ```

  For Production:

  ```
  https://yourdomain.com/users/auth/google_oauth2/callback
  ```

  Replace `yourdomain.com` with your actual production domain.
Get Your Credentials:
- After creating the OAuth client, copy the "Client ID" and "Client Secret"
- Use these values for
GOOGLE_CLIENT_IDandGOOGLE_CLIENT_SECRET
- Configure Allowed Client IDs:
  - `ALLOWED_GOOGLE_CLIENT_IDS` should include the Google client IDs of external services that are authorized to use this LLM Meta Server
  - This is different from `GOOGLE_CLIENT_ID`, which is used for user authentication on this server
  - Include the client IDs of all external applications/services that will consume this meta-server's API:

    ```bash
    ALLOWED_GOOGLE_CLIENT_IDS=external_app_client_id,mobile_app_client_id,web_service_client_id
    ```
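On the server side, a comma-separated variable like this is typically split into an allow-list and compared against the `aud` (audience) claim of each incoming Google ID token. A minimal sketch of that parsing and check, assuming this is how the allow-list is consumed (the project's actual verification code may differ):

```ruby
# Parse the comma-separated allow-list from the environment into an array.
# (Illustrative sketch; the server's real verification code may differ.)
def parse_allowed_client_ids(raw)
  raw.to_s.split(",").map(&:strip).reject(&:empty?)
end

# A token is acceptable only if its audience (aud) is in the allow-list.
def allowed_client?(aud, allowed_ids)
  allowed_ids.include?(aud)
end

allowed = parse_allowed_client_ids("external_app_client_id, mobile_app_client_id")
puts allowed_client?("external_app_client_id", allowed) # => true
```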
Important Security Notes:
- Never commit OAuth credentials to version control
- Use different OAuth clients for development and production environments
- Regularly rotate your client secrets for production applications
- Set up the database

  ```bash
  bin/rails db:setup
  ```

  This command creates the database, runs migrations, and loads seed data.
- Start the development environment

  ```bash
  bin/dev
  ```

  This command starts all development services defined in `Procfile.dev`, including:

  - Rails web server (available at http://localhost:3000)
  - Tailwind CSS watch mode (for automatic stylesheet compilation)

  All services will run in a single terminal with color-coded output.
If you prefer to run services separately:

```bash
# Run the web server
bin/rails server

# Run Tailwind CSS watch (in a separate terminal)
bin/rails tailwindcss:watch
```

The project provides `db/seeds.rb` to populate required master data (LLM providers and their models). The seed script is idempotent and safe to rerun.

```bash
# Run seeds manually
bin/rails db:seed

# Seeds are also loaded as part of database setup
bin/rails db:setup

# Seed the production database
RAILS_ENV=production bin/rails db:seed
```

The seed script creates:

- LLM platforms: OpenAI, Anthropic, Google, Ollama
- Their available models based on `LlmModelMap`

Notes:

- Ensure your database is migrated before seeding (`bin/rails db:migrate`).
- Seeding does not require API keys; it only creates platform and model records.
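The idempotency of a seed script like this usually comes from a find-or-create pattern: records are created only when absent, so reruns are no-ops. A plain-Ruby sketch of the pattern, using an in-memory stand-in for the database (the platform names come from the list above; `seed!` and the `db` hash are illustrative, not the project's actual seed code):

```ruby
# In-memory sketch of an idempotent seeding pattern (no Rails required).
PLATFORMS = ["OpenAI", "Anthropic", "Google", "Ollama"].freeze

def seed!(db)
  PLATFORMS.each do |name|
    # Mirrors ActiveRecord's find_or_create_by!: create only when absent.
    db[:platforms] << name unless db[:platforms].include?(name)
  end
  db
end

db = { platforms: [] }
seed!(db)
seed!(db) # safe to rerun: no duplicate platforms are created
puts db[:platforms].size # => 4
```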
This project uses RSpec for testing.
```bash
# Run all tests
bin/spec

# Run a specific test file
bin/spec spec/models/llm_api_key_spec.rb

# Run tests with coverage
COVERAGE=true bin/spec
```

- Navigate to the application home page
- Sign in using your Google account
- After authentication, you'll be redirected to your user profile
- From your profile page, navigate to "LLM API Keys"
- Add API keys for your preferred LLM providers:
  - Select the provider (OpenAI, Anthropic, or Google)
  - Enter your API key
  - Add an optional description
- Each API key will be assigned a unique UUID for API access
Note: Ollama does not require API key registration as it runs locally.
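The UUID assigned to each key is a standard identifier; in Ruby these are typically random v4 UUIDs generated with `SecureRandom` (illustrative only; the app's own generation code is not shown here):

```ruby
require "securerandom"

# Generate a random v4 UUID of the kind used to address an API key,
# e.g. 550e8400-e29b-41d4-a716-446655440000 (format only; values are random).
uuid = SecureRandom.uuid
puts uuid
```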
Get a list of all available LLM services and their models:
```
GET /api/llms
```

Authentication: Requires Google ID Token authentication.
Example Request:

```bash
curl -X GET "https://your-server.com/api/llms" \
  -H "Authorization: Bearer {your_google_id_token}" \
  -H "Content-Type: application/json"
```

Example Response:

```json
{
  "llms": [
    {
      "id": 1,
      "name": "OpenAI",
      "created_at": "2025-01-01T00:00:00.000Z",
      "updated_at": "2025-01-01T00:00:00.000Z",
      "models": [
        {
          "name": "gpt-4o",
          "display_name": "GPT-4o",
          "created_at": "2025-01-01T00:00:00.000Z",
          "updated_at": "2025-01-01T00:00:00.000Z"
        },
        {
          "name": "gpt-4o-mini",
          "display_name": "GPT-4o Mini",
          "created_at": "2025-01-01T00:00:00.000Z",
          "updated_at": "2025-01-01T00:00:00.000Z"
        }
      ]
    },
    {
      "id": 2,
      "name": "Anthropic",
      "created_at": "2025-01-01T00:00:00.000Z",
      "updated_at": "2025-01-01T00:00:00.000Z",
      "models": [
        {
          "name": "claude-sonnet-4-5",
          "display_name": "Claude Sonnet 4.5",
          "created_at": "2025-01-01T00:00:00.000Z",
          "updated_at": "2025-01-01T00:00:00.000Z"
        }
      ]
    },
    {
      "llm_type": "ollama",
      "description": "[Ollama] Local Ollama (no API key required)",
      "uuid": "ollama-local",
      "available_models": [
        {
          "label": "gpt-oss:20b",
          "value": "gpt-oss-20b"
        }
      ]
    }
  ]
}
```

Get a list of your registered API keys:
```
GET /api/llm_api_keys
```

Authentication: Requires Google ID Token authentication.
Example Request:

```bash
curl -X GET "https://your-server.com/api/llm_api_keys" \
  -H "Authorization: Bearer {your_google_id_token}" \
  -H "Content-Type: application/json"
```

Example Response:

```json
{
  "llm_api_keys": [
    {
      "uuid": "550e8400-e29b-41d4-a716-446655440000",
      "llm_type": "openai",
      "description": "[OpenAI] Production Key",
      "available_models": [
        {
          "label": "GPT-4o",
          "value": "gpt-4o"
        },
        {
          "label": "GPT-4o Mini",
          "value": "gpt-4o-mini"
        }
      ]
    },
    {
      "uuid": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
      "llm_type": "anthropic",
      "description": "[Anthropic] Dev Key",
      "available_models": [
        {
          "label": "Claude Sonnet 4.5",
          "value": "claude-sonnet-4-5"
        }
      ]
    }
  ]
}
```

Use the unified API endpoint to make chat completion requests:
```
POST /api/llm_api_keys/:uuid/models/:model_name/chats
```

Parameters:

- `uuid`: The UUID of your registered API key (or `ollama-local` for Ollama)
- `model_name`: The model name (e.g., `gpt-4o`, `claude-sonnet-4-5`, `gemini-2-5-pro`)
- `prompt`: Your chat prompt (in the request body)
Authentication: Requires Google ID Token authentication.
Example Request:

```bash
curl -X POST "https://your-server.com/api/llm_api_keys/{uuid}/models/gpt-4o/chats" \
  -H "Authorization: Bearer {your_google_id_token}" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, how are you?"}'
```

Example Response:

```json
{
  "response": {
    "message": "Hello! I'm doing well, thank you for asking. How can I help you today?"
  }
}
```

Note: For Ollama (no API key required), use `uuid=ollama-local` in the API endpoint.
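The same chat request can also be made from Ruby with the standard library's `Net::HTTP`. This is a sketch, not an official client; the host, key UUID, and token below are placeholders to replace with your own values:

```ruby
require "net/http"
require "json"
require "uri"

# Placeholder values: substitute your own server, key UUID, and Google ID token.
BASE_URL = "https://your-server.com"
KEY_UUID = "550e8400-e29b-41d4-a716-446655440000" # or "ollama-local" for Ollama
ID_TOKEN = "your_google_id_token"

# Build the chat endpoint URI for a given model name.
def chat_uri(model)
  URI("#{BASE_URL}/api/llm_api_keys/#{KEY_UUID}/models/#{model}/chats")
end

# POST a prompt and return the assistant's message text.
def chat(model:, prompt:)
  uri = chat_uri(model)
  req = Net::HTTP::Post.new(uri)
  req["Authorization"] = "Bearer #{ID_TOKEN}"
  req["Content-Type"] = "application/json"
  req.body = { prompt: prompt }.to_json
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(req)
  end
  JSON.parse(res.body).dig("response", "message")
end

# chat(model: "gpt-4o", prompt: "Hello, how are you?") # requires a running server
```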
- API keys are encrypted using AWS KMS before storage
- User authentication is handled through OAuth 2.0 (Google)
- API access requires Google ID Token authentication
- All sensitive configuration is managed through environment variables
This project follows Ruby style guidelines enforced by RuboCop:
```bash
# Run RuboCop
bin/rubocop

# Auto-fix issues
bin/rubocop -A
```

Run Brakeman to check for security vulnerabilities:

```bash
bin/brakeman
```

[Specify your license here]
[Add contributing guidelines if applicable]