This guide explains how to add and configure custom LLM providers that are not available by default in ResilientLLM.
ResilientLLM uses a ProviderRegistry to manage LLM providers. By default, it includes support for:
- OpenAI
- Anthropic (Claude)
- Google (Gemini)
- Ollama
You can extend this by adding your own providers, such as:
- Self-hosted models (e.g., vLLM, Text Generation Inference)
- OpenAI-compatible APIs (e.g., Together AI, Groq, Perplexity)
- Custom API endpoints
- Other LLM providers with compatible interfaces
The simplest way to add a custom provider is with `ProviderRegistry.configure()`:
```js
import { ResilientLLM, ProviderRegistry } from 'resilient-llm';

// Configure a new provider
ProviderRegistry.configure('my-provider', {
  chatApiUrl: 'https://api.example.com/v1/chat/completions',
  defaultModel: 'my-model-v1',
  envVarNames: ['MY_PROVIDER_API_KEY'],
  authConfig: {
    type: 'header',
    headerName: 'Authorization',
    headerFormat: 'Bearer {key}'
  }
});

// Use it with ResilientLLM
const llm = new ResilientLLM({
  aiService: 'my-provider',
  model: 'my-model-v1'
});

const response = await llm.chat([
  { role: 'user', content: 'Hello!' }
]);
```

The API endpoint for chat completions:

```js
chatApiUrl: 'https://api.example.com/v1/chat/completions'
```

Alternative: use `baseUrl` for convenience. For OpenAI-compatible APIs, it will automatically append `/v1/chat/completions`:

```js
baseUrl: 'https://api.example.com' // Becomes https://api.example.com/v1/chat/completions
```

For Ollama-compatible APIs, `baseUrl` will append `/api/generate`:

```js
baseUrl: 'http://localhost:11434' // Becomes http://localhost:11434/api/generate
```

For basic API key setup with built-in providers, see the API Key Configuration section in the reference documentation.
Controls how API keys are sent to the provider. This is the default authentication method used when no endpoint-specific config matches.
Header-based authentication (default):
```js
authConfig: {
  type: 'header',
  headerName: 'Authorization', // Header name
  headerFormat: 'Bearer {key}', // Format template (use {key} placeholder)
  optional: false // Whether the API key is optional
}
```

Query parameter authentication:

```js
authConfig: {
  type: 'query',
  queryParam: 'key', // Query parameter name
  optional: false
}
```

Optional authentication:

```js
authConfig: {
  type: 'header',
  headerName: 'Authorization',
  headerFormat: 'Bearer {key}',
  optional: true // API key not required
}
```

An optional map of URL patterns to endpoint-specific authentication configurations. Useful when different endpoints require different authentication methods (e.g., header auth for chat, query param for models).
The system automatically detects which config to use by matching URL patterns. Longer/more specific patterns take precedence.
```js
endpointAuthConfigs: {
  '/chat/completions': {
    type: 'header',
    headerName: 'Authorization',
    headerFormat: 'Bearer {key}'
  },
  '/models': {
    type: 'query',
    queryParam: 'key'
  },
  '/v1/custom': {
    type: 'header',
    headerName: 'x-api-key',
    headerFormat: '{key}'
  }
}
```

How it works:

- When making a request, the system checks whether the API URL contains any of the patterns in `endpointAuthConfigs`
- If a match is found, that endpoint-specific config is used
- If no match is found, it falls back to `authConfig`
- Patterns are matched by substring, with longer patterns taking precedence
Example use case: Google's API uses header authentication for chat endpoints but query parameter authentication for models endpoints.
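As a rough illustration of the matching rules described above (a hypothetical sketch, not ResilientLLM's actual implementation), endpoint selection can be thought of as a longest-substring match with a fallback:

```js
// Hypothetical sketch of endpoint-specific auth selection: substring match
// against the request URL, longest pattern wins, default authConfig as fallback.
function resolveAuthConfig(url, endpointAuthConfigs, authConfig) {
  const matches = Object.keys(endpointAuthConfigs || {})
    .filter((pattern) => url.includes(pattern))
    .sort((a, b) => b.length - a.length); // longer patterns take precedence
  return matches.length > 0 ? endpointAuthConfigs[matches[0]] : authConfig;
}

const endpointAuthConfigs = {
  '/chat/completions': { type: 'header', headerName: 'Authorization', headerFormat: 'Bearer {key}' },
  '/models': { type: 'query', queryParam: 'key' }
};
const authConfig = { type: 'query', queryParam: 'key' };

resolveAuthConfig('https://api.example.com/v1/chat/completions', endpointAuthConfigs, authConfig);
// → the header config registered under '/chat/completions'
```

A chat URL picks up the header config, while a URL matching no pattern falls back to the default query-parameter config.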
API keys can be provided in multiple ways, with the following priority order (highest to lowest):

1. `llmOptions.apiKey`: passed directly in the `chat()` method call (highest priority, per-request)
2. `ProviderRegistry.configure()` with `apiKey`: a direct API key in the provider configuration
3. Environment variables: via the `envVarNames` configuration (lowest priority)
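The priority order can be sketched as follows (a hypothetical illustration, not ResilientLLM's actual code):

```js
// Hypothetical sketch of API key resolution: per-request key first, then
// the configured key, then environment variables checked in order.
function resolveApiKey(llmOptions, providerConfig, env) {
  if (llmOptions && llmOptions.apiKey) return llmOptions.apiKey; // 1. per-request
  if (providerConfig.apiKey) return providerConfig.apiKey;       // 2. ProviderRegistry.configure()
  for (const name of providerConfig.envVarNames || []) {         // 3. env vars, checked in order
    if (env[name]) return env[name];
  }
  return null; // no key available
}

resolveApiKey(
  { apiKey: 'sk-per-request' },
  { apiKey: 'sk-configured', envVarNames: ['MY_PROVIDER_API_KEY'] },
  { MY_PROVIDER_API_KEY: 'sk-from-env' }
);
// → 'sk-per-request'
```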
You can provide the API key directly when configuring a provider:
```js
ProviderRegistry.configure('my-provider', {
  chatApiUrl: 'https://api.example.com/v1/chat/completions',
  apiKey: 'sk-...' // Stored securely, not serialized
});
```

You can also override the API key for individual requests by passing it in `llmOptions`:
```js
const response = await llm.chat(conversationHistory, {
  aiService: 'my-provider',
  apiKey: 'sk-custom-key-for-this-request' // Takes precedence over ProviderRegistry
});
```

Environment variable names to check for API keys (checked in order, lowest priority):

```js
envVarNames: ['MY_PROVIDER_API_KEY', 'MY_PROVIDER_KEY']
```

The default model identifier to use:
```js
defaultModel: 'my-model-v1'
```

Optional URL used to fetch available models:

```js
modelsApiUrl: 'https://api.example.com/v1/models'
```

If `modelsApiUrl` is not provided, you can still use the provider, but model discovery won't work.
Controls how messages are formatted and responses are parsed.
Message Format:
```js
chatConfig: {
  messageFormat: 'openai' // or 'anthropic'
}
```

- `'openai'`: System messages stay in the messages array (default for most providers)
- `'anthropic'`: System messages are extracted and sent separately
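To make the difference concrete, here is a hypothetical sketch (not the library's code) of what the `'anthropic'` format implies: system messages are pulled out of the array and sent as a separate top-level field:

```js
// Hypothetical sketch: extract system messages for an Anthropic-style payload.
function toAnthropicFormat(messages) {
  const system = messages
    .filter((m) => m.role === 'system')
    .map((m) => m.content)
    .join('\n');
  const rest = messages.filter((m) => m.role !== 'system');
  return { system, messages: rest };
}

const { system, messages } = toAnthropicFormat([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello!' }
]);
// system === 'You are a helpful assistant.'; messages now holds only the user turn
```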
Response Parsing:
```js
chatConfig: {
  responseParsePath: 'choices[0].message.content' // Path to extract content
}
```

Common paths:

- OpenAI-compatible: `'choices[0].message.content'`
- Anthropic: `'content[0].text'`
- Ollama: `'response'`
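A path string like these can be resolved against a response object roughly as follows (a hypothetical sketch, not the library's actual parser):

```js
// Hypothetical sketch: resolve a responseParsePath against a response object.
function getByPath(obj, path) {
  // 'choices[0].message.content' -> ['choices', '0', 'message', 'content']
  const keys = path.replace(/\[(\d+)\]/g, '.$1').split('.');
  return keys.reduce((cur, key) => (cur == null ? undefined : cur[key]), obj);
}

const openaiStyle = { choices: [{ message: { content: 'Hi there!' } }] };
const anthropicStyle = { content: [{ text: 'Hi there!' }] };

getByPath(openaiStyle, 'choices[0].message.content'); // → 'Hi there!'
getByPath(anthropicStyle, 'content[0].text');         // → 'Hi there!'
```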
Tool Schema:
```js
chatConfig: {
  toolSchemaType: 'openai' // or 'anthropic'
}
```

- `'openai'`: Tools use the `parameters` field
- `'anthropic'`: Tools use the `input_schema` field
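For comparison, here is the same tool expressed in both shapes (the `get_weather` tool itself is a made-up illustration; only the `parameters` vs. `input_schema` field names come from the convention above):

```js
// 'openai' schema type: the JSON Schema lives under `parameters`.
const openaiStyleTool = {
  name: 'get_weather',
  description: 'Get the weather for a city',
  parameters: {
    type: 'object',
    properties: { city: { type: 'string' } },
    required: ['city']
  }
};

// 'anthropic' schema type: the same schema lives under `input_schema`.
const anthropicStyleTool = {
  name: 'get_weather',
  description: 'Get the weather for a city',
  input_schema: {
    type: 'object',
    properties: { city: { type: 'string' } },
    required: ['city']
  }
};
```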
Controls how model lists are parsed from the API response.
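As intuition for the fields below, a hypothetical sketch (not ResilientLLM's actual parser) of how such a config might be applied:

```js
// Hypothetical sketch: map a raw models response through a parseConfig.
function parseModels(rawResponse, parseConfig) {
  const list = rawResponse[parseConfig.modelsPath] || [];
  return list.map((m) => {
    let id = m[parseConfig.idField];
    if (parseConfig.idPrefix && id.startsWith(parseConfig.idPrefix)) {
      id = id.slice(parseConfig.idPrefix.length); // strip e.g. 'models/'
    }
    return { id, name: m[parseConfig.nameField] };
  });
}

const raw = { models: [{ name: 'models/my-model-v1' }] };
parseModels(raw, { modelsPath: 'models', idField: 'name', nameField: 'name', idPrefix: 'models/' });
// → [{ id: 'my-model-v1', name: 'models/my-model-v1' }]
```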
```js
parseConfig: {
  modelsPath: 'data', // Path to the models array (e.g., 'data', 'models', 'items')
  idField: 'id', // Field name for the model ID
  nameField: 'id', // Field name for the model name
  displayNameField: 'display_name', // Field name for the display name (optional)
  contextWindowField: 'inputTokenLimit', // Field name for the context window (optional)
  idPrefix: null // Prefix to strip from model IDs (e.g., 'models/')
}
```

A human-readable name for the provider:
```js
displayName: 'My Custom Provider'
```

A map of URL patterns to endpoint-specific authentication configurations. See the Endpoint-Specific Authentication section above for details.

API version header (if required):

```js
apiVersion: '2023-06-01'
```

Additional HTTP headers:

```js
customHeaders: {
  'X-Custom-Header': 'value',
  'User-Agent': 'MyApp/1.0'
}
```

Enable or disable the provider:
```js
active: true // or false to disable
```

Example: Together AI (an OpenAI-compatible API):

```js
import { ProviderRegistry } from 'resilient-llm';

ProviderRegistry.configure('together', {
  chatApiUrl: 'https://api.together.xyz/v1/chat/completions',
  modelsApiUrl: 'https://api.together.xyz/v1/models',
  defaultModel: 'meta-llama/Llama-3-70b-chat-hf',
  envVarNames: ['TOGETHER_API_KEY'],
  displayName: 'Together AI',
  authConfig: {
    type: 'header',
    headerName: 'Authorization',
    headerFormat: 'Bearer {key}'
  },
  chatConfig: {
    messageFormat: 'openai',
    responseParsePath: 'choices[0].message.content',
    toolSchemaType: 'openai'
  },
  parseConfig: {
    modelsPath: 'data',
    idField: 'id',
    nameField: 'id'
  }
});
```

Example: a self-hosted vLLM server:

```js
import { ProviderRegistry } from 'resilient-llm';

ProviderRegistry.configure('vllm', {
  baseUrl: 'http://localhost:8000', // vLLM default port
  defaultModel: 'meta-llama/Llama-3-70b-chat-hf',
  displayName: 'vLLM (Local)',
  authConfig: {
    type: 'header',
    headerName: 'Authorization',
    headerFormat: 'Bearer {key}',
    optional: true // vLLM may not require auth
  },
  chatConfig: {
    messageFormat: 'openai',
    responseParsePath: 'choices[0].message.content',
    toolSchemaType: 'openai'
  }
});
```

Example: a custom Anthropic-compatible endpoint:

```js
import { ProviderRegistry } from 'resilient-llm';

ProviderRegistry.configure('custom-anthropic', {
  chatApiUrl: 'https://api.custom-anthropic.com/v1/messages',
  modelsApiUrl: 'https://api.custom-anthropic.com/v1/models',
  defaultModel: 'custom-claude-v1',
  envVarNames: ['CUSTOM_ANTHROPIC_API_KEY'],
  displayName: 'Custom Anthropic',
  customHeaders: {
    'anthropic-version': '2023-06-01'
  },
  authConfig: {
    type: 'header',
    headerName: 'x-api-key',
    headerFormat: '{key}'
  },
  chatConfig: {
    messageFormat: 'anthropic',
    responseParsePath: 'content[0].text',
    toolSchemaType: 'anthropic'
  },
  parseConfig: {
    modelsPath: 'data',
    idField: 'id',
    nameField: 'id',
    displayNameField: 'display_name'
  }
});
```

Some providers require different authentication methods for different endpoints. Use `endpointAuthConfigs` to configure endpoint-specific authentication:
```js
import { ProviderRegistry } from 'resilient-llm';

ProviderRegistry.configure('custom-google', {
  chatApiUrl: 'https://api.example.com/v1/chat/completions',
  modelsApiUrl: 'https://api.example.com/v1/models',
  defaultModel: 'custom-model',
  envVarNames: ['CUSTOM_API_KEY'],
  displayName: 'Custom Google-Style',
  // Default auth config (used when no endpoint pattern matches)
  authConfig: {
    type: 'query',
    queryParam: 'key'
  },
  // Endpoint-specific auth configs
  endpointAuthConfigs: {
    '/chat/completions': {
      type: 'header',
      headerName: 'Authorization',
      headerFormat: 'Bearer {key}'
    },
    '/models': {
      type: 'query',
      queryParam: 'key'
    }
  },
  chatConfig: {
    messageFormat: 'openai',
    responseParsePath: 'choices[0].message.content',
    toolSchemaType: 'openai'
  },
  parseConfig: {
    modelsPath: 'models',
    idField: 'name',
    nameField: 'name',
    displayNameField: 'displayName',
    contextWindowField: 'inputTokenLimit',
    idPrefix: 'models/'
  }
});
```

In this example:

- The chat endpoint (`/chat/completions`) uses header authentication
- The models endpoint (`/models`) uses query parameter authentication
- Any other endpoints fall back to the default `authConfig` (query parameter)
Example: a local Ollama instance:

```js
import { ProviderRegistry } from 'resilient-llm';

ProviderRegistry.configure('local-ollama', {
  baseUrl: 'http://localhost:11434', // Auto-generates /api/generate and /api/tags
  defaultModel: 'llama3.1:8b',
  displayName: 'Local Ollama',
  authConfig: {
    type: 'header',
    headerName: 'Authorization',
    headerFormat: 'Bearer {key}',
    optional: true
  },
  chatConfig: {
    messageFormat: 'openai',
    responseParsePath: 'response',
    toolSchemaType: 'openai'
  },
  parseConfig: {
    modelsPath: 'models',
    idField: 'name',
    nameField: 'name'
  }
});
```

Once configured, use your custom provider just like any built-in provider:
```js
import { ResilientLLM, ProviderRegistry } from 'resilient-llm';

// Configure the provider (do this once, typically at app startup)
ProviderRegistry.configure('my-provider', { /* ... */ });

// Use with ResilientLLM
const llm = new ResilientLLM({
  aiService: 'my-provider',
  model: 'my-model'
});

const response = await llm.chat([
  { role: 'user', content: 'Hello!' }
]);

// Or override the API key per request
const responseWithKey = await llm.chat([
  { role: 'user', content: 'Hello!' }
], {
  apiKey: 'sk-custom-key-for-this-request'
});
```

You can override the provider and API key for individual requests:
```js
const llm = new ResilientLLM({ aiService: 'openai' });

// Use a custom provider for this request
const response = await llm.chat(conversationHistory, {
  aiService: 'my-provider',
  model: 'my-model'
});

// Override both the provider and the API key for this request
const overriddenResponse = await llm.chat(conversationHistory, {
  aiService: 'my-provider',
  model: 'my-model',
  apiKey: 'sk-custom-key-here' // Overrides ProviderRegistry and env vars
});
```

If you've configured `modelsApiUrl`, you can fetch available models:
```js
import { ProviderRegistry } from 'resilient-llm';

// Fetch models for your custom provider
const models = await ProviderRegistry.getModels('my-provider');
console.log(models);
// [
//   { id: 'model-1', provider: 'my-provider', name: 'Model 1', ... },
//   { id: 'model-2', provider: 'my-provider', name: 'Model 2', ... }
// ]

// Get a specific model
const model = await ProviderRegistry.getModel('my-provider', 'model-1');
```

Configure or update a provider. This merges with the existing configuration:
```js
ProviderRegistry.configure('my-provider', {
  chatApiUrl: 'https://api.example.com/v1/chat/completions',
  defaultModel: 'my-model'
});
```

Get a provider's configuration (without the API key):

```js
const config = ProviderRegistry.get('my-provider');
console.log(config.chatApiUrl);
```

List all configured providers:
```js
// List all providers
const all = ProviderRegistry.list();

// List only active providers
const active = ProviderRegistry.list({ active: true });
```

Check whether an API key is available for a provider (without exposing the key):

```js
const hasKey = ProviderRegistry.hasApiKey('my-provider');
if (hasKey) {
  console.log('API key is configured');
}
```

Fetch models from the provider's API:

```js
const models = await ProviderRegistry.getModels('my-provider');
```

Clear cached models:

```js
// Clear the cache for a specific provider
ProviderRegistry.clearCache('my-provider');

// Clear all caches
ProviderRegistry.clearCache();
```

Error: `Invalid provider specified: "my-provider"`
Solution: Ensure you've called ProviderRegistry.configure() before using the provider:
```js
// Do this first
ProviderRegistry.configure('my-provider', { /* ... */ });

// Then use it
const llm = new ResilientLLM({ aiService: 'my-provider' });
```

Error: `MY_PROVIDER_API_KEY is not set for provider "my-provider"`
Solutions:

- Per-request (highest priority): pass the API key in `llmOptions`:

  ```js
  const response = await llm.chat(conversationHistory, { aiService: 'my-provider', apiKey: 'sk-...' });
  ```

- Via ProviderRegistry: provide the key when configuring:

  ```js
  ProviderRegistry.configure('my-provider', { apiKey: 'sk-...' });
  ```

- Via environment variable: set the environment variable:

  ```sh
  export MY_PROVIDER_API_KEY=sk-...
  ```

- Mark auth as optional: if the provider doesn't require authentication:

  ```js
  authConfig: { optional: true }
  ```
Symptom: Responses are empty or malformed
Solution: Check and adjust chatConfig.responseParsePath:
```js
// Try different paths based on your API response
chatConfig: {
  responseParsePath: 'choices[0].message.content' // OpenAI-style
  // or: responseParsePath: 'content[0].text'     // Anthropic-style
  // or: responseParsePath: 'response'            // Ollama-style
}
```

Inspect the actual API response to determine the correct path:

```js
// Temporarily log the response
const response = await fetch(apiUrl, { /* ... */ });
const data = await response.json();
console.log(JSON.stringify(data, null, 2));
```

Symptom: `getModels()` returns an empty array
Solutions:

- Ensure `modelsApiUrl` is configured correctly
- Check that the API key is valid
- Verify the API response format matches `parseConfig`
- Check the browser console for errors
Symptom: System messages not working correctly
Solution: Adjust chatConfig.messageFormat:
```js
// If your API expects system messages in the messages array
chatConfig: {
  messageFormat: 'openai'
}

// If your API expects system messages separately
chatConfig: {
  messageFormat: 'anthropic'
}
```

Best practices:

- Configure providers at application startup: set up all custom providers before creating `ResilientLLM` instances.
- Use environment variables for API keys: avoid hardcoding keys in your code; use `envVarNames` to reference environment variables.
- Test with a simple request first: before integrating into your application, send a basic chat request to verify the configuration.
- Cache models when possible: if your provider supports model listing, use `ProviderRegistry.getModels()` to cache available models.
- Handle errors gracefully: custom providers may have different error formats. Note that the `onError` callback in `ResilientLLM` is currently reserved for future use.
- Document your configuration: keep a record of your custom provider configurations for your team.
You can also update existing default providers:
```js
// Update OpenAI to use a different endpoint
ProviderRegistry.configure('openai', {
  chatApiUrl: 'https://custom-openai-proxy.com/v1/chat/completions'
});

// Disable a provider
ProviderRegistry.configure('ollama', {
  active: false
});
```

- Reference Documentation - Complete API reference
- ProviderRegistry Source - Implementation details