feat: add MiniMax as first-class LLM provider #551
Open
octo-patch wants to merge 1 commit into monarch-initiative:main from
Conversation
Add MiniMax AI (https://www.minimaxi.com/) as a supported LLM provider via their OpenAI-compatible API. Users can now use MiniMax models with either the `minimax/` prefix or the `--model-provider minimax` option.

Changes:
- LLMClient: detect MiniMax provider, auto-configure `api_base`, resolve the `MINIMAX_API_KEY` env var with an oaklib fallback, clamp temperature to the (0.0, 1.0] range, and route through the litellm OpenAI-compatible path
- `__init__.py`: register MiniMax-M2.7 and MiniMax-M2.7-highspeed models (204K context) in the model cost map
- CLI: update the `--model-provider` help text to mention MiniMax
- README: add MiniMax setup and usage documentation
- Tests: 25 unit tests + 3 integration tests covering provider init, API key resolution, temperature clamping, completion calls, and the model registry
Summary
Adds MiniMax AI as a first-class LLM provider for OntoGPT, accessible via their OpenAI-compatible API.
What's changed
- `src/ontogpt/clients/llm_client.py`: Detect the MiniMax provider via the `minimax/` model prefix or `--model-provider minimax`, auto-configure `api_base` to `https://api.minimax.io/v1`, resolve the API key from the `MINIMAX_API_KEY` env var with an oaklib `minimax-key` fallback, clamp temperature to MiniMax's required (0.0, 1.0] range, and route through litellm's OpenAI-compatible path
- `src/ontogpt/__init__.py`: Register the MiniMax-M2.7 and MiniMax-M2.7-highspeed models (204K context window) in the model cost map
- `src/ontogpt/cli.py`: Update the `--model-provider` help text to mention MiniMax

Usage
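The two invocation styles described above (the `minimax/` prefix and the `--model-provider minimax` option) might look roughly like the following. The `ontogpt extract` subcommand and the `-t`/`-i` flags are illustrative assumptions about OntoGPT's CLI, not verbatim from this PR:

```shell
# Set the MiniMax API key (falls back to oaklib's minimax-key when unset)
export MINIMAX_API_KEY="<your-key>"

# Option 1: select MiniMax via the minimax/ model prefix
ontogpt extract -m minimax/MiniMax-M2.7 -t gocam -i input.txt

# Option 2: select the provider explicitly
ontogpt extract --model-provider minimax -m MiniMax-M2.7 -t gocam -i input.txt
```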
Test plan
- 25 unit tests + 3 integration tests pass (integration tests are skipped when `MINIMAX_API_KEY` is not set)
- Existing `test_llmclient.py` tests still pass
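One of the temperature-clamping unit tests mentioned in the test plan might be sketched like this. It is self-contained for illustration: the `clamp_temperature` helper is inlined here as an assumption, whereas the PR's real tests exercise `LLMClient` directly.

```python
# Illustrative unit-test sketch for MiniMax temperature clamping.
# The helper below is an assumed stand-in for the client's internal logic.


def clamp_temperature(temperature: float) -> float:
    """Assumed helper: clamp into MiniMax's (0.0, 1.0] range."""
    if temperature <= 0.0:
        return 0.01  # lower bound is exclusive, so bump to a small positive value
    return min(temperature, 1.0)


def test_temperature_clamped_to_minimax_range():
    assert clamp_temperature(-1.0) == 0.01  # below range -> epsilon
    assert clamp_temperature(0.0) == 0.01   # exclusive lower bound
    assert clamp_temperature(0.7) == 0.7    # in range -> unchanged
    assert clamp_temperature(2.0) == 1.0    # above range -> capped


if __name__ == "__main__":
    test_temperature_clamped_to_minimax_range()
```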