
feat: add MiniMax as first-class LLM provider #551

Open
octo-patch wants to merge 1 commit into monarch-initiative:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Adds MiniMax AI as a first-class LLM provider for OntoGPT, accessible via their OpenAI-compatible API.

What's changed

  • LLMClient (src/ontogpt/clients/llm_client.py): Detect the MiniMax provider via the minimax/ model prefix or --model-provider minimax, auto-configure api_base to https://api.minimax.io/v1, resolve the API key from the MINIMAX_API_KEY environment variable with an oaklib minimax-key fallback, clamp temperature to MiniMax's required (0.0, 1.0] range, and route requests through litellm's OpenAI-compatible path
  • Model registry (src/ontogpt/__init__.py): Register MiniMax-M2.7 and MiniMax-M2.7-highspeed models (204K context window) in the model cost map
  • CLI (src/ontogpt/cli.py): Update --model-provider help text to mention MiniMax
  • README: Add MiniMax setup and usage documentation section

Usage

```shell
# Set API key
export MINIMAX_API_KEY="your-key"

# Use with minimax/ prefix
ontogpt extract -t drug -i example.txt -m minimax/MiniMax-M2.7

# Or with --model-provider
ontogpt extract -t drug -i example.txt -m MiniMax-M2.7 --model-provider minimax
```

Test plan

  • 25 unit tests covering provider detection, API key resolution, temperature clamping, completion parameter forwarding, system message handling, and model registry
  • 3 integration tests verifying real API calls with MiniMax-M2.7 and MiniMax-M2.7-highspeed (skipped when MINIMAX_API_KEY is not set)
  • All 11 existing test_llmclient.py tests still pass
  • All 28 new tests pass
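
The skip-when-unconfigured behavior for the integration tests can follow the standard unittest pattern; the class and method names below are placeholders, not the actual test file contents:

```python
import os
import unittest

@unittest.skipUnless(os.environ.get("MINIMAX_API_KEY"),
                     "MINIMAX_API_KEY not set; skipping live API tests")
class TestMiniMaxIntegration(unittest.TestCase):
    """Placeholder for the live-API tests; skipped without a key."""

    def test_completion_roundtrip(self):
        # A real test would issue a completion against MiniMax-M2.7 here.
        self.assertTrue(os.environ.get("MINIMAX_API_KEY"))
```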

Add MiniMax AI (https://www.minimaxi.com/) as a supported LLM provider
via their OpenAI-compatible API. Users can now use MiniMax models with
either the minimax/ prefix or --model-provider minimax option.

Changes:
- LLMClient: detect MiniMax provider, auto-configure api_base, resolve
  MINIMAX_API_KEY env var with oaklib fallback, clamp temperature to
  (0.0, 1.0] range, route through litellm OpenAI-compatible path
- __init__.py: register MiniMax-M2.7 and MiniMax-M2.7-highspeed models
  (204K context) in the model cost map
- CLI: update --model-provider help text to mention MiniMax
- README: add MiniMax setup and usage documentation
- Tests: 25 unit tests + 3 integration tests covering provider init,
  API key resolution, temperature clamping, completion calls, and
  model registry