
feat: add MiniMax Cloud API as alternative prompt rewrite provider #53

Open

octo-patch wants to merge 1 commit into Tencent-Hunyuan:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax Cloud API as an alternative prompt rewrite provider for text-to-video (T2V) generation
  • Users can switch to MiniMax by setting REWRITE_PROVIDER=minimax and MINIMAX_API_KEY, eliminating the need to deploy a local vLLM server
  • Support MiniMax-M2.7 (latest), MiniMax-M2.5, and MiniMax-M2.5-highspeed (204K context) models

Changes

  • hyvideo/utils/rewrite/clients.py: Add MiniMaxClient class with OpenAI-compatible API calls, temperature clamping to [0.0, 1.0], and <think> tag stripping
  • hyvideo/utils/rewrite/rewrite_utils.py: Add _create_t2v_client() factory with REWRITE_PROVIDER env var for provider selection (defaults to existing Qwen/vLLM behavior)
  • README.md / README_CN.md: Document MiniMax configuration with env vars and available models
  • tests/test_minimax_client.py: 24 unit tests + 3 integration tests covering init, temperature clamping, think-tag parsing, provider selection, and end-to-end rewrite
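The client's two behaviors called out above — clamping temperature into MiniMax's accepted [0.0, 1.0] range and stripping `<think>` reasoning blocks from model output — can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code; the class shape and method names are assumptions.

```python
import re


class MiniMaxClient:
    """Sketch of a prompt-rewrite client for MiniMax's OpenAI-compatible API.

    Hypothetical shape; only the temperature clamping and <think>-tag
    stripping behaviors are taken from the PR description.
    """

    def __init__(self, api_key: str, model: str = "MiniMax-M2.7",
                 temperature: float = 0.7):
        self.api_key = api_key
        self.model = model
        # Clamp temperature into the [0.0, 1.0] range the API accepts.
        self.temperature = max(0.0, min(1.0, temperature))

    @staticmethod
    def strip_think(text: str) -> str:
        # Drop <think>...</think> reasoning blocks so only the rewritten
        # prompt is passed to the video pipeline.
        return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```

Clamping rather than rejecting out-of-range temperatures keeps existing configs working even if they were tuned for a provider with a wider range.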

Usage

export REWRITE_PROVIDER=minimax
export MINIMAX_API_KEY="your_api_key"
# Optional: override model (default: MiniMax-M2.7)
# export T2V_REWRITE_MODEL_NAME="MiniMax-M2.5-highspeed"
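The provider-selection logic driven by these variables might look like the sketch below. The env var names come from the PR; the function itself is a self-contained stand-in for the `_create_t2v_client()` factory, returning a `(provider, model)` pair instead of a real client object.

```python
import os


def select_rewrite_provider(env=None):
    """Pick the rewrite provider from env vars (illustrative stand-in).

    Returns (provider, model). Defaults to the existing Qwen/vLLM path
    when REWRITE_PROVIDER is unset, matching the PR's stated behavior.
    """
    if env is None:
        env = os.environ
    if env.get("REWRITE_PROVIDER", "").lower() == "minimax":
        if "MINIMAX_API_KEY" not in env:
            raise RuntimeError("REWRITE_PROVIDER=minimax requires MINIMAX_API_KEY")
        # Model is overridable; MiniMax-M2.7 is the documented default.
        return "minimax", env.get("T2V_REWRITE_MODEL_NAME", "MiniMax-M2.7")
    # Unset or unrecognized: fall through to the existing Qwen/vLLM behavior.
    return "qwen_vllm", None
```

Failing fast on a missing `MINIMAX_API_KEY` surfaces misconfiguration at startup rather than mid-generation.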

Test plan

  • 24 unit tests pass (mock-based, no API key required)
  • 3 integration tests pass with real MiniMax API
  • Existing Qwen/vLLM provider behavior unchanged when REWRITE_PROVIDER is not set
  • Verify prompt rewriting quality with actual video generation pipeline
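A mock-based test in the spirit of the unit tests above — exercising think-tag parsing with no API key or network — could look like this. The helper and the fake response are illustrative, not copied from `tests/test_minimax_client.py`.

```python
import re
from unittest import mock


def strip_think(text: str) -> str:
    # Stand-in for the client's think-tag stripper.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()


def test_rewrite_strips_think_tags():
    # Fake the chat-completion call so the test needs no API key.
    fake_api = mock.Mock(
        return_value="<think>plan the shot list</think>A cinematic sunset over the sea"
    )
    rewritten = strip_think(fake_api("a sunset"))
    assert rewritten == "A cinematic sunset over the sea"
    fake_api.assert_called_once_with("a sunset")
```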

Add MiniMaxClient for text-to-video prompt rewriting via MiniMax's
OpenAI-compatible API (MiniMax-M2.7, M2.5, M2.5-highspeed models).
Users can switch by setting REWRITE_PROVIDER=minimax and MINIMAX_API_KEY,
eliminating the need to deploy a local vLLM server.

- Add MiniMaxClient class with temperature clamping and think-tag stripping
- Add REWRITE_PROVIDER env var for provider selection in rewrite_utils.py
- Update README.md and README_CN.md with MiniMax configuration docs
- Add 24 unit tests and 3 integration tests