
feat: add MiniMax as LLM and TTS provider#86

Open
octo-patch wants to merge 1 commit into hahahumble:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as an alternative LLM and TTS provider, giving users access to the MiniMax-M2.7 models (1M-token context) alongside the existing OpenAI integration.

Chat Provider

  • Provider selector in Chat settings to switch between OpenAI and MiniMax
  • MiniMax API key input with env var (VITE_MINIMAX_API_KEY) support
  • Model selection: MiniMax-M2.7 and MiniMax-M2.7-highspeed
  • Temperature clamping to (0, 1] range as required by MiniMax API
  • Think-tag stripping from MiniMax model responses
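The two MiniMax-specific adjustments above could be sketched as follows. The helper names, the 0.01 lower bound, and the `<think>` tag format are illustrative assumptions, not the PR's actual code:

```typescript
// Hypothetical helpers for the two MiniMax-specific adjustments.
// MiniMax accepts temperature only in (0, 1], so 0 must be bumped up
// and anything above 1 capped; 0.01 is an assumed lower bound.
function clampTemperature(t: number): number {
  const MIN = 0.01; // 0 itself is excluded by the (0, 1] range
  return Math.min(1, Math.max(MIN, t));
}

// Remove <think>…</think> reasoning blocks before showing the reply;
// the tag name is an assumption about the model's output format.
function stripThinkTags(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}
```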

MiniMax TTS

  • Added as 4th speech synthesis service alongside System, Azure TTS, and Amazon Polly
  • Models: speech-2.8-hd (high quality) and speech-2.8-turbo (fast)
  • 12 English voices with voice selector in Synthesis settings
  • Shares MiniMax API key with chat provider (no duplicate configuration)
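The key sharing in the last bullet can be sketched like this. The function and parameter names are hypothetical; only the env-var name comes from the PR:

```typescript
// Hypothetical sketch of the shared key lookup: both the chat client and
// the TTS client resolve the same MiniMax key, preferring the value saved
// in settings and falling back to the VITE_MINIMAX_API_KEY env var.
function resolveMinimaxKey(
  settingsKey: string | undefined,
  env: Record<string, string | undefined>
): string {
  const key = settingsKey || env["VITE_MINIMAX_API_KEY"];
  if (!key) {
    throw new Error("MiniMax API key is not configured");
  }
  return key;
}
```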

Other Changes

  • i18n support in English, Chinese (zh-CN), and Spanish (es) locales
  • Updated tips and README with MiniMax setup instructions
  • 33 unit and integration tests (vitest)

Files Changed (18 files, 702 additions)

| File | Change |
| --- | --- |
| src/apis/openai.ts | Provider-aware routing, temperature clamping, think-tag stripping |
| src/apis/minimaxTTS.ts | New: MiniMax TTS API client |
| src/store/reducer/global.ts | Add chatProvider, minimaxApiKey, minimaxModel, TTS fields |
| src/store/module/global.ts | Default state for MiniMax fields |
| src/components/Settings/ChatSection.tsx | Provider selector, MiniMax settings UI |
| src/components/Settings/SynthesisSection.tsx | MiniMax TTS service option |
| src/components/Content.tsx | Provider-based API key/host/model routing |
| src/utils/speechSynthesis.ts | MiniMax TTS playback integration |
| src/locales/{en,zh-CN,es}.json | i18n strings for MiniMax |
| README.md | MiniMax setup tutorial |
| src/tests/*.test.ts | 33 tests (unit + integration) |
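The provider-based routing in src/components/Content.tsx might look roughly like this. The state shape, function name, and host URLs below are assumptions for the sketch, not the PR's actual code:

```typescript
// Illustrative provider routing: pick the API key, host, and model from
// the global store based on the selected chat provider.
type ChatProvider = "openai" | "minimax";

interface ProviderConfig {
  apiKey: string;
  host: string;
  model: string;
}

interface GlobalState {
  chatProvider: ChatProvider;
  openaiApiKey: string;
  openaiModel: string;
  minimaxApiKey: string;
  minimaxModel: string;
}

function selectProviderConfig(s: GlobalState): ProviderConfig {
  if (s.chatProvider === "minimax") {
    return {
      apiKey: s.minimaxApiKey,
      host: "https://minimax.example", // placeholder host
      model: s.minimaxModel,
    };
  }
  return {
    apiKey: s.openaiApiKey,
    host: "https://api.openai.com", // OpenAI's public API host
    model: s.openaiModel,
  };
}
```

Centralizing the switch in one selector keeps the rest of the chat flow provider-agnostic, which is why the OpenAI path can remain unchanged.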

Test Plan

  • TypeScript type checking passes (npx tsc --noEmit)
  • 33/33 tests pass (npx vitest run)
  • Manual: select MiniMax provider, enter API key, send a chat message
  • Manual: select MiniMax TTS, play a response with different voices
  • Manual: verify OpenAI provider still works unchanged
  • Manual: verify temperature slider caps at 1.0 when MiniMax is selected

Add MiniMax as an alternative chat provider alongside OpenAI, with provider
selector in settings UI, MiniMax-M2.7/M2.7-highspeed model support, temperature
clamping (0,1], and think-tag stripping.

Also add MiniMax TTS as a 4th speech synthesis service (speech-2.8-hd/turbo
models, 12 English voices), sharing the MiniMax API key with the chat provider.

Changes:
- Chat provider selector (OpenAI/MiniMax) in settings
- MiniMax API key and model configuration UI
- Temperature clamping for MiniMax (0, 1] range
- Think-tag stripping from MiniMax responses
- MiniMax TTS integration with voice and model selection
- i18n support in en/zh-CN/es locales
- 33 unit and integration tests (vitest)
- README documentation for MiniMax setup
