
Add Grok 4.20 and Minimax M2.7 (Together AI)#1269

Merged
tawnymanticore merged 3 commits into main from claude/retry-minimax-together-ai-eTIOl
Apr 16, 2026

Conversation

@tawnymanticore
Collaborator

@tawnymanticore tawnymanticore commented Apr 14, 2026

What does this PR do?

Adds Grok 4.20 via OpenRouter, and adds Together AI as a new provider for Minimax M2.7.

Test Results

The initial Minimax M2.7 Together AI config used json_schema, but the model ignored the schema and returned plain text (5 structured-output tests failed). Switched to the same config Minimax M2.5 uses on Together AI — json_instruction_and_object + reasoning_optional_for_structured_output=True + optional_r1_thinking parser — after which all structured-output tests pass.

One CoT test (test_structured_input_cot_prompt_builder) remains a pre-existing flake: the model collapses its reasoning and final answer into a single assistant turn, so the trace has 3 messages instead of the expected 5. This same test also fails for Minimax M2.5 on Together AI in the current main, so it is not specific to M2.7.
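For illustration, the config change described above can be sketched as plain dicts. The field names follow the PR description; the actual KilnModelProvider fields in ml_model_list.py may be spelled differently, and the `describe` helper is purely hypothetical:

```python
# Illustrative sketch of the Minimax M2.7 Together AI structured-output
# config change described above. Field names mirror the PR description;
# the real provider config in ml_model_list.py may differ.

failing_config = {
    # json_schema mode: the model ignored the schema and returned plain text.
    "structured_output_mode": "json_schema",
}

working_config = {
    # Same approach Minimax M2.5 uses on Together AI:
    "structured_output_mode": "json_instruction_and_object",
    "reasoning_optional_for_structured_output": True,
    "parser": "optional_r1_thinking",
}

def describe(config: dict) -> str:
    """Summarize how structured output is requested for a config (hypothetical helper)."""
    mode = config["structured_output_mode"]
    parser = config.get("parser", "none")
    return f"mode={mode}, parser={parser}"

print(describe(working_config))
```

The key design point, per the PR, is that M2.7 on Together AI only emits valid JSON when instructed in-prompt (`json_instruction_and_object`) rather than via a provider-enforced schema.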

Minimax M2.7 (together_ai):

  • 10 passed, 1 skipped, 1 pre-existing flake

Detailed results:
✅ test_tools_all_built_in_models[minimax_m2_7-together_ai]
✅ test_all_built_in_models_structured_output[minimax_m2_7-together_ai]
✅ test_data_gen_sample_all_models_providers[minimax_m2_7-together_ai]
✅ test_cot_prompt_builder[minimax_m2_7-together_ai]
✅ test_all_built_in_models_structured_input[minimax_m2_7-together_ai]
✅ test_data_gen_all_models_providers[minimax_m2_7-together_ai]
✅ test_data_gen_sample_all_models_providers_with_structured_output[minimax_m2_7-together_ai]
✅ test_structured_output_cot_prompt_builder[minimax_m2_7-together_ai]
✅ test_all_built_in_models_llm_as_judge[minimax_m2_7-together_ai]
✅ test_all_models_providers_plaintext[minimax_m2_7-together_ai]
⚠️ test_structured_input_cot_prompt_builder[minimax_m2_7-together_ai] — model returns reasoning + final answer in a single turn, trace has 3 messages instead of 5. Same flake exists on Minimax M2.5 Together AI in main.
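The flake above can be illustrated with a minimal sketch of the two trace shapes. The roles and contents here are assumptions for illustration; only the 5-vs-3 message counts come from the test output:

```python
# Hypothetical message traces illustrating the CoT flake described above.
# Exact roles/contents are assumptions; the 5-vs-3 counts are from the tests.

expected_trace = [
    {"role": "system", "content": "CoT instructions"},
    {"role": "user", "content": "structured input"},
    {"role": "assistant", "content": "reasoning"},  # separate thinking turn
    {"role": "user", "content": "now give the final answer"},
    {"role": "assistant", "content": "final answer"},
]

flaky_trace = [
    {"role": "system", "content": "CoT instructions"},
    {"role": "user", "content": "structured input"},
    # Model collapses reasoning and final answer into one assistant turn:
    {"role": "assistant", "content": "reasoning + final answer"},
]

print(len(expected_trace), len(flaky_trace))
```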

Checklists

  • Tests have been run locally and passed
  • New tests have been added to any work in /lib

https://claude.ai/code/session_01F1L5ryuY5t2MxQXbNVjQGj

Summary by CodeRabbit

  • New Features
    • Added support for Grok 4.20 model via OpenRouter, featuring structured output, multimodal capabilities, and document extraction.
    • Expanded Minimax M2.7 with Together AI provider option, including reasoning support and data generation capabilities.

claude added 3 commits April 14, 2026 14:06
Added Grok 4.20 (OpenRouter) and TogetherAI provider for Minimax M2.7 to the model list.

https://claude.ai/code/session_01S77zSCTFnNW52JiCyWpBoV
Other Grok models on OpenRouter don't set reasoning_capable=True.
The model doesn't reliably return reasoning, causing 5 test failures.
Removing to match the Kiln pattern for Grok on OpenRouter.

https://claude.ai/code/session_01S77zSCTFnNW52JiCyWpBoV
The json_schema mode was being ignored by M2.7 on Together AI (model
returned plain text instead of JSON). Switch to json_instruction_and_object
with reasoning_optional_for_structured_output and optional_r1_thinking
parser, matching the M2.5 Together AI config that works reliably.

https://claude.ai/code/session_01F1L5ryuY5t2MxQXbNVjQGj
@coderabbitai
Contributor

coderabbitai bot commented Apr 14, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: ac8f715b-b6b0-4058-975f-0de694a2fd1e

📥 Commits

Reviewing files that changed from the base of the PR and between 9bcc35a and 37b640a.

📒 Files selected for processing (1)
  • libs/core/kiln_ai/adapters/ml_model_list.py

📝 Walkthrough

Walkthrough

This pull request extends the model registry by adding support for the Grok 4.20 model with OpenRouter provider integration and expands Minimax M2.7 with a Together AI provider configuration, enabling additional structured output and reasoning capabilities.

Changes

Cohort / File(s) Summary
Model Registry Extensions
libs/core/kiln_ai/adapters/ml_model_list.py
Added grok_4_20 enum member to ModelName and corresponding KilnModel entry with OpenRouter provider supporting structured output, multimodal/vision capabilities, PDF handling, and document extraction. Extended Minimax M2.7 configuration with Together AI provider supporting json_instruction_and_object output, reasoning, data generation, and optional R1 thinking parser.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • leonardmq
  • scosman

Poem

🐰 A Grok hops in, with vision so keen,
Four-point-twenty, the finest we've seen!
Together with Minimax, reasoning bright,
New models hop closer, making inference right! 🚀

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title 'Add Grok 4.20 and Minimax M2.7 (Together AI)' is a clear, concise description of the main changes in the pull request.
Description check ✅ Passed The description covers what the PR does, includes test results, explains configuration decisions, and both checklist items are marked complete.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@github-actions

📊 Coverage Report

Overall Coverage: 91%

Diff: origin/main...HEAD

No lines with coverage information in this diff.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for the grok_4_20 model via OpenRouter and includes a new together_ai provider for the MiniMax model. A review comment suggests reordering the ModelName enum to ensure logical consistency with other model versions.

```python
grok_2 = "grok_2"
grok_3 = "grok_3"
grok_3_mini = "grok_3_mini"
grok_4_20 = "grok_4_20"
```
Contributor


Severity: medium

The enum member grok_4_20 is inserted between grok_3_mini and grok_4_1_fast. While not strictly a bug, it is inconsistent with the alphabetical/logical ordering of the other grok models (grok_2, grok_3, grok_3_mini, grok_4_1_fast, grok_4). Consider reordering to maintain consistency.
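One way to satisfy the comment is a plain alphabetical arrangement, sketched below with a minimal stand-in enum. Only the grok members named in the comment are shown, not the full ModelName enum, and the exact target order is left open by the review:

```python
from enum import Enum

# Sketch of an alphabetical ordering for the grok members discussed above.
# This is an illustrative stand-in, not the real ModelName enum.
class ModelName(str, Enum):
    grok_2 = "grok_2"
    grok_3 = "grok_3"
    grok_3_mini = "grok_3_mini"
    grok_4 = "grok_4"
    grok_4_1_fast = "grok_4_1_fast"
    grok_4_20 = "grok_4_20"

# Enum iteration follows definition order, so sortedness is easy to check.
print([m.name for m in ModelName])
```

Since Python enums iterate in definition order, a unit test could assert the member list is sorted to keep the ordering consistent over time.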

@tawnymanticore tawnymanticore merged commit 53e241e into main Apr 16, 2026
16 checks passed
@tawnymanticore tawnymanticore deleted the claude/retry-minimax-together-ai-eTIOl branch April 16, 2026 14:58
tawnymanticore added a commit that referenced this pull request Apr 16, 2026
* KIL-517 Fix misc spec builder bugs and improvements

Addresses 11 items: add X button to dismiss questions, preserve answers on
failed request, add Created At to spec details, allow whitespace while typing
spec names (trim on submit), add priority selector in advanced options, fix
autoselect badge persistence, rename FewShotSelector to TaskSampleSelector,
fine tune page max-width, add Re-run button for review examples, disable
copilot when full trace enabled, and add archive/unarchive to spec details.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Address Gemini review: use specific question numbers in validation messages

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Address CodeRabbit review: persist dismissed questions across remounts

Lift dismissed state to parent like selections/other_texts so dismissals
survive component remounts on API failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* KIL-522 Restore persisted model selection on Run page

Initialize model from ui_state store (localStorage) instead of empty
string so the previously selected model is restored on page load.
Also fix the saved-config dropdown to show "custom" immediately
instead of "Select an option" while configs load.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* KIL-522 Add one-shot guard to prevent default config from overriding intentional Custom selection

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* KIL-534 Add Feedback data model on TaskRun

Replace the single `user_feedback` string field on TaskRun with a proper
Feedback model that supports multiple feedback entries per run. Feedback
is a parented model under TaskRun, stored as separate files to avoid
write conflicts when multiple people provide feedback.

- Add Feedback model (feedback text + FeedbackSource enum)
- Make TaskRun a parent model with feedback children
- Remove user_feedback field from TaskRun
- Add REST API endpoints (list/create) for feedback on task runs
- Update copilot models, utils, and frontend spec builder
- Create follow-up ticket KIL-537 for repair UI replacement

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add agent policy annotations for feedback API endpoints

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Revert unintended user_feedback renames in copilot code

The ticket only asked to remove user_feedback from TaskRun, not rename
it in the copilot/spec-builder code which uses it for a different purpose.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Remove misplaced annotation files, revert copilot renames

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Preserve feedback from spec review as Feedback children

When creating TaskRuns from reviewed examples in the copilot flow,
create Feedback children (with source=spec-feedback) after saving
the run, so review feedback is not lost.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* reverts

* KIL-537 Replace repair UI with feedback UI

Remove all repair UI code (repair form, repair edit form, repair
review/accept/delete flows) and replace with a new feedback UI that
uses the Feedback data model from KIL-534.

- Rename "Output Rating" to "Rating and Feedback"
- Add inline feedback list (up to 3, truncated) with "Add Feedback" link
- Add "All Feedback" modal with sortable table
- Add "Add Feedback" modal using FormContainer
- Delete output_repair_edit_form.svelte
- Remove model_name/provider/focus_repair_on_appear props from Run

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Address AI review feedback: race condition and submit loading state

- Add request ID tracking and run ID dedup to load_feedback to prevent
  race conditions and redundant requests when switching runs
- Set add_feedback_submitting = true at start of submit_feedback

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Show latest 3 feedbacks in inline preview instead of oldest

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* reverted some changes

* fixed add feedback dialog UI

* outline instead of bg for clickable area

* claude compatible mcp.json

* steveback

* policy anno

* Add Fireworks AI provider to GLM 5.1 (#1275)

https://getkiln.slack.com/archives/C0AG8U78MNG/p1776274097954549?thread_ts=1776273210.799549&cid=C0AG8U78MNG

Co-authored-by: Claude <noreply@anthropic.com>

* Add Grok 4.20 and Minimax M2.7 (Together AI) (#1269)

* Add Grok 4.20 and Minimax M2.7 TogetherAI provider

Added Grok 4.20 (OpenRouter) and TogetherAI provider for Minimax M2.7 to the model list.

https://claude.ai/code/session_01S77zSCTFnNW52JiCyWpBoV

* Remove reasoning flags from Grok 4.20

Other Grok models on OpenRouter don't set reasoning_capable=True.
The model doesn't reliably return reasoning, causing 5 test failures.
Removing to match the Kiln pattern for Grok on OpenRouter.

https://claude.ai/code/session_01S77zSCTFnNW52JiCyWpBoV

* Fix Minimax M2.7 Together AI structured output config

The json_schema mode was being ignored by M2.7 on Together AI (model
returned plain text instead of JSON). Switch to json_instruction_and_object
with reasoning_optional_for_structured_output and optional_r1_thinking
parser, matching the M2.5 Together AI config that works reliably.

https://claude.ai/code/session_01F1L5ryuY5t2MxQXbNVjQGj

---------

Co-authored-by: Claude <noreply@anthropic.com>

* Update add-model skill: lagging-provider checks and push-gate rules (#1281)

* Update SKILL.md

* Update SKILL.md

* Update SKILL.md

* CR

* Workaround for Claude Code web for using anthropic models in paid tests (#1283)

* Update SKILL.md

* Update SKILL.md

* Update SKILL.md

* CR

* Update SKILL.md

* Add Claude Opus 4.7 to model list (#1282)

* Add Claude Opus 4.7 to model list (anthropic, openrouter)

Adds Anthropic's new Opus 4.7 model with both Anthropic and OpenRouter
providers. Introduces CLAUDE_OPUS_4_7_ANTHROPIC_THINKING_LEVELS to
support the new "xhigh" and "max" effort levels exclusive to Opus 4.7.

* Apply zero-sum swap: demote Opus 4.6 from suggested/featured

Opus 4.7 now carries featured_rank=2, editorial_notes, suggested_for_evals,
and suggested_for_data_gen. Removing the same flags from Opus 4.6 keeps the
suggested/featured count stable across the Claude Opus family.

https://claude.ai/code/session_01Xnfzt91McoMdqaiRv1g6xg

* Add PDF support to OpenRouter provider for Opus 4.7

Adds KilnMimeType.PDF to multimodal_mime_types and sets
multimodal_requires_pdf_as_image=True (OpenRouter's PDF routing through
Mistral OCR breaks LiteLLM parsing, so PDFs must be sent as images).

https://claude.ai/code/session_01Xnfzt91McoMdqaiRv1g6xg

---------

Co-authored-by: Claude <noreply@anthropic.com>

---------

Co-authored-by: Sam Fierro <13154106+sfierro@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: scosman <scosman@users.noreply.github.com>
