feat: add MiniMax as a built-in LLM provider#158

Open
octo-patch wants to merge 2 commits into dzhng:main from octo-patch:feature/add-minimax-provider
Conversation

@octo-patch octo-patch commented Mar 14, 2026

Summary

Add MiniMax as a built-in LLM provider for deep-research, and set MiniMax-M2.7 as the default model.

Changes

  • Add MiniMax provider via @ai-sdk/openai (OpenAI-compatible API)
  • Set MiniMax-M2.7 as the default model (configurable via MINIMAX_MODEL env var)
  • Add MiniMax section to README with setup instructions
  • MiniMax is auto-detected when MINIMAX_API_KEY is set
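
The auto-detection described above can be sketched roughly as follows. This is an illustrative sketch only, not the PR's actual code: the function name `resolveProvider`, the `ProviderConfig` shape, and the MiniMax base URL are assumptions; what is taken from the PR is the env-var names (`MINIMAX_API_KEY`, `MINIMAX_MODEL`), the `MiniMax-M2.7` default, and the fact that MiniMax is reached through an OpenAI-compatible API.

```typescript
interface ProviderConfig {
  baseURL: string;
  apiKey: string;
  model: string;
}

// Hypothetical sketch: prefer MiniMax when MINIMAX_API_KEY is set,
// otherwise fall back to OpenAI. The real deep-research code likely
// differs in names and structure.
function resolveProvider(
  env: Record<string, string | undefined>,
): ProviderConfig | undefined {
  if (env.MINIMAX_API_KEY) {
    return {
      // Assumed OpenAI-compatible endpoint; check MiniMax's docs for the
      // exact URL before relying on it.
      baseURL: 'https://api.minimaxi.com/v1',
      apiKey: env.MINIMAX_API_KEY,
      // MINIMAX_MODEL overrides the MiniMax-M2.7 default, per the PR.
      model: env.MINIMAX_MODEL ?? 'MiniMax-M2.7',
    };
  }
  if (env.OPENAI_API_KEY) {
    return {
      baseURL: 'https://api.openai.com/v1',
      apiKey: env.OPENAI_API_KEY,
      model: env.OPENAI_MODEL ?? 'gpt-4o', // placeholder default
    };
  }
  return undefined;
}
```

Because the provider config is resolved from the environment, setting `MINIMAX_API_KEY` alone is enough to switch providers; no code change or extra dependency is involved.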

Why

MiniMax-M2.7 is MiniMax's latest flagship model, with enhanced reasoning and coding capabilities and a context window of up to 204K tokens.

Testing

  • TypeScript type check passes with no errors
  • All existing tests pass

octo-patch and others added 2 commits March 14, 2026 10:14
Add MiniMax (https://www.minimaxi.com) as a new provider option alongside
OpenAI and Fireworks. MiniMax offers the MiniMax-M2.5 model with up to 204K
token context, making it well-suited for deep research tasks.

The integration uses the existing @ai-sdk/openai package since MiniMax
provides an OpenAI-compatible API, so no additional dependencies are needed.

To use MiniMax, simply set the MINIMAX_API_KEY environment variable.
- Update default model from MiniMax-M2.5 to MiniMax-M2.7
- Update README documentation to reference M2.7 and M2.7-highspeed
- Users can still override via MINIMAX_MODEL env var