fix: support Azure provider with reasoning models #1310
base: next
Conversation
- Add azure, openrouter, bedrock, and ollama to the VALIDATED_PROVIDERS array
- Add Azure reasoning models (GPT-5, o1, o3, o3-mini, o4-mini) to supported-models.json
- Implement automatic API endpoint detection for Azure reasoning models
- Add dual endpoint support (chat/completions vs responses) in AzureProvider
- Add smart URL adjustment logic for different Azure configurations
- Maintain backward compatibility with existing Azure setups

Fixes #638

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Ralph Khreish <[email protected]>
Walkthrough

Adds five Azure reasoning models to the model registry, updates the Azure provider to detect reasoning models and rewrite Azure endpoints to /responses based on model metadata, and expands the validated provider list to include azure, openrouter, bedrock, and ollama.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Caller as Caller
    participant AzureProvider as AzureProvider
    participant ModelMap as Supported Models JSON
    participant AzureSDK as createAzure()
    Caller->>AzureProvider: getClient({ baseURL, modelId, ... })
    AzureProvider->>ModelMap: lookup(modelId)
    ModelMap-->>AzureProvider: metadata (api_type, id, ...)
    alt api_type == "responses"
        AzureProvider->>AzureProvider: adjustBaseURL(baseURL) -> ensure /responses (preserve query)
    else
        AzureProvider->>AzureProvider: leave baseURL unchanged
    end
    AzureProvider->>AzureSDK: createAzure({ baseURL: adjustedBaseURL, ... })
    AzureSDK-->>AzureProvider: client
    AzureProvider-->>Caller: client
```
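Read as code, the diagram's decision roughly corresponds to the following sketch; the helper names `isReasoningModel` and `adjustBaseURL` and the parameter shape are assumptions based on this walkthrough, not the PR's verbatim implementation:

```js
// Sketch of the decision above; isReasoningModel and adjustBaseURL are assumed helpers
import { createAzure } from '@ai-sdk/azure';

function getClient({ baseURL, modelId, apiKey }) {
	// Reasoning models (per supported-models.json metadata) are routed to the /responses endpoint
	const adjustedBaseURL = isReasoningModel(modelId) ? adjustBaseURL(baseURL) : baseURL;
	return createAzure({ baseURL: adjustedBaseURL, apiKey });
}
```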
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches
❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
✨ Finishing touches
Actionable comments posted: 4
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
- scripts/modules/supported-models.json (1 hunks)
- src/ai-providers/azure.js (2 hunks)
- src/constants/providers.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
src/constants/providers.js, src/ai-providers/azure.js
src/ai-providers/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create<ProviderName> function from @ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.
Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
Files:
src/ai-providers/azure.js
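As a loose illustration of these rules (not code from this PR), a provider-module skeleton might look like the sketch below; the Azure imports stand in for whichever create<ProviderName> function a provider uses, and the `log('error', ...)` signature is an assumption:

```js
// Hypothetical skeleton following the rules above; Azure stands in for create<ProviderName>
import { generateText, streamText, generateObject } from 'ai';
import { createAzure } from '@ai-sdk/azure';
import { log } from '../../scripts/modules/utils.js';

export async function generateAzureText({ apiKey, baseURL, modelId, messages }) {
	if (!modelId) throw new Error('modelId is required'); // basic validation
	try {
		const azure = createAzure({ apiKey, baseURL });
		return await generateText({ model: azure(modelId), messages });
	} catch (error) {
		log('error', `Azure generateText failed: ${error.message}`);
		throw error;
	}
}
// streamText and generateObject wrappers would follow the same validate + try/catch pattern.
```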
scripts/modules/supported-models.json
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Files:
scripts/modules/supported-models.json
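For example, a hypothetical entry sketch; the role names, token limit, and api_type value are illustrative placeholders rather than the PR's actual values:

```json
{
  "azure": [
    {
      "id": "o3-mini",
      "name": "Azure o3-mini",
      "allowed_roles": ["main", "fallback"],
      "max_tokens": 100000,
      "api_type": "responses"
    }
  ]
}
```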
scripts/modules/**
📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)
When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.
Files:
scripts/modules/supported-models.json
scripts/modules/*
📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)
scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag
Files:
scripts/modules/supported-models.json
🧠 Learnings (3)
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
Applied to files:
src/constants/providers.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from @ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Applied to files:
src/ai-providers/azure.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/supported-models.json : Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Applied to files:
scripts/modules/supported-models.json
🧬 Code graph analysis (1)
src/ai-providers/azure.js (2)
scripts/modules/ai-services-unified.js (2)
modelId (529-529), baseURL (533-533)
scripts/modules/config-manager.js (1)
modelId (703-703)
🪛 GitHub Actions: CI
src/ai-providers/azure.js
[error] 8-9: Formatter would have printed content with import MODEL_MAP from '../../scripts/modules/supported-models.json' with { type: 'json' } - formatting mismatch detected by biome format. Run 'biome format .' to fix.
[error] 46-49: Formatter would have printed content for: const modelDef = azureModels.find(m => m.id === modelId); - ensure spacing and arrow function formatting matches formatter output. Run 'biome format .' to fix.
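Reconstructed from the two messages above, the formatter-conforming lines would presumably look like this (arrow parentheses and indentation depend on the repo's Biome config, so treat the exact spacing as an assumption):

```js
// First message: single-line JSON import attribute
import MODEL_MAP from '../../scripts/modules/supported-models.json' with { type: 'json' };

// Second message: model lookup; exact arrow spacing/parentheses depend on the Biome config
const modelDef = azureModels.find((m) => m.id === modelId);
```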
🔇 Additional comments (2)
src/ai-providers/azure.js (2)
90-100: AzureProvider.getClient always receives modelId. All invocations (via ai-services-unified) pass modelId from config, so the reasoning-URL switch is safe.
Likely an incorrect or invalid review comment.
8-8: No Biome formatting errors remain. The JSON import already adheres to the formatter's style.
@claude please do this:
Claude finished @Crunchyman-ralph's task. Tasks completed ✅
Summary
Successfully addressed all CodeRabbit feedback:
All changes have been committed and pushed to the PR branch.
- Add display names to Azure reasoning models (gpt-5, o1, o3, o3-mini, o4-mini)
- Improve model detection with fallback heuristic for custom deployment names
- Fix URL rewriting to properly handle query strings using URL parser
- Remove duplicate providers in ALL_PROVIDERS array using Set deduplication

Co-authored-by: Ralph Khreish <[email protected]>
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
- scripts/modules/supported-models.json (1 hunks)
- src/ai-providers/azure.js (2 hunks)
- src/constants/providers.js (2 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
**/*.js: Always use isSilentMode() function to check current silent mode status instead of directly accessing the global silentMode variable or global.silentMode
Use try/finally block pattern when wrapping core function calls with enableSilentMode/disableSilentMode to ensure silent mode is always restored, even if errors occur
For functions that need to handle both a passed silentMode parameter and check global state, check both the function parameter and global state: const isSilent = options.silentMode || (typeof options.silentMode === 'undefined' && isSilentMode())
Functions should accept their dependencies as parameters rather than using globals to promote testability and explicit dependency injection
Define callbacks as separate functions for easier testing rather than inline functions
Files:
src/constants/providers.js, src/ai-providers/azure.js
**/*.{js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
JavaScript test files using Jest must follow the same testing patterns as TypeScript files, include proper mocking of external dependencies, and achieve the same coverage thresholds
Files:
src/constants/providers.js, src/ai-providers/azure.js
**/*.{js,ts}
📄 CodeRabbit inference engine (.cursor/rules/utilities.mdc)
**/*.{js,ts}: Import and use specific getters from config-manager.js (e.g., getMainProvider(), getLogLevel(), getMainMaxTokens()) to access configuration values needed for application logic
Use isApiKeySet(providerName, session) from config-manager.js to check if a provider's key is available before potentially attempting an AI call
Do not add direct console.log calls outside the logging utility - use the central log function instead
Ensure silent mode is disabled in a finally block to prevent it from staying enabled
Do not access the global silentMode variable directly - use the exported silent mode control functions instead
Do not duplicate task ID formatting logic across modules - centralize formatting utilities
Use ContextGatherer class from utils/contextGatherer.js for AI-powered commands that need project context, supporting tasks, files, custom text, and project tree context
Use FuzzyTaskSearch class from utils/fuzzyTaskSearch.js for automatic task relevance detection with configurable search parameters
Use fuzzy search to supplement user-provided task IDs and display discovered task IDs to users for transparency
Do not replace explicit user task selections with fuzzy results - fuzzy search should supplement, not replace user selections
Use readJSON and writeJSON utilities for all JSON file operations instead of raw fs.readFileSync or fs.writeFileSync
Include error handling for JSON file operations and validate JSON structure after reading
Use path.join() for cross-platform path construction and path.resolve() for absolute paths, validating paths before file operations
Support both .env files and MCP session environment for environment variable resolution with fallbacks for missing values
Prefer updating the core function to accept an outputFormat parameter and check outputFormat === 'json' before displaying UI elements
Files:
src/constants/providers.js, src/ai-providers/azure.js
scripts/modules/supported-models.json
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Update scripts/modules/supported-models.json to add a new provider key with an array of model objects, each containing id, name (optional), swe_score, cost_per_1m_tokens (optional), allowed_roles, and max_tokens (optional but recommended)
Files:
scripts/modules/supported-models.json
scripts/modules/**/*
📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)
Restart the MCP server if core logic in scripts/modules or MCP tool definitions change
Files:
scripts/modules/supported-models.json
src/ai-providers/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
src/ai-providers/*.js: Create a new provider module in src/ai-providers/<provider-name>.js that implements generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object functions using the Vercel AI SDK
Provider modules in src/ai-providers/ must import generateText, streamText, generateObject from the ai package, the provider's create<ProviderName> function from @ai-sdk/<provider-name>, and the log utility from ../../scripts/modules/utils.js
Provider functions must include basic validation and try/catch error handling
src/ai-providers/*.js files should contain provider-specific wrappers for Vercel AI SDK functions and interact directly with Vercel AI SDK adapters
Files:
src/ai-providers/azure.js
🧠 Learnings (21)
📓 Common learnings
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1444
File: apps/cli/src/utils/auto-update/changelog.ts:103-111
Timestamp: 2025-11-25T18:32:29.828Z
Learning: The claude-task-master project uses a custom changelog format with PR numbers and author acknowledgements in the pattern `- [#PR](...) Thanks [author]! - Description`, which is parsed by the regex in apps/cli/src/utils/auto-update/changelog.ts.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1178
File: packages/tm-core/src/auth/config.ts:5-7
Timestamp: 2025-09-02T21:51:27.921Z
Learning: The user Crunchyman-ralph prefers not to use node: scheme imports (e.g., 'node:os', 'node:path') for Node.js core modules and considers suggestions to change bare imports to node: scheme as too nitpicky.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1069
File: .changeset/fix-tag-complexity-detection.md:0-0
Timestamp: 2025-08-02T15:33:22.656Z
Learning: For changeset files (.changeset/*.md), Crunchyman-ralph prefers to ignore formatting nitpicks about blank lines between frontmatter and descriptions, as he doesn't mind having them and wants to avoid such comments in future reviews.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1132
File: .github/workflows/weekly-metrics-discord.yml:81-93
Timestamp: 2025-08-13T22:10:46.958Z
Learning: Crunchyman-ralph ignores YAML formatting nitpicks about trailing spaces when there's no project-specific YAML formatter configured, preferring to focus on functionality over cosmetic formatting issues.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1132
File: .github/workflows/weekly-metrics-discord.yml:81-93
Timestamp: 2025-08-13T22:10:46.958Z
Learning: Crunchyman-ralph ignores YAML formatting nitpicks about trailing spaces when there's no project-specific YAML formatter configured, preferring to focus on functionality over cosmetic formatting issues.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: scripts/modules/supported-models.json:242-254
Timestamp: 2025-08-08T11:33:15.297Z
Learning: Preference: In scripts/modules/supported-models.json, the "name" field is optional. For OpenAI entries (e.g., "gpt-5"), Crunchyman-ralph prefers omitting "name" when the id is explicit enough; avoid nitpicks requesting a "name" in such cases.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1200
File: src/ai-providers/custom-sdk/grok-cli/language-model.js:96-100
Timestamp: 2025-09-19T16:06:42.182Z
Learning: The user Crunchyman-ralph prefers to keep environment variable names explicit (like GROK_CLI_API_KEY) rather than supporting multiple aliases, to avoid overlap and ensure clear separation between different CLI implementations.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1178
File: packages/tm-core/src/subpath-exports.test.ts:6-9
Timestamp: 2025-09-03T12:45:30.724Z
Learning: The user Crunchyman-ralph prefers to avoid overly nitpicky or detailed suggestions in code reviews, especially for test coverage of minor import paths. Focus on more substantial issues rather than comprehensive coverage of all possible edge cases.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1217
File: apps/cli/src/index.ts:16-21
Timestamp: 2025-09-18T16:35:35.147Z
Learning: The user Crunchyman-ralph considers suggestions to export types for better ergonomics (like exporting UpdateInfo type alongside related functions) as nitpicky and prefers not to implement such suggestions.
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to scripts/modules/config-manager.js : In `scripts/modules/config-manager.js`, update `MODEL_MAP` to include the new provider, ensure `VALID_PROVIDERS` includes the provider, update API key handling in `keyMap` and the `switch` statement in `getMcpApiKeyStatus` and `isApiKeySet`
Applied to files:
src/constants/providers.js, src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to scripts/modules/ai-services-unified.js : In `scripts/modules/ai-services-unified.js`, import new providers and add entries to the `PROVIDER_FUNCTIONS` map with `generateText`, `streamText`, and `generateObject` properties
Applied to files:
src/constants/providers.js, src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to src/ai-providers/*.js : Provider functions must include basic validation and try/catch error handling
Applied to files:
src/constants/providers.js, src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to src/ai-providers/*.js : Provider modules in `src/ai-providers/` must import `generateText`, `streamText`, `generateObject` from the `ai` package, the provider's `create<ProviderName>` function from `@ai-sdk/<provider-name>`, and the `log` utility from `../../scripts/modules/utils.js`
Applied to files:
src/constants/providers.js, src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module in `src/ai-providers/<provider-name>.js` that implements `generate<ProviderName>Text`, `stream<ProviderName>Text`, and `generate<ProviderName>Object` functions using the Vercel AI SDK
Applied to files:
src/constants/providers.js, src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to **/*.md : Update relevant documentation (like `README.md`) mentioning supported providers or configuration when adding a new AI provider
Applied to files:
src/constants/providers.js
📚 Learning: 2025-11-24T18:04:43.972Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-11-24T18:04:43.972Z
Learning: Applies to **/*.{js,ts} : Use isApiKeySet(providerName, session) from config-manager.js to check if a provider's key is available before potentially attempting an AI call
Applied to files:
src/constants/providers.js
📚 Learning: 2025-07-18T17:09:40.548Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/dependencies.mdc:0-0
Timestamp: 2025-07-18T17:09:40.548Z
Learning: Applies to scripts/modules/dependency-manager.js : Use Set objects to identify and remove duplicate dependencies
Applied to files:
src/constants/providers.js
📚 Learning: 2025-11-24T17:58:07.992Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-11-24T17:58:07.992Z
Learning: Applies to src/ai-providers/*.js : src/ai-providers/*.js files should contain provider-specific wrappers for Vercel AI SDK functions and interact directly with Vercel AI SDK adapters
Applied to files:
src/constants/providers.js, src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to scripts/modules/supported-models.json : Update `scripts/modules/supported-models.json` to add a new provider key with an array of model objects, each containing `id`, `name` (optional), `swe_score`, `cost_per_1m_tokens` (optional), `allowed_roles`, and `max_tokens` (optional but recommended)
Applied to files:
scripts/modules/supported-models.json, src/ai-providers/azure.js
📚 Learning: 2025-07-21T14:14:48.694Z
Learnt from: rtmcrc
Repo: eyaltoledano/claude-task-master PR: 933
File: scripts/modules/supported-models.json:238-238
Timestamp: 2025-07-21T14:14:48.694Z
Learning: Model version updates in scripts/modules/supported-models.json may be included in feature PRs if they provide practical improvements like reduced error rates, even if not directly related to the main feature being implemented.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: In this repo, the supported models list is auto-generated by CI into docs/models.md from scripts/modules/supported-models.json via .github/workflows/update-models-md.yml and docs/scripts/models-json-to-markdown.js. Don’t request manual edits to the Markdown; ensure the JSON is correct instead.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: In this repo, supported-models.md is auto-generated by CI from supported-models.json; do not request manual edits to that file—ensure JSON entries are correct instead.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: This repo’s supported models list is CI-generated into docs/models.md from scripts/modules/supported-models.json (workflow: .github/workflows/update-models-md.yml using docs/scripts/models-json-to-markdown.js). Don’t request manual edits to the Markdown; ensure the JSON is correct instead. README links to docs/models.md.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:33:15.297Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: scripts/modules/supported-models.json:242-254
Timestamp: 2025-08-08T11:33:15.297Z
Learning: Preference: In scripts/modules/supported-models.json, the "name" field is optional. For OpenAI entries (e.g., "gpt-5"), Crunchyman-ralph prefers omitting "name" when the id is explicit enough; avoid nitpicks requesting a "name" in such cases.
Applied to files:
scripts/modules/supported-models.json, src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: When setting a model ID using the `models` MCP tool or CLI command, DO NOT include the `provider:` prefix—use only the model ID itself (e.g., `gpt-4o` not `openai:gpt-4o`)
Applied to files:
scripts/modules/supported-models.json, src/ai-providers/azure.js
📚 Learning: 2025-10-31T18:07:17.402Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1360
File: src/ai-providers/glm.js:0-0
Timestamp: 2025-10-31T18:07:17.402Z
Learning: In src/ai-providers/glm.js, the GLM provider's getClient method should allow defaulting to the 'coding' endpoint when an invalid or unspecified route parameter is provided, as this is the correct behavior per Z.ai's OpenAI-compatible API documentation. Do not enforce strict route validation that throws errors for unknown routes.
Applied to files:
src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: To add a new AI provider with official Vercel AI SDK support, install the provider package via `npm install @ai-sdk/<provider-name>`
Applied to files:
src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests in `tests/unit/ai-providers/<provider-name>.test.js` that mock the provider's AI SDK module and test each exported function for correct client instantiation, parameter passing, result handling, and error handling
Applied to files:
src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:58:07.992Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-11-24T17:58:07.992Z
Learning: Applies to scripts/modules/ai-services-unified.js : ai-services-unified.js should export generateTextService and generateObjectService, handle provider/model selection based on role and .taskmasterconfig, resolve API keys, implement fallback and retry logic, and orchestrate calls to provider-specific implementations
Applied to files:
src/ai-providers/azure.js
🧬 Code graph analysis (1)
src/ai-providers/azure.js (2)
scripts/modules/ai-services-unified.js (3)
modelId (529-529), baseURL (533-533), path (191-191)
scripts/modules/config-manager.js (1)
modelId (703-703)
🔇 Additional comments (5)
scripts/modules/supported-models.json (1)
1001-1066: Azure reasoning models are well-structured and complete. The five new Azure models (gpt-5, o1, o3, o3-mini, o4-mini) follow the required schema with id, name, swe_score, cost_per_1m_tokens, allowed_roles, max_tokens, and supported. The api_type: "responses" metadata correctly enables the new endpoint detection logic in AzureProvider.isReasoningModel().
src/constants/providers.js (1)
39-41: Good use of Set for deduplication. The Set correctly prevents duplicate entries when merging VALIDATED_PROVIDERS and CUSTOM_PROVIDERS_ARRAY, addressing the overlap issue with azure, openrouter, bedrock, and ollama appearing in both arrays.
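A minimal sketch of the deduplication pattern being described; the constant names come from the review and commit message, and the exact source line may differ:

```js
// Sketch: merge the two provider lists without duplicates
export const ALL_PROVIDERS = [
	...new Set([...VALIDATED_PROVIDERS, ...CUSTOM_PROVIDERS_ARRAY])
];
```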
src/ai-providers/azure.js (3)
46-61: Reasoning model detection with heuristic fallback looks correct. The implementation properly checks api_type === 'responses' from the model registry first, then falls back to a regex heuristic for custom Azure deployment names. This addresses the concern about Azure deployments using custom names.
One note: the regex ^(gpt-5|o1|o3|o4) will also match model IDs like o1-preview or o4-mini-high, which is likely the desired behavior for forward compatibility with reasoning model variants.
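A hedged sketch combining the registry check and the regex fallback described here; MODEL_MAP, azureModels, and modelDef are names taken from the CI output above, and the real function body may differ:

```js
// Sketch: registry metadata first, regex heuristic for custom deployment names second
import MODEL_MAP from '../../scripts/modules/supported-models.json' with { type: 'json' };

function isReasoningModel(modelId) {
	const azureModels = MODEL_MAP.azure ?? [];
	const modelDef = azureModels.find((m) => m.id === modelId);
	if (modelDef?.api_type === 'responses') return true;
	// Fallback heuristic for custom Azure deployment names (e.g. "o4-mini-high")
	return /^(gpt-5|o1|o3|o4)/.test(modelId);
}
```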
69-104: URL adjustment logic correctly handles query strings and various path patterns. The implementation properly:
- Uses URL parsing to preserve query parameters (e.g., ?api-version=...)
- Handles the three main Azure URL patterns: /chat/completions, /openai/deployments/<dep>, and /openai
- Includes a robust fallback for malformed URLs that still preserves query strings
The guard at line 97 (!/\/responses$/.test(newPath)) correctly prevents double-appending /responses.
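As an approximation of the behavior described above (not the PR's exact code), the rewrite could look like this:

```js
// Approximate sketch: rewrite an Azure base URL so reasoning models hit /responses,
// preserving query parameters such as ?api-version=...
function adjustBaseURL(baseURL) {
	try {
		const url = new URL(baseURL);
		let newPath = url.pathname
			.replace(/\/chat\/completions\/?$/, '') // drop a trailing chat/completions segment
			.replace(/\/$/, '');
		if (!/\/responses$/.test(newPath)) {
			newPath = `${newPath}/responses`; // guard against double-appending /responses
		}
		url.pathname = newPath;
		return url.toString(); // the query string survives the pathname rewrite
	} catch {
		// Fallback for inputs the URL parser rejects; still keeps any query string
		const [path, query = ''] = baseURL.split('?');
		const base = path.replace(/\/$/, '');
		const adjusted = /\/responses$/.test(base) ? base : `${base}/responses`;
		return query ? `${adjusted}?${query}` : adjusted;
	}
}
```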
115-129: Client initialization correctly integrates URL adjustment. The getClient method properly extracts modelId from params and uses it to compute the adjusted base URL before passing to createAzure. This aligns with how modelId is sourced from roleConfig in the unified AI services layer.
```js
	'mistral',
	'azure',
	'openrouter',
	'bedrock',
	'ollama'
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check if mistral key exists in supported-models.json
echo "Checking for 'mistral' key in supported-models.json..."
jq 'has("mistral")' scripts/modules/supported-models.json
echo ""
echo "All top-level provider keys in supported-models.json:"
jq 'keys | sort' scripts/modules/supported-models.json
```

Repository: eyaltoledano/claude-task-master
Length of output: 390
Add mistral to supported-models.json or remove from VALIDATED_PROVIDERS.
The mistral provider was added to VALIDATED_PROVIDERS (line 14 in src/constants/providers.js), but no corresponding "mistral" key exists in supported-models.json. Mistral models will fail validation since VALIDATED_PROVIDERS are checked against the supported-models registry. Either add mistral configuration to supported-models.json or remove it from VALIDATED_PROVIDERS.
🤖 Prompt for AI Agents
In src/constants/providers.js around lines 14–18, `mistral` was added to
VALIDATED_PROVIDERS but no corresponding "mistral" entry exists in
supported-models.json; this causes validation failures. Fix by either adding a
"mistral" key to supported-models.json with the same schema/fields used by other
providers (model list, capabilities, validation rules) so the validator can find
mistral models, or remove "mistral" from VALIDATED_PROVIDERS if support isn’t
ready; ensure whichever change you make keeps the JSON schema consistent with
existing provider entries and update any related tests/fixtures.
Fixes #638
🤖 Generated with Claude Code
What type of PR is this?
Description
Related Issues
How to Test This
# Example commands or steps
Expected result:
Contributor Checklist
- npm run changeset
- npm test
- npm run format-check (or npm run format to fix)
Changelog Entry
For Maintainers
Summary by CodeRabbit
New Features
Improvements