Conversation

@Crunchyman-ralph (Collaborator) commented Oct 14, 2025

  • Add azure, openrouter, bedrock, and ollama to VALIDATED_PROVIDERS array
  • Add Azure reasoning models (GPT-5, o1, o3, o3-mini, o4-mini) to supported-models.json
  • Implement automatic API endpoint detection for Azure reasoning models
  • Add dual endpoint support (chat/completions vs responses) in AzureProvider
  • Add smart URL adjustment logic for different Azure configurations
  • Maintain backward compatibility with existing Azure setups

Fixes #638

🤖 Generated with Claude Code

What type of PR is this?

  • 🐛 Bug fix
  • ✨ Feature
  • 🔌 Integration
  • 📝 Docs
  • 🧹 Refactor
  • Other:

Description

Related Issues

How to Test This

# Example commands or steps

Expected result:

Contributor Checklist

  • Created changeset: npm run changeset
  • Tests pass: npm test
  • Format check passes: npm run format-check (or npm run format to fix)
  • Addressed CodeRabbit comments (if any)
  • Linked related issues (if any)
  • Manually tested the changes

Changelog Entry


For Maintainers

  • PR title follows conventional commits
  • Target branch correct
  • Labels added
  • Milestone assigned (if applicable)

Summary by CodeRabbit

  • New Features

    • Added support for new Azure OpenAI models: gpt-5, o1, o3, o3-mini, and o4-mini (up to 100k tokens).
  • Improvements

    • Automatic selection of the Responses API for reasoning-capable models to simplify endpoint handling.
    • Expanded validated provider list to include Azure, OpenRouter, Bedrock, and Ollama for broader compatibility.

Co-authored-by: Ralph Khreish <[email protected]>

changeset-bot bot commented Oct 14, 2025

⚠️ No Changeset found

Latest commit: f7b8617

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.



coderabbitai bot commented Oct 14, 2025

Walkthrough

Adds five Azure reasoning models to the model registry, updates the Azure provider to detect reasoning models and rewrite Azure endpoints to /responses based on model metadata, and expands the validated provider list to include azure, openrouter, bedrock, and ollama.

Changes

Model registry updates (scripts/modules/supported-models.json):
Added five Azure OpenAI model entries (gpt-5, o1, o3, o3-mini, o4-mini) with api_type: "responses", max_tokens: 100000, cost fields, allowed_roles, swe_score, and supported: true. gpt-5 also includes temperature: 1. No existing entries were removed or modified.

Azure provider endpoint routing logic (src/ai-providers/azure.js):
Imported the model metadata map; added isReasoningModel(modelId) to detect responses-based models; added adjustBaseURL(baseURL, modelId) to convert chat/completions and deployment paths to /responses (preserving query strings and handling parse failures); updated getClient(params) to accept modelId, compute the adjusted base URL, and pass it to createAzure.

Provider validation list expansion (src/constants/providers.js):
Expanded VALIDATED_PROVIDERS to include 'azure', 'openrouter', 'bedrock', and 'ollama'; changed ALL_PROVIDERS construction to use new Set([...VALIDATED_PROVIDERS, ...CUSTOM_PROVIDERS_ARRAY]) to deduplicate.
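The deduplicated provider-list construction described above can be sketched as follows; the spread-into-Set pattern reflects the PR, but the custom-provider entries here are illustrative placeholders:

```javascript
// Sketch of the ALL_PROVIDERS deduplication described above.
// CUSTOM_PROVIDERS_ARRAY contents are illustrative; only the
// Set-based dedup pattern is taken from the PR summary.
const VALIDATED_PROVIDERS = [
	'anthropic',
	'openai',
	'azure',
	'openrouter',
	'bedrock',
	'ollama'
];
const CUSTOM_PROVIDERS_ARRAY = ['ollama', 'azure']; // may overlap with the validated list
const ALL_PROVIDERS = [
	...new Set([...VALIDATED_PROVIDERS, ...CUSTOM_PROVIDERS_ARRAY])
];
```

Spreading both arrays into a Set drops any overlap before converting back to an array, so providers listed in both places appear only once.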

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller as Caller
  participant AzureProvider as AzureProvider
  participant ModelMap as Supported Models JSON
  participant AzureSDK as createAzure()

  Caller->>AzureProvider: getClient({ baseURL, modelId, ... })
  AzureProvider->>ModelMap: lookup(modelId)
  ModelMap-->>AzureProvider: metadata (api_type, id, ...)
  alt api_type == "responses"
    AzureProvider->>AzureProvider: adjustBaseURL(baseURL) -> ensure /responses (preserve query)
  else
    AzureProvider->>AzureProvider: leave baseURL unchanged
  end
  AzureProvider->>AzureSDK: createAzure({ baseURL: adjustedBaseURL, ... })
  AzureSDK-->>AzureProvider: client
  AzureProvider-->>Caller: client
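The metadata lookup and endpoint selection shown in the diagram can be sketched roughly as below; MODEL_MAP mimics the shape of supported-models.json, and the ids, fields, and rewrite regex are illustrative assumptions rather than the PR's exact code:

```javascript
// Illustrative sketch of the getClient routing from the diagram.
// MODEL_MAP stands in for supported-models.json; entries are examples.
const MODEL_MAP = {
	azure: [
		{ id: 'o3-mini', api_type: 'responses', max_tokens: 100000 },
		{ id: 'gpt-4o', api_type: 'chat' }
	]
};

function isReasoningModel(modelId) {
	const azureModels = MODEL_MAP.azure ?? [];
	const modelDef = azureModels.find((m) => m.id === modelId);
	return modelDef?.api_type === 'responses';
}

function selectBaseURL(baseURL, modelId) {
	// Reasoning models route to /responses; others keep baseURL unchanged.
	return isReasoningModel(modelId)
		? baseURL.replace(/\/chat\/completions\/?$/, '/responses')
		: baseURL;
}
```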

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Pay attention to URL parsing edge cases and query preservation in adjustBaseURL.
  • Verify isReasoningModel lookup/heuristic aligns with model IDs in supported-models.json.
  • Confirm tests or runtime usages of getClient pass the new modelId parameter where required.
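A minimal sketch of the query-preserving URL adjustment the first bullet asks reviewers to scrutinize; the regex and fallback behavior here are assumptions, not the PR's actual implementation:

```javascript
// Hedged sketch of adjustBaseURL: rewrite the path to /responses while
// preserving any query string (e.g. api-version), falling back to the
// original string if parsing fails. The PR's exact regex may differ.
function adjustBaseURL(baseURL) {
	try {
		const url = new URL(baseURL);
		url.pathname = url.pathname.replace(/\/chat\/completions\/?$/, '/responses');
		return url.toString();
	} catch {
		return baseURL; // unparseable input: leave untouched
	}
}
```

Using the WHATWG URL parser keeps the query string intact automatically, which is the main edge case called out above.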

Possibly related PRs

Suggested reviewers

  • eyaltoledano

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)

Out of Scope Changes check: ❓ Inconclusive. Changes to VALIDATED_PROVIDERS (adding azure, openrouter, bedrock, ollama) and the ALL_PROVIDERS deduplication logic appear to be supporting infrastructure beyond the specific #638 requirements. Resolution: clarify whether adding openrouter, bedrock, and ollama to VALIDATED_PROVIDERS is within scope for issue #638 or represents separate feature additions.

✅ Passed checks (4 passed)

Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
Title check: ✅ Passed. The title directly reflects the main change: adding Azure provider support with reasoning models, which matches the primary objective in the linked issue #638.
Linked Issues check: ✅ Passed. The PR addresses the core requirement from #638: enabling Azure as a supported provider. It adds Azure to VALIDATED_PROVIDERS, creates Azure model configurations, and implements endpoint detection logic for reasoning models.
Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; check skipped.

coderabbitai bot left a comment

Actionable comments posted: 4

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a98d96e and 59ee1e7.

📒 Files selected for processing (3)
  • scripts/modules/supported-models.json (1 hunks)
  • src/ai-providers/azure.js (2 hunks)
  • src/constants/providers.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
src/ai-providers/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named .js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create function from @ai-sdk/, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.

Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Files:

  • src/ai-providers/azure.js
scripts/modules/supported-models.json

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.

Files:

  • scripts/modules/supported-models.json
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/supported-models.json
scripts/modules/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag

Files:

  • scripts/modules/supported-models.json
🧠 Learnings (3)
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.

Applied to files:

  • src/constants/providers.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.

Applied to files:

  • src/ai-providers/azure.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/supported-models.json : Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.

Applied to files:

  • scripts/modules/supported-models.json
🧬 Code graph analysis (1)
src/ai-providers/azure.js (2)
scripts/modules/ai-services-unified.js (2)
  • modelId (529-529)
  • baseURL (533-533)
scripts/modules/config-manager.js (1)
  • modelId (703-703)
🪛 GitHub Actions: CI
src/ai-providers/azure.js

[error] 8-9: Formatter would have printed content with import MODEL_MAP from '../../scripts/modules/supported-models.json' with { type: 'json' } - formatting mismatch detected by biome format. Run 'biome format .' to fix.


[error] 46-49: Formatter would have printed content for: const modelDef = azureModels.find(m => m.id === modelId); - ensure spacing and arrow function formatting matches formatter output. Run 'biome format .' to fix.

🔇 Additional comments (2)
src/ai-providers/azure.js (2)

90-100: AzureProvider.getClient always receives modelId. All invocations (via ai-services-unified) pass modelId from config, so the reasoning-URL switch is safe.

Likely an incorrect or invalid review comment.


8-8: No Biome formatting errors remain The JSON import already adheres to the formatter’s style.

@Crunchyman-ralph (Collaborator, Author) commented:

@claude please do this:

  • rebase with origin/next
  • npm run format
  • address coderabbitai comments on this PR


claude bot commented Dec 9, 2025

Claude finished @Crunchyman-ralph's task.


Tasks Completed ✅

  • Check git status and current branch
  • Rebase with origin/next (skipped - requires approval, but working tree was clean)
  • Run npm run format (manual formatting applied)
  • Address CodeRabbit comments:
    • Add display names to Azure models in supported-models.json (GPT-5, o1, o3, o3-mini, o4-mini)
    • Improve model detection logic in azure.js with fallback heuristic for custom deployment names
    • Fix URL rewriting bug in adjustBaseURL to properly handle query strings using URL parser
    • Fix duplicate providers in ALL_PROVIDERS array using Set deduplication
  • Commit and push changes

Summary

Successfully addressed all CodeRabbit feedback:

  1. Added display names to all new Azure reasoning models for better UX
  2. Enhanced model detection with fallback heuristic that recognizes reasoning models even when using custom Azure deployment names
  3. Fixed URL rewriting bug that could corrupt query strings by implementing proper URL parsing with fallback
  4. Eliminated duplicate providers in ALL_PROVIDERS array using Set deduplication

All changes have been committed and pushed to the PR branch claude/issue-638-20251014-0756.
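The fallback heuristic in item 2 might look roughly like the sketch below; the prefix pattern is an illustrative guess at how custom deployment names could be matched, not the PR's actual expression:

```javascript
// Illustrative fallback: when a custom Azure deployment name is not found
// in supported-models.json, infer reasoning capability from common
// reasoning-model id prefixes. The real heuristic may differ.
const REASONING_PREFIX = /^(gpt-5|o1|o3|o4)/i;

function looksLikeReasoningModel(deploymentName) {
	return REASONING_PREFIX.test(deploymentName);
}
```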


- Add display names to Azure reasoning models (gpt-5, o1, o3, o3-mini, o4-mini)
- Improve model detection with fallback heuristic for custom deployment names
- Fix URL rewriting to properly handle query strings using URL parser
- Remove duplicate providers in ALL_PROVIDERS array using Set deduplication

Co-authored-by: Ralph Khreish <[email protected]>
coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 59ee1e7 and f7b8617.

📒 Files selected for processing (3)
  • scripts/modules/supported-models.json (1 hunks)
  • src/ai-providers/azure.js (2 hunks)
  • src/constants/providers.js (2 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

**/*.js: Always use isSilentMode() function to check current silent mode status instead of directly accessing the global silentMode variable or global.silentMode
Use try/finally block pattern when wrapping core function calls with enableSilentMode/disableSilentMode to ensure silent mode is always restored, even if errors occur
For functions that need to handle both a passed silentMode parameter and check global state, check both the function parameter and global state: const isSilent = options.silentMode || (typeof options.silentMode === 'undefined' && isSilentMode())
Functions should accept their dependencies as parameters rather than using globals to promote testability and explicit dependency injection
Define callbacks as separate functions for easier testing rather than inline functions

Files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
**/*.{js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

JavaScript test files using Jest must follow the same testing patterns as TypeScript files, include proper mocking of external dependencies, and achieve the same coverage thresholds

Files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/utilities.mdc)

**/*.{js,ts}: Import and use specific getters from config-manager.js (e.g., getMainProvider(), getLogLevel(), getMainMaxTokens()) to access configuration values needed for application logic
Use isApiKeySet(providerName, session) from config-manager.js to check if a provider's key is available before potentially attempting an AI call
Do not add direct console.log calls outside the logging utility - use the central log function instead
Ensure silent mode is disabled in a finally block to prevent it from staying enabled
Do not access the global silentMode variable directly - use the exported silent mode control functions instead
Do not duplicate task ID formatting logic across modules - centralize formatting utilities
Use ContextGatherer class from utils/contextGatherer.js for AI-powered commands that need project context, supporting tasks, files, custom text, and project tree context
Use FuzzyTaskSearch class from utils/fuzzyTaskSearch.js for automatic task relevance detection with configurable search parameters
Use fuzzy search to supplement user-provided task IDs and display discovered task IDs to users for transparency
Do not replace explicit user task selections with fuzzy results - fuzzy search should supplement, not replace user selections
Use readJSON and writeJSON utilities for all JSON file operations instead of raw fs.readFileSync or fs.writeFileSync
Include error handling for JSON file operations and validate JSON structure after reading
Use path.join() for cross-platform path construction and path.resolve() for absolute paths, validating paths before file operations
Support both .env files and MCP session environment for environment variable resolution with fallbacks for missing values
Prefer updating the core function to accept an outputFormat parameter and check outputFormat === 'json' before displaying UI elements

Files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
scripts/modules/supported-models.json

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

Update scripts/modules/supported-models.json to add a new provider key with an array of model objects, each containing id, name (optional), swe_score, cost_per_1m_tokens (optional), allowed_roles, and max_tokens (optional but recommended)

Files:

  • scripts/modules/supported-models.json
scripts/modules/**/*

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

Restart the MCP server if core logic in scripts/modules or MCP tool definitions change

Files:

  • scripts/modules/supported-models.json
src/ai-providers/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

src/ai-providers/*.js: Create a new provider module in src/ai-providers/<provider-name>.js that implements generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object functions using the Vercel AI SDK
Provider modules in src/ai-providers/ must import generateText, streamText, generateObject from the ai package, the provider's create<ProviderName> function from @ai-sdk/<provider-name>, and the log utility from ../../scripts/modules/utils.js
Provider functions must include basic validation and try/catch error handling

src/ai-providers/*.js files should contain provider-specific wrappers for Vercel AI SDK functions and interact directly with Vercel AI SDK adapters

Files:

  • src/ai-providers/azure.js
🧠 Learnings (21)
📓 Common learnings
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1444
File: apps/cli/src/utils/auto-update/changelog.ts:103-111
Timestamp: 2025-11-25T18:32:29.828Z
Learning: The claude-task-master project uses a custom changelog format with PR numbers and author acknowledgements in the pattern `- [#PR](...) Thanks [author]! - Description`, which is parsed by the regex in apps/cli/src/utils/auto-update/changelog.ts.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1178
File: packages/tm-core/src/auth/config.ts:5-7
Timestamp: 2025-09-02T21:51:27.921Z
Learning: The user Crunchyman-ralph prefers not to use node: scheme imports (e.g., 'node:os', 'node:path') for Node.js core modules and considers suggestions to change bare imports to node: scheme as too nitpicky.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1069
File: .changeset/fix-tag-complexity-detection.md:0-0
Timestamp: 2025-08-02T15:33:22.656Z
Learning: For changeset files (.changeset/*.md), Crunchyman-ralph prefers to ignore formatting nitpicks about blank lines between frontmatter and descriptions, as he doesn't mind having them and wants to avoid such comments in future reviews.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1132
File: .github/workflows/weekly-metrics-discord.yml:81-93
Timestamp: 2025-08-13T22:10:46.958Z
Learning: Crunchyman-ralph ignores YAML formatting nitpicks about trailing spaces when there's no project-specific YAML formatter configured, preferring to focus on functionality over cosmetic formatting issues.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: scripts/modules/supported-models.json:242-254
Timestamp: 2025-08-08T11:33:15.297Z
Learning: Preference: In scripts/modules/supported-models.json, the "name" field is optional. For OpenAI entries (e.g., "gpt-5"), Crunchyman-ralph prefers omitting "name" when the id is explicit enough; avoid nitpicks requesting a "name" in such cases.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1200
File: src/ai-providers/custom-sdk/grok-cli/language-model.js:96-100
Timestamp: 2025-09-19T16:06:42.182Z
Learning: The user Crunchyman-ralph prefers to keep environment variable names explicit (like GROK_CLI_API_KEY) rather than supporting multiple aliases, to avoid overlap and ensure clear separation between different CLI implementations.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1178
File: packages/tm-core/src/subpath-exports.test.ts:6-9
Timestamp: 2025-09-03T12:45:30.724Z
Learning: The user Crunchyman-ralph prefers to avoid overly nitpicky or detailed suggestions in code reviews, especially for test coverage of minor import paths. Focus on more substantial issues rather than comprehensive coverage of all possible edge cases.
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1217
File: apps/cli/src/index.ts:16-21
Timestamp: 2025-09-18T16:35:35.147Z
Learning: The user Crunchyman-ralph considers suggestions to export types for better ergonomics (like exporting UpdateInfo type alongside related functions) as nitpicky and prefers not to implement such suggestions.
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to scripts/modules/config-manager.js : In `scripts/modules/config-manager.js`, update `MODEL_MAP` to include the new provider, ensure `VALID_PROVIDERS` includes the provider, update API key handling in `keyMap` and the `switch` statement in `getMcpApiKeyStatus` and `isApiKeySet`

Applied to files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to scripts/modules/ai-services-unified.js : In `scripts/modules/ai-services-unified.js`, import new providers and add entries to the `PROVIDER_FUNCTIONS` map with `generateText`, `streamText`, and `generateObject` properties

Applied to files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to src/ai-providers/*.js : Provider functions must include basic validation and try/catch error handling

Applied to files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to src/ai-providers/*.js : Provider modules in `src/ai-providers/` must import `generateText`, `streamText`, `generateObject` from the `ai` package, the provider's `create<ProviderName>` function from `ai-sdk/<provider-name>`, and the `log` utility from `../../scripts/modules/utils.js`

Applied to files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module in `src/ai-providers/<provider-name>.js` that implements `generate<ProviderName>Text`, `stream<ProviderName>Text`, and `generate<ProviderName>Object` functions using the Vercel AI SDK

Applied to files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to **/*.md : Update relevant documentation (like `README.md`) mentioning supported providers or configuration when adding a new AI provider

Applied to files:

  • src/constants/providers.js
📚 Learning: 2025-11-24T18:04:43.972Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-11-24T18:04:43.972Z
Learning: Applies to **/*.{js,ts} : Use isApiKeySet(providerName, session) from config-manager.js to check if a provider's key is available before potentially attempting an AI call

Applied to files:

  • src/constants/providers.js
📚 Learning: 2025-07-18T17:09:40.548Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/dependencies.mdc:0-0
Timestamp: 2025-07-18T17:09:40.548Z
Learning: Applies to scripts/modules/dependency-manager.js : Use Set objects to identify and remove duplicate dependencies

Applied to files:

  • src/constants/providers.js
📚 Learning: 2025-11-24T17:58:07.992Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-11-24T17:58:07.992Z
Learning: Applies to src/ai-providers/*.js : src/ai-providers/*.js files should contain provider-specific wrappers for Vercel AI SDK functions and interact directly with Vercel AI SDK adapters

Applied to files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to scripts/modules/supported-models.json : Update `scripts/modules/supported-models.json` to add a new provider key with an array of model objects, each containing `id`, `name` (optional), `swe_score`, `cost_per_1m_tokens` (optional), `allowed_roles`, and `max_tokens` (optional but recommended)

Applied to files:

  • scripts/modules/supported-models.json
  • src/ai-providers/azure.js
📚 Learning: 2025-07-21T14:14:48.694Z
Learnt from: rtmcrc
Repo: eyaltoledano/claude-task-master PR: 933
File: scripts/modules/supported-models.json:238-238
Timestamp: 2025-07-21T14:14:48.694Z
Learning: Model version updates in scripts/modules/supported-models.json may be included in feature PRs if they provide practical improvements like reduced error rates, even if not directly related to the main feature being implemented.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: In this repo, the supported models list is auto-generated by CI into docs/models.md from scripts/modules/supported-models.json via .github/workflows/update-models-md.yml and docs/scripts/models-json-to-markdown.js. Don’t request manual edits to the Markdown; ensure the JSON is correct instead.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: In this repo, supported-models.md is auto-generated by CI from supported-models.json; do not request manual edits to that file—ensure JSON entries are correct instead.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: This repo’s supported models list is CI-generated into docs/models.md from scripts/modules/supported-models.json (workflow: .github/workflows/update-models-md.yml using docs/scripts/models-json-to-markdown.js). Don’t request manual edits to the Markdown; ensure the JSON is correct instead. README links to docs/models.md.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:33:15.297Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1105
File: scripts/modules/supported-models.json:242-254
Timestamp: 2025-08-08T11:33:15.297Z
Learning: Preference: In scripts/modules/supported-models.json, the "name" field is optional. For OpenAI entries (e.g., "gpt-5"), Crunchyman-ralph prefers omitting "name" when the id is explicit enough; avoid nitpicks requesting a "name" in such cases.

Applied to files:

  • scripts/modules/supported-models.json
  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: When setting a model ID using the `models` MCP tool or CLI command, DO NOT include the `provider:` prefix—use only the model ID itself (e.g., `gpt-4o` not `openai:gpt-4o`)

Applied to files:

  • scripts/modules/supported-models.json
  • src/ai-providers/azure.js
📚 Learning: 2025-10-31T18:07:17.402Z
Learnt from: Crunchyman-ralph
Repo: eyaltoledano/claude-task-master PR: 1360
File: src/ai-providers/glm.js:0-0
Timestamp: 2025-10-31T18:07:17.402Z
Learning: In src/ai-providers/glm.js, the GLM provider's getClient method should allow defaulting to the 'coding' endpoint when an invalid or unspecified route parameter is provided, as this is the correct behavior per Z.ai's OpenAI-compatible API documentation. Do not enforce strict route validation that throws errors for unknown routes.

Applied to files:

  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: To add a new AI provider with official Vercel AI SDK support, install the provider package via `npm install ai-sdk/<provider-name>`

Applied to files:

  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:57:14.743Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-11-24T17:57:14.743Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests in `tests/unit/ai-providers/<provider-name>.test.js` that mock the provider's AI SDK module and test each exported function for correct client instantiation, parameter passing, result handling, and error handling

Applied to files:

  • src/ai-providers/azure.js
📚 Learning: 2025-11-24T17:58:07.992Z
Learnt from: CR
Repo: eyaltoledano/claude-task-master PR: 0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-11-24T17:58:07.992Z
Learning: Applies to scripts/modules/ai-services-unified.js : ai-services-unified.js should export generateTextService and generateObjectService, handle provider/model selection based on role and .taskmasterconfig, resolve API keys, implement fallback and retry logic, and orchestrate calls to provider-specific implementations

Applied to files:

  • src/ai-providers/azure.js
🧬 Code graph analysis (1)
src/ai-providers/azure.js (2)
scripts/modules/ai-services-unified.js (3)
  • modelId (529-529)
  • baseURL (533-533)
  • path (191-191)
scripts/modules/config-manager.js (1)
  • modelId (703-703)
🔇 Additional comments (5)
scripts/modules/supported-models.json (1)

1001-1066: Azure reasoning models are well-structured and complete.

The five new Azure models (gpt-5, o1, o3, o3-mini, o4-mini) follow the required schema with id, name, swe_score, cost_per_1m_tokens, allowed_roles, max_tokens, and supported. The api_type: "responses" metadata correctly enables the new endpoint detection logic in AzureProvider.isReasoningModel().
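As a sketch of the schema these entries follow (field names taken from the review above; the values and the helper are illustrative, not the actual registry contents):

```javascript
// Hypothetical sketch of one Azure reasoning-model entry plus a minimal
// schema check; values are placeholders, not the real registry data.
const azureModelEntry = {
	id: 'o3-mini',
	name: 'o3 Mini',
	swe_score: 0.0, // placeholder
	cost_per_1m_tokens: { input: 1.1, output: 4.4 }, // placeholder costs
	allowed_roles: ['main', 'fallback'],
	max_tokens: 100000,
	supported: true,
	api_type: 'responses' // drives endpoint detection in AzureProvider
};

const REQUIRED_FIELDS = [
	'id',
	'swe_score',
	'cost_per_1m_tokens',
	'allowed_roles',
	'max_tokens',
	'supported'
];

function hasRequiredFields(entry) {
	return REQUIRED_FIELDS.every((field) => field in entry);
}

console.log(hasRequiredFields(azureModelEntry)); // true
```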

src/constants/providers.js (1)

39-41: Good use of Set for deduplication.

The Set correctly prevents duplicate entries when merging VALIDATED_PROVIDERS and CUSTOM_PROVIDERS_ARRAY, addressing the overlap issue with azure, openrouter, bedrock, and ollama appearing in both arrays.
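The merge pattern the comment describes can be sketched as follows (array contents are assumptions based on the provider names mentioned in this review, not the file's actual lists):

```javascript
// Minimal sketch of Set-based deduplication when merging provider lists.
// The four custom providers overlap with the validated list, so a plain
// concat would produce duplicates; the Set collapses them.
const VALIDATED_PROVIDERS = ['anthropic', 'openai', 'azure', 'openrouter', 'bedrock', 'ollama'];
const CUSTOM_PROVIDERS_ARRAY = ['azure', 'openrouter', 'bedrock', 'ollama'];

const ALL_PROVIDERS = [
	...new Set([...VALIDATED_PROVIDERS, ...CUSTOM_PROVIDERS_ARRAY])
];

console.log(ALL_PROVIDERS.length); // 6 — overlapping entries appear once
```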

src/ai-providers/azure.js (3)

46-61: Reasoning model detection with heuristic fallback looks correct.

The implementation properly checks api_type === 'responses' from the model registry first, then falls back to a regex heuristic for custom Azure deployment names. This addresses the concern about Azure deployments using custom names.

One note: the regex ^(gpt-5|o1|o3|o4) will also match model IDs like o1-preview or o4-mini-high, which is likely the desired behavior for forward compatibility with reasoning model variants.
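The detection order described above can be sketched like this (the registry object and helper shape are illustrative assumptions, not the provider's actual code):

```javascript
// Hedged sketch of reasoning-model detection: check api_type from the
// model registry first, then fall back to a prefix regex so custom Azure
// deployment names (e.g. o1-preview, o4-mini-high) are still caught.
const modelRegistry = {
	'o3-mini': { api_type: 'responses' },
	'gpt-4o': {}
};

function isReasoningModel(modelId) {
	const entry = modelRegistry[modelId];
	if (entry && entry.api_type === 'responses') return true;
	// Fallback heuristic for deployments not in the registry.
	return /^(gpt-5|o1|o3|o4)/.test(modelId);
}

console.log(isReasoningModel('o3-mini')); // true (registry hit)
console.log(isReasoningModel('o1-preview')); // true (regex fallback)
console.log(isReasoningModel('gpt-4o')); // false
```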


69-104: URL adjustment logic correctly handles query strings and various path patterns.

The implementation properly:

  1. Uses URL parsing to preserve query parameters (e.g., ?api-version=...)
  2. Handles the three main Azure URL patterns: /chat/completions, /openai/deployments/<dep>, and /openai
  3. Includes a robust fallback for malformed URLs that still preserves query strings

The guard at line 97 (!/\/responses$/.test(newPath)) correctly prevents double-appending /responses.
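A minimal sketch of that adjustment logic, assuming a hypothetical helper name (the real implementation handles more path patterns than shown here):

```javascript
// Illustrative sketch: parse the base URL, strip a trailing
// /chat/completions segment, append /responses exactly once, and keep the
// query string (e.g. ?api-version=...) intact via URL parsing.
function adjustAzureUrl(baseURL) {
	try {
		const url = new URL(baseURL);
		let newPath = url.pathname
			.replace(/\/chat\/completions\/?$/, '')
			.replace(/\/$/, '');
		// Guard against double-appending /responses
		if (!/\/responses$/.test(newPath)) {
			newPath = `${newPath}/responses`;
		}
		url.pathname = newPath;
		return url.toString();
	} catch {
		// Fallback for malformed URLs: still preserve any query string
		const [path, query] = baseURL.split('?');
		const trimmed = path.replace(/\/$/, '');
		const base = /\/responses$/.test(trimmed) ? trimmed : `${trimmed}/responses`;
		return query ? `${base}?${query}` : base;
	}
}

console.log(
	adjustAzureUrl('https://example.openai.azure.com/openai/deployments/o3?api-version=2024-10-01')
);
// → https://example.openai.azure.com/openai/deployments/o3/responses?api-version=2024-10-01
```

Note that the `/responses` guard also makes the function idempotent: running it twice over the same URL yields the same result.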


115-129: Client initialization correctly integrates URL adjustment.

The getClient method properly extracts modelId from params and uses it to compute the adjusted base URL before passing to createAzure. This aligns with how modelId is sourced from roleConfig in the unified AI services layer.
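The wiring could look roughly like this; the stubs below stand in for the detection and URL helpers described earlier in the review, and `createAzure` is mocked rather than imported from the AI SDK:

```javascript
// Sketch of getClient integration: pick the base URL based on whether the
// requested model is a reasoning model. Helper bodies are simplified
// stand-ins, not the provider's actual implementations.
const isReasoningModel = (id) => /^(gpt-5|o1|o3|o4)/.test(id);
const adjustBaseURL = (url) => `${url.replace(/\/$/, '')}/responses`;
const createAzure = (opts) => ({ ...opts }); // stand-in for the SDK factory

function getClient(params) {
	const { modelId, baseURL } = params;
	const effectiveURL = isReasoningModel(modelId)
		? adjustBaseURL(baseURL)
		: baseURL;
	return createAzure({ baseURL: effectiveURL });
}

console.log(getClient({ modelId: 'o3-mini', baseURL: 'https://x/openai' }).baseURL);
// → https://x/openai/responses
```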

Comment on lines +14 to +18
'mistral',
'azure',
'openrouter',
'bedrock',
'ollama'

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if mistral key exists in supported-models.json
echo "Checking for 'mistral' key in supported-models.json..."
jq 'has("mistral")' scripts/modules/supported-models.json

echo ""
echo "All top-level provider keys in supported-models.json:"
jq 'keys | sort' scripts/modules/supported-models.json

Repository: eyaltoledano/claude-task-master

Length of output: 390


Add mistral to supported-models.json or remove from VALIDATED_PROVIDERS.

The mistral provider was added to VALIDATED_PROVIDERS (line 14 in src/constants/providers.js), but no corresponding "mistral" key exists in supported-models.json. Mistral models will fail validation since VALIDATED_PROVIDERS are checked against the supported-models registry. Either add mistral configuration to supported-models.json or remove it from VALIDATED_PROVIDERS.

🤖 Prompt for AI Agents
In src/constants/providers.js around lines 14–18, `mistral` was added to
VALIDATED_PROVIDERS but no corresponding "mistral" entry exists in
supported-models.json; this causes validation failures. Fix by either adding a
"mistral" key to supported-models.json with the same schema/fields used by other
providers (model list, capabilities, validation rules) so the validator can find
mistral models, or remove "mistral" from VALIDATED_PROVIDERS if support isn’t
ready; ensure whichever change you make keeps the JSON schema consistent with
existing provider entries and update any related tests/fixtures.
