35 changes: 35 additions & 0 deletions .changeset/fix-anthropic-multi-turn-tool-calls.md
@@ -0,0 +1,35 @@
---
'@tanstack/ai': patch
'@tanstack/ai-client': patch
'@tanstack/ai-anthropic': patch
'@tanstack/ai-gemini': patch
---

fix(ai, ai-client, ai-anthropic, ai-gemini): fix multi-turn conversations failing after tool calls

**Core (@tanstack/ai):**

- Lazy assistant message creation: `StreamProcessor` now defers creating the assistant message until the first content-bearing chunk arrives (text, tool call, thinking, or error), preventing empty `parts: []` messages from appearing during auto-continuation when the model returns no content
- Add `prepareAssistantMessage()` (lazy) alongside deprecated `startAssistantMessage()` (eager, backwards-compatible)
- Add `getCurrentAssistantMessageId()` to check if a message was created
- **Rewrite `uiMessageToModelMessages()` to preserve part ordering**: the function now walks parts sequentially instead of separating by type, producing correctly interleaved assistant/tool messages (text1 + toolCall1 → toolResult1 → text2 + toolCall2 → toolResult2) instead of concatenating all text and batching all tool calls. This fixes multi-round tool flows where the model would see garbled conversation history and re-call tools unnecessarily.
- Deduplicate tool result messages: when a client tool has both a `tool-result` part and a `tool-call` part with `output`, only one `role: 'tool'` message is emitted per tool call ID
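The ordering-preserving walk described above can be sketched roughly as follows. The part and message shapes here are simplified assumptions for illustration and do not match the package's actual internal types:

```typescript
// Sketch of order-preserving conversion. Type and field names are
// illustrative assumptions, not @tanstack/ai's real internals.
type Part =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; id: string; name: string; args: unknown; output?: unknown }
  | { type: 'tool-result'; toolCallId: string; result: unknown }

type ModelMessage =
  | { role: 'assistant'; content: Array<Part> }
  | { role: 'tool'; toolCallId: string; content: unknown }

function toModelMessages(parts: Array<Part>): Array<ModelMessage> {
  const out: Array<ModelMessage> = []
  const seenToolResults = new Set<string>()
  let current: { role: 'assistant'; content: Array<Part> } | null = null

  for (const part of parts) {
    if (part.type === 'text' || part.type === 'tool-call') {
      // Text and tool calls accumulate into the current assistant message,
      // in their original order.
      if (!current) {
        current = { role: 'assistant', content: [] }
        out.push(current)
      }
      current.content.push(part)
      // A tool call carrying an inline output also yields a tool message,
      // but only once per tool call ID (deduplication).
      if (
        part.type === 'tool-call' &&
        part.output !== undefined &&
        !seenToolResults.has(part.id)
      ) {
        seenToolResults.add(part.id)
        out.push({ role: 'tool', toolCallId: part.id, content: part.output })
        current = null
      }
    } else {
      // A tool result closes the current assistant message, so later text
      // starts a new one -- this is what produces the interleaving.
      if (!seenToolResults.has(part.toolCallId)) {
        seenToolResults.add(part.toolCallId)
        out.push({ role: 'tool', toolCallId: part.toolCallId, content: part.result })
      }
      current = null
    }
  }
  return out
}
```

With this shape, text2 after a tool result lands in a fresh assistant message rather than being appended to the first one, which is the interleaving the bullet above describes.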

**Client (@tanstack/ai-client):**

- Update `ChatClient.processStream()` to use lazy assistant message creation, preventing UI flicker from empty messages being created then removed

**Anthropic:**

- Fix consecutive user-role messages violating Anthropic's alternating role requirement by merging them in `formatMessages`
- Deduplicate `tool_result` blocks with the same `tool_use_id`
- Filter out empty assistant messages from conversation history
- Suppress duplicate `RUN_FINISHED` event from `message_stop` when `message_delta` already emitted one
- Fix `TEXT_MESSAGE_END` incorrectly emitting for `tool_use` content blocks
- Add Claude Opus 4.6 model support with adaptive thinking and effort parameter

**Gemini:**

- Fix consecutive user-role messages violating Gemini's alternating role requirement by merging them in `formatMessages`
- Deduplicate `functionResponse` parts with the same name (tool call ID)
- Filter out empty model messages from conversation history
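The Gemini adapter changes themselves are not included in this diff. A minimal sketch of the `functionResponse` de-duplication, assuming Gemini's standard part shape and using an invented helper name, might look like:

```typescript
// Sketch of de-duplicating functionResponse parts by name (Gemini
// correlates calls and responses via the name field). The helper is
// illustrative, not the adapter's actual code.
interface GeminiPart {
  text?: string
  functionResponse?: { name: string; response: Record<string, unknown> }
}

function dedupeFunctionResponses(parts: Array<GeminiPart>): Array<GeminiPart> {
  const seen = new Set<string>()
  return parts.filter((part) => {
    const name = part.functionResponse?.name
    if (!name) return true // keep non-functionResponse parts untouched
    if (seen.has(name)) return false // drop the duplicate response
    seen.add(name)
    return true
  })
}
```

This mirrors the Anthropic-side `tool_result` de-duplication shown later in this diff, applied to Gemini's content format instead.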
10 changes: 5 additions & 5 deletions examples/ts-group-chat/package.json
@@ -14,11 +14,11 @@
"@tanstack/ai-client": "workspace:*",
"@tanstack/ai-react": "workspace:*",
"@tanstack/react-devtools": "^0.8.2",
"@tanstack/react-router": "^1.141.1",
"@tanstack/react-router-devtools": "^1.139.7",
"@tanstack/react-router-ssr-query": "^1.139.7",
"@tanstack/react-start": "^1.141.1",
"@tanstack/router-plugin": "^1.139.7",
"@tanstack/react-router": "^1.158.4",
"@tanstack/react-router-devtools": "^1.158.4",
"@tanstack/react-router-ssr-query": "^1.158.4",
"@tanstack/react-start": "^1.159.0",
"@tanstack/router-plugin": "^1.158.4",
"capnweb": "^0.1.0",
"react": "^19.2.3",
"react-dom": "^19.2.3",
12 changes: 6 additions & 6 deletions examples/ts-react-chat/package.json
@@ -20,14 +20,14 @@
"@tanstack/ai-openrouter": "workspace:*",
"@tanstack/ai-react": "workspace:*",
"@tanstack/ai-react-ui": "workspace:*",
"@tanstack/nitro-v2-vite-plugin": "^1.141.0",
"@tanstack/nitro-v2-vite-plugin": "^1.154.7",
"@tanstack/react-devtools": "^0.8.2",
"@tanstack/react-router": "^1.141.1",
"@tanstack/react-router-devtools": "^1.139.7",
"@tanstack/react-router-ssr-query": "^1.139.7",
"@tanstack/react-start": "^1.141.1",
"@tanstack/react-router": "^1.158.4",
"@tanstack/react-router-devtools": "^1.158.4",
"@tanstack/react-router-ssr-query": "^1.158.4",
"@tanstack/react-start": "^1.159.0",
"@tanstack/react-store": "^0.8.0",
"@tanstack/router-plugin": "^1.139.7",
"@tanstack/router-plugin": "^1.158.4",
"@tanstack/store": "^0.8.0",
"highlight.js": "^11.11.1",
"lucide-react": "^0.561.0",
6 changes: 3 additions & 3 deletions examples/ts-react-chat/src/routeTree.gen.ts
@@ -39,7 +39,7 @@ export interface FileRoutesByFullPath {
'/': typeof IndexRoute
'/api/tanchat': typeof ApiTanchatRoute
'/example/guitars/$guitarId': typeof ExampleGuitarsGuitarIdRoute
'/example/guitars': typeof ExampleGuitarsIndexRoute
'/example/guitars/': typeof ExampleGuitarsIndexRoute
}
export interface FileRoutesByTo {
'/': typeof IndexRoute
@@ -60,7 +60,7 @@ export interface FileRouteTypes {
| '/'
| '/api/tanchat'
| '/example/guitars/$guitarId'
| '/example/guitars'
| '/example/guitars/'
fileRoutesByTo: FileRoutesByTo
to: '/' | '/api/tanchat' | '/example/guitars/$guitarId' | '/example/guitars'
id:
@@ -97,7 +97,7 @@ declare module '@tanstack/react-router' {
'/example/guitars/': {
id: '/example/guitars/'
path: '/example/guitars'
fullPath: '/example/guitars'
fullPath: '/example/guitars/'
preLoaderRoute: typeof ExampleGuitarsIndexRouteImport
parentRoute: typeof rootRouteImport
}
4 changes: 2 additions & 2 deletions examples/ts-solid-chat/package.json
@@ -19,8 +19,8 @@
"@tanstack/ai-openai": "workspace:*",
"@tanstack/ai-solid": "workspace:*",
"@tanstack/ai-solid-ui": "workspace:*",
"@tanstack/nitro-v2-vite-plugin": "^1.141.0",
"@tanstack/router-plugin": "^1.139.7",
"@tanstack/nitro-v2-vite-plugin": "^1.154.7",
"@tanstack/router-plugin": "^1.158.4",
"@tanstack/solid-ai-devtools": "workspace:*",
"@tanstack/solid-devtools": "^0.7.15",
"@tanstack/solid-router": "^1.139.10",
157 changes: 119 additions & 38 deletions packages/typescript/ai-anthropic/src/adapters/text.ts
@@ -247,6 +247,7 @@ export class AnthropicTextAdapter<
const validKeys: Array<keyof InternalTextProviderOptions> = [
'container',
'context_management',
'effort',
'mcp_servers',
'service_tier',
'stop_sequences',
@@ -450,7 +451,74 @@
})
}

return formattedMessages
// Post-process: Anthropic requires strictly alternating user/assistant roles.
// Tool results are sent as role:'user' messages, which can create consecutive
// user messages when followed by a new user message. Merge them.
return this.mergeConsecutiveSameRoleMessages(formattedMessages)
}

/**
* Merge consecutive messages of the same role into a single message.
* Anthropic's API requires strictly alternating user/assistant roles.
* Tool results are wrapped as role:'user' messages, which can collide
* with actual user messages in multi-turn conversations.
*
* Also filters out empty assistant messages (e.g., from a previous failed request).
*/
private mergeConsecutiveSameRoleMessages(
messages: InternalTextProviderOptions['messages'],
): InternalTextProviderOptions['messages'] {
const merged: InternalTextProviderOptions['messages'] = []

for (const msg of messages) {
// Skip empty assistant messages (no content or empty string)
if (msg.role === 'assistant') {
const hasContent = Array.isArray(msg.content)
? msg.content.length > 0
: typeof msg.content === 'string' && msg.content.length > 0
if (!hasContent) {
continue
}
}

const prev = merged[merged.length - 1]
if (prev && prev.role === msg.role) {
// Normalize both contents to arrays and concatenate
const prevBlocks = Array.isArray(prev.content)
? prev.content
: typeof prev.content === 'string' && prev.content
? [{ type: 'text' as const, text: prev.content }]
: []
const msgBlocks = Array.isArray(msg.content)
? msg.content
: typeof msg.content === 'string' && msg.content
? [{ type: 'text' as const, text: msg.content }]
: []
prev.content = [...prevBlocks, ...msgBlocks]
} else {
merged.push({ ...msg })
}
}

// De-duplicate tool_result blocks with the same tool_use_id.
// This can happen when the core layer generates tool results from both
// the tool-result part and the tool-call part's output field.
for (const msg of merged) {
if (Array.isArray(msg.content)) {
const seenToolResultIds = new Set<string>()
msg.content = msg.content.filter((block: any) => {
if (block.type === 'tool_result' && block.tool_use_id) {
if (seenToolResultIds.has(block.tool_use_id)) {
return false // Remove duplicate
}
seenToolResultIds.add(block.tool_use_id)
}
return true
})
}
}

return merged
}

private async *processAnthropicStream(
@@ -473,6 +541,9 @@
let stepId: string | null = null
let hasEmittedRunStarted = false
let hasEmittedTextMessageStart = false
let hasEmittedRunFinished = false
// Track current content block type for proper content_block_stop handling
let currentBlockType: string | null = null

try {
for await (const event of stream) {
@@ -488,6 +559,7 @@
}

if (event.type === 'content_block_start') {
currentBlockType = event.content_block.type
if (event.content_block.type === 'tool_use') {
currentToolIndex++
toolCallsMap.set(currentToolIndex, {
@@ -572,59 +644,68 @@
}
}
} else if (event.type === 'content_block_stop') {
const existing = toolCallsMap.get(currentToolIndex)
if (existing) {
// If tool call wasn't started yet (no args), start it now
if (!existing.started) {
existing.started = true
if (currentBlockType === 'tool_use') {
const existing = toolCallsMap.get(currentToolIndex)
if (existing) {
// If tool call wasn't started yet (no args), start it now
if (!existing.started) {
existing.started = true
yield {
type: 'TOOL_CALL_START',
toolCallId: existing.id,
toolName: existing.name,
model,
timestamp,
index: currentToolIndex,
}
}

// Emit TOOL_CALL_END
let parsedInput: unknown = {}
try {
const parsed = existing.input ? JSON.parse(existing.input) : {}
parsedInput = parsed && typeof parsed === 'object' ? parsed : {}
} catch {
parsedInput = {}
}

yield {
type: 'TOOL_CALL_START',
type: 'TOOL_CALL_END',
toolCallId: existing.id,
toolName: existing.name,
model,
timestamp,
index: currentToolIndex,
input: parsedInput,
}
}

// Emit TOOL_CALL_END
let parsedInput: unknown = {}
try {
const parsed = existing.input ? JSON.parse(existing.input) : {}
parsedInput = parsed && typeof parsed === 'object' ? parsed : {}
} catch {
parsedInput = {}
}

yield {
type: 'TOOL_CALL_END',
toolCallId: existing.id,
toolName: existing.name,
model,
timestamp,
input: parsedInput,
} else {
// Emit TEXT_MESSAGE_END only for text blocks (not tool_use blocks)
if (hasEmittedTextMessageStart && accumulatedContent) {
yield {
type: 'TEXT_MESSAGE_END',
messageId,
model,
timestamp,
}
}
}

// Emit TEXT_MESSAGE_END if we had text content
if (hasEmittedTextMessageStart && accumulatedContent) {
currentBlockType = null
Comment on lines +681 to +692
⚠️ Potential issue | 🟡 Minor

TEXT_MESSAGE_END may emit spuriously for non-text, non-tool_use block types (e.g., thinking).

The else branch fires for any `content_block_stop` where `currentBlockType !== 'tool_use'`, which includes thinking blocks. If a text block preceded the thinking block, this would emit a spurious TEXT_MESSAGE_END when the thinking block stops, because `hasEmittedTextMessageStart` and `accumulatedContent` are both still truthy.

In practice Anthropic puts thinking blocks before text blocks, so this isn't currently triggered. But if the response ordering changes or new block types are added, it could surface.

Suggested guard
          } else {
-            // Emit TEXT_MESSAGE_END only for text blocks (not tool_use blocks)
-            if (hasEmittedTextMessageStart && accumulatedContent) {
+            // Emit TEXT_MESSAGE_END only when a text block ends
+            if (
+              currentBlockType === 'text' &&
+              hasEmittedTextMessageStart &&
+              accumulatedContent
+            ) {
              yield {
                type: 'TEXT_MESSAGE_END',
                messageId,
                model,
                timestamp,
              }
            }
          }

} else if (event.type === 'message_stop') {
// Only emit RUN_FINISHED from message_stop if message_delta didn't already emit one.
// message_delta carries the real stop_reason (tool_use, end_turn, etc.),
// while message_stop is just a completion signal.
if (!hasEmittedRunFinished) {
yield {
type: 'TEXT_MESSAGE_END',
messageId,
type: 'RUN_FINISHED',
runId,
model,
timestamp,
finishReason: 'stop',
}
}
} else if (event.type === 'message_stop') {
yield {
type: 'RUN_FINISHED',
runId,
model,
timestamp,
finishReason: 'stop',
}
} else if (event.type === 'message_delta') {
if (event.delta.stop_reason) {
hasEmittedRunFinished = true
switch (event.delta.stop_reason) {
case 'tool_use': {
yield {