
fix: accept upstream reasoning field and unify output as reasoning_content #78

Merged

pescn merged 1 commit into main from codex/reasoning-content-fallback on Mar 3, 2026

Conversation

@pescn (Contributor) commented Mar 3, 2026

Changes

  • The OpenAI upstream adapter now reads both message.reasoning_content and message.reasoning (streaming and non-streaming)
  • Added a unified extraction function that prefers reasoning_content and falls back to reasoning
  • The downstream output field name is unchanged: reasoning is still returned via reasoning_content
  • Added unit tests covering the reasoning fallback, precedence, and streaming delta scenarios
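The fallback described above can be sketched as follows. This is a minimal sketch based on the PR description; the exact payload type in backend/src/adapters/upstream/openai.ts may differ.

```typescript
// Hypothetical payload shape: both fields are optional strings.
interface ReasoningPayload {
  reasoning_content?: string;
  reasoning?: string;
}

// Prefer reasoning_content, fall back to reasoning, and treat
// empty strings as absent.
function extractReasoningText(payload?: ReasoningPayload): string | undefined {
  if (!payload) return undefined;
  if (payload.reasoning_content) return payload.reasoning_content;
  if (payload.reasoning) return payload.reasoning;
  return undefined;
}
```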

Verification

  • bun run check
  • bun test src/adapters/upstream/openai.test.ts

Notes

  • The repository's full bun test run has pre-existing failures (4 cases in src/search/__tests__/compiler.test.ts) unrelated to this PR.

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced reasoning-content handling; multiple upstream reasoning response formats are now recognized.
  • Tests

    • Added a comprehensive test suite validating reasoning-content parsing and streaming behavior.

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request improves compatibility with the upstream OpenAI interface by unifying the handling of the reasoning and reasoning_content fields that may appear in its responses. The core goal is to ensure that, regardless of which field the upstream provides, the system correctly extracts the reasoning content and outputs it in a consistent format, improving robustness and flexibility in data handling.

Highlights

  • OpenAI upstream adapter compatibility: the OpenAI upstream adapter now reads both the reasoning and reasoning_content fields (streaming and non-streaming).
  • Unified reasoning extraction: added a unified extraction function, extractReasoningText, that prefers reasoning_content and falls back to reasoning.
  • Downstream field name unchanged: the downstream output field name is unchanged; reasoning is still returned via reasoning_content.
  • New unit tests: added unit tests covering the reasoning fallback, precedence, and streaming delta scenarios.
Changelog
  • backend/src/adapters/upstream/openai.test.ts
    • Added test cases verifying openaiUpstreamAdapter's compatibility with the reasoning field, including parsing of the reasoning field in non-streaming responses, handling of reasoning_content precedence, and parsing of reasoning deltas in streaming responses.
  • backend/src/adapters/upstream/openai.ts
    • Added an optional reasoning field to the OpenAIChoice and OpenAIStreamChoice interfaces to support the new upstream response format.
    • Introduced the extractReasoningText helper, which extracts reasoning_content from the response payload when present and falls back to reasoning otherwise.
    • Updated the convertResponse function to use extractReasoningText when handling non-streaming responses.
    • Modified the streaming logic so that content_block_delta handling also extracts reasoning and reasoning_content via extractReasoningText.
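The streaming path described in the changelog can be sketched roughly as follows. The content_block_delta / thinking_delta event shape is taken from this PR's test expectations; the delta type and the text_delta branch are assumptions for illustration, not the adapter's actual code.

```typescript
// Hypothetical shape of an OpenAI chat.completion.chunk delta.
interface StreamDelta {
  content?: string;
  reasoning_content?: string;
  reasoning?: string;
}

interface DeltaEvent {
  type: "content_block_delta";
  index: number;
  delta: { type: "thinking_delta" | "text_delta"; thinking?: string; text?: string };
}

// Map a stream delta to a downstream event, preferring
// reasoning_content over reasoning for thinking deltas.
function deltaToEvent(index: number, delta: StreamDelta): DeltaEvent | undefined {
  const thinking = delta.reasoning_content || delta.reasoning;
  if (thinking) {
    return { type: "content_block_delta", index, delta: { type: "thinking_delta", thinking } };
  }
  if (delta.content) {
    return { type: "content_block_delta", index, delta: { type: "text_delta", text: delta.content } };
  }
  return undefined;
}
```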
Activity
  • The author verified the change by running type checking (tsc) and unit tests (bun test).
  • Type checking passed.
  • The bun test run logged some JSON parsing errors, likely an issue with the test environment or specific test cases; the tests directly related to this PR's core change (reasoning-field compatibility) appear to have passed.

@coderabbitai

coderabbitai bot commented Mar 3, 2026

📝 Walkthrough

Overview

Adds a test suite for the OpenAI upstream adapter that verifies the non-streaming and streaming response parsing logic. Also adds support for the reasoning field and introduces an extractReasoningText helper to normalize extraction of reasoning content from either reasoning_content or reasoning.

Changes

  Category: Test suite
  File: backend/src/adapters/upstream/openai.test.ts
  Summary: Adds full test coverage for the parseResponse and parseStreamResponse methods, verifying the correct ordering of thinking blocks and content in non-streaming responses, and the incremental handling of reasoning and content deltas in streaming responses.

  Category: Adapter enhancement
  File: backend/src/adapters/upstream/openai.ts
  Summary: Adds an optional reasoning field to OpenAIChoice.message and OpenAIStreamChoice.delta; introduces the extractReasoningText helper to handle reasoning_content or reasoning uniformly; updates convertResponse and the stream-processing path to use the new helper.

Code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Related PRs

Poem

🐰 Tests give the code a thorough look,
reasoning fields take many a route,
a helper keeps them all in line,
streaming or not, the flow is fine! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 66.67% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly summarizes the main change: adding compatibility for the upstream reasoning field and unifying the output as reasoning_content, which matches the core of the code changes.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

Hello, thank you for your contribution. This change adds compatibility with the upstream API's reasoning field and prioritizes reasoning_content. The overall implementation is correct, and corresponding unit tests have been added, which is great. I have one suggestion regarding the implementation of the extractReasoningText function, focused on clarity and conciseness per our guidelines for helper functions.

Comment on lines +302 to +311
if (!payload) {
return undefined;
}
if (payload.reasoning_content && payload.reasoning_content.length > 0) {
return payload.reasoning_content;
}
if (payload.reasoning && payload.reasoning.length > 0) {
return payload.reasoning;
}
return undefined;


medium

To improve clarity and ensure helper functions return values that are ready for use, consider making this function body more concise. In JavaScript/TypeScript, non-empty strings are 'truthy' values, while empty strings are 'falsy' values. You can leverage the || operator and optional chaining (?.) to achieve the same logic, making the code more concise and readable, and directly returning the desired value without requiring further processing by the caller.

  return payload?.reasoning_content || payload?.reasoning || undefined;
References
  1. To improve clarity, helper functions should return objects that are ready for use, without requiring the caller to immediately override properties. If a property should be null, omit it from the returned object rather than setting it and having the caller nullify it.
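For what it's worth, the concise form behaves the same as the explicit version for string inputs, since empty strings are falsy and the trailing || undefined normalizes a final empty string to undefined. A quick sketch (function names here are illustrative, not from the PR):

```typescript
interface Payload {
  reasoning_content?: string;
  reasoning?: string;
}

// The explicit version from the diff.
const explicitExtract = (p?: Payload): string | undefined => {
  if (!p) return undefined;
  if (p.reasoning_content && p.reasoning_content.length > 0) return p.reasoning_content;
  if (p.reasoning && p.reasoning.length > 0) return p.reasoning;
  return undefined;
};

// The suggested concise version.
const conciseExtract = (p?: Payload): string | undefined =>
  p?.reasoning_content || p?.reasoning || undefined;

// Both agree on all the relevant cases, including empty strings.
const cases: Array<Payload | undefined> = [
  undefined,
  {},
  { reasoning_content: "a", reasoning: "b" },
  { reasoning_content: "", reasoning: "b" },
  { reasoning: "" },
];
for (const c of cases) {
  console.assert(explicitExtract(c) === conciseExtract(c));
}
```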


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
backend/src/adapters/upstream/openai.test.ts (1)

72-97: Suggestion: add a precedence test for the streaming path

The current tests cover parsing of the reasoning field in stream deltas, but there is no streaming test verifying that reasoning_content takes precedence over reasoning. Adding one would keep coverage consistent with the non-streaming tests.

💡 Optional additional test
test("prefers reasoning_content over reasoning in stream delta", async () => {
  const stream = [
    'data: {"id":"chatcmpl-4","object":"chat.completion.chunk","created":1700000003,"model":"test-model","choices":[{"index":0,"delta":{"role":"assistant","reasoning_content":"preferred stream reasoning","reasoning":"fallback stream reasoning"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-4","object":"chat.completion.chunk","created":1700000003,"model":"test-model","choices":[{"index":0,"delta":{"content":"stream text"},"finish_reason":"stop"}]}',
    "data: [DONE]",
  ].join("\n");

  const response = new Response(stream);
  const chunks: Array<unknown> = [];
  for await (const chunk of openaiUpstreamAdapter.parseStreamResponse(response)) {
    chunks.push(chunk);
  }

  expect(chunks).toContainEqual({
    type: "content_block_delta",
    index: 0,
    delta: { type: "thinking_delta", thinking: "preferred stream reasoning" },
  });
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/src/adapters/upstream/openai.test.ts` around lines 72 - 97, Add a new
stream test that ensures parseStreamResponse prefers reasoning_content over
reasoning when parsing stream deltas: in the same test file use
openaiUpstreamAdapter.parseStreamResponse to feed a stream where the first
chunk's delta contains both reasoning_content and reasoning (e.g.,
reasoning_content="preferred..." and reasoning="fallback..."), collect yielded
chunks, and assert there is a content_block_delta with delta { type:
"thinking_delta", thinking: "preferred..." } to confirm reasoning_content takes
precedence.

ℹ️ Review info

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 80cc1ef and 4884ae8.

📒 Files selected for processing (2)
  • backend/src/adapters/upstream/openai.test.ts
  • backend/src/adapters/upstream/openai.ts

@pescn pescn merged commit a394871 into main Mar 3, 2026
2 checks passed
@pescn (Contributor, Author) commented Mar 3, 2026

Additional note: this issue originates from vllm-project/vllm#27755
