fix: support the upstream reasoning field and unify output as reasoning_content #78
Conversation
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request aims to improve compatibility with the upstream OpenAI API by uniformly handling the reasoning fields that may appear in its responses.
Highlights
Changelog
Activity
📝 Walkthrough
Overview: Adds a test suite for the OpenAI upstream adapter that verifies the non-streaming and streaming response parsing logic. Also adds support for the reasoning field and introduces an extractReasoningText helper that normalizes extracting reasoning content from the reasoning_content or reasoning fields.
Changes
Review effort 🎯 2 (Simple) | ⏱️ ~12 minutes
Related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Code Review
Hello, thank you for your contribution. This change adds compatibility with the upstream API's reasoning field and prioritizes reasoning_content. The overall implementation is correct, and corresponding unit tests have been added, which is great. I have one suggestion regarding the implementation of the extractReasoningText function, focused on the clarity and conciseness our guidelines expect of helper functions.
```ts
if (!payload) {
  return undefined;
}
if (payload.reasoning_content && payload.reasoning_content.length > 0) {
  return payload.reasoning_content;
}
if (payload.reasoning && payload.reasoning.length > 0) {
  return payload.reasoning;
}
return undefined;
```
To improve clarity and ensure helper functions return values that are ready for use, consider making this function body more concise. In JavaScript/TypeScript, non-empty strings are truthy and empty strings are falsy, so the || operator combined with optional chaining (?.) expresses the same logic more concisely and returns the desired value directly, with no further processing required by the caller.

```ts
return payload?.reasoning_content || payload?.reasoning || undefined;
```

References
- To improve clarity, helper functions should return objects that are ready for use, without requiring the caller to immediately override properties. If a property should be null, omit it from the returned object rather than setting it and having the caller nullify it.
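As a self-contained sketch of the suggested form (the `ReasoningPayload` interface below is an assumption for illustration; the adapter's real payload type lives in backend/src/adapters/upstream/openai.ts):

```typescript
// Assumed payload shape, for illustration only.
interface ReasoningPayload {
  reasoning_content?: string;
  reasoning?: string;
}

// Non-empty strings are truthy, so || falls through empty or missing
// values in priority order: reasoning_content first, then reasoning.
function extractReasoningText(payload?: ReasoningPayload): string | undefined {
  return payload?.reasoning_content || payload?.reasoning || undefined;
}

console.log(extractReasoningText({ reasoning_content: "a", reasoning: "b" })); // "a"
console.log(extractReasoningText({ reasoning: "b" })); // "b"
console.log(extractReasoningText({ reasoning_content: "" })); // undefined
```

The trailing `|| undefined` normalizes a falsy result (such as an empty string) to `undefined`, preserving the original function's return contract.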
🧹 Nitpick comments (1)
backend/src/adapters/upstream/openai.test.ts (1)
72-97: Suggestion: add a precedence test case for streaming responses. The current tests cover parsing of the reasoning field in stream deltas, but there is no test that reasoning_content takes precedence over reasoning in the streaming scenario; adding one keeps coverage consistent with the non-streaming tests. 💡 Optional supplementary test
```ts
test("prefers reasoning_content over reasoning in stream delta", async () => {
  const stream = [
    'data: {"id":"chatcmpl-4","object":"chat.completion.chunk","created":1700000003,"model":"test-model","choices":[{"index":0,"delta":{"role":"assistant","reasoning_content":"preferred stream reasoning","reasoning":"fallback stream reasoning"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-4","object":"chat.completion.chunk","created":1700000003,"model":"test-model","choices":[{"index":0,"delta":{"content":"stream text"},"finish_reason":"stop"}]}',
    "data: [DONE]",
  ].join("\n");
  const response = new Response(stream);
  const chunks: Array<unknown> = [];
  for await (const chunk of openaiUpstreamAdapter.parseStreamResponse(response)) {
    chunks.push(chunk);
  }
  expect(chunks).toContainEqual({
    type: "content_block_delta",
    index: 0,
    delta: { type: "thinking_delta", thinking: "preferred stream reasoning" },
  });
});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/src/adapters/upstream/openai.test.ts` around lines 72 - 97, Add a new stream test that ensures parseStreamResponse prefers reasoning_content over reasoning when parsing stream deltas: in the same test file use openaiUpstreamAdapter.parseStreamResponse to feed a stream where the first chunk's delta contains both reasoning_content and reasoning (e.g., reasoning_content="preferred..." and reasoning="fallback..."), collect yielded chunks, and assert there is a content_block_delta with delta { type: "thinking_delta", thinking: "preferred..." } to confirm reasoning_content takes precedence.
ℹ️ Review info
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- backend/src/adapters/upstream/openai.test.ts
- backend/src/adapters/upstream/openai.ts
Additional note: this issue originates from vllm-project/vllm#27755
Change description
- Parse both `message.reasoning_content` and `message.reasoning` (supported for streaming and non-streaming responses)
- Prefer `reasoning_content`; otherwise fall back to `reasoning`
- Tests verify returning `reasoning_content`, the `reasoning` fallback, precedence, and streaming delta scenarios
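The precedence behavior described above can be sketched as follows; `parseDelta`, `Delta`, and `StreamEvent` are illustrative names assumed for this example, not the adapter's actual API:

```typescript
// Assumed shapes for illustration; the real types live in the adapter.
interface Delta {
  content?: string;
  reasoning_content?: string;
  reasoning?: string;
}

type StreamEvent =
  | { type: "thinking_delta"; thinking: string }
  | { type: "text_delta"; text: string };

function parseDelta(delta: Delta): StreamEvent[] {
  const events: StreamEvent[] = [];
  // reasoning_content takes precedence over reasoning,
  // mirroring the non-streaming path.
  const thinking = delta.reasoning_content || delta.reasoning;
  if (thinking) {
    events.push({ type: "thinking_delta", thinking });
  }
  if (delta.content) {
    events.push({ type: "text_delta", text: delta.content });
  }
  return events;
}

console.log(
  parseDelta({ reasoning_content: "preferred", reasoning: "fallback", content: "hi" }),
);
// → [{ type: "thinking_delta", thinking: "preferred" },
//    { type: "text_delta", text: "hi" }]
```

Because the same fallback expression is used for both paths, streaming and non-streaming responses surface identical reasoning text for a given payload.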
Verification: `bun run check` and `bun test src/adapters/upstream/openai.test.ts`
Notes
`bun test` has pre-existing failures (4 cases in src/search/__tests__/compiler.test.ts) that are unrelated to this PR.
Summary by CodeRabbit
Release notes
New features
Tests