
400 error message returned from OpenAI LLM is lost when converting to ChatCompletionChunk. #4959

@chickenlj


Bug description

When the LLM returns a 400 error message, OpenAiApi emits an empty ChatCompletionChunk. This is not expected, because it makes it impossible for downstream processing to detect that a 400 error has occurred.

For example, the response body returned by the LLM:

{"error":{"code":"invalid_parameter_error","param":null,"message":"<400> InternalError.Algo.InvalidParameter: An assistant message with \"tool_calls\" must be followed by tool messages responding to each \"tool_call_id\". The following tool_call_ids did not have response messages: message[2].role","type":"invalid_request_error"},"id":"chatcmpl-c4b9b68f-8be2-482f-99d4-f114c4160a0c"}

The ChatCompletionChunk produced by ModelOptionsUtils.jsonToObject(content, ChatCompletionChunk.class):

ChatCompletionChunk[id=chatcmpl-c4b9b68f-8be2-482f-99d4-f114c4160a0c, choices=[], created=null, model=null, serviceTier=null, systemFingerprint=null, object=null, usage=null]
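
Because the error body shares only the id field with ChatCompletionChunk, Jackson silently maps every other field to null or an empty list, so the error details are dropped. Below is a minimal sketch of a possible guard that inspects the raw chunk for an "error" object before deserializing; it uses plain Jackson, and StreamErrorGuard / extractApiError are hypothetical names, not existing Spring AI API.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical guard: detect an upstream error payload before it is
// silently mapped into an empty ChatCompletionChunk.
public final class StreamErrorGuard {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Returns "code: message" if the raw chunk carries an "error" object,
    // or null if it looks like a normal ChatCompletionChunk.
    public static String extractApiError(String rawChunkJson) {
        try {
            JsonNode error = MAPPER.readTree(rawChunkJson).get("error");
            if (error != null && !error.isNull()) {
                return error.path("code").asText("unknown")
                        + ": " + error.path("message").asText("");
            }
            return null; // no "error" field, proceed with normal deserialization
        }
        catch (Exception e) {
            return null; // not parseable JSON; let the normal path handle it
        }
    }
}

With a check like this in place, the streaming path could raise an exception (or signal an error on the Flux) instead of emitting an empty chunk, so downstream code can observe the 400.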

Environment
Spring AI 1.1.0

Steps to reproduce
Send a message sequence that the model rejects with a 400, for example an assistant message containing "tool_calls" that is not followed by tool messages responding to each "tool_call_id", and consume the response as a stream, as sketched below.
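
A hypothetical reproduction sketch using plain java.net.http rather than Spring AI itself, to show the kind of message sequence that triggers the upstream 400 (an assistant message carrying tool_calls with no following tool message). BASE_URL, MODEL, the get_weather function and the message contents are placeholders; whether the backend answers with exactly the body quoted above depends on the provider.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical reproduction against an OpenAI-compatible endpoint:
// an assistant message with "tool_calls" that is not followed by a
// matching tool message should be rejected with a 400 error body.
public final class Invalid400Repro {

    static final String BASE_URL = "https://api.openai.com"; // placeholder; or a compatible gateway
    static final String MODEL = "gpt-4o-mini";                // placeholder model id
    static final String API_KEY = System.getenv("OPENAI_API_KEY");

    public static void main(String[] args) throws Exception {
        String body = """
            {
              "model": "%s",
              "stream": true,
              "messages": [
                {"role": "user", "content": "What is the weather in Paris?"},
                {"role": "assistant", "content": null,
                 "tool_calls": [{"id": "call_1", "type": "function",
                                 "function": {"name": "get_weather",
                                              "arguments": "{\\"city\\":\\"Paris\\"}"}}]}
              ]
            }""".formatted(MODEL);

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(BASE_URL + "/v1/chat/completions"))
            .header("Authorization", "Bearer " + API_KEY)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        // Expect a 400 status and an {"error": ...} body; when the same
        // sequence goes through the Spring AI streaming path, this body is
        // what gets deserialized into the empty ChatCompletionChunk shown above.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}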
