Fix gpt-5-codex with Codex CLI: Resolve negative reasoning IDs and missing tool calls #15639
Conversation
…I#14991) Fix two critical bugs preventing Codex CLI from working with gpt-5-codex:

1. Negative reasoning IDs: Changed hash() to abs(hash()) to ensure reasoning IDs are always positive. The OpenAI API rejects negative IDs like 'rs_-7253640029133646746'.
2. Missing tool calls: Added a function_call handler in _handle_raw_dict_response_item() to properly convert function_call items from the Responses API to tool_calls in Chat Completion format.

Changes:
- litellm/llms/openai/responses/transformation.py: Use abs(hash())
- litellm/completion_extras/litellm_responses_transformation/transformation.py: Handle function_call items
- tests: Add 5 comprehensive tests covering both fixes

Fixes BerriAI#14846
Fixes BerriAI#14991
@timelfrink is attempting to deploy a commit to the CLERKIEAI Team on Vercel. A member of the Team first needs to authorize it.
Can you share a screenshot of it working for gpt-5-codex with this change?
I built and tested this version. This is the error when running Codex as-is:
Docker build log:
```
litellm-timelfrink git:(fix/gpt-5-codex-reasoning-id-and-tool-calls) docker build \
  -f docker/Dockerfile.non_root \
  -t litellm-proxy:local-test \
  .
```
LiteLLM log with gpt-5-codex model selected in Codex:
```
litellm_local | During handling of the above exception, another exception occurred:
litellm_local |
litellm_local | Traceback (most recent call last):
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/responses/main.py", line 442, in aresponses
litellm_local |     response = await init_response
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 1981, in async_response_api_handler
litellm_local |     raise self._handle_error(
litellm_local |         e=e,
litellm_local |         provider_config=responses_api_provider_config,
litellm_local |     )
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 3312, in _handle_error
litellm_local |     raise provider_config.get_error_class(
litellm_local |         error_message=error_text,
litellm_local |         status_code=status_code,
litellm_local |         headers=error_headers,
litellm_local |     )
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/llms/base_llm/responses/transformation.py", line 206, in get_error_class
litellm_local |     raise BaseLLMException(
litellm_local |     ...<3 lines>...
litellm_local |     )
litellm_local | litellm.llms.base_llm.chat.transformation.BaseLLMException: {
litellm_local |   "error": {
litellm_local |     "message": "The encrypted content for item rs_5183945415363688018 could not be verified.",
litellm_local |     "type": "invalid_request_error",
litellm_local |     "param": null,
litellm_local |     "code": null
litellm_local |   }
litellm_local | }
litellm_local |
litellm_local | During handling of the above exception, another exception occurred:
litellm_local |
litellm_local | Traceback (most recent call last):
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/utils.py", line 1474, in wrapper_async
litellm_local |     result = await original_function(*args, **kwargs)
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/responses/main.py", line 461, in aresponses
litellm_local |     raise litellm.exception_type(
litellm_local |         model=model,
litellm_local |     ...<3 lines>...
litellm_local |         extra_kwargs=kwargs,
litellm_local |     )
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2273, in exception_type
litellm_local |     raise e
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 392, in exception_type
litellm_local |     raise BadRequestError(
litellm_local |     ...<6 lines>...
litellm_local |     )
litellm_local | litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - {
litellm_local |   "error": {
litellm_local |     "message": "The encrypted content for item rs_5183945415363688018 could not be verified.",
litellm_local |     "type": "invalid_request_error",
litellm_local |     "param": null,
litellm_local |     "code": null
litellm_local |   }
litellm_local | }
litellm_local |
litellm_local | During handling of the above exception, another exception occurred:
litellm_local |
litellm_local | Traceback (most recent call last):
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/integrations/custom_logger.py", line 400, in async_log_event
litellm_local |     await callback_func(
litellm_local |     ...<4 lines>...
litellm_local |     )
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4626, in async_deployment_callback_on_failure
litellm_local |     deployment_name = kwargs["litellm_params"]["metadata"].get(
litellm_local | AttributeError: 'NoneType' object has no attribute 'get'
litellm_local | 23:46:21 - LiteLLM Router:INFO: router.py:2847 - ageneric_api_call_with_fallbacks(model=gpt-5-codex) Exception litellm.BadRequestError: OpenAIException - {
litellm_local |   "error": {
litellm_local |     "message": "The encrypted content for item rs_5183945415363688018 could not be verified.",
litellm_local |     "type": "invalid_request_error",
litellm_local |     "param": null,
litellm_local |     "code": null
litellm_local |   }
litellm_local | }
```
Did you find a fix for this error?
Title
Fix gpt-5-codex with Codex CLI: Resolve negative reasoning IDs and missing tool calls
Relevant issues
Fixes #14846
Fixes #14991
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- I have added testing in the `tests/litellm/` directory (adding at least 1 test is a hard requirement - see details)
- My PR passes all unit tests on `make test-unit`
Type
🐛 Bug Fix
Changes
Problem
Codex CLI fails when using the `gpt-5-codex` model through the LiteLLM proxy due to two critical bugs:

1. Negative reasoning IDs (Issue #14991: "LiteLLM proxy makes malformed request to OpenAI when using gpt-5 models with Codex CLI"): Python's `hash()` function returns negative values roughly 50% of the time, generating invalid reasoning IDs like `rs_-7253640029133646746`. The OpenAI API rejects these with a 400 error: `Invalid 'input[2].id': 'reasoning_-7253640029133646746'. Expected an ID that begins with 'rs'.`
2. Missing tool calls (Issue #14846: "LiteLLM proxy for Codex CLI does not work with the new gpt-5-codex model"): When gpt-5-codex returns `function_call` items in Responses API format, LiteLLM's transformation handler ignores them, causing Codex CLI to receive no output (`last_agent_message: None`).
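For context, here is a minimal standalone snippet showing why roughly half of the generated IDs were invalid. This is illustration only, not LiteLLM code; the item strings are made up:

```python
# Python's hash() is signed, so roughly half of all string hashes are negative.
negative = sum(1 for i in range(1_000) if hash(f"reasoning-item-{i}") < 0)
print(f"{negative}/1000 hashes were negative")

# A negative hash yields an id like 'rs_-7253640029133646746',
# which the OpenAI Responses API rejects with a 400 error.
print(f"rs_{hash('some reasoning item')}")
```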
Solution
Fix 1: Reasoning ID Generation (`litellm/llms/openai/responses/transformation.py:129`)

Changed `f"rs_{hash(str(item_data))}"` to `f"rs_{abs(hash(str(item_data)))}"` so the generated ID is always positive.
Fix 2: Function Call Handling (`litellm/completion_extras/litellm_responses_transformation/transformation.py:76-91`)

Handle `function_call` type items in `_handle_raw_dict_response_item()` and convert them to `tool_calls` in Chat Completion format.
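Roughly, the new handler maps a Responses API `function_call` item onto a Chat Completions tool call. The item shape and helper below are assumptions for illustration, not the exact LiteLLM code:

```python
def function_call_item_to_tool_call(item: dict) -> dict:
    # Input (Responses API item), approximately:
    #   {"type": "function_call", "call_id": "call_abc123",
    #    "name": "shell", "arguments": "{\"command\": [\"ls\"]}"}
    # Output: one entry for the Chat Completions tool_calls list.
    return {
        "id": item.get("call_id") or item.get("id"),
        "type": "function",
        "function": {
            "name": item.get("name"),
            "arguments": item.get("arguments") or "{}",
        },
    }
```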
Testing

Added 5 comprehensive tests in `tests/test_litellm/llms/openai/responses/test_openai_responses_transformation.py`:
- `test_reasoning_id_generation_always_positive()` - Tests 100 iterations to ensure no negative IDs (a simplified sketch appears after this list)
- `test_reasoning_id_generation_with_existing_id()` - Verifies existing IDs are preserved
- `test_handle_raw_dict_response_function_call()` - Tests function_call item conversion
- `test_handle_raw_dict_response_multiple_function_calls()` - Tests multiple tool calls
- `test_handle_raw_dict_response_reasoning_ignored()` - Ensures reasoning items are still ignored

All tests pass ✅ (32/32 tests in transformation test file)
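For reference, the positivity check can be sketched like this. It is simplified from the description above; the real test exercises the transformation code itself rather than calling `hash()` directly:

```python
def test_reasoning_id_generation_always_positive():
    # 100 distinct items; no generated ID may carry a negative suffix.
    for i in range(100):
        item_data = {"type": "reasoning", "summary": [f"step {i}"]}
        reasoning_id = f"rs_{abs(hash(str(item_data)))}"
        assert reasoning_id.startswith("rs_")
        assert not reasoning_id.startswith("rs_-")
```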
Test Results
Full test suite for transformation module (32 tests):