Conversation

@timelfrink (Contributor) commented Oct 17, 2025

Title

Fix gpt-5-codex with Codex CLI: Resolve negative reasoning IDs and missing tool calls

Relevant issues

Fixes #14846
Fixes #14991

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
  • [ ] I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible; it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

Problem

Codex CLI fails when using the gpt-5-codex model through the LiteLLM proxy due to two critical bugs:

  1. Negative reasoning IDs (#14991): Python's hash() function returns negative values roughly 50% of the time (see the snippet after this list), generating invalid reasoning IDs like rs_-7253640029133646746. The OpenAI API rejects these with a 400 error: Invalid 'input[2].id': 'reasoning_-7253640029133646746'. Expected an ID that begins with 'rs'.

  2. Missing tool calls (#14846): When gpt-5-codex returns function_call items in Responses API format, LiteLLM's transformation handler ignores them, causing Codex CLI to receive no output (last_agent_message: None).
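
To illustrate the first bug: Python's hash() returns a signed 64-bit value, so roughly half of all string hashes come out negative. A minimal, self-contained sketch (exact counts vary per run because of hash randomization):

```python
# Build reasoning-style IDs the way the buggy code did and count how
# many come out negative. hash() spans the full signed 64-bit range,
# so about half the IDs get a "rs_-" prefix, which the OpenAI API rejects.
ids = [f"rs_{hash(str(i))}" for i in range(1_000)]
negative = [s for s in ids if s.startswith("rs_-")]
print(f"{len(negative)} / {len(ids)} IDs are invalid")  # typically ~500
```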

Solution

Fix 1: Reasoning ID Generation (litellm/llms/openai/responses/transformation.py:129)

  • Changed f"rs_{hash(str(item_data))}" to f"rs_{abs(hash(str(item_data)))}"
  • Ensures reasoning IDs are always positive (see the sketch below)
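
A minimal before/after sketch of the change; the helper name and fallback logic here are illustrative, and only the abs() wrapping reflects the actual one-line diff:

```python
def build_reasoning_id(item_data: dict) -> str:
    """Synthesize a reasoning ID when the item doesn't carry one.
    (Hypothetical helper; the real code lives in transformation.py:129.)"""
    existing_id = item_data.get("id")
    if existing_id:
        return existing_id  # IDs assigned by the API are preserved as-is
    # Before: f"rs_{hash(str(item_data))}"  -> negative ~50% of the time
    # After:  abs() keeps the numeric suffix non-negative
    return f"rs_{abs(hash(str(item_data)))}"
```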

Fix 2: Function Call Handling (litellm/completion_extras/litellm_responses_transformation/transformation.py:76-91)

  • Added handler for function_call type items in _handle_raw_dict_response_item()
  • Converts function_call items to proper tool_calls in Chat Completion format (see the sketch below)
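
Roughly what the new handler does, as a hedged sketch (the helper name is hypothetical; the Responses API fields call_id, name, and arguments and the Chat Completions tool_calls shape are standard):

```python
def function_call_to_tool_call(item: dict) -> dict:
    """Map a Responses API function_call item onto a Chat Completions
    tool_calls entry. (Illustrative; not the exact LiteLLM code.)"""
    return {
        "id": item.get("call_id") or item.get("id"),
        "type": "function",
        "function": {
            "name": item["name"],
            # The Responses API serializes arguments as a JSON string,
            # which is exactly what Chat Completions expects here.
            "arguments": item.get("arguments", "{}"),
        },
    }
```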

Testing

Added 5 comprehensive tests in tests/test_litellm/llms/openai/responses/test_openai_responses_transformation.py:

  • test_reasoning_id_generation_always_positive() - Tests 100 iterations to ensure no negative IDs
  • test_reasoning_id_generation_with_existing_id() - Verifies existing IDs are preserved
  • test_handle_raw_dict_response_function_call() - Tests function_call item conversion
  • test_handle_raw_dict_response_multiple_function_calls() - Tests multiple tool calls
  • test_handle_raw_dict_response_reasoning_ignored() - Ensures reasoning items still ignored

All tests pass ✅ (32/32 tests in the transformation test file). A sketch of the positivity test follows.
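
The positivity check can be as simple as this sketch (the real test exercises the transformation class; here the fixed ID expression is inlined):

```python
def test_reasoning_id_generation_always_positive():
    # 100 varied payloads: every synthesized ID must keep the bare
    # "rs_" prefix with no minus sign after it.
    for i in range(100):
        item_data = {"type": "reasoning", "content": f"step {i}"}
        reasoning_id = f"rs_{abs(hash(str(item_data)))}"
        assert reasoning_id.startswith("rs_")
        assert not reasoning_id.startswith("rs_-")
```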

Test Results

$ poetry run pytest \
    tests/test_litellm/llms/openai/responses/test_openai_responses_transformation.py::test_reasoning_id_generation_always_positive \
    tests/test_litellm/llms/openai/responses/test_openai_responses_transformation.py::test_reasoning_id_generation_with_existing_id \
    tests/test_litellm/llms/openai/responses/test_openai_responses_transformation.py::test_handle_raw_dict_response_function_call \
    tests/test_litellm/llms/openai/responses/test_openai_responses_transformation.py::test_handle_raw_dict_response_multiple_function_calls \
    tests/test_litellm/llms/openai/responses/test_openai_responses_transformation.py::test_handle_raw_dict_response_reasoning_ignored \
    -v

================================================= test session starts ==================================================
collected 5 items

test_handle_raw_dict_response_function_call PASSED                                                              [ 20%]
test_handle_raw_dict_response_multiple_function_calls PASSED                                                    [ 40%]
test_handle_raw_dict_response_reasoning_ignored PASSED                                                          [ 60%]
test_reasoning_id_generation_always_positive PASSED                                                             [ 80%]
test_reasoning_id_generation_with_existing_id PASSED                                                            [100%]

============================================= 5 passed, 1 warning in 1.77s =============================================

Full test suite for transformation module (32 tests):

$ poetry run pytest tests/test_litellm/llms/openai/responses/test_openai_responses_transformation.py -v

============================================ 32 passed, 1 warning in 17.76s ============================================

Commit message:

Fix two critical bugs preventing Codex CLI from working with gpt-5-codex:

1. Negative reasoning IDs: Changed hash() to abs(hash()) to ensure
   reasoning IDs are always positive. OpenAI API rejects negative IDs
   like 'rs_-7253640029133646746'.

2. Missing tool calls: Added function_call handler in
   _handle_raw_dict_response_item() to properly convert function_call
   items from Responses API to tool_calls in Chat Completion format.

Changes:
- litellm/llms/openai/responses/transformation.py: Use abs(hash())
- litellm/completion_extras/litellm_responses_transformation/transformation.py:
  Handle function_call items
- tests: Add 5 comprehensive tests covering both fixes

Fixes BerriAI#14846
Fixes BerriAI#14991
vercel bot commented Oct 17, 2025

@timelfrink is attempting to deploy a commit to the CLERKIEAI Team on Vercel.

A member of the Team first needs to authorize it.

@ishaan-jaff (Contributor) left a comment


Can you share a screenshot of it working for gpt-5-codex with this change?

@timelfrink marked this pull request as draft October 17, 2025 19:11
@JehandadK commented Oct 18, 2025

I built and tested this version.

The error when running Codex as

```
OPENAI_BASE_URL=http://localhost:4000/v1 OPENAI_API_KEY="sk-xxxxxxxxxxxxxx" codex
```

is:

```
⚠️ stream error: unexpected status 400 Bad Request: {"error":
{"message":"litellm.BadRequestError: OpenAIException - {\n  \"error\": {\n
\"message\": \"The encrypted content for item rs_5183945415363688018 could not
be verified.\",\n    \"type\": \"invalid_request_error\",\n    \"param\": null,
\n    \"code\": null\n  }\n}. Received Model Group=gpt-5\nAvailable Model Group
Fallbacks=None","type":null,"param":null,"code":"400"}}; retrying 5/5 in 3.39s…
```
Docker build log

```
litellm-timelfrink git:(fix/gpt-5-codex-reasoning-id-and-tool-calls) docker build \
  -f docker/Dockerfile.non_root \
  -t litellm-proxy:local-test \
  .
  [+] Building 904.4s (25/25) FINISHED                                         docker:desktop-linux
   => [internal] load build definition from Dockerfile.non_root                                0.0s
   => => transferring dockerfile: 4.05kB                                                       0.0s
   => [internal] load metadata for cgr.dev/chainguard/python:latest-dev                       11.3s
   => [internal] load .dockerignore                                                            0.0s
   => => transferring context: 149B                                                            0.0s
   => [builder 1/6] FROM cgr.dev/chainguard/python:latest-dev@sha256:4d6561a6dd69ffb9d1dae7  108.0s
   => => resolve cgr.dev/chainguard/python:latest-dev@sha256:4d6561a6dd69ffb9d1dae7d2a8a62ee9  0.0s
   => => sha256:4d6561a6dd69ffb9d1dae7d2a8a62ee9273e984c49e1d0b3ae1e8c303bedd280 927B / 927B   0.0s
   => => sha256:bb2a8558316e9d4ccf1e41d0f9fceeb78fc079aa5f0636af0291532575eba 2.43kB / 2.43kB  0.0s
   => => sha256:9c02feadcf0e79569b3ac130dd7c03753ad1181e5c3371607dcbc6bbb3ece 2.37kB / 2.37kB  0.0s
   => => sha256:3b36284c168622a494b714b0fe31bcb4f0df9ee09237c241ffa5dfadc0 21.00MB / 21.00MB  90.2s
   => => sha256:31eabcf5bc1f3280b0076247079323192514c0844b9e14c080f655cb2 12.00MB / 12.00MB  104.9s
   => => sha256:1fbd4ec8ebb32cc6336eab38a9c60cab2430d72a745feafa8b27feb1a4e3 8.93MB / 8.93MB  90.0s
   => => sha256:92017f4764732753060fc58ce52ccba7e91a3d3c422094f8c08ab4172667 5.47MB / 5.47MB  91.8s
   => => sha256:b6881a2023ed354f1af7434fb16ff6eb16dec8de4f50a9358cd351d1d887 2.91MB / 2.91MB  91.9s
   => => extracting sha256:3b36284c168622a494b714b0fe31bcb4f0df9ee09237c241ffa5dfadc023b948    0.5s
   => => sha256:eadbb67beab788cfb167d813a58f9eba97c51a412318f5c8da856ca538d0 1.95MB / 1.95MB  92.8s
   => => sha256:ebc284a7e4d5a5b41158ba48e32eea032a68d2818484ca08597db1bfcf 16.08MB / 16.08MB  97.2s
   => => sha256:87c4789281b3fb5c74c4c6157834d5faa226c51b7db148a2ec41d028 240.64kB / 240.64kB  93.4s
   => => extracting sha256:31eabcf5bc1f3280b0076247079323192514c0844b9e14c080f655cb2f893464    0.9s
   => => extracting sha256:1fbd4ec8ebb32cc6336eab38a9c60cab2430d72a745feafa8b27feb1a4e3e5e6    0.2s
   => => extracting sha256:92017f4764732753060fc58ce52ccba7e91a3d3c422094f8c08ab41726671b04    0.4s
   => => extracting sha256:b6881a2023ed354f1af7434fb16ff6eb16dec8de4f50a9358cd351d1d8874cc7    0.3s
   => => extracting sha256:eadbb67beab788cfb167d813a58f9eba97c51a412318f5c8da856ca538d0f5f3    0.3s
   => => extracting sha256:ebc284a7e4d5a5b41158ba48e32eea032a68d2818484ca08597db1bfcfb48476    0.5s
   => => extracting sha256:87c4789281b3fb5c74c4c6157834d5faa226c51b7db148a2ec41d028cfe1a09c    0.1s
   => [internal] load build context                                                            2.9s
   => => transferring context: 50.31MB                                                         2.9s
   => [builder 2/6] WORKDIR /app                                                               0.3s
   => [runtime  3/16] RUN apk upgrade --no-cache &&   apk add --no-cache bash libstdc++ ca-c  14.6s
   => [builder 3/6] RUN apk add --no-cache build-base bash   && pip install --no-cache-dir -  10.4s
   => [builder 4/6] COPY . .                                                                   1.1s
   => [builder 5/6] RUN chmod +x docker/build_admin_ui.sh && ./docker/build_admin_ui.sh        0.4s
   => [builder 6/6] RUN rm -rf dist/* && python -m build &&   pip install dist/*.whl &&   p  237.5s
   => [runtime  4/16] COPY . .                                                                 0.6s 
   => [runtime  5/16] COPY --from=builder /app/docker/entrypoint.sh /app/docker/prod_entrypoi  0.0s 
   => [runtime  6/16] COPY --from=builder /app/docker/supervisord.conf /etc/supervisord.conf   0.0s 
   => [runtime  7/16] COPY --from=builder /app/schema.prisma /app/schema.prisma                0.0s 
   => [runtime  8/16] COPY --from=builder /app/dist/*.whl .                                    0.0s 
   => [runtime  9/16] COPY --from=builder /wheels/ /wheels/                                    0.3s 
   => [runtime 10/16] RUN pip install *.whl /wheels/* --no-index --find-links=/wheels/   &&   19.2s 
   => [runtime 11/16] RUN chmod +x docker/install_auto_router.sh && ./docker/install_auto_ro  64.3s 
   => [runtime 12/16] RUN pip uninstall jwt -y &&   pip uninstall PyJWT -y &&   pip install P  5.8s 
   => [runtime 13/16] RUN pip install --no-cache-dir prisma &&   chmod +x docker/entrypoint.s  3.1s 
   => [runtime 14/16] RUN mkdir -p /nonexistent /.npm &&   chown -R nobody:nogroup /app &&     5.5s 
   => [runtime 15/16] RUN PRISMA_PATH=$(python -c "import os, prisma; print(os.path.dirname(p  0.6s 
   => [runtime 16/16] RUN prisma generate                                                    429.6s 
   => exporting to image                                                                       5.4s 
   => => exporting layers                                                                      5.4s 
   => => writing image sha256:f07f19088ebee76ba426f5c13c970501d2b15ae93d75e6346999606bc236245  0.0s 
   => => naming to docker.io/library/litellm-proxy:local-test                                  0.0s 
                                                                                                    
  View build details: docker-desktop:
  ```


Litellm log with gpt-5-codex model selected in Codex

```
litellm_local | During handling of the above exception, another exception occurred:
litellm_local |
litellm_local | Traceback (most recent call last):
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/responses/main.py", line 442, in aresponses
litellm_local |     response = await init_response
litellm_local |                ^^^^^^^^^^^^^^^^^^^
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 1981, in async_response_api_handler
litellm_local |     raise self._handle_error(
litellm_local |     ~~~~~~~~~~~~~~~~~~^
litellm_local |         e=e,
litellm_local |         ^^^^
litellm_local |         provider_config=responses_api_provider_config,
litellm_local |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm_local |     )
litellm_local |     ^
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 3312, in _handle_error
litellm_local |     raise provider_config.get_error_class(
litellm_local |     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
litellm_local |         error_message=error_text,
litellm_local |         ^^^^^^^^^^^^^^^^^^^^^^^^^
litellm_local |         status_code=status_code,
litellm_local |         ^^^^^^^^^^^^^^^^^^^^^^^^
litellm_local |         headers=error_headers,
litellm_local |         ^^^^^^^^^^^^^^^^^^^^^^
litellm_local |     )
litellm_local |     ^
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/llms/base_llm/responses/transformation.py", line 206, in get_error_class
litellm_local |     raise BaseLLMException(
litellm_local |     ...<3 lines>...
litellm_local |     )
litellm_local | litellm.llms.base_llm.chat.transformation.BaseLLMException: {
litellm_local |   "error": {
litellm_local |     "message": "The encrypted content for item rs_5183945415363688018 could not be verified.",
litellm_local |     "type": "invalid_request_error",
litellm_local |     "param": null,
litellm_local |     "code": null
litellm_local |   }
litellm_local | }
litellm_local |
litellm_local | During handling of the above exception, another exception occurred:
litellm_local |
litellm_local | Traceback (most recent call last):
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/utils.py", line 1474, in wrapper_async
litellm_local |     result = await original_function(*args, **kwargs)
litellm_local |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/responses/main.py", line 461, in aresponses
litellm_local |     raise litellm.exception_type(
litellm_local |     ~~~~~~~~~~~~~~~~~~~~~~^
litellm_local |         model=model,
litellm_local |         ^^^^^^^^^^^^
litellm_local |     ...<3 lines>...
litellm_local |         extra_kwargs=kwargs,
litellm_local |         ^^^^^^^^^^^^^^^^^^^^
litellm_local |     )
litellm_local |     ^
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2273, in exception_type
litellm_local |     raise e
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 392, in exception_type
litellm_local |     raise BadRequestError(
litellm_local |     ...<6 lines>...
litellm_local |     )
litellm_local | litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - {
litellm_local |   "error": {
litellm_local |     "message": "The encrypted content for item rs_5183945415363688018 could not be verified.",
litellm_local |     "type": "invalid_request_error",
litellm_local |     "param": null,
litellm_local |     "code": null
litellm_local |   }
litellm_local | }
litellm_local |
litellm_local | During handling of the above exception, another exception occurred:
litellm_local |
litellm_local | Traceback (most recent call last):
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/integrations/custom_logger.py", line 400, in async_log_event
litellm_local |     await callback_func(
litellm_local |     ...<4 lines>...
litellm_local |     )
litellm_local |   File "/usr/lib/python3.13/site-packages/litellm/router.py", line 4626, in async_deployment_callback_on_failure
litellm_local |     deployment_name = kwargs["litellm_params"]["metadata"].get(
litellm_local |                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm_local | AttributeError: 'NoneType' object has no attribute 'get'
litellm_local | 23:46:21 - LiteLLM Router:INFO: router.py:2847 - ageneric_api_call_with_fallbacks(model=gpt-5-codex) Exception litellm.BadRequestError: OpenAIException - {
litellm_local |   "error": {
litellm_local |     "message": "The encrypted content for item rs_5183945415363688018 could not be verified.",
litellm_local |     "type": "invalid_request_error",
litellm_local |     "param": null,
litellm_local |     "code": null
litellm_local |   }
litellm_local | }
```

@uje-m commented Oct 19, 2025

> I built and tested this version.
>
> The error when running Codex as `OPENAI_BASE_URL=http://localhost:4000/v1 OPENAI_API_KEY="sk-xxxxxxxxxxxxxx" codex` is:
>
> ```
> ⚠️ stream error: unexpected status 400 Bad Request: {"error":
> {"message":"litellm.BadRequestError: OpenAIException - {\n  \"error\": {\n
> \"message\": \"The encrypted content for item rs_5183945415363688018 could not
> be verified.\",\n    \"type\": \"invalid_request_error\",\n    \"param\": null,
> \n    \"code\": null\n  }\n}. Received Model Group=gpt-5\nAvailable Model Group
> Fallbacks=None","type":null,"param":null,"code":"400"}}; retrying 5/5 in 3.39s…
> ```


Did you find a fix for this error?

@timelfrink closed this Oct 19, 2025