Fix #180: Handle unsupported response_format parameter for non-OpenAI providers#249

Closed
ChinmayShringi wants to merge 1 commit into main from fix-180-response-format

Conversation

@ChinmayShringi
Owner

Problem

Ontology generation fails with a 500 error when using LLM providers such as Groq that don't support the OpenAI-specific response_format={"type": "json_object"} parameter.

Root Cause

The code uses response_format={"type": "json_object"}, which is an OpenAI-specific feature. When using Groq or another provider that doesn't support this parameter, the API call fails with a 500 error.

Solution

Added graceful fallback logic in three files:

  1. llm_client.py - Core LLM client with automatic retry without response_format
  2. simulation_config_generator.py - Direct API calls with fallback
  3. oasis_profile_generator.py - Direct API calls with fallback

When a 400/500 error related to response_format is detected, the code automatically retries without the parameter and relies on the system prompt to enforce JSON output format.
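The retry logic described above can be sketched roughly as follows. This is a minimal illustration, not the repository's exact code: the helper name `call_with_json_fallback` and the substring-matching heuristic for detecting a response_format error are assumptions.

```python
def call_with_json_fallback(create_fn, **kwargs):
    """Call an OpenAI-style chat-completion function, retrying once
    without the response_format parameter if the provider rejects it.

    Hypothetical helper sketching the fallback described in this PR.
    """
    try:
        return create_fn(**kwargs)
    except Exception as exc:
        # Heuristic: treat a 400/500 error whose message mentions
        # response_format as "parameter unsupported" and retry without
        # it, relying on the system prompt to enforce JSON output.
        if "response_format" in str(exc).lower() and "response_format" in kwargs:
            kwargs.pop("response_format")
            return create_fn(**kwargs)
        raise
```

A stub create function that raises when response_format is present can exercise both the failing first attempt and the successful retry without touching a real API.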

Testing

  • Tested with Groq API (llama-3.1-8b-instant, llama-3.2-1b-preview, llama-3.1-70b-versatile)
  • Backwards compatible with OpenAI API
  • No breaking changes to existing functionality

Fixes #180


Original PR: 666ghj/MiroFish#186
Original Author: @hkc5

@ChinmayShringi ChinmayShringi added the LLM API Issues related to LLM API integration label Mar 17, 2026