Releases · BerriAI/litellm
v1.80.0.rc.1
Full Changelog: v1.80.0-nightly...v1.80.0.rc.1
v1.80.0-nightly
What's Changed
- [Feat] Day-0, Add gpt-5.1 and gpt-5.1-codex family support by @Sameerlite in #16598
- fix: exclude unauthorized MCP servers from allowed server list by @uc4w6c in #16551
- [Feat] VertexAI - Add BGE Embeddings support by @ishaan-jaff in #16033
- [Docs] LiteLLM Quick start - show how model resolution works by @ishaan-jaff in #16602
- fix: support Anthropic tool_use and tool_result in token counter by @orolega in #16351
- [Feat] RunwayML - Add support for /audio/speech eleven_multilingual_v2 endpoint by @ishaan-jaff in #16604
- [Bug fix] Fixes SambaNova API rejecting requests when message content is passed as a list format by @ishaan-jaff in #16612
- fix(ui): Remove misleading 'Custom' option mention from OpenAI endpoint tooltips by @sep-grindr in #16622
- [UI] Add RunwayML on Admin UI supported models/providers by @ishaan-jaff in #16606
- [Infra] Migrate Add Model Fields to Backend by @yuneng-jiang in #16620
- fix: forward OpenAI organization for image generation by @pnookala-godaddy in #16607
- fix: avoid crashing when MCP server record lacks credentials by @uc4w6c in #16601
- [Fix] UI - Button Styles and Sizing in Settings Pages by @yuneng-jiang in #16600
- [Feature] Pagination for /spend/logs/session/ui endpoint by @yuneng-jiang in #16603
- remove generic exception handling by @otaviofbrito in #16599
- fix: parse failed chunks for Groq by @Tomas2D in #16595
- fix(mcp): Fix Gemini conversation format issue with MCP auto-execution by @dtunikov in #16592
- Add fal-ai/flux/schnell support by @Sameerlite in #16580
- Add all Imagen4 variants of fal ai in model map by @Sameerlite in #16579
- [Perf] Embeddings: Use router's O(1) lookup and shared sessions by @AlexsanderHamir in #16344
- feat(openai): Add support for reasoning_effort='none' in GPT-5.1 by @Chesars in #16658 (see the sketch after this list)
- Voyageai pricing and doc update by @fzowl in #16641
- [Feature] UI - Normalize table action columns appearance by @yuneng-jiang in #16657
- [Feat] Bedrock Batches - Add support for custom KMS encryption keys in Bedrock Batch operations by @ishaan-jaff in #16662
- [Feature] UI - Add Model use backend data by @yuneng-jiang in #16664
- [Feat] Model Management API - Add API Endpoint for creating model access group by @ishaan-jaff in #16663
- Add Vertex Kimi-K2-Thinking by @emerzon in #16671
- fix: Resolve pytest module name collision for test_transformation.py files by @Chesars in #16661
- [Docs] Add docs on APIs for model access management by @ishaan-jaff in #16673
- [Docs] Add docs for showing how to auto reload new pricing data by @ishaan-jaff in #16675
- Revert "[Feat] VertexAI - Add BGE Embeddings support " by @ishaan-jaff in #16677
- [Fix] UI - Remove Description Field from LLM Credentials by @yuneng-jiang in #16608
- feat: add dynamic OAuth2 metadata discovery for MCP servers by @uc4w6c in #16676
- Agents - support agent registration + discovery (A2A spec) by @krrishdholakia in #16615
- [Feature] UI - New Callbacks table by @yuneng-jiang in #16512
- fix(openai-video): use GET for /v1/videos/{video_id}/content by @pnookala-godaddy in #16672
- [Feature] UI - Expose user_alias in view and update path by @yuneng-jiang in #16669
- fix: preserve $defs for Anthropic tools input schema by @lukapecnik in #16648
- Bugfix: Ensure detector-id is passed as header to IBM detector server by @RobGeada in #16649
- feat(openai): Add verbosity parameter support for GPT-5 family models in Chat Completions API by @Chesars in #16660
- Fix UI logos loading with SERVER_ROOT_PATH by @jyeros in #16618
- [Infra] Building UI for QA by @yuneng-jiang in #16686
- [Ci/CD] Speed up litellm mapped tests job on circle ci by @ishaan-jaff in #16688
- Prometheus - make OSS by @krrishdholakia in #16689
- Litellm anthropic context management param support by @Sameerlite in #16528
- Vector store files Stable Release by @Sameerlite in #16643
- Litellm ci cd fixes 2 by @ishaan-jaff in #16693
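For the GPT-5.1 day-0 support (#16598) and the reasoning_effort='none' addition (#16658), a minimal SDK sketch; the model string openai/gpt-5.1 and the reliance on OPENAI_API_KEY are assumptions, not stated in these notes:

```python
# Minimal sketch, assuming the model id "openai/gpt-5.1" and OPENAI_API_KEY set.
import litellm

response = litellm.completion(
    model="openai/gpt-5.1",  # gpt-5.1 family support per #16598
    messages=[{"role": "user", "content": "Summarize this release in one line."}],
    reasoning_effort="none",  # accepted for GPT-5.1 per #16658
)
print(response.choices[0].message.content)
```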
New Contributors
- @sep-grindr made their first contribution in #16622
- @pnookala-godaddy made their first contribution in #16607
- @dtunikov made their first contribution in #16592
- @lukapecnik made their first contribution in #16648
- @jyeros made their first contribution in #16618
Full Changelog: v1.79.3.dev7...v1.80.0-nightly
v1.79.3.dev7
What's Changed
- [Feature] UI - Move Budgets out of Experimental by @yuneng-jiang in #16544
- fix: allow tool call even when server name prefix is missing by @uc4w6c in #16425
- fix: Improve Azure auth parameter handling for None values by @ultmaster in #14436
- [Fix] UI - SSO Modal Cosmetic Changes by @yuneng-jiang in #16554
- [Docs] Fix code block indentation for fallbacks page by @bchrobot in #16542
- [Feat] Add RunwayML Img Gen API support by @ishaan-jaff in #16557
- [Feature] UI - Config Guardrails should not be deletable from table by @yuneng-jiang in #16540
- [Feature] Add Langfuse OTEL and SQS to Health Check by @yuneng-jiang in #16514
- Add all gemini image models support in image generation by @Sameerlite in #16526 (see the sketch after this list)
- Fix raising wrong 429 error on wrong exception by @Sameerlite in #16482
- [Fix] UI - Delete Callbacks Failing by @yuneng-jiang in #16473
- [Fix] /spend/logs/ui Access Control by @yuneng-jiang in #16446
- Add Gemini image edit support by @Sameerlite in #16430
- fix(gemini): Preserve non-ASCII characters in function call arguments by @Chesars in #16550
- docs(openai): Document reasoning_effort summary field options by @Chesars in #16549
- feat: Add support for reasoning_effort="none" for Gemini models by @Chesars in #16548
- feat: Add headers to VLLM Passthrough requests [Log success Events] by @LucasSugi in #16532
- feat(bedrock): Add bearer token authentication support for AgentCore by @busla in #16556
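For the Gemini image-generation entries above (#16526, #16430), a minimal sketch using litellm.image_generation; the model id below is an assumption, so check LiteLLM's model map for the exact Gemini image model names:

```python
# Minimal sketch; the Gemini image model id is an assumption, not from these notes.
import litellm

img = litellm.image_generation(
    model="gemini/gemini-2.5-flash-image-preview",  # assumed model id
    prompt="A watercolor lighthouse at dusk",
)
# Depending on the provider, the image comes back as a URL or a base64 payload.
print(img.data[0].url or img.data[0].b64_json)
```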
New Contributors
- @ultmaster made their first contribution in #14436
- @bchrobot made their first contribution in #16542
Full Changelog: v1.79.3.dev5...v1.79.3.dev7
v1.79.1-stable-patch-1
Full Changelog: v1.79.1-stable...v1.79.1-stable-patch-1
v1.79.3.dev5
What's Changed
- [Infra] CI/CD - Bump up docker version for e2e ui testing by @yuneng-jiang in #16506
- Add Zscaler AI Guard hook by @jwang-gif in #15691
- [Feature] UI - Add Tags To Edit Key Flow by @yuneng-jiang in #16500
- fix: remove enterprise restriction from guardrails list endpoint by @pazevedo-hyland in #15333
- [Feat] New Provider - Add RunwayML Provider for video generations by @ishaan-jaff in #16505
- [Feature] UI - Improve Usage Indicator by @yuneng-jiang in #16504
- [Feature] UI - Add LiteLLM Params to Edit Model by @yuneng-jiang in #16496
- [Fix] Litellm tags usage add request_id by @yuneng-jiang in #16111
- [Fix] Use user budget instead of key budget when creating new team by @yuneng-jiang in #16074
- Documentation Code Example corrections by @AnthonyMonaco in #16502
- fix: Sanitize null token usage in OpenAI-compatible responses by @AlanPonnachan in #16493
- fix: add new models, delete repeat models, update pricing. by @mcowger in #16491
- Add atexit handlers to flush callbacks for async completions by @andrewm4894 in #16487
- Update model logging format for custom LLM provider by @f14-bertolotti in #16485
- Vertex ai Rerank safe load vertex ai creds by @Sameerlite in #16479
- fix(agentcore): Convert SSE stream iterator to async for proper streaming support by @busla in #16293
- fix: use vllm passthrough config for hosted vllm provider instead of raising error by @MightyGoldenOctopus in #16537
- fix: app_roles missing from jwt payload by @mubashir1osmani in #16448
- [Fix] - Bedrock Knowledge Bases - add support for filtering kb queries by @ishaan-jaff in #16543
- [Fix] Bedrock Embeddings - Ensure correct aws_region is used when provided dynamically by @ishaan-jaff in #16547 (see the sketch after this list)
- docs: update broken Slack invite links to support page by @Chesars in #16546
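For the dynamic-region fix (#16547), a minimal sketch; the parameter name aws_region_name follows LiteLLM's dynamic AWS params, and the Titan model id is an assumption:

```python
# Minimal sketch, assuming LiteLLM's dynamic AWS params and a Titan embeddings model.
import litellm

emb = litellm.embedding(
    model="bedrock/amazon.titan-embed-text-v2:0",  # assumed model id
    input=["release notes are documentation too"],
    aws_region_name="eu-central-1",  # passed per request rather than from env/config
)
print(len(emb.data[0]["embedding"]))
```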
New Contributors
- @jwang-gif made their first contribution in #15691
- @AnthonyMonaco made their first contribution in #16502
- @andrewm4894 made their first contribution in #16487
- @f14-bertolotti made their first contribution in #16485
- @busla made their first contribution in #16293
- @MightyGoldenOctopus made their first contribution in #16537
Full Changelog: v1.79.dev.1...v1.79.3.dev5
v1.79.3.dev2
What's Changed
- [Infra] CI/CD - Bump up docker version for e2e ui testing by @yuneng-jiang in #16506
- Add Zscaler AI Guard hook by @jwang-gif in #15691
New Contributors
- @jwang-gif made their first contribution in #15691
Full Changelog: v1.79.dev.1...v1.79.3.dev2
v1.79.dev.1
What's Changed
- Fix container api link in release page by @Sameerlite in #16440
- Add softgen to projects that are using litellm by @artplan1 in #16423
- [Feature] UI - Model Info Page Health Check by @yuneng-jiang in #16416
- add kimi k2 thinking by @artplan1 in #16445
- docs: fix streaming example in README by @Chesars in #16461
- [Fix] Management Endpoints - Fixes inconsistent error responses in customer management endpoints. Non-existent user errors now return proper 404 status codes with consistent error schema format across all endpoints. by @ishaan-jaff in #16450
- [Infra] UI - Show Deprecation Warning for Model Analytics Tab by @yuneng-jiang in #16417
- fix: allow internal users to access video generation routes by @JehandadK in #16472
- [Bug Fix] - LiteLLM Usage shows key_hash- by @ishaan-jaff in #16471
- [Feature] UI - Test Key Page show models based on selected endpoint by @yuneng-jiang in #16452
- Add GET list of providers endpoint by @Sameerlite in #16432
- [Feature] UI - Invite User Searchable Team Select by @yuneng-jiang in #16454
- Add sdk focused examples for custom prompt management by @Sameerlite in #16441
- Fix magistral streaming to emit reasoning chunks by @Sameerlite in #16434
- Add docs for tracking callback failure by @Sameerlite in #16474
- fix[16428]: remove strict master_key check in add_deployment by @vmiscenko in #16453
- fix(proxy): Correct date range filtering in /spend/logs endpoint by @AlanPonnachan in #16443
- fix: update model_cost_map_url to use environment variable by @mcowger in #16429
- feat(router): Support default fallbacks for unknown models by @AlanPonnachan in #16419 (see the sketch after this list)
- fix(langfuse): Handle null usage values to prevent validation errors by @AlanPonnachan in #16396
- fix: apply provided timeout value to ClientTimeout.total by @yellowsubmarine372 in #16395
- [Bug] Updated spend would not be sent to CloudZero by @Hebruwu in #16201
- fix: unable to delete MCP server from permission settings by @uc4w6c in #16407
- [Fix] Bedrock Knowledge bases - ensure users can access search_results for both stream + non stream response to /chat/completions by @ishaan-jaff in #16459
- [AI Gateway] - End User Budgets - Allow pointing max_end_user budget to an id, so the default ID applies to all end users by @ishaan-jaff in #16456
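For the default-fallbacks change (#16419), a minimal Router sketch; the deployment name and the gpt-4o-mini fallback are placeholders, not taken from these notes:

```python
# Minimal sketch; model names are placeholders.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {"model": "openai/gpt-4o-mini"},
        }
    ],
    default_fallbacks=["gpt-4o-mini"],  # per #16419, applied even for unknown model names
)

# "some-unknown-model" has no deployment, so the request should fall back to gpt-4o-mini.
resp = router.completion(
    model="some-unknown-model",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
```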
New Contributors
- @artplan1 made their first contribution in #16423
- @JehandadK made their first contribution in #16472
- @vmiscenko made their first contribution in #16453
- @mcowger made their first contribution in #16429
- @yellowsubmarine372 made their first contribution in #16395
- @Hebruwu made their first contribution in #16201
Full Changelog: v1.79.3-nightly...v1.79.dev.1
v1.79.3-stable
Full Changelog: v1.79.3-nightly...v1.79.3-stable
v1.79.3.rc.1
What's Changed
- Litellm dev 10 29 2025 p1 by @krrishdholakia in #16404
- Revert "Litellm dev 10 29 2025 p1" by @krrishdholakia in #16409
- fix: install runtime node for prisma by @AlexsanderHamir in #16410
- Add Vertex and Gemini Videos API with Cost Tracking + UI support by @Sameerlite in #16323
- Adds support for returning Azure Content Policy error information when exceptions from Azure OpenAI occur by @Sameerlite in #16231
- [Fix] UI - Various Small Issues by @yuneng-jiang in #16406
- Litellm dev 10 29 2025 p1 by @krrishdholakia in #16411
- (feat) audio transcription - add gpt-4o-transcribe cost tracking by @krrishdholakia in #16412
- [Bug Fix] Content Filter Guard by @ishaan-jaff in #16414
- [Docs] litellm content filter guard by @ishaan-jaff in #16413
- add: performance improvements to release notes by @AlexsanderHamir in #16401
- [Docs] Litellm 1 79 2 rc by @ishaan-jaff in #16415
Full Changelog: v1.79.2-nightly...v1.79.3.rc.1
v1.79.3-nightly
What's Changed
- Litellm dev 10 29 2025 p1 by @krrishdholakia in #16404
- Revert "Litellm dev 10 29 2025 p1" by @krrishdholakia in #16409
- fix: install runtime node for prisma by @AlexsanderHamir in #16410
- Add Vertex and Gemini Videos API with Cost Tracking + UI support by @Sameerlite in #16323
- Adds support for returning Azure Content Policy error information when exceptions from Azure OpenAI occur by @Sameerlite in #16231
- [Fix] UI - Various Small Issues by @yuneng-jiang in #16406
- Litellm dev 10 29 2025 p1 by @krrishdholakia in #16411
- (feat) audio transcription - add gpt-4o-transcribe cost tracking by @krrishdholakia in #16412 (see the sketch after this list)
- [Bug Fix] Content Filter Guard by @ishaan-jaff in #16414
- [Docs] litellm content filter guard by @ishaan-jaff in #16413
- add: performance improvements to release notes by @AlexsanderHamir in #16401
- [Docs] Litellm 1 79 2 rc by @ishaan-jaff in #16415
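For the gpt-4o-transcribe cost-tracking entry (#16412), a minimal sketch; the file name, the bare model id, and reading response_cost from _hidden_params are assumptions based on LiteLLM's usual cost-tracking surface:

```python
# Minimal sketch, assuming a local audio file and OPENAI_API_KEY in the environment.
import litellm

with open("meeting.mp3", "rb") as audio_file:
    transcript = litellm.transcription(
        model="gpt-4o-transcribe",  # cost tracking added in #16412
        file=audio_file,
    )

print(transcript.text)
# response_cost in _hidden_params is where LiteLLM usually reports per-call cost (assumed).
print(transcript._hidden_params.get("response_cost"))
```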
Full Changelog: v1.79.2-nightly...v1.79.3-nightly