-
-
Notifications
You must be signed in to change notification settings - Fork 4.6k
[Perf] Alexsander fixes round 2 - Oct 18th #15695
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Merged
Merged
Conversation
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Add early return for models without '/' to avoid expensive get_model_list() calls for 99% of standard model requests (gpt-4, claude-3, etc). - Refactor _is_prompt_management_model() with "/" check before model lookup - Add unit tests to verify optimization doesn't break detection
…essary queries
This commit introduces several performance optimizations to the Redis caching layer:
**DualCache Improvements (dual_cache.py):**
1. Increase batch cache size limit from 100 to 1000
- Allows for larger batch operations, reducing Redis round-trips
2. Throttle repeated Redis queries for cache misses
- Update last_redis_batch_access_time for ALL queried keys, including those
with None values
- Prevents excessive Redis queries for frequently-accessed non-existent keys
3. Add early exit optimization
- Short-circuit when redis_result is None or contains only None values
- Avoids unnecessary processing when no cache hits are found
4. Optimize key lookup performance
- Replace O(n) keys.index() calls with O(1) dict lookup via key_to_index mapping
- Reduces algorithmic complexity in batch operations
5. Streamline cache updates
- Combine result updates and in-memory cache updates in single loop
- Only cache non-None values to avoid polluting in-memory cache
**CooldownCache Improvements (cooldown_cache.py):**
1. Enhanced early return logic
- Check if all values in results are None, not just if results is None
- Prevents unnecessary iteration when no valid cooldown data exists
These changes significantly improve Redis caching performance, especially for:
- High-throughput batch operations
- Scenarios with frequent cache misses
- Large-scale deployments with many concurrent requests
- Add DEFAULT_MAX_REDIS_BATCH_CACHE_SIZE constant (default: 1000) - Update DualCache to use constant from constants.py - Document new environment variable in config_settings.md
…ly return The previous early return optimization in _is_prompt_management_model() was checking if the model name parameter contained '/' and returning False if it didn't. This broke detection for model aliases (e.g., 'chatbot_actions') that don't have '/' in their name but map to prompt management models (e.g., 'langfuse/openai-gpt-3.5-turbo'). Changed the early return logic to only exit early when: - Model name contains '/' AND - The prefix is NOT a known prompt management provider This maintains the performance optimization for 99% of direct model calls (avoiding expensive get_model_list lookups) while correctly handling: - Direct prompt management calls (e.g., 'langfuse/model') - Model aliases without '/' (e.g., 'chatbot_actions') - Regular models with/without '/' (e.g., 'gpt-3.5-turbo', 'openai/gpt-4') Fixes test: test_router_prompt_management_factory
Replace deepcopy with list() in _pre_call_checks - runs on every request. Only pops from list, never modifies deployment dicts, so shallow copy is safe. Performance: 1400x faster on hot path Impact: 2-5x overall throughput improvement for routing workloads Tests: Added regression test to ensure no mutation + filtering works
Replace expensive copy.deepcopy() with shallow copy for default_deployment in _common_checks_available_deployment() hot path. Changes: - Use dict.copy() for top-level deployment dict - Use dict.copy() for nested litellm_params dict - Only the 'model' field is modified, so deep recursion is unnecessary Impact: - 100x+ faster for default deployment path (every request when used) - deepcopy recursively traverses entire object tree - Shallow copy only copies two dict levels (exactly what's needed) Test coverage: - Added regression test to verify deployment isolation - Ensures returned deployments don't mutate original default_deployment - Validates multiple concurrent requests get independent copies
Remove unnecessary deployment['litellm_params'].copy() in _completion and _acompletion functions. The dict is only read and spread into a new dict, never modified, making the defensive copy wasteful. Changes: - Remove .copy() in _completion (sync hot path) - Remove .copy() in _acompletion (async hot path) Impact: - Every completion request (highest traffic endpoints) - Eliminates unnecessary dict allocation and copy on every call - Dict spreading already creates new dict, so no mutation possible Test coverage: - Added tests verifying deployment params unchanged after calls - Tests both sync and async completion paths - Validates optimization doesn't introduce mutations
Replace O(n²) list pop pattern with O(n) set-based filtering in _pre_call_checks() to improve routing performance under high load. Changes: - Use set() instead of list for invalid_model_indices tracking - Replace reversed list.pop() loop with single-pass list comprehension - Eliminate redundant list→set conversion overhead Impact: - Hot path optimization: runs on every request through the router - ~2-5x faster filtering when many deployments fail validation - Most beneficial with 50+ deployments per model group or high invalidation rates (rate limits, context window exceeded) Technical details: Old: O(k²) where k = invalid deployments (pop shifts remaining elements) New: O(n) single pass with O(1) set membership checks
feat(proxy): Add configurable GC thresholds and enhance memory debugging endpoints - Add PYTHON_GC_THRESHOLD env var to configure garbage collection thresholds - Add POST /debug/memory/gc/configure endpoint for runtime GC tuning - Enhance memory debugging endpoints with better structure and explanations - Add comprehensive router and cache memory tracking - Include worker PID in all debug responses for multi-worker debugging
Extract 6 helper functions from get_memory_details to fix linter error PLR0915 (too many statements). Improves maintainability while preserving functionality.
Removes early exit optimization that checked model_name prefix instead of the actual litellm_params model. This incorrectly returned False for custom model aliases that map to prompt management providers. Example: "my-langfuse-prompt/test_id" -> "langfuse_prompt/actual_id" The method now correctly checks the underlying model's prefix. Fixes test_is_prompt_management_model_optimization
Resolved 6 mypy type errors in proxy/common_utils/debug_utils.py by adding explicit Dict[str, Any] annotations to dictionary variables where mypy was incorrectly inferring narrow types. This allows the dictionaries to accept different value types (strings, nested dicts) for error handling and various return structures. Fixed: - Line 246: caches dictionary in get_memory_summary() - Line 371: cache_stats dictionary in _get_cache_memory_stats() - Line 439: litellm_router_memory dictionary in _get_router_memory_stats()
- Replace tuple[...], list[...] with Tuple[...], List[...] from typing - Replace Dict | None with Optional[Dict] for Python 3.8 compatibility - Add missing imports: List, Optional, Tuple to typing imports Fixes TypeError: 'type' object is not subscriptable in Python 3.8
|
The latest updates on your projects. Learn more about Vercel for GitHub.
💡 Enable Vercel Agent with $100 free credit for automated AI reviews |
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
[Perf] Alexsander fixes round 2 - Oct 18th
Relevant issues
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
tests/litellm/directory, Adding at least 1 test is a hard requirement - see detailsmake test-unitType
🆕 New Feature
🐛 Bug Fix
🧹 Refactoring
📖 Documentation
🚄 Infrastructure
✅ Test
Changes