deploy/kubernetes/istio/README.md
# vLLM Semantic Router as ExtProc server for Istio Gateway

This guide provides step-by-step instructions for deploying the vLLM Semantic Router (vsr) with Istio Gateway on Kubernetes. Istio Gateway uses Envoy under the covers, so vsr can be used with it. However, Envoy-based gateways differ in how they process the ExtProc protocol, so the deployment described here differs from the deployments of vsr alongside other Envoy-based gateways covered in the other guides in this repo. There are multiple possible architectures for combining Istio Gateway with vsr; this document describes one of them.
## Architecture Overview
The deployment consists of:

- **vLLM Semantic Router**: Provides intelligent request routing and processing decisions to Envoy-based gateways
- **Istio Gateway**: Istio's implementation of the Kubernetes Gateway API that uses an Envoy proxy under the covers
- **Gateway API Inference Extension**: Additional APIs to extend the Gateway API for inference via ExtProc servers
- **Two instances of vLLM serving one model each**: Example backend LLMs for illustrating semantic routing in this topology
In this exercise we deploy two LLMs: a llama3-8b model (meta-llama/Llama-3.1-8B-Instruct) and a phi4-mini model (microsoft/Phi-4-mini-instruct). We serve these models using two separate instances of the [vLLM inference server](https://docs.vllm.ai/en/latest/) running in the default namespace of the Kubernetes cluster. You may choose any other inference engine as long as it exposes OpenAI API endpoints. First create a secret for your HuggingFace token (previously stored in the HF_TOKEN environment variable) and then deploy the models as shown below.
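The secret creation step can be sketched as follows. This is an illustrative sketch, not the repo's exact manifest: the secret name `hf-token-secret` and the key `HF_TOKEN` are assumptions, so match them to whatever the vLLM deployment manifests in this repo actually reference.

```shell
# Render a Secret manifest holding the HuggingFace token from $HF_TOKEN.
# Name and key below are illustrative; align them with the vLLM manifests.
HF_TOKEN="${HF_TOKEN:-replace-me}"
cat > /tmp/hf-token-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: hf-token-secret
  namespace: default
type: Opaque
stringData:
  HF_TOKEN: ${HF_TOKEN}
EOF

# Apply the secret when a cluster is available, then deploy the model manifests.
if command -v kubectl >/dev/null; then
  kubectl apply -f /tmp/hf-token-secret.yaml
fi
```

After the secret exists, apply the two vLLM deployment manifests from this repo in the default namespace.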
The first run may take several (10+) minutes to download the model before the vLLM pod serving it reaches the READY state. Deploy the second LLM (phi4-mini) the same way and wait until its pod is READY as well.
The file deploy/kubernetes/istio/config.yaml is used to configure vsr when it is installed in the next step. Ensure that the models in the config file match the models you are using, and that the vllm_endpoints entries match the IP/port of the LLM Kubernetes services you are running. It is usually best to start with basic vsr features such as prompt classification and model routing before experimenting with other features such as PromptGuard or ToolCalling.
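The config format requires endpoint addresses to be bare IPv4 literals (no domain names, scheme prefixes, paths, or inline ports). A small illustrative helper, not part of the repo, that sanity-checks an address before you paste it into config.yaml:

```shell
# Illustrative check: vllm_endpoints addresses must be bare IPv4 literals.
check_address() {
  # Reject schemes, paths, and inline ports outright.
  case "$1" in
    *://*|*/*|*:* ) echo "INVALID: $1"; return 1 ;;
  esac
  # Accept only dotted-quad IPv4 form.
  if printf '%s\n' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
    echo "OK: $1"
  else
    echo "INVALID: $1"
    return 1
  fi
}

check_address "10.98.150.102"              # prints "OK: 10.98.150.102"
check_address "http://example.com" || true # prints "INVALID: http://example.com"
```

You can look up the ClusterIP of each model's Kubernetes service with `kubectl get svc` and use those values for the `address` fields.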
## Step 4: Deploy vLLM Semantic Router
We will use a recent build of Istio for this exercise so that we have the option of also using the v1.0.0 GA version of the Gateway API Inference Extension CRDs and EPP functionality.
Follow the procedures described in the Gateway API [Inference Extensions documentation](https://gateway-api-inference-extension.sigs.k8s.io/guides/) to deploy version 1.28 (or newer) of the Istio control plane and Istio Gateway, the Kubernetes Gateway API CRDs, and the Gateway API Inference Extension v1.0.0. Do not install any of the HTTPRoute resources from that guide, however; use it only to deploy the Istio gateway and CRDs. If installed correctly, you should see the CRDs for the Gateway API and the Inference Extension, as well as running pods for the Istio gateway and Istiod, using the commands shown below.
The relevant portions of deploy/kubernetes/istio/config.yaml are shown below (sections elided here are marked with `# ...`):

```yaml
# NOT supported: domain names (example.com), protocol prefixes (http://), paths (/api), ports in address (use 'port' field)
vllm_endpoints:
  - name: "endpoint1"
    address: "10.98.150.102"  # Static IPv4 of llama3-8b k8s service
    port: 80
    weight: 1
  - name: "endpoint2"
    address: "10.98.118.242"  # Static IPv4 of phi4-mini k8s service
    port: 80
    weight: 1

model_config:
  "llama3-8b":
    # reasoning_family: ""  # This model uses Qwen-3 reasoning syntax
    preferred_endpoints: ["endpoint1"]
    pii_policy:
      allow_by_default: true
  "phi4-mini":
    # reasoning_family: ""  # This model uses Qwen-3 reasoning syntax
    preferred_endpoints: ["endpoint2"]
    pii_policy:
      allow_by_default: true

# Classifier configuration
classifier:
  # ...

# Categories with new use_reasoning field structure
categories:
  - name: business
    system_prompt: "You are a senior business consultant and strategic advisor with expertise in corporate strategy, operations management, financial analysis, marketing, and organizational development. Provide practical, actionable business advice backed by proven methodologies and industry best practices. Consider market dynamics, competitive landscape, and stakeholder interests in your recommendations."
    # jailbreak_enabled: true  # Optional: Override global jailbreak detection per category
    # jailbreak_threshold: 0.8  # Optional: Override global jailbreak threshold per category
    model_scores:
      - model: llama3-8b
        score: 0.8
        use_reasoning: false  # Business performs better without reasoning
      - model: phi4-mini
        score: 0.3
        use_reasoning: false  # Business performs better without reasoning
  - name: law
    system_prompt: "You are a knowledgeable legal expert with comprehensive understanding of legal principles, case law, statutory interpretation, and legal procedures across multiple jurisdictions. Provide accurate legal information and analysis while clearly stating that your responses are for informational purposes only and do not constitute legal advice. Always recommend consulting with qualified legal professionals for specific legal matters."
    model_scores:
      - model: llama3-8b
        score: 0.4
        use_reasoning: false
  - name: psychology
    system_prompt: "You are a psychology expert with deep knowledge of cognitive processes, behavioral patterns, mental health, developmental psychology, social psychology, and therapeutic approaches. Provide evidence-based insights grounded in psychological research and theory. When discussing mental health topics, emphasize the importance of professional consultation and avoid providing diagnostic or therapeutic advice."
    semantic_cache_enabled: true
    semantic_cache_similarity_threshold: 0.92  # High threshold for psychology - sensitive to nuances
    model_scores:
      - model: llama3-8b
        score: 0.6
        use_reasoning: false
  - name: biology
    system_prompt: "You are a biology expert with comprehensive knowledge spanning molecular biology, genetics, cell biology, ecology, evolution, anatomy, physiology, and biotechnology. Explain biological concepts with scientific accuracy, use appropriate terminology, and provide examples from current research. Connect biological principles to real-world applications and emphasize the interconnectedness of biological systems."
    model_scores:
      - model: llama3-8b
        score: 0.9
        use_reasoning: false
  - name: chemistry
    system_prompt: "You are a chemistry expert specializing in chemical reactions, molecular structures, and laboratory techniques. Provide detailed, step-by-step explanations."
    model_scores:
      - model: llama3-8b
        score: 0.6
        use_reasoning: false  # set to true to enable reasoning for complex chemistry
  - name: history
    system_prompt: "You are a historian with expertise across different time periods and cultures. Provide accurate historical context and analysis."
    model_scores:
      - model: llama3-8b
        score: 0.7
        use_reasoning: false
  - name: other
    system_prompt: "You are a helpful and knowledgeable assistant. Provide accurate, helpful responses across a wide range of topics."
    semantic_cache_enabled: true
    semantic_cache_similarity_threshold: 0.75  # Lower threshold for general chat - less sensitive
    model_scores:
      - model: llama3-8b
        score: 0.7
        use_reasoning: false
  - name: health
    system_prompt: "You are a health and medical information expert with knowledge of anatomy, physiology, diseases, treatments, preventive care, nutrition, and wellness. Provide accurate, evidence-based health information while emphasizing that your responses are for educational purposes only and should never replace professional medical advice, diagnosis, or treatment. Always encourage users to consult healthcare professionals for medical concerns and emergencies."
    semantic_cache_enabled: true
    semantic_cache_similarity_threshold: 0.95  # High threshold for health - very sensitive to word changes
    model_scores:
      - model: llama3-8b
        score: 0.5
        use_reasoning: false
  - name: economics
    system_prompt: "You are an economics expert with deep understanding of microeconomics, macroeconomics, econometrics, financial markets, monetary policy, fiscal policy, international trade, and economic theory. Analyze economic phenomena using established economic principles, provide data-driven insights, and explain complex economic concepts in accessible terms. Consider both theoretical frameworks and real-world applications in your responses."
    model_scores:
      - model: llama3-8b
        score: 1.0
        use_reasoning: false
  - name: math
    system_prompt: "You are a mathematics expert. Provide step-by-step solutions, show your work clearly, and explain mathematical concepts in an understandable way."
    model_scores:
      - model: phi4-mini
        score: 1.0
        use_reasoning: false  # set to true to enable reasoning for complex math
  - name: physics
    system_prompt: "You are a physics expert with deep understanding of physical laws and phenomena. Provide clear explanations with mathematical derivations when appropriate."
    model_scores:
      - model: llama3-8b
        score: 0.7
        use_reasoning: false  # set to true to enable reasoning for physics
  - name: computer science
    system_prompt: "You are a computer science expert with knowledge of algorithms, data structures, programming languages, and software engineering. Provide clear, practical solutions with code examples when helpful."
    model_scores:
      - model: llama3-8b
        score: 0.6
        use_reasoning: false
  - name: philosophy
    system_prompt: "You are a philosophy expert with comprehensive knowledge of philosophical traditions, ethical theories, logic, metaphysics, epistemology, political philosophy, and the history of philosophical thought. Engage with complex philosophical questions by presenting multiple perspectives, analyzing arguments rigorously, and encouraging critical thinking. Draw connections between philosophical concepts and contemporary issues while maintaining intellectual honesty about the complexity and ongoing nature of philosophical debates."
    model_scores:
      - model: llama3-8b
        score: 0.5
        use_reasoning: false
  - name: engineering
    system_prompt: "You are an engineering expert with knowledge across multiple engineering disciplines including mechanical, electrical, civil, chemical, software, and systems engineering. Apply engineering principles, design methodologies, and problem-solving approaches to provide practical solutions. Consider safety, efficiency, sustainability, and cost-effectiveness in your recommendations. Use technical precision while explaining concepts clearly, and emphasize the importance of proper engineering practices and standards."
    model_scores:
      - model: llama3-8b
        score: 0.7
        use_reasoning: false

default_model: "llama3-8b"

# Auto model name for automatic model selection (optional)
# This is the model name that clients should use to trigger automatic model selection
# If not specified, defaults to "MoM" (Mixture of Models)
# For backward compatibility, "auto" is always accepted as an alias
# Example: auto_model_name: "MoM"  # or any other name you prefer
# auto_model_name: "MoM"

# Include configured models in /v1/models list endpoint (optional, default: false)
# When false (default): only the auto model name is returned in the /v1/models endpoint
# When true: all models configured in model_config are also included in the /v1/models endpoint
# This is useful for clients that need to discover all available models
# Example: include_config_models_in_list: true
# include_config_models_in_list: false

# Reasoning family configurations
reasoning_families:
  # ...

# Global default reasoning effort level
default_reasoning_effort: high

# Gateway route cache clearing
clear_route_cache: true  # Enable for some gateways such as Istio
```
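Once everything is deployed, a quick way to exercise automatic model selection is to send an OpenAI-style chat completion through the gateway with the model name "auto" (always accepted as an alias of the configured auto_model_name, which defaults to "MoM"), so that vsr picks the backend model for you. This is a sketch: `GATEWAY_IP` is a placeholder for your Istio gateway address, and the port may differ in your setup.

```shell
# Placeholder address; substitute the external IP/port of the Istio gateway.
GATEWAY_IP="${GATEWAY_IP:-127.0.0.1}"

# "auto" asks vsr to classify the prompt and route to the best-scoring model.
PAYLOAD='{"model":"auto","messages":[{"role":"user","content":"Solve 12*17 step by step."}]}'

# Send the request through the gateway (ignore failures when no cluster is up).
curl -s "http://${GATEWAY_IP}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true
```

With the example config above, a math prompt like this one should be routed to phi4-mini, since it has the highest score for the math category.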