Commit ca09f75

add missing docs (#3871)

* add missing docs for intern-s1
* a bunch of fixes
* fix
* temporarily remove gpt-oss

Parent: b955268

5 files changed: +23 additions, -5 deletions

README.md

Lines changed: 5 additions & 1 deletion
````diff
@@ -127,7 +127,8 @@ LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by
 <li>Baichuan2 (7B-13B)</li>
 <li>Code Llama (7B - 34B)</li>
 <li>ChatGLM2 (6B)</li>
-<li>GLM4 (9B)</li>
+<li>GLM-4 (9B)</li>
+<li>GLM-4-0414 (9B, 32B)</li>
 <li>CodeGeeX4 (9B)</li>
 <li>YI (6B-34B)</li>
 <li>Mistral (7B)</li>
@@ -158,6 +159,8 @@ LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by
 <li>InternVL2 (1B-76B)</li>
 <li>InternVL2.5(MPO) (1B-78B)</li>
 <li>InternVL3 (1B-78B)</li>
+<li>Intern-S1 (241B)</li>
+<li>Intern-S1-mini (8.3B)</li>
 <li>Mono-InternVL (2B)</li>
 <li>ChemVLM (8B-26B)</li>
 <li>CogVLM-Chat (17B)</li>
@@ -167,6 +170,7 @@ LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by
 <li>Phi-3-vision (4.2B)</li>
 <li>Phi-3.5-vision (4.2B)</li>
 <li>GLM-4V (9B)</li>
+<li>GLM-4.1V-Thinking (9B)</li>
 <li>Llama3.2-vision (11B, 90B)</li>
 <li>Molmo (7B-D,72B)</li>
 <li>Gemma3 (1B - 27B)</li>
````
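The newly listed Intern-S1 and Intern-S1-mini entries are multimodal models, so they would be served through LMDeploy's VLM pipeline like the other InternVL-family entries. A minimal sketch, assuming the checkpoint is published under the Hugging Face ID internlm/Intern-S1-mini (the model ID and image path below are illustrative assumptions, not taken from this commit):

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

# Illustrative model ID and image path; substitute the actual
# checkpoint location and input for your deployment.
pipe = pipeline('internlm/Intern-S1-mini',
                backend_config=TurbomindEngineConfig(tp=1))

image = load_image('path/to/example.jpg')
response = pipe(('describe this image', image))
print(response.text)
```

The same (prompt, image) call pattern covers the other vision-language entries added here, such as GLM-4.1V-Thinking.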

README_ja.md

Lines changed: 5 additions & 1 deletion
````diff
@@ -125,7 +125,8 @@ LMDeploy TurboMindエンジンは卓越した推論能力を持ち、さまざ
 <li>Baichuan2 (7B-13B)</li>
 <li>Code Llama (7B - 34B)</li>
 <li>ChatGLM2 (6B)</li>
-<li>GLM4 (9B)</li>
+<li>GLM-4 (9B)</li>
+<li>GLM-4-0414 (9B, 32B)</li>
 <li>CodeGeeX4 (9B)</li>
 <li>YI (6B-34B)</li>
 <li>Mistral (7B)</li>
@@ -156,6 +157,8 @@ LMDeploy TurboMindエンジンは卓越した推論能力を持ち、さまざ
 <li>InternVL2 (1B-76B)</li>
 <li>InternVL2.5(MPO) (1B-78B)</li>
 <li>InternVL3 (1B-78B)</li>
+<li>Intern-S1 (241B)</li>
+<li>Intern-S1-mini (8.3B)</li>
 <li>Mono-InternVL (2B)</li>
 <li>ChemVLM (8B-26B)</li>
 <li>CogVLM-Chat (17B)</li>
@@ -165,6 +168,7 @@ LMDeploy TurboMindエンジンは卓越した推論能力を持ち、さまざ
 <li>Phi-3-vision (4.2B)</li>
 <li>Phi-3.5-vision (4.2B)</li>
 <li>GLM-4V (9B)</li>
+<li>GLM-4.1V-Thinking (9B)</li>
 <li>Llama3.2-vision (11B, 90B)</li>
 <li>Molmo (7B-D,72B)</li>
 <li>Gemma3 (1B - 27B)</li>
````

README_zh-CN.md

Lines changed: 5 additions & 1 deletion
````diff
@@ -129,7 +129,8 @@ LMDeploy TurboMind 引擎拥有卓越的推理能力,在各种规模的模型
 <li>Baichuan2 (7B-13B)</li>
 <li>Code Llama (7B - 34B)</li>
 <li>ChatGLM2 (6B)</li>
-<li>GLM4 (9B)</li>
+<li>GLM-4 (9B)</li>
+<li>GLM-4-0414 (9B, 32B)</li>
 <li>CodeGeeX4 (9B)</li>
 <li>YI (6B-34B)</li>
 <li>Mistral (7B)</li>
@@ -160,6 +161,8 @@ LMDeploy TurboMind 引擎拥有卓越的推理能力,在各种规模的模型
 <li>InternVL2 (1B-76B)</li>
 <li>InternVL2.5(MPO) (1B-78B)</li>
 <li>InternVL3 (1B-78B)</li>
+<li>Intern-S1 (241B)</li>
+<li>Intern-S1-mini (8.3B)</li>
 <li>Mono-InternVL (2B)</li>
 <li>ChemVLM (8B-26B)</li>
 <li>CogVLM-Chat (17B)</li>
@@ -169,6 +172,7 @@ LMDeploy TurboMind 引擎拥有卓越的推理能力,在各种规模的模型
 <li>Phi-3-vision (4.2B)</li>
 <li>Phi-3.5-vision (4.2B)</li>
 <li>GLM-4V (9B)</li>
+<li>GLM-4.1V-Thinking (9B)</li>
 <li>Llama3.2-vision (11B, 90B)</li>
 <li>Molmo (7B-D,72B)</li>
 <li>Gemma3 (1B - 27B)</li>
````

docs/en/supported_models/supported_models.md

Lines changed: 4 additions & 1 deletion
````diff
@@ -17,6 +17,8 @@ The following tables detail the models supported by LMDeploy's TurboMind engine
 | InternLM3 | 8B | LLM | Yes | Yes | Yes | Yes |
 | InternLM-XComposer2 | 7B, 4khd-7B | MLLM | Yes | Yes | Yes | Yes |
 | InternLM-XComposer2.5 | 7B | MLLM | Yes | Yes | Yes | Yes |
+| Intern-S1 | 241B | MLLM | Yes | Yes | Yes | Yes |
+| Intern-S1-mini | 8.3B | MLLM | Yes | Yes | Yes | Yes |
 | Qwen | 1.8B - 72B | LLM | Yes | Yes | Yes | Yes |
 | Qwen1.5<sup>\[1\]</sup> | 1.8B - 110B | LLM | Yes | Yes | Yes | Yes |
 | Qwen2<sup>\[2\]</sup> | 0.5B - 72B | LLM | Yes | Yes\* | Yes\* | Yes |
@@ -67,6 +69,8 @@ The following tables detail the models supported by LMDeploy's TurboMind engine
 | InternLM2 | 7B - 20B | LLM | Yes | Yes | Yes | Yes | Yes |
 | InternLM2.5 | 7B | LLM | Yes | Yes | Yes | Yes | Yes |
 | InternLM3 | 8B | LLM | Yes | Yes | Yes | Yes | Yes |
+| Intern-S1 | 241B | MLLM | Yes | Yes | Yes | Yes | - |
+| Intern-S1-mini | 8.3B | MLLM | Yes | Yes | Yes | Yes | - |
 | Baichuan2 | 7B | LLM | Yes | Yes | Yes | Yes | No |
 | Baichuan2 | 13B | LLM | Yes | Yes | Yes | No | No |
 | ChatGLM2 | 6B | LLM | Yes | Yes | Yes | No | No |
@@ -111,7 +115,6 @@ The following tables detail the models supported by LMDeploy's TurboMind engine
 | Phi-3.5-mini | 3.8B | LLM | Yes | Yes | No | - | - |
 | Phi-3.5-MoE | 16x3.8B | LLM | Yes | Yes | No | - | - |
 | Phi-3.5-vision | 4.2B | MLLM | Yes | Yes | No | - | - |
-| gpt-oss | 20B | LLM | Yes | Yes | No | - | - |
 
 ```{note}
 * [1] Currently Mono-InternVL does not support FP16 due to numerical instability. Please use BF16 instead.
````
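The two hunks above touch the per-backend support tables (TurboMind engine and PyTorch engine), and the backend is chosen explicitly when building a pipeline. A minimal sketch of switching between them, assuming the illustrative model ID internlm/internlm3-8b-instruct:

```python
from lmdeploy import pipeline, TurbomindEngineConfig, PytorchEngineConfig

# TurboMind backend (first table).
pipe_tm = pipeline('internlm/internlm3-8b-instruct',
                   backend_config=TurbomindEngineConfig(tp=1))

# PyTorch backend (second table); per-model caveats apply,
# e.g. footnote [1] on Mono-InternVL requiring BF16.
pipe_pt = pipeline('internlm/internlm3-8b-instruct',
                   backend_config=PytorchEngineConfig(tp=1))

print(pipe_tm(['Hello!'])[0].text)
```

The removal of the gpt-oss row matches the "temporarily remove gpt-oss" item in the commit message.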

docs/zh_cn/supported_models/supported_models.md

Lines changed: 4 additions & 1 deletion
````diff
@@ -17,6 +17,8 @@
 | InternLM3 | 8B | LLM | Yes | Yes | Yes | Yes |
 | InternLM-XComposer2 | 7B, 4khd-7B | MLLM | Yes | Yes | Yes | Yes |
 | InternLM-XComposer2.5 | 7B | MLLM | Yes | Yes | Yes | Yes |
+| Intern-S1 | 241B | MLLM | Yes | Yes | Yes | Yes |
+| Intern-S1-mini | 8.3B | MLLM | Yes | Yes | Yes | Yes |
 | Qwen | 1.8B - 72B | LLM | Yes | Yes | Yes | Yes |
 | Qwen1.5<sup>\[1\]</sup> | 1.8B - 110B | LLM | Yes | Yes | Yes | Yes |
 | Qwen2<sup>\[2\]</sup> | 0.5B - 72B | LLM | Yes | Yes\* | Yes\* | Yes |
@@ -67,6 +69,8 @@
 | InternLM2 | 7B - 20B | LLM | Yes | Yes | Yes | Yes | Yes |
 | InternLM2.5 | 7B | LLM | Yes | Yes | Yes | Yes | Yes |
 | InternLM3 | 8B | LLM | Yes | Yes | Yes | Yes | Yes |
+| Intern-S1 | 241B | MLLM | Yes | Yes | Yes | Yes | - |
+| Intern-S1-mini | 8.3B | MLLM | Yes | Yes | Yes | Yes | - |
 | Baichuan2 | 7B | LLM | Yes | Yes | Yes | Yes | No |
 | Baichuan2 | 13B | LLM | Yes | Yes | Yes | No | No |
 | ChatGLM2 | 6B | LLM | Yes | Yes | Yes | No | No |
@@ -111,7 +115,6 @@
 | Phi-3.5-mini | 3.8B | LLM | Yes | Yes | No | - | - |
 | Phi-3.5-MoE | 16x3.8B | LLM | Yes | Yes | No | - | - |
 | Phi-3.5-vision | 4.2B | MLLM | Yes | Yes | No | - | - |
-| gpt-oss | 20B | LLM | Yes | Yes | No | - | - |
 
 ```{note}
 * [1] 目前,Mono-InternVL不支持FP16,因为数值不稳定。请改用BF16
````
