
Commit 79f2d03

Merge pull request #3990 from myhloli/dev
Dev
2 parents 2ac829c + d2c93b7 commit 79f2d03

12 files changed: +270 −133 lines changed


README.md

Lines changed: 4 additions & 3 deletions
```diff
@@ -650,14 +650,14 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
 <td>Faster than transformers</td>
 <td>Fast, compatible with the vLLM ecosystem</td>
 <td>Fast, compatible with the LMDeploy ecosystem</td>
-<td>Suitable for OpenAI-compatible servers<sup>5</sup></td>
+<td>Suitable for OpenAI-compatible servers<sup>6</sup></td>
 </tr>
 <tr>
 <th>Operating System</th>
 <td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
 <td style="text-align:center;">macOS<sup>3</sup></td>
 <td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup> </td>
-<td style="text-align:center;">Linux<sup>2</sup> / Windows </td>
+<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup> </td>
 <td>Any</td>
 </tr>
 <tr>
@@ -693,7 +693,8 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
 <sup>2</sup> Linux supports only distributions released in 2019 or later.
 <sup>3</sup> MLX requires macOS 13.5 or later, recommended for use with version 14.0 or higher.
 <sup>4</sup> Windows vLLM support via WSL2(Windows Subsystem for Linux).
-<sup>5</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
+<sup>5</sup> Windows LMDeploy can only use the `turbomind` backend, which is slightly slower than the `pytorch` backend. If performance is critical, it is recommended to run it via WSL2.
+<sup>6</sup> Servers compatible with the OpenAI API, such as local or remote model services deployed via inference frameworks like `vLLM`, `SGLang`, or `LMDeploy`.
 
 
 ### Install MinerU
```

README_zh-CN.md

Lines changed: 4 additions & 3 deletions
```diff
@@ -637,14 +637,14 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
 <td>比transformers快</td>
 <td>速度快, 兼容vllm生态</td>
 <td>速度快, 兼容lmdeploy生态</td>
-<td>适用于OpenAI兼容服务器<sup>5</sup></td>
+<td>适用于OpenAI兼容服务器<sup>6</sup></td>
 </tr>
 <tr>
 <th>操作系统</th>
 <td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
 <td style="text-align:center;">macOS<sup>3</sup></td>
 <td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup> </td>
-<td style="text-align:center;">Linux<sup>2</sup> / Windows </td>
+<td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>5</sup> </td>
 <td>不限</td>
 </tr>
 <tr>
@@ -680,7 +680,8 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
 <sup>2</sup> Linux仅支持2019年及以后发行版
 <sup>3</sup> MLX需macOS 13.5及以上版本支持,推荐14.0以上版本使用
 <sup>4</sup> Windows vLLM通过WSL2(适用于 Linux 的 Windows 子系统)实现支持
-<sup>5</sup> 兼容OpenAI API的服务器,如通过`vLLM`/`SGLang`/`LMDeploy`等推理框架部署的本地模型服务器或远程模型服务
+<sup>5</sup> Windows LMDeploy只能使用`turbomind`后端,速度比`pytorch`后端稍慢,如对速度有要求建议通过WSL2运行
+<sup>6</sup> 兼容OpenAI API的服务器,如通过`vLLM`/`SGLang`/`LMDeploy`等推理框架部署的本地模型服务器或远程模型服务
 
 > [!TIP]
 > 除以上主流环境与平台外,我们也收录了一些社区用户反馈的其他平台支持情况,详情请参考[其他加速卡适配](https://opendatalab.github.io/MinerU/zh/usage/)
```

docker/china/camb.Dockerfile

Lines changed: 23 additions & 0 deletions
```dockerfile
# Base image containing the LMDeploy inference environment, requiring amd64 CPU + cambricon MLU.
FROM

# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
    apt-get install -y \
        fonts-noto-core \
        fonts-noto-cjk \
        fontconfig \
        libgl1 && \
    fc-cache -fv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install mineru latest
RUN python3 -m pip install -U 'mineru[core]' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
    python3 -m pip cache purge

# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s modelscope -m all"

# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
```

docker/china/maca.Dockerfile

Lines changed: 23 additions & 0 deletions
```dockerfile
# Base image containing the LMDeploy inference environment, requiring amd64 CPU + metax GPU.
FROM

# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
    apt-get install -y \
        fonts-noto-core \
        fonts-noto-cjk \
        fontconfig \
        libgl1 && \
    fc-cache -fv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install mineru latest
RUN python3 -m pip install -U 'mineru[core]' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
    python3 -m pip cache purge

# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s modelscope -m all"

# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
```

docker/china/npu.Dockerfile

Lines changed: 23 additions & 0 deletions
```dockerfile
# Base image containing the LMDeploy inference environment, requiring ARM CPU + Ascend NPU.
FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:mineru-a2

# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
    apt-get install -y \
        fonts-noto-core \
        fonts-noto-cjk \
        fontconfig \
        libgl1 && \
    fc-cache -fv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install mineru latest
RUN python3 -m pip install -U 'mineru[core]' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
    python3 -m pip cache purge

# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s modelscope -m all"

# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
```

docker/china/ppu.Dockerfile

Lines changed: 23 additions & 0 deletions
```dockerfile
# Base image containing the LMDeploy inference environment, requiring amd64 CPU + t-head PPU.
FROM crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ppu:mineru-ppu

# Install libgl for opencv support & Noto fonts for Chinese characters
RUN apt-get update && \
    apt-get install -y \
        fonts-noto-core \
        fonts-noto-cjk \
        fontconfig \
        libgl1 && \
    fc-cache -fv && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Install mineru latest
RUN python3 -m pip install -U 'mineru[core]' -i https://mirrors.aliyun.com/pypi/simple --break-system-packages && \
    python3 -m pip cache purge

# Download models and update the configuration file
RUN /bin/bash -c "mineru-models-download -s modelscope -m all"

# Set the entry point to activate the virtual environment and run the command line tool
ENTRYPOINT ["/bin/bash", "-c", "export MINERU_MODEL_SOURCE=local && exec \"$@\"", "--"]
```
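Each of these platform Dockerfiles builds the same way; a sketch assuming the file above is saved as `ppu.Dockerfile` (the `mineru:latest` tag matches the one used by the compose file in this commit, and the trailing command is illustrative):

```shell
# Build the image from the platform-specific Dockerfile
docker build -t mineru:latest -f ppu.Dockerfile .

# The ENTRYPOINT execs whatever command follows, with MINERU_MODEL_SOURCE=local already set
docker run --rm -it mineru:latest mineru --help
```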

docker/compose.yaml

Lines changed: 32 additions & 3 deletions
```diff
@@ -1,6 +1,6 @@
 services:
   mineru-vllm-server:
-    image: mineru-vllm:latest
+    image: mineru:latest
     container_name: mineru-vllm-server
     restart: always
     profiles: ["vllm-server"]
@@ -28,8 +28,37 @@ services:
           device_ids: ["0"]
           capabilities: [gpu]
 
+  mineru-lmdeploy-server:
+    image: mineru:latest
+    container_name: mineru-lmdeploy-server
+    restart: always
+    profiles: [ "lmdeploy-server" ]
+    ports:
+      - 30000:30000
+    environment:
+      MINERU_MODEL_SOURCE: local
+    entrypoint: mineru-lmdeploy-server
+    command:
+      --host 0.0.0.0
+      --port 30000
+      # --dp 2 # If using multiple GPUs, increase throughput using lmdeploy's multi-GPU parallel mode
+      # --cache-max-entry-count 0.5 # If running on a single GPU and encountering VRAM shortage, reduce the KV cache size by this parameter, if VRAM issues persist, try lowering it further to `0.4` or below.
+    ulimits:
+      memlock: -1
+      stack: 67108864
+    ipc: host
+    healthcheck:
+      test: [ "CMD-SHELL", "curl -f http://localhost:30000/health || exit 1" ]
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              device_ids: [ "0" ]
+              capabilities: [ gpu ]
+
   mineru-api:
-    image: mineru-vllm:latest
+    image: mineru:latest
     container_name: mineru-api
     restart: always
     profiles: ["api"]
@@ -57,7 +86,7 @@ services:
           capabilities: [ gpu ]
 
   mineru-gradio:
-    image: mineru-vllm:latest
+    image: mineru:latest
     container_name: mineru-gradio
     restart: always
     profiles: ["gradio"]
```
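Because the new service is guarded by a Compose profile, it is not started by a plain `docker compose up`. Assuming the edited file is saved as `compose.yaml` next to an image built as `mineru:latest` (both names taken from the diff), it could be brought up like this (a sketch using standard Docker Compose flags):

```shell
# Start only the service enabled by the "lmdeploy-server" profile
docker compose --profile lmdeploy-server up -d

# Probe the same endpoint the compose healthcheck polls
curl -f http://localhost:30000/health
```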

docs/en/quick_start/docker_deployment.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -6,7 +6,7 @@ MinerU provides a convenient Docker deployment method, which helps quickly set u
 
 ```bash
 wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/global/Dockerfile
-docker build -t mineru-vllm:latest -f Dockerfile .
+docker build -t mineru:latest -f Dockerfile .
 ```
 
 > [!TIP]
@@ -31,7 +31,7 @@ docker run --gpus all \
   --shm-size 32g \
   -p 30000:30000 -p 7860:7860 -p 8000:8000 \
   --ipc=host \
-  -it mineru-vllm:latest \
+  -it mineru:latest \
   /bin/bash
 ```
````
docs/zh/quick_start/docker_deployment.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -6,7 +6,7 @@ MinerU提供了便捷的docker部署方式,这有助于快速搭建环境并
 
 ```bash
 wget https://gcore.jsdelivr.net/gh/opendatalab/MinerU@master/docker/china/Dockerfile
-docker build -t mineru-vllm:latest -f Dockerfile .
+docker build -t mineru:latest -f Dockerfile .
 ```
 
 > [!TIP]
@@ -31,7 +31,7 @@ docker run --gpus all \
   --shm-size 32g \
   -p 30000:30000 -p 7860:7860 -p 8000:8000 \
   --ipc=host \
-  -it mineru-vllm:latest \
+  -it mineru:latest \
   /bin/bash
 ```
````
mineru/backend/vlm/utils.py

Lines changed: 1 addition & 4 deletions
```diff
@@ -46,7 +46,6 @@ def enable_custom_logits_processors() -> bool:
 
 
 def set_lmdeploy_backend(device_type:str) -> str:
-    lmdeploy_backend = ""
     if device_type.lower() in ["ascend", "maca", "camb"]:
         lmdeploy_backend = "pytorch"
     elif device_type.lower() in ["cuda"]:
@@ -65,12 +64,10 @@ def set_lmdeploy_backend(device_type:str) -> str:
         else:
             raise ValueError("Unsupported operating system.")
     else:
-        raise ValueError(f"Unsupported device type: {device_type}")
+        raise ValueError(f"Unsupported lmdeploy device type: {device_type}")
     return lmdeploy_backend
 
 
-
-
 def set_default_gpu_memory_utilization() -> float:
     from vllm import __version__ as vllm_version
     if version.parse(vllm_version) >= version.parse("0.11.0"):
```
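The selection logic visible in `set_lmdeploy_backend` can be sketched as a standalone function (an approximation for illustration: the function name and `platform` parameter are ours, and the CUDA-on-Linux default of `pytorch` is an assumption, since that branch falls outside the hunk; the Windows `turbomind` case follows footnote 5 in the README diff):

```python
import sys

def pick_lmdeploy_backend(device_type: str, platform: str = sys.platform) -> str:
    """Pick an lmdeploy backend for the given device type (sketch)."""
    device = device_type.lower()
    if device in ("ascend", "maca", "camb"):
        # dlinfer-style accelerators use the pytorch backend
        return "pytorch"
    if device == "cuda":
        if platform.startswith("linux"):
            return "pytorch"      # assumption: branch not shown in the hunk
        if platform.startswith("win"):
            return "turbomind"    # Windows LMDeploy supports only turbomind
        raise ValueError("Unsupported operating system.")
    raise ValueError(f"Unsupported lmdeploy device type: {device_type}")
```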
