
Commit f7c99e4

readme : add Vulkan notice (#2488)
* Add Vulkan notice in README.md
* Fix formatting for Vulkan section in README.md
* Fix formatting in README.md
1 parent 1d5752f commit f7c99e4

1 file changed: +11 -0 lines changed


README.md

Lines changed: 11 additions & 0 deletions
@@ -18,6 +18,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
 - Mixed F16 / F32 precision
 - [4-bit and 5-bit integer quantization support](https://github.com/ggerganov/whisper.cpp#quantization)
 - Zero memory allocations at runtime
+- Vulkan support
 - Support for CPU-only inference
 - [Efficient GPU support for NVIDIA](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
 - [OpenVINO Support](https://github.com/ggerganov/whisper.cpp#openvino-support)
@@ -429,6 +430,16 @@ make clean
 GGML_CUDA=1 make -j
 ```
 
+## Vulkan GPU support
+Cross-vendor solution which allows you to accelerate workload on your GPU.
+First, make sure your graphics card driver provides support for Vulkan API.
+
+Now build `whisper.cpp` with Vulkan support:
+```
+make clean
+make GGML_VULKAN=1
+```
+
 ## BLAS CPU support via OpenBLAS
 
 Encoder processing can be accelerated on the CPU via OpenBLAS.
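For context, the README section added above boils down to a short build how-to. A minimal end-to-end sketch of that workflow on a Linux-style shell is below; it assumes `vulkaninfo` (from your distribution's vulkan-tools package or the Vulkan SDK) is available to check driver support, and it uses the repository's standard model download script and sample audio. These specifics are illustrative and are not part of the commit itself.

```
# Check that the graphics driver exposes a Vulkan device
# (vulkaninfo comes from vulkan-tools / the Vulkan SDK, not from whisper.cpp)
vulkaninfo

# Rebuild whisper.cpp with the Vulkan backend, as the new README section describes
make clean
make GGML_VULKAN=1

# Fetch a model and transcribe the bundled sample with the main example binary
./models/download-ggml-model.sh base.en
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```

If the backend is picked up correctly, the run should report the Vulkan device it selected during startup.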
