
Commit 861ea09

toboil-features authored and adutilleul committed

readme : update links and make commands (ggml-org#2489)

* Update links to headers in README.md
* Add link to Vulkan section in README.md
* Add "-j" for parallelism for "make" in README.md
* Update README.md
1 parent 5344f26 commit 861ea09

File tree

1 file changed: +24 −24 lines changed

README.md

Lines changed: 24 additions & 24 deletions

````diff
@@ -12,17 +12,17 @@ Stable: [v1.7.1](https://github.com/ggerganov/whisper.cpp/releases/tag/v1.7.1) /
 High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper) automatic speech recognition (ASR) model:
 
 - Plain C/C++ implementation without dependencies
-- Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](https://github.com/ggerganov/whisper.cpp#core-ml-support)
+- Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](#core-ml-support)
 - AVX intrinsics support for x86 architectures
 - VSX intrinsics support for POWER architectures
 - Mixed F16 / F32 precision
-- [4-bit and 5-bit integer quantization support](https://github.com/ggerganov/whisper.cpp#quantization)
+- [4-bit and 5-bit integer quantization support](#quantization)
 - Zero memory allocations at runtime
-- Vulkan support
+- [Vulkan support](#vulkan-gpu-support)
 - Support for CPU-only inference
-- [Efficient GPU support for NVIDIA](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
-- [OpenVINO Support](https://github.com/ggerganov/whisper.cpp#openvino-support)
-- [Ascend NPU Support](https://github.com/ggerganov/whisper.cpp#ascend-npu-support)
+- [Efficient GPU support for NVIDIA](#nvidia-gpu-support)
+- [OpenVINO Support](#openvino-support)
+- [Ascend NPU Support](#ascend-npu-support)
 - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/include/whisper.h)
 
 Supported platforms:
@@ -89,7 +89,7 @@ Now build the [main](examples/main) example and transcribe an audio file like th
 
 ```bash
 # build the main example
-make
+make -j
 
 # transcribe an audio file
 ./main -f samples/jfk.wav
@@ -100,7 +100,7 @@ make
 For a quick demo, simply run `make base.en`:
 
 ```text
-$ make base.en
+$ make -j base.en
 
 cc -I. -O3 -std=c11 -pthread -DGGML_USE_ACCELERATE -c ggml.c -o ggml.o
 c++ -I. -I./examples -O3 -std=c++11 -pthread -c whisper.cpp -o whisper.o
@@ -224,26 +224,26 @@ ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
 If you want some extra audio samples to play with, simply run:
 
 ```
-make samples
+make -j samples
 ```
 
 This will download a few more audio files from Wikipedia and convert them to 16-bit WAV format via `ffmpeg`.
 
 You can download and run the other models as follows:
 
 ```
-make tiny.en
-make tiny
-make base.en
-make base
-make small.en
-make small
-make medium.en
-make medium
-make large-v1
-make large-v2
-make large-v3
-make large-v3-turbo
+make -j tiny.en
+make -j tiny
+make -j base.en
+make -j base
+make -j small.en
+make -j small
+make -j medium.en
+make -j medium
+make -j large-v1
+make -j large-v2
+make -j large-v3
+make -j large-v3-turbo
 ```
 
 ## Memory usage
@@ -265,7 +265,7 @@ Here are the steps for creating and using a quantized model:
 
 ```bash
 # quantize a model with Q5_0 method
-make quantize
+make -j quantize
 ./quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0
 
 # run the examples as usual, specifying the quantized model file
@@ -437,7 +437,7 @@ First, make sure your graphics card driver provides support for Vulkan API.
 Now build `whisper.cpp` with Vulkan support:
 ```
 make clean
-make GGML_VULKAN=1
+make GGML_VULKAN=1 -j
 ```
 
 ## BLAS CPU support via OpenBLAS
@@ -636,7 +636,7 @@ The [stream](examples/stream) tool samples the audio every half a second and run
 More info is available in [issue #10](https://github.com/ggerganov/whisper.cpp/issues/10).
 
 ```bash
-make stream
+make stream -j
 ./stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
 ```
 
````