Commit 721291b: Update README.md
Parent: 8aab44b

1 file changed: 15 additions, 15 deletions

README.md

````diff
@@ -100,7 +100,7 @@ make -j
 For a quick demo, simply run `make base.en`:
 
 ```text
-$ make base.en -j
+$ make -j base.en
 
 cc -I. -O3 -std=c11 -pthread -DGGML_USE_ACCELERATE -c ggml.c -o ggml.o
 c++ -I. -I./examples -O3 -std=c++11 -pthread -c whisper.cpp -o whisper.o
````
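The hunk above only reorders arguments: GNU make accepts options and goals in either order, so `make base.en -j` and `make -j base.en` behave identically, and the commit simply standardizes on the option-first style. A minimal sketch with a throwaway Makefile (the `/tmp/demo.mk` path and `hello` target are made up for illustration; assumes GNU make is installed):

```shell
# Create a one-target Makefile (hypothetical path and target, demo only).
printf 'hello:\n\t@echo built\n' > /tmp/demo.mk

# GNU make treats these two invocations the same: options such as -j may
# appear before or after the goal.
make -f /tmp/demo.mk hello -j
make -f /tmp/demo.mk -j hello
```

Both invocations print `built`; only the argument order differs.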
````diff
@@ -224,26 +224,26 @@ ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
 If you want some extra audio samples to play with, simply run:
 
 ```
-make samples -j
+make -j samples
 ```
 
 This will download a few more audio files from Wikipedia and convert them to 16-bit WAV format via `ffmpeg`.
 
 You can download and run the other models as follows:
 
 ```
-make tiny.en -j
-make tiny -j
-make base.en -j
-make base -j
-make small.en -j
-make small -j
-make medium.en -j
-make medium -j
-make large-v1 -j
-make large-v2 -j
-make large-v3 -j
-make large-v3-turbo -j
+make -j tiny.en
+make -j tiny
+make -j base.en
+make -j base
+make -j small.en
+make -j small
+make -j medium.en
+make -j medium
+make -j large-v1
+make -j large-v2
+make -j large-v3
+make -j large-v3-turbo
 ```
 
 ## Memory usage
````
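The twelve model targets touched by this hunk can also be driven from a small shell loop instead of being typed one by one. A sketch that only prints the build commands (the target list is taken verbatim from the diff; nothing else is assumed):

```shell
# Model targets from the README hunk above; echo the build commands
# instead of running them, so this is safe outside a whisper.cpp checkout.
models="tiny.en tiny base.en base small.en small medium.en medium large-v1 large-v2 large-v3 large-v3-turbo"
for m in $models; do
    echo "make -j $m"
done
```

Drop the `echo` to actually build each model from inside a whisper.cpp checkout.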
````diff
@@ -265,7 +265,7 @@ Here are the steps for creating and using a quantized model:
 
 ```bash
 # quantize a model with Q5_0 method
-make quantize -j
+make -j quantize
 ./quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0
 
 # run the examples as usual, specifying the quantized model file
````
