1 file changed: +15 −15 lines

@@ -100,7 +100,7 @@ make -j
For a quick demo, simply run `make base.en`:

```text
- $ make base.en -j
+ $ make -j base.en

cc -I. -O3 -std=c11 -pthread -DGGML_USE_ACCELERATE -c ggml.c -o ggml.o
c++ -I. -I./examples -O3 -std=c++11 -pthread -c whisper.cpp -o whisper.o
@@ -224,26 +224,26 @@ ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
If you want some extra audio samples to play with, simply run:

```
- make samples -j
+ make -j samples
```

This will download a few more audio files from Wikipedia and convert them to 16-bit WAV format via `ffmpeg`.

You can download and run the other models as follows:

```
- make tiny.en -j
- make tiny -j
- make base.en -j
- make base -j
- make small.en -j
- make small -j
- make medium.en -j
- make medium -j
- make large-v1 -j
- make large-v2 -j
- make large-v3 -j
- make large-v3-turbo -j
+ make -j tiny.en
+ make -j tiny
+ make -j base.en
+ make -j base
+ make -j small.en
+ make -j small
+ make -j medium.en
+ make -j medium
+ make -j large-v1
+ make -j large-v2
+ make -j large-v3
+ make -j large-v3-turbo
```

## Memory usage
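
Each `make -j <model>` target above is shorthand for fetching the corresponding `ggml-<model>.bin` file and running a quick transcription. A rough manual equivalent, assuming the repository's `models/download-ggml-model.sh` script and the `main` example with its `-m`/`-f` flags (neither appears in this diff):

```bash
# build the project once, then fetch and try individual models by hand
make -j

# download a couple of the models listed above (script path is an assumption)
for model in tiny.en base.en; do
    bash ./models/download-ggml-model.sh "$model"
done

# transcribe a bundled sample (-m model path, -f 16-bit WAV input)
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```

Either route leaves the `models/ggml-*.bin` files that the quantization step in the next hunk starts from.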
@@ -265,7 +265,7 @@ Here are the steps for creating and using a quantized model:
```bash
# quantize a model with Q5_0 method
- make quantize -j
+ make -j quantize
./quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0

# run the examples as usual, specifying the quantized model file
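# (illustrative addition, not part of the diff: assumes the `main` example and its -m/-f flags)
./main -m models/ggml-base.en-q5_0.bin -f samples/jfk.wav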