Commit 642af9e

Fixes for LLM release (#227)
* Fix various bugs from the initial release
* Add models folder marker
* Fix for leap
1 parent: 9045dc4

5 files changed (+12 −3 lines)


setup.py

Lines changed: 6 additions & 0 deletions

@@ -79,6 +79,12 @@
     "onnx==1.16.0",
     "onnxruntime==1.18.0",
     "numpy==1.26.4",
+    "tqdm",
+    "accelerate",
+    "py-cpuinfo",
+    "sentencepiece",
+    "datasets",
+    "fastapi",
     "uvicorn[standard]",
 ],
 },
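Note that the six new dependencies are added unpinned, unlike the existing exact-version pins for the ONNX and numpy packages. A minimal sketch of the resulting dependency list, abbreviated to just the entries visible in this diff (not the project's full setup.py):

```python
# Abbreviated sketch of the install_requires list after this commit.
# Only entries visible in the diff are shown; the real setup.py has more.
install_requires = [
    "onnx==1.16.0",         # pinned: pip must install exactly this version
    "onnxruntime==1.18.0",  # pinned
    "numpy==1.26.4",        # pinned
    "tqdm",                 # newly added, unpinned: any available version
    "accelerate",
    "py-cpuinfo",
    "sentencepiece",
    "datasets",
    "fastapi",
    "uvicorn[standard]",    # the "standard" extra pulls in optional uvicorn deps
]

# Unpinned entries carry no "==" version specifier.
unpinned = [dep for dep in install_requires if "==" not in dep]
```

Leaving the new entries unpinned trades reproducibility for flexibility: pip is free to resolve any compatible release, whereas the `==` pins lock exact versions.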

src/turnkeyml/llm/README.md

Lines changed: 2 additions & 2 deletions

@@ -102,9 +102,9 @@ You can also try Phi-3-Mini-128k-Instruct with the following commands:

 > Note: no other models or devices are officially supported by `lemonade` on OGA at this time. Contributions appreciated!

-## Install Ryzen AI NPU
+## Install RyzenAI NPU

-To run your LLMs on Ryzen AI NPU, first install and set up the `ryzenai-transformers` conda environment (see instructions [here](https://github.com/amd/RyzenAI-SW/tree/main/example/transformers)). Then, install `lemonade` into `ryzenai-transformers`. The `ryzenai-npu-load` Tool will become available in that environment.
+To run your LLMs on RyzenAI NPU, first install and set up the `ryzenai-transformers` conda environment (see instructions [here](https://github.com/amd/RyzenAI-SW/blob/main/example/transformers/models/llm/docs/README.md)). Then, install `lemonade` into `ryzenai-transformers`. The `ryzenai-npu-load` Tool will become available in that environment.

 You can try it out with: `lemonade -i meta-llama/Llama-2-7b-chat-hf ryzenai-npu-load --device DEVICE llm-prompt -p "Hello, my thoughts are"`

src/turnkeyml/llm/leap.py

Lines changed: 2 additions & 0 deletions

@@ -128,6 +128,8 @@ def from_pretrained(
     checkpoint != "TheBloke/Llama-2-7b-Chat-fp16"
     and checkpoint != "meta-llama/Llama-2-7b-chat-hf"
     and checkpoint != "microsoft/Phi-3-mini-4k-instruct"
+    and checkpoint != "meta-llama/Meta-Llama-3-8B-Instruct"
+    and checkpoint != "meta-llama/Meta-Llama-3-8B"
 ):
     _raise_not_supported(recipe, checkpoint)
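The hunk above grows a chain of `!=` comparisons each time a checkpoint is supported; the same allow-list check can be expressed as a set membership test. A minimal sketch, assuming names `SUPPORTED_CHECKPOINTS` and `check_checkpoint` that are illustrative only (the real `leap.py` inlines the chained comparisons, and its `_raise_not_supported` helper differs from the stub here):

```python
# Illustrative allow-list containing the checkpoints named in this diff,
# including the two Meta-Llama-3 entries added by this commit.
SUPPORTED_CHECKPOINTS = {
    "TheBloke/Llama-2-7b-Chat-fp16",
    "meta-llama/Llama-2-7b-chat-hf",
    "microsoft/Phi-3-mini-4k-instruct",
    "meta-llama/Meta-Llama-3-8B-Instruct",
    "meta-llama/Meta-Llama-3-8B",
}


def _raise_not_supported(recipe: str, checkpoint: str) -> None:
    # Illustrative stub; the real helper lives in turnkeyml's leap module.
    raise ValueError(f"Checkpoint {checkpoint!r} is not supported by recipe {recipe!r}")


def check_checkpoint(recipe: str, checkpoint: str) -> None:
    # One membership test replaces the chain of "checkpoint != ..." comparisons,
    # so supporting a new model only means adding one line to the set.
    if checkpoint not in SUPPORTED_CHECKPOINTS:
        _raise_not_supported(recipe, checkpoint)


check_checkpoint("example-recipe", "meta-llama/Meta-Llama-3-8B")  # passes silently
```

With the set-based form, commits like this one shrink to a single added line per new checkpoint.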

(new file)

Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+This directory is where your OGA model folders go.

src/turnkeyml/version.py

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-__version__ = "4.0.1"
+__version__ = "4.0.2"
