* Add OpenVINO support
* Fix OpenVINO test on Windows
* Expand OpenVINO support (remote models); add ONNX backend; tests
  Also update push_to_hub
* Move OV test to test_backends
* Update push_to_hub test monkeypatching
* Remove some dead code
* Skip multi-process tests for now
Incompatible with OpenVINO imports; still works independently
* Move export_optimized_onnx_model to backend.py
* Update __init__ to address the export_optimized_onnx_model move
* Remove dot in commit message
* Add PR description for export_optimized_onnx_model
* OpenVINO will override export=False; update tests
* Add dynamic quantization exporting; docs; benchmarks, etc.
* Require 4.41.0 for eval_strategy, etc.
* Restrict optimum-intel rather than optimum
* Use subfolder rather than relying only on file_name
Relies on the upcoming optimum and optimum-intel versions; this is expected to fail until then.
* Add link to OVBaseModel.from_pretrained
* Add tips pointing to the new efficiency docs
* Another pointer to the new efficiency docs
* Expand the benchmark details
* Update min. requirements to optimum 1.23.0 & optimum-intel 1.20.0
---------
Co-authored-by: Tom Aarsen <[email protected]>
docs/installation.md: 60 additions & 2 deletions
@@ -1,10 +1,14 @@
 # Installation
 
-We recommend **Python 3.8+**, **[PyTorch 1.11.0+](https://pytorch.org/get-started/locally/)**, and **[transformers v4.34.0+](https://github.com/huggingface/transformers)**. There are three options to install Sentence Transformers:
+We recommend **Python 3.8+**, **[PyTorch 1.11.0+](https://pytorch.org/get-started/locally/)**, and **[transformers v4.41.0+](https://github.com/huggingface/transformers)**. There are 5 extra options to install Sentence Transformers:
 * **Default:** This allows for loading, saving, and inference (i.e., getting embeddings) of models.
-* **Default and Training**: All of the above plus training.
+* **ONNX:** This allows for loading, saving, inference, optimizing, and quantizing of models using the ONNX backend.
+* **OpenVINO:** This allows for loading, saving, and inference of models using the OpenVINO backend.
+* **Default and Training**: Like **Default**, plus training.
 * **Development**: All of the above plus some dependencies for developing Sentence Transformers, see [Editable Install](#editable-install).
 
+Note that you can mix and match the various extras, e.g. ``pip install -U "sentence-transformers[train, onnx-gpu]"``
+
 ## Install with pip
 
 ```eval_rst
@@ -15,6 +19,24 @@ We recommend **Python 3.8+**, **[PyTorch 1.11.0+](https://pytorch.org/get-starte
 
       pip install -U sentence-transformers
 
+.. tab:: ONNX
+
+   For GPU and CPU:
+   ::
+
+      pip install -U "sentence-transformers[onnx-gpu]"
+
+   For CPU only:
+   ::
+
+      pip install -U "sentence-transformers[onnx]"
+
+.. tab:: OpenVINO
+
+   ::
+
+      pip install -U "sentence-transformers[openvino]"
+
 .. tab:: Default and Training
 
    ::
@@ -47,6 +69,24 @@ We recommend **Python 3.8+**, **[PyTorch 1.11.0+](https://pytorch.org/get-starte
@@ -55,10 +56,14 @@ Once you have `installed <installation.html>`_ Sentence Transformers, you can ea
    # [0.6660, 1.0000, 0.1411],
    # [0.1046, 0.1411, 1.0000]])
 
-With ``SentenceTransformer("all-MiniLM-L6-v2")`` we pick which `Sentence Transformer model <https://huggingface.co/models?library=sentence-transformers>`_ we load. In this example, we load `all-MiniLM-L6-v2 <https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2>`_, which is a MiniLM model finetuned on a large dataset of over 1 billion training pairs. Using `SentenceTransformer.similarity() <./package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.similarity>`_, we compute the similarity between all pairs of sentences. As expected, the similarity between the first two sentences (0.6660) is higher than the similarity between the first and the third sentence (0.1046) or the second and the third sentence (0.1411).
+With ``SentenceTransformer("all-MiniLM-L6-v2")`` we pick which `Sentence Transformer model <https://huggingface.co/models?library=sentence-transformers>`_ we load. In this example, we load `all-MiniLM-L6-v2 <https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2>`_, which is a MiniLM model finetuned on a large dataset of over 1 billion training pairs. Using :meth:`SentenceTransformer.similarity() <sentence_transformers.SentenceTransformer.similarity>`, we compute the similarity between all pairs of sentences. As expected, the similarity between the first two sentences (0.6660) is higher than the similarity between the first and the third sentence (0.1046) or the second and the third sentence (0.1411).
 
 Finetuning Sentence Transformer models is easy and requires only a few lines of code. For more information, see the `Training Overview <./sentence_transformer/training_overview.html>`_ section.
 
+.. tip::
+
+   Read `Sentence Transformer > Usage > Speeding up Inference <sentence_transformer/usage/efficiency.html>`_ for tips on how to speed up inference of models by up to 2x-3x.
+
 - **Model sizes**: it is recommended to filter away the large models that might not be feasible without excessive hardware.
 - **Experimentation is key**: models that perform well on the leaderboard do not necessarily do well on your tasks, it is **crucial** to experiment with various promising models.
+
+.. tip::
+
+   Read `Sentence Transformer > Usage > Speeding up Inference <./usage/efficiency.html>`_ for tips on how to speed up inference of models by up to 2x-3x.