
Commit 50e499a

Add docstring for precision

1 parent 7848583

1 file changed: +4 −0 lines

sentence_transformers/SentenceTransformer.py (4 additions, 0 deletions)

@@ -555,6 +555,10 @@ def encode_multi_process(
             If `prompt` is set, `prompt_name` is ignored.
         :param batch_size: Encode sentences with batch size
         :param chunk_size: Sentences are chunked and sent to the individual processes. If None, it determines a sensible size.
+        :param precision: The precision to use for the embeddings. Can be "float32", "int8", "uint8", "binary", or
+            "ubinary". All non-float32 precisions are quantized embeddings. Quantized embeddings are smaller in
+            size and faster to compute, but may have a lower accuracy. They are useful for reducing the size
+            of the embeddings of a corpus for semantic search, among other tasks. Defaults to "float32".
         :param normalize_embeddings: Whether to normalize returned vectors to have length 1. In that case,
             the faster dot-product (util.dot_score) instead of cosine similarity can be used.
         :return: 2d numpy array with shape [num_inputs, output_dimension]
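The size savings the new docstring describes can be sketched in plain NumPy. This is an illustrative approximation of int8 and binary quantization, not the library's actual quantization code; the array shapes and the min/max calibration scheme are assumptions for the example:

```python
import numpy as np

# Hypothetical float32 embeddings: 4 vectors of dimension 8.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((4, 8)).astype(np.float32)

# "int8"-style quantization: linearly map each value into [-128, 127]
# using the observed min/max range.
lo, hi = embeddings.min(), embeddings.max()
int8_emb = np.round((embeddings - lo) / (hi - lo) * 255 - 128).astype(np.int8)

# "binary"-style quantization: keep only the sign of each dimension,
# packing 8 dimensions into one byte.
binary_emb = np.packbits(embeddings > 0, axis=-1)

print(embeddings.nbytes)  # 4 * 8 * 4 bytes = 128
print(int8_emb.nbytes)    # 4 * 8 * 1 byte  = 32  (4x smaller)
print(binary_emb.nbytes)  # 4 * 1 byte      = 4   (32x smaller)
```

The 4x and 32x reductions come purely from the storage width per dimension, which is why quantized embeddings are attractive for large corpora in semantic search, at the cost of some accuracy.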

0 commit comments