scores = model.predict([('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'),
                        ('What is the size of New York?', 'New York City is famous for the Metropolitan Museum of Art.')])
```

This returns a score between 0 and 1 that indicates how relevant the paragraph is to the given query.
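
Such scores can be used directly to rerank candidate passages. Below is a minimal, self-contained sketch; the checkpoint is one of the MS MARCO rerankers listed in the next section, and the query and passages are only illustrative:

```python
from sentence_transformers import CrossEncoder

# Any reranking Cross-Encoder works here; this checkpoint is one of the
# MS MARCO models listed below
model = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-6')

query = 'How many people live in Berlin?'
passages = ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
            'New York City is famous for the Metropolitan Museum of Art.']

# Score each (query, passage) pair and sort the passages by descending relevance
scores = model.predict([(query, passage) for passage in passages])
ranked = sorted(zip(scores, passages), reverse=True)
for score, passage in ranked:
    print(f'{score:.2f}\t{passage}')
```
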
For details on the usage, see [Applications - Information Retrieval](../examples/applications/information-retrieval/README.md)
### MS MARCO
[MS MARCO Passage Retrieval](https://github.com/microsoft/MSMARCO-Passage-Ranking) is a large dataset of real user queries from the Bing search engine, annotated with relevant text passages.
- **cross-encoder/ms-marco-TinyBERT-L-2** - MRR@10 on MS Marco Dev Set: 30.15
- **cross-encoder/ms-marco-TinyBERT-L-4** - MRR@10 on MS Marco Dev Set: 34.50
- **cross-encoder/ms-marco-TinyBERT-L-6** - MRR@10 on MS Marco Dev Set: 36.13
- **cross-encoder/ms-marco-electra-base** - MRR@10 on MS Marco Dev Set: 36.41
### SQuAD (QNLI)
QNLI is based on the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/) and was introduced by the [GLUE Benchmark](https://arxiv.org/abs/1804.07461). Given a passage from Wikipedia, annotators created questions that are answerable by that passage.
- **cross-encoder/qnli-distilroberta-base** - Accuracy on QNLI dev set: 90.96
- **cross-encoder/qnli-electra-base** - Accuracy on QNLI dev set: 93.21
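
A minimal sketch of applying one of these models; the question/passage pair is illustrative, and the model returns a single 0...1 score, analogous to the retrieval models above:

```python
from sentence_transformers import CrossEncoder

# One of the QNLI models listed above
model = CrossEncoder('cross-encoder/qnli-electra-base')

# Probability-like score that the passage answers the question (illustrative pair)
scores = model.predict([('What is the capital of France?',
                         'Paris is the capital and most populous city of France.')])
print(scores[0])  # close to 1 when the passage answers the question
```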
## NLI

Given two sentences, do they contradict each other, does one entail the other, or are they neutral? The following models were trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets.

- **cross-encoder/nli-distilroberta-base** - Accuracy on MNLI mismatched set: 83.98
- **cross-encoder/nli-roberta-base** - Accuracy on MNLI mismatched set: 87.47
- **cross-encoder/nli-deberta-base** - Accuracy on MNLI mismatched set: 88.08

```python
from sentence_transformers import CrossEncoder

# Any of the NLI models above can be used here
model = CrossEncoder('cross-encoder/nli-distilroberta-base')
scores = model.predict([('A man is eating pizza', 'A man eats something'),
                        ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])

# The NLI models output three scores per pair (contradiction, entailment,
# neutral); map the highest-scoring class to its label
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```