@@ -221,7 +221,7 @@ Use either one of the following options

* Add the following Maven Coordinates to the interpreter's library list

```bash
-com.johnsnowlabs.nlp:spark-nlp_2.11:2.0.5
+com.johnsnowlabs.nlp:spark-nlp_2.11:2.0.6
```

* Add the path to the pre-built jar from [here](#pre-compiled-spark-nlp-and-spark-nlp-ocr) in the interpreter's library list, making sure the jar is available to the driver path
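Equivalently, the same dependency can be supplied when launching Spark from the command line instead of through Zeppelin's library list. A minimal sketch, assuming `spark-shell` is on `PATH`; the jar path is a placeholder, not a real location:

```bash
# Option 1: resolve Spark NLP from Maven at launch time
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.11:2.0.6

# Option 2: point at a pre-built jar that is visible to the driver
# (/path/to/spark-nlp.jar is a placeholder)
spark-shell --jars /path/to/spark-nlp.jar
```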
-5- Now, all available **Spark Packages** are at your fingertips! Just search for **JohnSnowLabs:spark-nlp:version** where **version** stands for the library version, such as `1.8.4` or `2.0.5`
+5- Now, all available **Spark Packages** are at your fingertips! Just search for **JohnSnowLabs:spark-nlp:version** where **version** stands for the library version, such as `1.8.4` or `2.0.6`
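Note that this search string uses the Spark Packages identifier (`JohnSnowLabs:spark-nlp:version`) rather than the full Maven coordinate shown elsewhere. A small sketch of how the identifier is assembled; the helper function is ours for illustration, not part of any API:

```bash
# Build the Spark Packages search string for a given library version.
spark_package_id() {
  printf 'JohnSnowLabs:spark-nlp:%s\n' "$1"
}

spark_package_id "2.0.6"   # prints JohnSnowLabs:spark-nlp:2.0.6
```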
@@ -297,7 +297,7 @@ lightPipeline.annotate("Hello world, please annotate my text")

The Spark NLP OCR Module is not included within Spark NLP. It is neither an annotator nor an extension to Spark ML. You can include it with the following coordinates for Maven:

```bash
-com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.0.5
+com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.0.6
```
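The core and OCR packages share the same group ID, Scala binary version, and release version, differing only in the artifact name. A minimal sketch that assembles either coordinate; the helper function is illustrative, not part of Spark NLP:

```bash
# Assemble groupId:artifactId_scalaBinaryVersion:version for Spark NLP artifacts.
maven_coordinate() {
  printf 'com.johnsnowlabs.nlp:%s_2.11:%s\n' "$1" "$2"
}

maven_coordinate spark-nlp 2.0.6       # prints com.johnsnowlabs.nlp:spark-nlp_2.11:2.0.6
maven_coordinate spark-nlp-ocr 2.0.6   # prints com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.0.6
```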
### Creating Spark datasets from PDF (To be used with Spark NLP)