@@ -248,7 +248,7 @@ Use either one of the following options
 
 * Add the following Maven Coordinates to the interpreter's library list
 
 ```bash
-com.johnsnowlabs.nlp:spark-nlp_2.11:2.2.0-rc2
+com.johnsnowlabs.nlp:spark-nlp_2.11:2.2.0-rc3
 ```
 
 * Add path to pre-built jar from [here](#pre-compiled-spark-nlp-and-spark-nlp-ocr) in the interpreter's library list making sure the jar is available to driver path
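Both options above amount to making the Spark NLP artifact visible to the driver and executors. As a minimal sketch, the same Maven coordinate can also be resolved at session startup through Spark's standard `spark.jars.packages` setting; the app name and master below are illustrative placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: resolve the Spark NLP artifact from Maven when the session starts.
// spark.jars.packages is a standard Spark config; the coordinate matches the
// updated version in the diff above.
val spark = SparkSession.builder()
  .appName("spark-nlp-demo")   // illustrative name
  .master("local[*]")          // illustrative master
  .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.11:2.2.0-rc3")
  .getOrCreate()
```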
-5- Now, all available **Spark Packages** are at your fingertips! Just search for **JohnSnowLabs:spark-nlp:version** where **version** stands for the library version such as: `1.8.4` or `2.2.0-rc2`
+5- Now, all available **Spark Packages** are at your fingertips! Just search for **JohnSnowLabs:spark-nlp:version** where **version** stands for the library version such as: `1.8.4` or `2.2.0-rc3`
@@ -297,7 +297,7 @@ lightPipeline.annotate("Hello world, please annotate my text")
 
 Spark NLP OCR Module is not included within Spark NLP. It is not an annotator and not an extension to Spark ML. You can include it with the following coordinates for Maven:
 
 ```bash
-com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.0-rc2
+com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.0-rc3
 ```
 
 ### Creating Spark datasets from PDF (To be used with Spark NLP)
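A hedged Scala sketch of the PDF-to-dataset step this heading introduces, based on the `OcrHelper` utility documented for the Spark NLP OCR module of this era; the input path is a placeholder and the exact method signature should be checked against the matching release:

```scala
import com.johnsnowlabs.nlp.util.io.OcrHelper

// Assumes an active SparkSession named `spark` (see the builder sketch above).
val ocrHelper = new OcrHelper()

// Reads the PDFs under the placeholder path and returns a DataFrame of
// extracted text that can feed Spark NLP's DocumentAssembler.
val data = ocrHelper.createDataset(spark, "path/to/pdfs")
data.show(5)
```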