Commit 6c5cdf8

Fix misspellings [skip test]
1 parent f0b4e8d commit 6c5cdf8

2 files changed: +4 -4 lines changed

README.md

Lines changed: 2 additions & 2 deletions
@@ -305,7 +305,7 @@ This is a cheatsheet for corresponding Spark NLP Maven package to Apache Spark /
 | 3.0/3.1/3.2/3.3 | `spark-nlp` | `spark-nlp-gpu` | `spark-nlp-aarch64` | `spark-nlp-m1` |
 | Start Function | `sparknlp.start()` | `sparknlp.start(gpu=True)` | `sparknlp.start(aarch64=True)` | `sparknlp.start(m1=True)` |

-NPTE: `M1` and `AArch64` are under `experimental` support. Access and support to these architectures are limited by the community and we had to build most of the dependencies by ourselves to make them compatible. We support these two architectures, however, they may not work in some enviroments.
+NOTE: `M1` and `AArch64` are under `experimental` support. Access and support to these architectures are limited by the community and we had to build most of the dependencies by ourselves to make them compatible. We support these two architectures, however, they may not work in some environments.

 ## Spark Packages

@@ -689,7 +689,7 @@ This script comes with the two options to define `pyspark` and `spark-nlp` versi

 4. Now you can attach your notebook to the cluster and use Spark NLP!

-NOTE: Databrick's runtimes support different Apache Spark major releases. Please make sure you choose the correct Spark NLP Maven pacakge name (Maven Coordinate) for your runtime from our [Pacakges Chetsheet](https://github.com/JohnSnowLabs/spark-nlp#packages-cheatsheet)
+NOTE: Databricks' runtimes support different Apache Spark major releases. Please make sure you choose the correct Spark NLP Maven package name (Maven Coordinate) for your runtime from our [Packages Cheatsheet](https://github.com/JohnSnowLabs/spark-nlp#packages-cheatsheet)

 ## EMR Cluster

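As context for the cheatsheet rows touched in the hunk above, here is a minimal sketch of how the start functions are typically used. The keyword flags come straight from the table in the diff; the surrounding script is an assumption for illustration and is not part of this commit.

```python
# Minimal sketch, assuming Spark NLP and PySpark are installed
# (e.g. pip install spark-nlp pyspark). The keyword flags mirror the
# cheatsheet rows shown in the diff above; enable at most one of them.
import sparknlp

# Default CPU session
spark = sparknlp.start()

# GPU package
# spark = sparknlp.start(gpu=True)

# Experimental architectures noted in the diff; as the NOTE says,
# they may not work in some environments.
# spark = sparknlp.start(aarch64=True)
# spark = sparknlp.start(m1=True)

print(sparknlp.version(), spark.version)
```
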
python/README.md

Lines changed: 2 additions & 2 deletions
@@ -305,7 +305,7 @@ This is a cheatsheet for corresponding Spark NLP Maven package to Apache Spark /
 | 3.0/3.1/3.2/3.3 | `spark-nlp` | `spark-nlp-gpu` | `spark-nlp-aarch64` | `spark-nlp-m1` |
 | Start Function | `sparknlp.start()` | `sparknlp.start(gpu=True)` | `sparknlp.start(aarch64=True)` | `sparknlp.start(m1=True)` |

-NPTE: `M1` and `AArch64` are under `experimental` support. Access and support to these architectures are limited by the community and we had to build most of the dependencies by ourselves to make them compatible. We support these two architectures, however, they may not work in some enviroments.
+NOTE: `M1` and `AArch64` are under `experimental` support. Access and support to these architectures are limited by the community and we had to build most of the dependencies by ourselves to make them compatible. We support these two architectures, however, they may not work in some environments.

 ## Spark Packages

@@ -689,7 +689,7 @@ This script comes with the two options to define `pyspark` and `spark-nlp` versi

 4. Now you can attach your notebook to the cluster and use Spark NLP!

-NOTE: Databrick's runtimes support different Apache Spark major releases. Please make sure you choose the correct Spark NLP Maven pacakge name (Maven Coordinate) for your runtime from our [Pacakges Chetsheet](https://github.com/JohnSnowLabs/spark-nlp#packages-cheatsheet)
+NOTE: Databricks' runtimes support different Apache Spark major releases. Please make sure you choose the correct Spark NLP Maven package name (Maven Coordinate) for your runtime from our [Packages Cheatsheet](https://github.com/JohnSnowLabs/spark-nlp#packages-cheatsheet)

 ## EMR Cluster

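Related to the Databricks note in the hunk above: the Maven coordinate it mentions is the package string you hand to Spark when creating a session. The sketch below is an assumption for illustration only; the Scala suffix (`_2.12`) and version (`4.2.0`) are example values, so take the real coordinate for your runtime from the Packages Cheatsheet linked in the diff.

```python
# Hypothetical illustration: starting a session with an explicit Maven
# coordinate instead of sparknlp.start(). The coordinate below
# (com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.0) is an example value, not
# something taken from this commit; consult the Packages Cheatsheet for
# the coordinate matching your Spark/Databricks runtime.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-nlp-maven-coordinate-example")
    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.0")
    .getOrCreate()
)

print(spark.version)
```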