Releases: JohnSnowLabs/spark-nlp
6.1.4
📢 Spark NLP 6.1.4: Advancing Multimodal Workflows with Reader2Image
We are excited to announce the release of Spark NLP 6.1.4!
This version introduces a powerful new annotator, Reader2Image, which extends Spark NLP’s universal ingestion capabilities to embedded images across a wide range of document formats. With this release, Spark NLP users can now seamlessly integrate text and image processing in the same pipeline, unlocking new opportunities for vision-language modeling (VLM), multimodal search, and document understanding.
🔥 Highlights
- New Reader2Image Annotator: Extract and structure image content directly from documents like PDFs, Word, PowerPoint, Excel, HTML, Markdown, and Email files.
- Multimodal Pipeline Expansion: Build workflows that combine text, tables, and now images for comprehensive document AI applications.
- Consistent Structured Output: Access image metadata (filename, dimensions, channels, mode) alongside binary image data in Spark DataFrames, fully compatible with other visual annotators.
🚀 New Features & Enhancements
Document Ingestion
Reader2Image Annotator
A new multimodal annotator designed to parse image content embedded in structured documents. Supported formats include:
- PDFs
- Word (.doc/.docx)
- Excel (.xls/.xlsx)
- PowerPoint (.ppt/.pptx)
- HTML & Markdown (.md)
- Email files (.eml, .msg)
Output Fields:
- File name
- Image dimensions (height, width)
- Number of channels
- Mode
- Binary image data
- Metadata
This enables seamless integration with vision-language models (VLMs), multimodal embeddings, and downstream Spark NLP annotators, all within the same distributed pipeline.
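The binary image data column carries the raw bytes of each embedded image, while the metadata fields describe it. As an illustration of the kind of metadata involved (a standalone sketch, not part of the Spark NLP API), the dimensions of a PNG can be read straight from its header:

```python
import struct

def png_dimensions(data: bytes) -> tuple[int, int]:
    """Read width and height from raw PNG bytes.

    Layout: 8-byte signature, then the IHDR chunk:
    length(4) + type(4) + width(4) + height(4) + ...
    """
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG")
    width, height = struct.unpack(">II", data[16:24])
    return width, height

# Minimal IHDR prefix of a 640x480 PNG, constructed here for illustration only.
header = (b"\x89PNG\r\n\x1a\n" + struct.pack(">I", 13) + b"IHDR"
          + struct.pack(">IIBBBBB", 640, 480, 8, 2, 0, 0, 0))
print(png_dimensions(header))  # -> (640, 480)
```

In a real pipeline you would read these values from the annotator's structured output columns rather than parsing bytes yourself.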
🐛 Bug Fixes
- None
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
⚙️ Installation
Python
pip install spark-nlp==6.1.4
Spark Packages
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.4
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.4
Apple Silicon (M1 & M2)
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.4
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.4
Maven
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.1.4</version>
</dependency>
- GPU: spark-nlp-gpu_2.12:6.1.4
- Apple Silicon: spark-nlp-silicon_2.12:6.1.4
- AArch64: spark-nlp-aarch64_2.12:6.1.4
FAT JARs
- CPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.4.jar
- GPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.4.jar
- M1/M2: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.4.jar
- AArch64: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.4.jar
📊 What’s Changed
- [SPARKNLP-1261] Introducing Reader2Image Annotator (#14658) by @danilojsl
Full Changelog: 6.1.3...6.1.4
6.1.3
📢 Spark NLP 6.1.3: NerDL Graph Checker, Reader2Doc Enhancements, Ranking Finisher
We are pleased to announce Spark NLP 6.1.3, introducing a new graph validation annotator for NER training, enhancements to Reader2Doc for flexible document handling, and a new ranking finisher for AutoGGUFReranker outputs. This release focuses on improving training robustness, document processing flexibility, and retrieval ranking capabilities.
🔥 Highlights
- New NerDLGraphChecker annotator to validate NER training graphs before training starts.
- Reader2Doc enhancements with options for consolidated output and filtering.
- New AutoGGUFRerankerFinisher for ranking, filtering, and normalizing reranker outputs.
🚀 New Features & Enhancements
Named Entity Recognition (NER)
NerDLGraphChecker:
A new annotator that validates whether a suitable NerDL graph is available for a given training dataset before embedding computation or training starts. This helps avoid wasted computation in custom training scenarios. (Link to notebook)
- Must be placed before embedding or NerDLApproach annotators.
- Requires token and label columns in the dataset.
- Automatically extracts embedding dimensions from the pipeline to validate graph compatibility.
Document Processing
Reader2Doc Enhancements:
New configuration options provide more control over output formatting:
- outputAsDocument: Concatenates all sentences into a single document.
- excludeNonText: Filters out non-textual elements (e.g., tables, images) from the document.
Ranking & Retrieval
AutoGGUFRerankerFinisher:
A finisher for processing AutoGGUFReranker outputs, adding advanced ranking and filtering capabilities (Link to notebook):
- Top-k document selection.
- Score threshold filtering.
- Min-max score normalization (0–1 range).
- Sorting by relevance score.
- Rank assignment in metadata while preserving document structure.
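To make these operations concrete, here is a small standalone sketch of the steps listed above (a conceptual illustration only, not the AutoGGUFRerankerFinisher API; the dictionary layout is hypothetical):

```python
def finish_reranker(docs, top_k=3, threshold=0.0):
    """Sketch of reranker post-processing: threshold filter,
    min-max normalization, sort by relevance, top-k, rank metadata."""
    # 1. Score-threshold filtering
    kept = [d for d in docs if d["score"] >= threshold]
    if not kept:
        return []
    # 2. Min-max normalization into the 0-1 range
    scores = [d["score"] for d in kept]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # avoid division by zero when all scores tie
    for d in kept:
        d["score"] = (d["score"] - lo) / span
    # 3. Sort by relevance score, select top-k, record rank in metadata
    kept.sort(key=lambda d: d["score"], reverse=True)
    for rank, d in enumerate(kept[:top_k], start=1):
        d["metadata"] = {"rank": rank}
    return kept[:top_k]

docs = [{"score": 0.9, "text": "a"}, {"score": 0.2, "text": "b"},
        {"score": 0.5, "text": "c"}, {"score": -0.1, "text": "d"}]
print(finish_reranker(docs, top_k=2, threshold=0.0))
```

The real finisher performs these steps on annotation columns inside a Spark DataFrame while preserving the document structure.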
🐛 Bug Fixes
None.
❤️ Community Support
- Slack Live discussion with the Spark NLP community and team
- GitHub Bug reports, feature requests, and contributions
- Discussions Share ideas and engage with other community members
- Medium Spark NLP technical articles
- JohnSnowLabs Medium Official blog
- YouTube Spark NLP tutorials and demos
Installation
Python
pip install spark-nlp==6.1.3
Spark Packages
spark-nlp on Apache Spark 3.0.x–3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.3
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.3
Apple Silicon (M1 & M2)
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.3
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.3
Maven
spark-nlp:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.1.3</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>6.1.3</version>
</dependency>
spark-nlp-silicon:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-silicon_2.12</artifactId>
<version>6.1.3</version>
</dependency>
spark-nlp-aarch64:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>6.1.3</version>
</dependency>
FAT JARs
- CPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.3.jar
- GPU: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.3.jar
- M1: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.3.jar
- AArch64: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.3.jar
What's Changed
Full Changelog: 6.1.2...6.1.3
6.1.2
📢 Spark NLP 6.1.2: AutoGGUFReranker and AutoGGUF improvements
We are excited to announce Spark NLP 6.1.2, which enhances AutoGGUF model support and introduces a brand-new reranking annotator based on llama.cpp LLMs. This release also brings fixes for the AutoGGUFVisionModel annotator and improves CUDA compatibility for AutoGGUF models.
🔥 Highlights
- New AutoGGUFReranker annotator for advanced LLM-based reranking in information retrieval and retrieval-augmented generation (RAG) pipelines.
🚀 New Features & Enhancements
Large Language Models (LLMs)
AutoGGUFReranker
A new annotator for reranking candidate results using AutoGGUF-based LLM embeddings. This enables more accurate ranking in retrieval pipelines, benefiting applications such as search, RAG, and question answering. (Link to notebook)
🐛 Bug Fixes
- Fixed Python initialization errors in AutoGGUFVisionModel.
- Using save for AutoGGUF models now supports more file protocols.
- Ensured better GPU support for AutoGGUF annotators on a broader range of CUDA devices.
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
Installation
Python
pip install spark-nlp==6.1.2
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2
Apple Silicon (M1 & M2)
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.1.2</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>6.1.2</version>
</dependency>
spark-nlp-silicon:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-silicon_2.12</artifactId>
<version>6.1.2</version>
</dependency>
spark-nlp-aarch64:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>6.1.2</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.2.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.2.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.2.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.2.jar
What's Changed
- #14649 by @prabod
- #14650 by @DevinTDHa
Full Changelog: 6.1.1...6.1.2
6.1.1
📢 Spark NLP 6.1.1: Enhanced LLM Performance and Expanded Data Ingestion Capabilities
We are thrilled to announce Spark NLP 6.1.1, a focused release that delivers significant performance improvements and enhanced functionality for large language models and universal data ingestion. This release continues our commitment to providing state-of-the-art AI capabilities within the native Spark ecosystem, with optimized inference performance and expanded multimodal support.
🔥 Highlights
- Performance Boost for llama.cpp models: Inference optimizations in AutoGGUFModel and AutoGGUFEmbeddings deliver improvements for large language model workflows on GPU.
- Multimodal Vision Models Restored: The AutoGGUFVisionModel annotator is back with full functionality and the latest SOTA VLMs, enabling sophisticated vision-language processing capabilities.
- Enhanced Table Processing: The new Reader2Table annotator streamlines tabular data extraction from multiple document formats with seamless pipeline integration.
- Upgraded OpenVINO Backend: We upgraded our OpenVINO backend to 2025.02 and added hyperthreading configuration options to maximize performance on multi-core systems.
🚀 New Features & Enhancements
Large Language Models (LLMs)
- Optimized AutoGGUFModel Performance: We improved the inference of llama.cpp models and achieved a 10% performance increase for AutoGGUFModel on GPU.
- Restored AutoGGUFVisionModel: The multimodal vision model annotator is fully operational again, enabling powerful vision-language processing capabilities. Users can now process images alongside text for comprehensive multimodal AI applications while using the latest SOTA vision-language models.
- Enhanced Model Compatibility: AutoGGUFModel can now seamlessly load the language model components from pretrained AutoGGUFVisionModel instances, providing greater flexibility in model deployment and usage. (Link to notebook)
- Robust Model Loading: Pretrained AutoGGUF-based annotators now load despite the inclusion of deprecated parameters, ensuring broader compatibility.
- Updated Default Models: All AutoGGUF annotators now use more recent and capable pretrained models:
| Annotator | Default pretrained model |
|---|---|
| AutoGGUFModel | Phi_4_mini_instruct_Q4_K_M_gguf |
| AutoGGUFEmbeddings | Qwen3_Embedding_0.6B_Q8_0_gguf |
| AutoGGUFVisionModel | Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf |
Document Ingestion
Reader2Table Annotator: This powerful new annotator provides a streamlined interface for extracting and processing tabular data from various document formats (Link to notebook). It offers:
- Unified API for interacting with Spark NLP readers
- Enhanced flexibility through reader-specific configurations
- Improved maintainability and scalability for data loading workflows
- Support for multiple formats including HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), Markdown (.md), and CSV (.csv)
Performance Optimizations
- OpenVINO Upgrade: We upgraded the OpenVINO backend to 2025.02 and added comprehensive hyperthreading configuration options, enabling users to optimize performance on multi-core systems by fine-tuning thread allocation and CPU utilization.
🐛 Bug Fixes
None
❤️ Community Support
- Slack: For live discussion with the Spark NLP community and the team.
- GitHub: Bug reports, feature requests, and contributions.
- Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium: Spark NLP articles.
- JohnSnowLabs official Medium
- YouTube: Spark NLP video tutorials.
Installation
Python
pip install spark-nlp==6.1.1
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.1.1</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>6.1.1</version>
</dependency>
spark-nlp-silicon:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-silicon_2.12</artifactId>
<version>6.1.1</version>
</dependency>
spark-nlp-aarch64:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>6.1.1</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.1.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.1.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.1.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.1.jar
What's Changed
- #14641 by @prabod
- #14640 by @danilojsl
- #14644 by @DevinTDHa and @C-K-Loan
Full Changelog: 6.1.0...6.1.1
6.1.0
📢 Spark NLP 6.1.0: State-of-the-art LLM Capabilities and Advancing Universal Ingestion
We are excited to announce Spark NLP 6.1.0, another milestone for building scalable, distributed AI pipelines! This major release significantly enhances our capabilities for state-of-the-art multimodal and large language models and universal data ingestion. Upgrade Spark NLP to 6.1.0 to improve both usability and performance across ingestion, inference, and multimodal processing pipelines, all within the native Spark ecosystem.
🔥 Highlights
- Upgraded llama.cpp Integration: We've updated our llama.cpp backend to tag b5932, which supports inference with the latest generation of LLMs.
- Unified Document Ingestion with Reader2Doc: Introducing a new annotator that streamlines the process of loading and integrating diverse file formats (PDFs, Word, Excel, PowerPoint, HTML, Text, Email, Markdown) directly into Spark NLP pipelines with a unified and flexible interface.
- Support for Phi-4: Spark NLP now natively supports the Phi-4 model, allowing users to leverage its advanced reasoning capabilities.
🚀 New Features & Enhancements
Large Language Models (LLMs)
- llama.cpp Upgrade: Our llama.cpp backend has been upgraded to version b5932. This update enables native inference for the newest LLMs, such as Gemma 3 and Phi-4, ensuring broader model compatibility and improved performance.
  - NOTE: We are still in the process of upgrading our multimodal AutoGGUFVisionModel annotator to the latest backend. This means that this annotator will not be available in this version. As a workaround, please use version 6.0.5 of Spark NLP.
- Phi-4 Model Support: Spark NLP now integrates the Phi-4 model, an advanced open model trained on a blend of synthetic data, filtered public domain content, and academic Q&A datasets. This integration enables sophisticated reasoning capabilities directly within Spark NLP. (Link to notebook)
Document Ingestion
Reader2Doc Annotator: This new annotator provides a simplified, unified interface for integrating various Spark NLP readers. It supports a wide range of formats, including PDFs, plain text, HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), email files (.eml, .msg), and Markdown (.md).
- Using this annotator, you can read all these different formats into Spark NLP documents, making them directly accessible in all your Spark NLP pipelines. This significantly reduces boilerplate code and enhances flexibility in data loading workflows, making it easier to scale and switch between data sources.
Let's use a code example to see how easy it is to use:
reader2doc = Reader2Doc() \
    .setContentType("application/pdf") \
    .setContentPath("./pdf-files") \
    .setOutputCol("document")

# other NLP stages in `nlp_stages`
pipeline = Pipeline(stages=[reader2doc] + nlp_stages)

# Reader2Doc reads its input from setContentPath, so an empty DataFrame suffices
model = pipeline.fit(empty_df)
result_df = model.transform(empty_df)
Check out our full example notebook to see it in action.
🐛 Bug Fixes
- HuggingFace OpenVINO Notebook for Qwen2VL: Addressed and fixed issues in the notebook related to the OpenVINO conversion of the Qwen2VL model, ensuring smoother functionality.
❤️ Community Support
- Slack: For live discussion with the Spark NLP community and the team.
- GitHub: Bug reports, feature requests, and contributions.
- Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium: Spark NLP articles.
- JohnSnowLabs official Medium
- YouTube: Spark NLP video tutorials.
Installation
Python
pip install spark-nlp==6.1.0
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0
Apple Silicon (M1 & M2)
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.1.0</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>6.1.0</version>
</dependency>
spark-nlp-silicon:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-silicon_2.12</artifactId>
<version>6.1.0</version>
</dependency>
spark-nlp-aarch64:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>6.1.0</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.1.0.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.1.0.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.1.0.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.1.0.jar
What's Changed
- Update HuggingFace_OpenVINO_in_Spark_NLP_Qwen2VL.ipynb #14631 by @AbdullahMubeenAnwar
- Sparknlp 1189 Introducing Phi4 #14606 by @prabod
- SPARKNLP-1259 Introducing Reader2Doc Annotator #14632 by @danilojsl
- [SPARKNLP-1194] Upgrade jsl-llamacpp to newest version #14633 by @DevinTDHa
- Add telemetry to github actions [skip-test] #14568 by @KshitizGIT
Full Changelog: 6.0.5...6.1.0
6.0.5
📢 Spark NLP 6.0.5: Enhanced Microsoft Fabric Integration & Markdown Processing
We're thrilled to announce the release of Spark NLP 6.0.5! This version introduces a new Markdown Reader, enabling direct processing of Markdown files into structured Spark DataFrames for more diverse NLP workflows. We have also enhanced Microsoft Fabric integration, allowing for seamless model downloads from Lakehouse containers.
🔥 Highlights
- New Markdown Reader: Introducing the new MarkdownReader for effortlessly parsing Markdown files into structured Spark DataFrames, paving the way for advanced content analysis and NLP on Markdown content.
- Enhanced Microsoft Fabric Support: Download models directly from Microsoft Fabric Lakehouse containers, streamlining your NLP deployments in the Fabric environment.
🚀 New Features & Enhancements
- New MarkdownReader Annotator: Introducing the MarkdownReader, a powerful new feature that allows you to read and parse Markdown files directly into a structured Spark DataFrame. This enables efficient processing and analysis of Markdown content for various NLP applications. This reader can also be used automatically through our Partition annotator. (Link to notebook)
partitioner = Partition(content_type="text/markdown").partition(md_directory)
- Microsoft Fabric Integration: Spark NLP now supports downloading models from Microsoft Fabric Lakehouse containers, providing a more integrated and efficient workflow for users leveraging Microsoft Fabric. This enhancement ensures smoother model access and deployment within the Fabric ecosystem. For example, you can define the path to our pretrained models in Spark like so:
from pyspark import SparkConf

conf = SparkConf()
conf.set("spark.jsl.settings.pretrained.cache_folder",
         "abfss://[email protected]/lakehouse_folder.Lakehouse/Files/my_models")
🐛 Bug Fixes
We performed crucial maintenance updates to all of our example notebooks, ensuring that they are reproducible and properly displayed in GitHub.
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
⚙️ Installation
Python
#PyPI
pip install spark-nlp==6.0.5
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.5
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.5
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.5
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.5
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.0.5</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>6.0.5</version>
</dependency>
spark-nlp-silicon:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-silicon_2.12</artifactId>
<version>6.0.5</version>
</dependency>
spark-nlp-aarch64:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>6.0.5</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.0.5.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.0.5.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.0.5.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.0.5.jar
What's Changed
Full Changelog: 6.0.4...6.0.5
6.0.4
📢 Spark NLP 6.0.4: MiniLMEmbeddings, DataFrame Optimization, and Enhanced PDF Processing
We are excited to announce the release of Spark NLP 6.0.4! This version brings advancements in text embeddings with the introduction of the MiniLM family, Spark DataFrame optimizations, and enhanced PDF document parsing. Upgrade to 6.0.4 to leverage these cutting-edge features and expand your NLP capabilities at scale.
Stay updated with our latest examples and tutorials by visiting our Medium - Spark NLP blog!
🔥 Highlights
- Introducing MiniLMEmbeddings: Support for the efficient and powerful MiniLMEmbeddings models, providing state-of-the-art text representations.
- New DataFrameOptimizer: A new transformer to streamline and optimize Spark DataFrame operations, offering configurable repartitioning, caching, and persistence options.
- Advanced PDF Reader Features: Enhancements to the PDF Reader with extractCoordinates for spatial metadata, normalizeLigatures for improved text consistency, and a new exception column for enhanced fault tolerance.
🚀 New Features & Enhancements
Advanced Text Embeddings
This release introduces a new family of efficient text embedding models:
- MiniLMEmbeddings: Support for the MiniLMEmbeddings annotator, enabling the use of MiniLM models for generating highly efficient and effective sentence embeddings. These models are designed to provide strong performance while being significantly smaller and faster than larger alternatives, making them ideal for a wide range of NLP tasks requiring compact and powerful text representations. (Link to notebook)
Spark DataFrame Optimization
- DataFrameOptimizer: Introducing the new DataFrameOptimizer transformer, designed to enhance the performance and manageability of Spark DataFrames within your NLP pipelines. (Link to notebook)
- Configurable Repartitioning: Allows for automatic repartitioning of DataFrames, ensuring optimal data distribution for downstream processing.
- Optional Caching: Supports DataFrame caching (doCache) to significantly speed up iterative computations.
- Persistent Output: Adds robust support for persisting DataFrames to disk in various formats (csv, json, parquet) with custom writer options via outputOptions.
- Schema Preservation: Efficiently preserves the original DataFrame schema, making it a seamless utility for complex Spark NLP pipelines.
Enhanced PDF Document Processing
The PDF Reader and PdfToText transformer have been significantly improved for more comprehensive and fault-tolerant document parsing. (Link to notebook)
- Spatial Metadata Extraction (extractCoordinates): A new configurable parameter extractCoordinates in PdfToText and the PDF Reader. When enabled, this outputs detailed spatial metadata (text position and dimensions) for each character in the PDF.
- Ligature Normalization (normalizeLigatures): When extractCoordinates is enabled, the normalizeLigatures option ensures that ligature characters (e.g., ﬁ, ﬂ, œ) are automatically normalized to their decomposed forms (fi, fl, oe).
- Fault Tolerance with Exception Column: A new exception output column has been introduced to capture and log any processing errors encountered while handling individual PDF documents.
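The idea behind ligature normalization can be sketched in plain Python (an illustration of the concept, not Spark NLP's implementation; note that œ has no Unicode compatibility decomposition, so it needs an explicit mapping on top of NFKC):

```python
import unicodedata

# Ligatures with no Unicode compatibility decomposition need an explicit map.
EXTRA_LIGATURES = str.maketrans({"\u0153": "oe", "\u0152": "OE"})

def normalize_ligatures(text: str) -> str:
    """Decompose typographic ligatures into their constituent letters."""
    # NFKC handles presentation-form ligatures such as U+FB01 (fi) and U+FB02 (fl).
    text = unicodedata.normalize("NFKC", text)
    return text.translate(EXTRA_LIGATURES)

print(normalize_ligatures("ef\ufb01cient work\ufb02ow"))  # -> efficient workflow
```

This matters downstream: search and tokenization behave more predictably when "ﬁ" and "fi" are the same two characters.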
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
⚙️ Installation
Python
#PyPI
pip install spark-nlp==6.0.4
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.4
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.4
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.4
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.4
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.0.4</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>6.0.4</version>
</dependency>
spark-nlp-silicon:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-silicon_2.12</artifactId>
<version>6.0.4</version>
</dependency>
spark-nlp-aarch64:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>6.0.4</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.0.4.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.0.4.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.0.4.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.0.4.jar
What's Changed
- Sparknlp 282 Introducing MiniLMEmbeddings #14610 by @prabod
- [SPARKNLP-1086] Introducing DataFrameOptimizer #14607 by @danilojsl
- [SPARKNLP-1161] Adding features to PDF Reader #14596 by @danilojsl
Full Changelog: 6.0.3...6.0.4
6.0.3
📢 Spark NLP 6.0.3: Multimodal E5-V Embeddings and Enhanced Document Partitioning
We are excited to announce the release of Spark NLP 6.0.3! This version introduces significant advancements in multimodal capabilities and further refines document processing workflows. Upgrade to 6.0.3 to leverage these cutting-edge features and expand your NLP and vision task capabilities at scale.
🔥 Highlights
- Introducing E5-V Universal Multimodal Embeddings: Support for `E5VEmbeddings`, enabling universal multimodal embeddings with Multimodal Large Language Models (MLLMs). It can express semantic similarity between texts, images, or a combination of both.
- Enhanced Document Partitioning: Improvements to the `Partition` and `PartitionTransformer` annotators with new character- and title-based chunking strategies.
- New XML Reader: Added `sparknlp.read().xml()` and integrated XML support into the `Partition` annotator for streamlined XML document processing.
🚀 New Features & Enhancements
E5-V Multimodal Embeddings
This release further boosts Spark NLP's multimodal processing power with the integration of E5-V. `E5VEmbeddings` is designed to adapt MLLMs for achieving universal multimodal embeddings. It leverages MLLMs with prompts to effectively bridge the modality gap between different types of inputs, demonstrating strong performance in multimodal embeddings even without fine-tuning. (Link to notebook)
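Downstream of the annotator, comparing a text embedding with an image embedding typically reduces to cosine similarity between vectors. A minimal pure-Python sketch, independent of Spark NLP's API (the toy vectors stand in for real E5-V output):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for a text embedding and an image embedding
text_vec = [0.1, 0.9, 0.2]
image_vec = [0.2, 0.8, 0.1]
print(round(cosine_similarity(text_vec, image_vec), 3))  # 0.987
```

A score near 1.0 indicates the text and image are semantically close in the shared embedding space.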
Enhanced Unstructured Document Processing
The `Partition` and `PartitionTransformer` components now include additional chunking strategies and enhancements that divide content into meaningful units based on the document's structure or character count.
- New Chunking Strategies (Link to notebook)
  - Character Number Strategy (`maxCharacters`): Split documents by number of characters.
  - Title-Based Chunking Strategy (`byTitle`): Split documents at titles. Additional settings:
    - Soft Chunking Limit (`newAfterNChars`): Allows for early section breaks before reaching the `maxCharacters` threshold.
    - Contextual Overlap (`overlapAll`): Adds trailing context from the previous chunk to the next, improving semantic continuity.
- Enhancements
  - Page Boundary Splitting: Respects `pageNumber` metadata and starts a new section when a page changes.
  - Title Inclusion Behavior: Ensures titles are embedded within the following content rather than forming isolated chunks.
- New XML Reader: This release introduces a new feature that enables reading and parsing XML files into a structured Spark DataFrame. (Link to notebook)
  - Added `sparknlp.read().xml()`: This method accepts file paths of XML content.
  - Use in Partition: XML content can now be processed using the `Partition` annotator by setting `content_type = "application/xml"`.
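The character-based strategy above can be pictured with a simplified pure-Python sketch of the semantics; this is not Spark NLP's implementation, and `overlap` here is a character count standing in for the `overlapAll` behavior:

```python
def chunk_by_characters(elements, max_characters=100, new_after_n_chars=None, overlap=0):
    """Greedily pack text elements into chunks of at most max_characters.
    Soft-break once a chunk reaches new_after_n_chars, and carry `overlap`
    trailing characters into the next chunk for semantic continuity."""
    soft_limit = new_after_n_chars or max_characters
    chunks, current = [], ""
    for el in elements:
        candidate = (current + " " + el).strip()
        if current and (len(candidate) > max_characters or len(current) >= soft_limit):
            chunks.append(current)
            # Seed the next chunk with trailing context from this one
            current = (current[-overlap:] + " " + el).strip() if overlap else el
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

parts = ["Spark NLP ingests documents.", "Chunks feed embeddings.", "Overlap keeps context."]
for c in chunk_by_characters(parts, max_characters=40, overlap=10):
    print(c)
```

Note how the second and third chunks each begin with the last few characters of the previous chunk, which is what keeps adjacent chunks semantically connected for downstream embedding.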
🐛 Bug Fixes
- @thec0dewriter fixed a typo in our Excel reader notebook (#14591). Thanks a lot!
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
⚙️ Installation
Python
#PyPI
pip install spark-nlp==6.0.3
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.3
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.3
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.3
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.3
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.0.3</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>6.0.3</version>
</dependency>
spark-nlp-silicon:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-silicon_2.12</artifactId>
<version>6.0.3</version>
</dependency>
spark-nlp-aarch64:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>6.0.3</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.0.3.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.0.3.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.0.3.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.0.3.jar
What's Changed
- [SPARKNLP-1138] Adding semantic chunking to partition #14593 by @danilojsl
- [SPARKNLP-1163] Adding title chunking strategy #14594 by @danilojsl
- SparkNLP 1143 - Introducing e5-v universal embeddings with multimodal large language models #14597 by @prabod
- Fix reference copy pasted from Excel reader #14591 by @thec0dewriter
- [SPARKNLP-1119] Adding XML reader #14598 by @danilojsl
Full Changelog: 6.0.2...6.0.3
6.0.2
📢 Spark NLP 6.0.2: Advancing Multimodal Capabilities and Streamlining Document Processing
We are thrilled to announce the release of Spark NLP 6.0.2! This version introduces powerful new multimodal models and significantly enhances document processing workflows. Upgrade to 6.0.2 to leverage these cutting-edge features and expand your NLP and vision task capabilities at scale.
Stay updated with our latest examples and tutorials by visiting our Medium - Spark NLP blog!
🔥 Highlights
- Introducing InternVL: Support for the state-of-the-art `InternVLForMultiModal` model, enabling advanced visual question answering with InternVL 2, 2.5, and 3 series models.
- Introducing Florence-2: Integration of Florence-2 in `Florance2Transformer`, a sophisticated vision foundation model for diverse prompt-based vision and vision-language tasks like captioning, object detection, and segmentation.
- New Document Partitioning Feature: Added the `Partition` and `PartitionTransformer` annotators for a unified and configurable interface with Spark NLP readers, simplifying unstructured data loading.
🚀 New Features & Enhancements
Advanced Multimodal Model Integrations
This release significantly boosts Spark NLP's multimodal processing power with the integration of two new visual language models:
- InternVL: `InternVLForMultiModal` is a powerful multimodal large language model specifically designed for visual question answering. This annotator is versatile, supporting the InternVL 2, 2.5, and 3 families of models, allowing users to tackle complex visual-linguistic tasks. (Link to notebook)
- Florence-2: Introducing `Florance2Transformer`, an advanced vision foundation model. Florence-2 utilizes a prompt-based approach, enabling it to perform a wide array of vision and vision-language tasks. Users can leverage simple text prompts to execute tasks such as image captioning, object detection, and image segmentation with high accuracy. (Link to notebook)
Enhanced Unstructured Document Processing
- Partitioning Documents: This release introduces the new `Partition` and `PartitionTransformer` annotators. `Partition` provides a unified interface for extracting structured content from various document formats into Spark DataFrames. It supports input from files, URLs, in-memory strings, or byte arrays and handles formats such as text, HTML, Word, Excel, PowerPoint, emails, and PDFs. It automatically selects the appropriate reader based on file extension or MIME type and allows customization via parameters. (Link to notebook)
  - The `PartitionTransformer` annotator allows you to use the `Partition` feature more smoothly within existing Spark NLP workflows, enabling seamless reuse of your pipelines. `PartitionTransformer` can be used for extracting structured content from various document types using Spark NLP readers. It supports reading from files, URLs, in-memory strings, or byte arrays, and returns parsed output as a structured Spark DataFrame. (Link to notebook)
- Key Improvements:
  - Simplifies integration with Spark NLP readers through a unified interface.
  - Adds flexibility by enabling more reader-specific configurations.
  - Enhances the maintainability and scalability of data loading workflows.
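The "selects the appropriate reader" behavior can be sketched as a simple dispatch on file extension or MIME type. This is an illustrative toy, not `Partition`'s actual code, and the reader names and tables are placeholders:

```python
import os
from typing import Optional

# Placeholder reader names keyed by file extension and by MIME type
EXTENSION_READERS = {
    ".txt": "text", ".html": "html", ".docx": "word",
    ".xlsx": "excel", ".pptx": "powerpoint", ".eml": "email", ".pdf": "pdf",
}
MIME_READERS = {
    "text/plain": "text", "text/html": "html",
    "application/pdf": "pdf", "application/xml": "xml",
}

def select_reader(path: Optional[str] = None, content_type: Optional[str] = None) -> str:
    """Pick a reader by explicit MIME type first, then by file extension."""
    if content_type:
        return MIME_READERS[content_type]
    ext = os.path.splitext(path or "")[1].lower()
    return EXTENSION_READERS[ext]

print(select_reader("report.PDF"))                    # pdf
print(select_reader(content_type="application/xml"))  # xml
```

Dispatching on an explicit `content_type` first mirrors how a caller can override extension-based detection when the file name is ambiguous or missing.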
🐛 Bug Fixes
- Adjusted Python type annotations for the `AutoGGUFModel` (#14576)
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
⚙️ Installation
Python
#PyPI
pip install spark-nlp==6.0.2
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.2
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.2
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.2
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.2
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.0.2</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>6.0.2</version>
</dependency>
spark-nlp-silicon:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-silicon_2.12</artifactId>
<version>6.0.2</version>
</dependency>
spark-nlp-aarch64:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>6.0.2</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.0.2.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.0.2.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.0.2.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.0.2.jar
What's Changed
- SparkNLP - 1123 Introducing InternVL #14578 by @prabod
- Documentation for SparkNLP Readers and Partition class #14581 by @paulamib123
- Sparknlp-1174 Adding Partitioning Documents Feature #14579 by @danilojsl
- SparkNLP 1131 - Introducing Florance-2 #14585 by @prabod
Full Changelog: 6.0.1...6.0.2
Spark NLP 6.0.1: SmolVLM, PaliGemma 2, Gemma 3, PDF Reader enhancements
📢 Spark NLP 6.0.1: Introducing New State-of-the-Art Vision-Language Models and Enhanced Document Processing
We are pleased to announce the release of Spark NLP 6.0.1, bringing exciting new vision features and continued enhancements. Expand your NLP capabilities at scale for a wide range of tasks by upgrading to 6.0.1 and leverage these powerful new additions and improvements!
We also have been adding blog posts covering various examples for our newest features. Check them out at Medium - Spark NLP!
🔥 Highlights
- Added support for several new State-of-the-Art vision language models (VLM) including Gemma 3, PaliGemma, PaliGemma2, and SmolVLM.
- Introduced new parameter options for the PDF Reader for enhanced document ingestion control.
🚀 New Features & Enhancements
New VLM Implementations
This release adds support for several cutting-edge VLMs, significantly expanding the range of tasks you can tackle with Spark NLP:
- Gemma 3: The latest version of Google's lightweight, state-of-the-art open models. (link to notebook)
- PaliGemma and PaliGemma 2: Integration of the original PaliGemma vision-language model by Google. This annotator can also read PaliGemma 2 models. (link to notebook)
- SmolVLM: a small, fast, memory-efficient, and fully open-source 2B VLM. (link to notebook)
PDF Reader Enhancements
The PDF Reader now includes additional parameters and options, providing users with more flexible and controlled ingestion of PDF documents, improving handling of various PDF structures. (link to notebook)
New parameters include:
- `splitPage`: identify the correct number of pages
- `onlyPageNum`: output only the number of pages in the document
- `textStripper`: control output layout and formatting
- `sort`: enable or disable sorting of lines
🐛 Bug Fixes
This release also includes fixes for several issues:
- Fixed a Python error in `RoBertaForMultipleChoice` that prevented these annotators from being loaded in Python
- Fixed various typos and issues in our Jupyter notebook examples
❤️ Community Support
- Slack For live discussion with the Spark NLP community and the team
- GitHub Bug reports, feature requests, and contributions
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
- Medium Spark NLP articles
- JohnSnowLabs official Medium
- YouTube Spark NLP video tutorials
⚙️ Installation
Python
#PyPI
pip install spark-nlp==6.0.1
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.1
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.1
Apple Silicon
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.1
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.1
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>6.0.1</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>6.0.1</version>
</dependency>
spark-nlp-silicon:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-silicon_2.12</artifactId>
<version>6.0.1</version>
</dependency>
spark-nlp-aarch64:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>6.0.1</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-6.0.1.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-6.0.1.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-silicon-assembly-6.0.1.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x/3.4.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-6.0.1.jar
What's Changed
- [SPARKNLP-1164] Updating python 3.7 to 3.8 for jobs spark34 and spark33 by @danilojsl #14565
- [SPARKNLP-1177] Solving typo in Python for RoBertaForMultipleChoice by @danilojsl #14567
- Sparknlp 1115 Introducing SmolVLM by @prabod #14552
- Fixing typos in notebooks by @ahmedlone127 #14570
- SparkNLP 1121 Introducing PaliGemma by @prabod #14551
- SparkNLP 1124 Introducing Gemma 3 by @prabod #14556
- Sparknlp-1158 Adding Parameters Options to PDF Reader by @danilojsl #14562
Full Changelog: 6.0.0...6.0.1