**MindHF** stands for **MindSpore + HuggingFace**, representing seamless compatibility with the HuggingFace ecosystem. The name also embodies **Harmonious & Fluid**, symbolizing our commitment to balancing compatibility with high performance. MindHF enables you to leverage the best of both worlds: the rich HuggingFace model ecosystem and MindSpore's powerful acceleration capabilities.
> **Note**: MindHF (formerly MindNLP) is the new name for this project. The `mindnlp` package name is still available for backward compatibility, but we recommend using `mindhf` going forward.
## Table of Contents
- [MindHF](#-mindhf)
- [Table of Contents](#table-of-contents)
- [News 📢](#news-)
- [Features ✨](#features-)
- [Installation](#installation)
  - [Install from Pypi](#install-from-pypi)
  - [Daily build](#daily-build)
- [Acknowledgement](#acknowledgement)
- [Citation](#citation)

## News 📢
## Features ✨
### 1. 🤗 Full HuggingFace Compatibility
MindHF provides seamless compatibility with the HuggingFace ecosystem, enabling you to run any Transformers/Diffusers models on MindSpore across all hardware platforms (GPU/Ascend/CPU) without code modifications.
#### Direct HuggingFace Library Usage
You can directly use native HuggingFace libraries (transformers, diffusers, etc.) with MindSpore acceleration:
**For HuggingFace Transformers:**
```python
import mindspore
import mindhf
from transformers import pipeline
chat = [
{"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
{"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]

# Any chat-capable model works here; this model id is just an example
pipe = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
response = pipe(chat, max_new_tokens=128)
print(response[0]["generated_text"][-1]["content"])
```

**For HuggingFace Diffusers:**
```python
import mindspore
import mindhf
from diffusers import DiffusionPipeline

# Any diffusers checkpoint works here; this model id is just an example
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline("An image of a squirrel in Picasso style").images[0]
```
#### MindHF Native Interface
You can also use MindHF's native interface for better integration:
```python
from mindhf.transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
```

> **Note**: Due to differences in autograd and parallel execution mechanisms, any training or distributed execution code must utilize the interfaces provided by MindHF.
### 2. ⚡ High-Performance Features Powered by MindSpore
MindHF leverages MindSpore's powerful capabilities to deliver exceptional performance and unique features:
#### PyTorch-Compatible API with MindSpore Acceleration
MindHF provides `mindtorch` (accessible via `mindhf.core`) for PyTorch-compatible interfaces, enabling seamless migration from PyTorch code while benefiting from MindSpore's acceleration on Ascend hardware:
```python
import mindhf # Automatically enables proxy for torch APIs
import torch
from torch import nn

# All torch.* APIs are transparently mapped to mindhf.core.*
net = nn.Linear(10, 5)
x = torch.randn(3, 10)
out = net(x)
print(out.shape)  # core.Size([3, 5])
```

The proxy layer also supports several features not yet available in MindSpore itself:

1. **Dispatch Mechanism**: Operators are automatically dispatched to the appropriate backend based on `Tensor.device`, enabling seamless multi-device execution.
2. **Meta Device Support**: Perform shape inference and memory planning without actual computation, significantly speeding up model development and debugging.
3. **NumPy as CPU Backend**: Use NumPy as a CPU backend for acceleration, providing better compatibility and performance on CPU devices.
4. **Heterogeneous Data Movement**: Enhanced `Tensor.to()` for efficient data movement across different devices (CPU/GPU/Ascend).
These features enable better support for model serialization, heterogeneous computing, and complex deployment scenarios.
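The device-based dispatch idea can be illustrated with a small pure-Python sketch. This is a toy model of the general technique, not MindHF's actual implementation: a registry maps `(op, device)` pairs to kernels, each call routes on the tensor's `device` tag, and a "meta" kernel only propagates shapes without touching data.

```python
# Toy tensor carrying a device tag; real frameworks attach this to Tensor.
class Tensor:
    def __init__(self, data=None, shape=None, device="cpu"):
        self.data = data
        self.shape = shape if shape is not None else (len(data),)
        self.device = device

# (op name, device) -> kernel registry
_KERNELS = {}

def register(op, device):
    def wrap(fn):
        _KERNELS[(op, device)] = fn
        return fn
    return wrap

def dispatch(op, *args):
    # Route on the first argument's device tag
    return _KERNELS[(op, args[0].device)](*args)

@register("add", "cpu")
def add_cpu(a, b):
    return Tensor([x + y for x, y in zip(a.data, b.data)], device="cpu")

@register("add", "meta")
def add_meta(a, b):
    # Meta kernel: shape propagation only, no computation or allocation
    return Tensor(shape=a.shape, device="meta")

print(dispatch("add", Tensor([1, 2]), Tensor([3, 4])).data)  # [4, 6]
```

The same registry serves both real execution and meta-device shape inference, which is the essence of how one operator name can back multiple devices.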
## Installation
#### Install from Pypi
You can install the official release of MindHF from PyPI:
```bash
pip install mindhf
```
> **Note**: The `mindnlp` package name is still available for backward compatibility, but we recommend using `mindhf` going forward.
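A common way such a backward-compatible package name works (a sketch of the general technique, not necessarily how the actual `mindnlp` shim is packaged) is to register the old name as an alias of the new module in `sys.modules`:

```python
import sys
import types

# Stand-in for the installed library; in reality `mindhf` is the real package.
mindhf = types.ModuleType("mindhf")
mindhf.hello = lambda: "hello from mindhf"
sys.modules["mindhf"] = mindhf

# The alias simply re-exports the new package under the old name.
sys.modules["mindnlp"] = sys.modules["mindhf"]

import mindnlp  # resolves to the very same module object

print(mindnlp is mindhf)  # True
```

Because both names point at one module object, code written against `mindnlp` keeps working unchanged.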
#### Daily build
You can download MindHF daily wheel from [here](https://repo.mindspore.cn/mindspore-lab/mindhf/newest/any/).
| MindHF version | MindSpore version | Supported Python version |
|----------------|-------------------|--------------------------|
Since there are too many supported models to list, please check the full list [here](https://mindhf.cqu.ai/supported_models).
<!-- ## Tutorials
## MindSpore NLP SIG
MindSpore NLP SIG (Natural Language Processing Special Interest Group) is the main development team of the MindHF framework. It aims to collaborate with developers from both industry and academia who are interested in research, application development, and the practical implementation of natural language processing. Our goal is to create the best NLP framework based on the domestic framework MindSpore. Additionally, we regularly hold NLP technology sharing sessions and offline events. Interested developers can join our SIG group using the QR code below.