Commit b74e0d0 ("Editorial updates")
1 parent 96bb51f

3 files changed, +15 -16 lines changed

en_US/data-integration/azure-blob-storage.md

Lines changed: 7 additions & 7 deletions
@@ -104,8 +104,8 @@ This section demonstrates how to create a rule in EMQX to process messages from
 
 8. Select the **Upload Method**. The differences between the two methods are as follows:
 
-   - **Direct Upload**: Each time the rule is triggered, data is uploaded directly to Azure Storage according to the preset object key and content. This method is suitable for storing binary or large text data. However, it may generate a large number of files.
-   - **Aggregated Upload**: This method packages the results of multiple rule triggers into a single file (such as a CSV file) and uploads it to Azure Storage, making it suitable for storing structured data. It can reduce the number of files and improve write efficiency.
+   - **Direct Upload**: Each time the rule is triggered, data is uploaded directly to Azure Blob Storage according to the preset object key and content. This method is suitable for storing binary or large text data. However, it may generate a large number of files.
+   - **Aggregated Upload**: This method packages the results of multiple rule triggers into a single file (such as a CSV file) and uploads it to Azure Blob Storage, making it suitable for storing structured data. It can reduce the number of files and improve write efficiency.
 
    The configuration parameters differ for each method. Please configure according to the selected method:
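The direct-vs-aggregated trade-off described in the hunk above can be illustrated outside EMQX. The following is a minimal Python sketch (all names and paths are hypothetical, not EMQX or Azure APIs) showing why aggregation reduces the number of stored objects:

```python
import csv
import io

# Hypothetical batch of rule-trigger results (illustrative fields only).
messages = [{"clientid": f"c{i}", "topic": "t/1", "payload": f"m{i}"} for i in range(100)]

# Direct upload: one storage object per rule trigger -> many small blobs.
direct_objects = {f"msgs/{m['clientid']}.txt": m["payload"] for m in messages}

# Aggregated upload: buffer many trigger results into a single CSV blob.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["clientid", "topic", "payload"])
writer.writeheader()
writer.writerows(messages)
aggregated_objects = {"msgs/batch-0001.csv": buf.getvalue()}

print(len(direct_objects), len(aggregated_objects))  # prints: 100 1
```

The same 100 trigger results cost 100 write operations in the first scheme and a single structured file in the second, which is the efficiency gain the documentation refers to.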

@@ -138,12 +138,12 @@ This section demonstrates how to create a rule in EMQX to process messages from
 
   Note that if all placeholders marked as required are not used in the template, these placeholders will be automatically added to the Blob Name as path suffixes to avoid duplication. All other placeholders are considered invalid.
 
 - **Aggregation Type**: Defines the format of the data file used to store batched MQTT messages in Azure Storage. Supported values:
-
-  - `CSV`: Data will be written to Azure Storage in comma-separated CSV format.
 
-  - `JSON Lines`: Data will be written to Azure Storage in [JSON Lines](https://jsonlines.org/) format.
+  - `CSV`: Data will be written to Azure Blob Storage in comma-separated CSV format.
+
+  - `JSON Lines`: Data will be written to Azure Blob Storage in [JSON Lines](https://jsonlines.org/) format.
 
-  - `parquet`: Data will be written to Azure Storage in [Apache Parquet](https://parquet.apache.org/) format, which is column-based and optimized for analytical queries over large datasets.
+  - `parquet`: Data will be written to Azure Blob Storage in [Apache Parquet](https://parquet.apache.org/) format, which is column-based and optimized for analytical queries over large datasets.
 
 > For detailed configuration options, including schema definition, compression, and row group settings, see [Parquet Format Options](#parquet-format-options).
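Of the aggregation formats listed above, JSON Lines is the easiest to show concretely: one self-contained JSON object per line. A short Python sketch with a hypothetical batch of rule results (field names are illustrative, not mandated by EMQX):

```python
import json

# Hypothetical batch of rule-trigger results to be aggregated into one file.
batch = [
    {"clientid": "c1", "topic": "t/1", "payload": "hello"},
    {"clientid": "c2", "topic": "t/2", "payload": "world"},
]

# JSON Lines: each record is serialized as a standalone JSON object on its own line.
jsonl_body = "\n".join(json.dumps(rec) for rec in batch) + "\n"

# Round-trip: every non-empty line parses independently of the others,
# which is what makes the format convenient for streaming appends.
decoded = [json.loads(line) for line in jsonl_body.splitlines() if line]
print(decoded == batch)  # prints: True
```

CSV trades this self-describing property for compactness, while Parquet stores the same records column-by-column, which is why the documentation recommends it for analytical queries.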
@@ -181,7 +181,7 @@ This option defines how MQTT message fields are mapped to the columns in the Par
 
   You can choose one of the following options:
 
-  - **Avro Schema That Lives in Schema Registry**: Use an existing [Avro schema](./schema-registry-example-avro.md) managed in EMQX [Schema Registry](./schema-registry).
+  - **Avro Schema That Lives in Schema Registry**: Use an existing [Avro schema](./schema-registry-example-avro.md) managed in EMQX [Schema Registry](./schema-registry.md).
 
   When this option is chosen, you must also specify a **Schema Name**, which identifies the schema to use for serialization.
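To make the Schema Registry option above concrete: an Avro record schema names each message field and its type, and those fields become the Parquet columns. A minimal illustrative schema (the field names are examples, not required by EMQX):

```python
import json

# Illustrative Avro record schema; field names are examples only.
avro_schema = {
    "type": "record",
    "name": "MqttMessage",
    "fields": [
        {"name": "clientid", "type": "string"},
        {"name": "topic", "type": "string"},
        {"name": "payload", "type": "bytes"},
        {"name": "publish_received_at", "type": "long"},
    ],
}

# A schema registry stores the schema as JSON text under a schema name;
# that name is what the Sink's "Schema Name" setting would reference.
schema_json = json.dumps(avro_schema)
print(json.loads(schema_json)["name"])  # prints: MqttMessage
```

Each `fields` entry would map to one Parquet column during serialization, which is the field-to-column mapping the surrounding section describes.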

en_US/data-integration/s3.md

Lines changed: 1 addition & 2 deletions
@@ -213,7 +213,6 @@ This section demonstrates how to create a rule in EMQX to process messages from
 
   Note that if all placeholders marked as required are not used in the template, these placeholders will be automatically added to the S3 object key as path suffixes to avoid duplication. All other placeholders are considered invalid.
 
 - **Aggregation Type**: Defines the format of the data file used to store batched MQTT messages in S3. Supported values:
-
   - `CSV`: Data will be written to S3 in comma-separated CSV format.
 
   - `JSON Lines`: Data will be written to S3 in [JSON Lines](https://jsonlines.org/) format.
@@ -256,7 +255,7 @@ This option defines how MQTT message fields are mapped to the columns in the Par
 
   You can choose one of the following options:
 
-  - **Avro Schema That Lives in Schema Registry**: Use an existing [Avro schema](./schema-registry-example-avro.md) managed in EMQX [Schema Registry](./schema-registry).
+  - **Avro Schema That Lives in Schema Registry**: Use an existing [Avro schema](./schema-registry-example-avro.md) managed in EMQX [Schema Registry](./schema-registry.md).
 
   When this option is chosen, you must also specify a **Schema Name**, which identifies the schema to use for serialization.

zh_CN/data-integration/azure-blob-storage.md

Lines changed: 7 additions & 7 deletions
@@ -60,7 +60,7 @@ EMQX 利用规则引擎和数据接收器将设备事件和数据转发到 Azure
 
 在添加 Azure Blob Storage 数据 Sink 之前,您需要创建相应的连接器。
 
-1. 转到 Dashboard **集成** -> **连接器** 页面。
+1. 转到 Dashboard **集成** -> **连接器**页面。
 2. 点击右上角的**创建**按钮。
 3. 选择 **Azure Blob Storage** 作为连接器类型,然后点击**下一步**。
 4. 输入连接器名称,名称应为大小写字母和数字的组合。在这里,输入 `my-azure`。
@@ -74,7 +74,7 @@ EMQX 利用规则引擎和数据接收器将设备事件和数据转发到 Azure
 
 ## 创建 Azure Blob Storage Sink 规则
 
-本节演示如何在 EMQX 中创建规则,以处理来自源 MQTT 主题 `t/#` 的消息,并通过配置的 Sink 将处理结果写入 Azure Storage 中的 `iot-data` 容器。
+本节演示如何在 EMQX 中创建规则,以处理来自源 MQTT 主题 `t/#` 的消息,并通过配置的 Sink 将处理结果写入 Azure Blob Storage 中的 `iot-data` 容器。
 
 1. 转到 Dashboard **集成** -> **规则**页面。

@@ -105,8 +105,8 @@ EMQX 利用规则引擎和数据接收器将设备事件和数据转发到 Azure
 
 8. 选择 **上传方式**。两种方式的区别如下:
 
-   - **直接上传**:每次触发规则时,数据会根据预设的对象键和值直接上传到 Azure Storage。这种方式适合存储二进制或大型文本数据,但可能会生成大量文件。
-   - **聚合上传**:此方式将多个规则触发结果打包到一个文件(如 CSV 文件)中,并上传到 Azure Storage,适合存储结构化数据。它可以减少文件数量并提高写入效率。
+   - **直接上传**:每次触发规则时,数据会根据预设的对象键和值直接上传到 Azure Blob Storage。这种方式适合存储二进制或大型文本数据,但可能会生成大量文件。
+   - **聚合上传**:此方式将多个规则触发结果打包到一个文件(如 CSV 文件)中,并上传到 Azure Blob Storage,适合存储结构化数据。它可以减少文件数量并提高写入效率。
 
    每种方式的配置参数不同。请根据选择的方式进行配置:

@@ -140,11 +140,11 @@ EMQX 利用规则引擎和数据接收器将设备事件和数据转发到 Azure
 
 - **聚合上传文件格式**:定义用于在 Azure Storage 中存储批量 MQTT 消息的数据文件格式。支持以下取值:
 
-  - `CSV`:数据将以逗号分隔的 CSV 格式写入 Azure Storage。
+  - `CSV`:数据将以逗号分隔的 CSV 格式写入 Azure Blob Storage。
 
-  - `JSON Lines`:数据将以 [JSON Lines](https://jsonlines.org/) 格式写入 Azure Storage。
+  - `JSON Lines`:数据将以 [JSON Lines](https://jsonlines.org/) 格式写入 Azure Blob Storage。
 
-  - `Parquet`: 数据将以 [Apache Parquet](https://parquet.apache.org/) 格式写入 Azure Storage。该格式是一种列式存储格式,专为大规模数据集的分析型查询进行优化。
+  - `Parquet`: 数据将以 [Apache Parquet](https://parquet.apache.org/) 格式写入 Azure Blob Storage。该格式是一种列式存储格式,专为大规模数据集的分析型查询进行优化。
 
 > 如需了解详细的配置选项(包括 **Schema 定义**、**压缩方式**、**行组大小设置**等),请参阅 [Parquet 格式选项](#parquet-格式选项)。
