en_US/data-integration/azure-blob-storage.md (7 additions & 7 deletions)
@@ -104,8 +104,8 @@ This section demonstrates how to create a rule in EMQX to process messages from
 8. Select the **Upload Method**. The differences between the two methods are as follows:

-   - **Direct Upload**: Each time the rule is triggered, data is uploaded directly to Azure Storage according to the preset object key and content. This method is suitable for storing binary or large text data. However, it may generate a large number of files.
-   - **Aggregated Upload**: This method packages the results of multiple rule triggers into a single file (such as a CSV file) and uploads it to Azure Storage, making it suitable for storing structured data. It can reduce the number of files and improve write efficiency.
+   - **Direct Upload**: Each time the rule is triggered, data is uploaded directly to Azure Blob Storage according to the preset object key and content. This method is suitable for storing binary or large text data. However, it may generate a large number of files.
+   - **Aggregated Upload**: This method packages the results of multiple rule triggers into a single file (such as a CSV file) and uploads it to Azure Blob Storage, making it suitable for storing structured data. It can reduce the number of files and improve write efficiency.

    The configuration parameters differ for each method. Please configure according to the selected method:
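The trade-off the diff above describes (one object per trigger versus many triggers packed into one object) can be sketched as follows. This is an illustration only, with hypothetical message payloads and object keys; EMQX performs this batching internally, not through user code.

```python
import io
import json

# Hypothetical batch of messages produced by five rule triggers.
messages = [{"clientid": f"c{i}", "payload": i} for i in range(5)]

# Direct Upload: each trigger writes its own object -> many small objects.
direct_objects = {f"msgs/{m['clientid']}.json": json.dumps(m) for m in messages}

# Aggregated Upload: all triggers are packed into a single object.
buf = io.StringIO()
for m in messages:
    buf.write(json.dumps(m) + "\n")
aggregated_objects = {"msgs/batch-0001.jsonl": buf.getvalue()}

print(len(direct_objects), len(aggregated_objects))  # 5 objects vs 1 object
```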
@@ -138,12 +138,12 @@ This section demonstrates how to create a rule in EMQX to process messages from
    Note that if all placeholders marked as required are not used in the template, these placeholders will be automatically added to the Blob Name as path suffixes to avoid duplication. All other placeholders are considered invalid.

 - **Aggregation Type**: Defines the format of the data file used to store batched MQTT messages in Azure Storage. Supported values:

-   - `CSV`: Data will be written to Azure Storage in comma-separated CSV format.
-   - `JSON Lines`: Data will be written to Azure Storage in [JSON Lines](https://jsonlines.org/) format.
-   - `parquet`: Data will be written to Azure Storage in [Apache Parquet](https://parquet.apache.org/) format, which is column-based and optimized for analytical queries over large datasets.
+   - `CSV`: Data will be written to Azure Blob Storage in comma-separated CSV format.
+   - `JSON Lines`: Data will be written to Azure Blob Storage in [JSON Lines](https://jsonlines.org/) format.
+   - `parquet`: Data will be written to Azure Blob Storage in [Apache Parquet](https://parquet.apache.org/) format, which is column-based and optimized for analytical queries over large datasets.

   > For detailed configuration options, including schema definition, compression, and row group settings, see [Parquet Format Options](#parquet-format-options).
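The two text-based aggregation formats in the hunk above can be illustrated with a small sketch. The record fields (`clientid`, `topic`, `payload`) are assumptions for the example, not the exact columns EMQX emits.

```python
import csv
import io
import json

# Illustrative aggregated records from two rule triggers.
records = [
    {"clientid": "c1", "topic": "t/1", "payload": "20.1"},
    {"clientid": "c2", "topic": "t/2", "payload": "20.7"},
]

# CSV: a header row plus one comma-separated line per message.
csv_buf = io.StringIO()
writer = csv.DictWriter(csv_buf, fieldnames=["clientid", "topic", "payload"])
writer.writeheader()
writer.writerows(records)

# JSON Lines: one standalone JSON object per line.
jsonl = "\n".join(json.dumps(r) for r in records)

print(csv_buf.getvalue())
print(jsonl)
```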
@@ -181,7 +181,7 @@ This option defines how MQTT message fields are mapped to the columns in the Par
 You can choose one of the following options:

- - **Avro Schema That Lives in Schema Registry**: Use an existing [Avro schema](./schema-registry-example-avro.md) managed in EMQX [Schema Registry](./schema-registry).
+ - **Avro Schema That Lives in Schema Registry**: Use an existing [Avro schema](./schema-registry-example-avro.md) managed in EMQX [Schema Registry](./schema-registry.md).

 When this option is chosen, you must also specify a **Schema Name**, which identifies the schema to use for serialization.
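For context, an Avro schema registered in Schema Registry for this mapping might look like the following. The record name and field list are hypothetical, chosen only to show the shape of an Avro record declaration; they are not a schema EMQX ships with.

```python
import json

# Hypothetical Avro record schema for aggregated MQTT messages.
avro_schema = {
    "type": "record",
    "name": "MqttMessage",
    "fields": [
        {"name": "clientid", "type": "string"},
        {"name": "topic", "type": "string"},
        {"name": "payload", "type": "bytes"},
        {"name": "publish_received_at", "type": "long"},
    ],
}

# Serialized form, as it could be registered under a Schema Name.
print(json.dumps(avro_schema, indent=2))
```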
en_US/data-integration/s3.md (1 addition & 2 deletions)
@@ -213,7 +213,6 @@ This section demonstrates how to create a rule in EMQX to process messages from
 Note that if all placeholders marked as required are not used in the template, these placeholders will be automatically added to the S3 object key as path suffixes to avoid duplication. All other placeholders are considered invalid.

 - **Aggregation Type**: Defines the format of the data file used to store batched MQTT messages in S3. Supported values:
-
   - `CSV`: Data will be written to S3 in comma-separated CSV format.

   - `JSON Lines`: Data will be written to S3 in [JSON Lines](https://jsonlines.org/) format.
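The suffixing behaviour described in the context line above (required placeholders missing from the key template get appended as path suffixes) can be sketched as below. The placeholder names `${action}` and `${node}`, and the append order, are assumptions for illustration; consult the EMQX sink documentation for the authoritative required set.

```python
# Hypothetical required placeholders for an aggregated-upload object key.
REQUIRED = ["${action}", "${node}"]

def normalize_key(template: str) -> str:
    """Append any required placeholder missing from the template as a path suffix."""
    missing = [p for p in REQUIRED if p not in template]
    if not missing:
        return template
    return "/".join([template.rstrip("/")] + missing)

print(normalize_key("emqx/${action}"))          # ${node} gets appended
print(normalize_key("emqx/${action}/${node}"))  # already complete, unchanged
```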
@@ -256,7 +255,7 @@ This option defines how MQTT message fields are mapped to the columns in the Par
 You can choose one of the following options:

- - **Avro Schema That Lives in Schema Registry**: Use an existing [Avro schema](./schema-registry-example-avro.md) managed in EMQX [Schema Registry](./schema-registry).
+ - **Avro Schema That Lives in Schema Registry**: Use an existing [Avro schema](./schema-registry-example-avro.md) managed in EMQX [Schema Registry](./schema-registry.md).

 When this option is chosen, you must also specify a **Schema Name**, which identifies the schema to use for serialization.