Commit 57cd540

fix
1 parent 0e91339 commit 57cd540

File tree

1 file changed (+1 −1 lines)


_posts/2025-11-19-encryption-in-duckdb.md

Lines changed: 1 addition & 1 deletion
@@ -292,7 +292,7 @@ SUMMARIZE encrypted.lineitem;

Here are the results on a fairly recent MacBook: `SUMMARIZE` on the unencrypted table took ca. 5.4 s. Using Mbed TLS, this went up to around 6.2 s. However, when enabling OpenSSL, the end-to-end time went straight back to 5.4 s. How is this possible? Is decryption not expensive? First, there is a lot more happening in query processing than reading blocks from storage, so the impact of decryption is not all that large, even with a slow implementation. Second, with OpenSSL's hardware acceleration, the overall overhead of encryption and decryption becomes almost negligible.

- But just running summarization is overly simplistic. Real™ database workloads include modifications to data: insertions of new rows, updates of rows, deletions of rows, and so on. Also, multiple clients will be updating and querying at the same time. So we resurrected the full TPC-H “Power” test from our previous blog post “[Changing Data with Confidence and ACID](https://duckdb.org/2024/09/25/changing-data-with-confidence-and-acid)”. We slightly tweaked the [benchmark script](https://github.com/duckdb/duckdb-tpch-power-test/blob/main/benchmark.py) to enable the new database encryption. For this experiment, we used the OpenSSL encryption implementation due to the issues outlined above. We observe Power@Size” and “Throughput@Size”. The former is raw sequential query performance, while the latter measures multiple parallel query streams in the presence of updates.
+ But just running summarization is overly simplistic. Real™ database workloads include modifications to data: insertions of new rows, updates of rows, deletions of rows, and so on. Also, multiple clients will be updating and querying at the same time. So we resurrected the full TPC-H “Power” test from our previous blog post “[Changing Data with Confidence and ACID](https://duckdb.org/2024/09/25/changing-data-with-confidence-and-acid)”. We slightly tweaked the [benchmark script](https://github.com/duckdb/duckdb-tpch-power-test/blob/main/benchmark.py) to enable the new database encryption. For this experiment, we used the OpenSSL encryption implementation due to the issues outlined above. We observe “Power@Size” and “Throughput@Size”. The former is raw sequential query performance, while the latter measures multiple parallel query streams in the presence of updates.

When running on the same MacBook with DuckDB 1.4.1 and a “scale factor” of 100, we get a Power@Size metric of 624,296 and a Throughput@Size metric of 450,409 *without* encryption.
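
To make the timing comparison in the diff above concrete, here is a minimal sketch of the setup, assuming a pre-existing encrypted database file; the file name and key are placeholders. As the post describes, DuckDB's bundled cipher code comes from Mbed TLS, and loading the `httpfs` extension switches encryption to OpenSSL's hardware-accelerated AES:

```sql
-- Minimal sketch: file name and encryption key are placeholders.
-- Loading httpfs makes DuckDB use OpenSSL for AES instead of the
-- bundled Mbed TLS implementation (hardware-accelerated where available).
INSTALL httpfs;
LOAD httpfs;

-- Attach the encrypted database file with its key.
ATTACH 'encrypted.duckdb' AS encrypted (ENCRYPTION_KEY 'my_secret_key');

-- In the DuckDB CLI, .timer prints the wall-clock time per statement.
.timer on
SUMMARIZE encrypted.lineitem;
```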

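Likewise, the Power-test paragraph presupposes a TPC-H dataset inside an encrypted database. One way such a file could be built with the `tpch` extension is sketched below; the file name and key are placeholders (scale factor 100 matches the post), and the linked benchmark script may set this up differently:

```sql
-- Sketch of the data setup; the actual benchmark script may differ.
INSTALL tpch;
LOAD tpch;

-- Create/attach an encrypted database and make it the default target.
ATTACH 'tpch_sf100.duckdb' AS enc (ENCRYPTION_KEY 'my_secret_key');
USE enc;

-- Generate TPC-H at scale factor 100 directly into the encrypted file.
CALL dbgen(sf = 100);

SELECT count(*) FROM enc.lineitem;  -- sanity check
```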