ZFS performance on NVME drives #17892
Replies: 2 comments 3 replies
I don't know what could be the limiting factor in your case, just keep in mind that scrub speeds reported by
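The reply above appears cut off, so for reference only: scrub throughput is usually read from two places, and they mean different things. A minimal sketch, with the pool name `tank` as a placeholder:

```sh
# The "scanned/issued" rate in zpool status is averaged since the scrub started
zpool status tank

# Current pool-wide and per-vdev bandwidth, refreshed every second
zpool iostat -v tank 1
```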
For another data point: this is with 7 NVMe devices, and they show around 2GB/s reads per device during a scrub. Hardware is nothing special: AMD EPYC 7232P 8-core @ 3100 MHz.
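In case it helps to compare numbers, per-device read rates during a scrub can be watched with `iostat` from sysstat; the device names below are just examples:

```sh
# Per-device throughput in MB/s, refreshed every second (device names are examples)
iostat -xm 1 nvme0n1 nvme1n1 nvme2n1
```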

Hi all.
This is probably a general question, but I wanted to ask what can limit ZFS performance on NVMe drives.
I have a test rig with 6 rather slow-ish NVMe disks that can each do maybe 2-3GB/s of linear reads.
On each device there is LUKS encryption, and the ZFS pool sits on top of the LUKS device-mapper devices; this is because built-in ZFS encryption is sadly still 3-4x slower than LUKS.
The pool is raidz1. Overall this works very well and fast, but when doing a scrub I see that its performance is limited to around 4GB/s (this grows to 4GB/s+ eventually).
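For anyone who wants to reproduce a similar layout, a minimal sketch of the layering described above (dm-crypt per NVMe device, raidz1 on top) could look like this; the device names, mapper names, and pool name `tank` are all hypothetical:

```sh
# Hypothetical device and pool names; adjust to your setup.
for i in 0 1 2 3 4 5; do
  cryptsetup open /dev/nvme${i}n1 crypt${i}        # LUKS layer per NVMe device
done
zpool create tank raidz1 /dev/mapper/crypt{0..5}   # raidz1 over the dm-crypt devices

zpool scrub tank                                   # start the scrub
zpool status tank                                  # shows scan progress and average rate
```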
The CPU load during the scrub is quite low: around 90% idle, and no single core is saturated (the CPU is a Ryzen 9 9900X with DDR5 ECC @ 4200MHz). iostat shows that ZFS reads evenly at around 700MB/s from each device, and that's it. Not that I need higher performance (the network is 10Gbit anyway), but more out of curiosity: what could be the bottleneck here? Memory bandwidth/latency? It doesn't look like it.
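Not an answer, but a few things that are often worth ruling out when a scrub tops out well below the raw device speed; this is only a sketch of where to look, with parameter names from OpenZFS 2.x:

```sh
# Raw AES throughput of the CPU, to see whether the dm-crypt layer could be the cap
cryptsetup benchmark

# Scrub I/O is intentionally throttled; these module parameters bound how much
# scan/scrub I/O ZFS keeps in flight per device
grep . /sys/module/zfs/parameters/zfs_scan_vdev_limit \
       /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

# Per-device utilization and MB/s while the scrub runs
iostat -xm 1
```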
Thanks in advance. Ah yes, OpenZFS is the latest 2.3.4, on Ubuntu 24.