There is currently a practical maximum data transfer size of 1024 pages: PRP entries translate into iovecs, and IOV_MAX is 1024. If a guest NVMe driver submits a read/write spanning three PRP lists' worth of pages, we end up with more than 1024 iovecs, `preadv`/`pwritev` returns EINVAL, and that eventually surfaces to the guest as an error on the NVMe command.
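
For illustration, here's a minimal sketch of where this bites, assuming a Rust backend that flattens PRPs into a `libc::iovec` slice. The function name and shape are hypothetical, not the actual code:

```rust
use std::io;
use std::os::unix::io::RawFd;

// Hypothetical helper: hand a PRP-derived iovec list to preadv(2),
// checking the per-call IOV_MAX limit first.
fn vectored_read(fd: RawFd, iovs: &[libc::iovec], offset: libc::off_t) -> io::Result<usize> {
    // sysconf(_SC_IOV_MAX) reports the per-call iovec limit (1024 here).
    let iov_max = unsafe { libc::sysconf(libc::_SC_IOV_MAX) };
    if iov_max > 0 && iovs.len() > iov_max as usize {
        // With MDTS == 0 the guest may legally build a request this large;
        // preadv itself would fail the same way, with EINVAL.
        return Err(io::Error::from_raw_os_error(libc::EINVAL));
    }
    let n = unsafe { libc::preadv(fd, iovs.as_ptr(), iovs.len() as libc::c_int, offset) };
    if n < 0 {
        Err(io::Error::last_os_error())
    } else {
        Ok(n as usize)
    }
}
```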
Since we report MDTS == 0, we've not communicated any such limit to the guest, so the fault is ours as an NVMe controller. From the spec:
> The value is in units of the minimum memory page size (CAP.MPSMIN) and is reported as a power of two (2^n). A value of 0h indicates that there is no maximum data transfer size.
So MDTS == 10 (2^10 = 1024 pages, i.e. 4MiB with 4KiB pages) is more reflective of reality. I'm mildly curious what maximum sizes a guest would actually choose, because empirically Linux seems to limit itself to 256KiB, well below both 4MiB and "no limit". That observation could, of course, just be fio breaking up larger I/Os; I dunno.
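
To spell out the arithmetic, a sketch assuming CAP.MPSMIN encodes a 4KiB minimum page size (`max_transfer_bytes` is an illustrative name, not an existing function):

```rust
/// Maximum data transfer size implied by an MDTS value, per the spec's
/// encoding: (2^MDTS) * minimum page size, with 0h meaning "no limit".
fn max_transfer_bytes(mdts: u8) -> Option<usize> {
    const MPS_MIN: usize = 4096; // assuming CAP.MPSMIN reports 4KiB pages
    match mdts {
        0 => None,               // 0h: no maximum data transfer size
        n => Some(MPS_MIN << n), // e.g. MDTS == 10 -> 4096 << 10 = 4MiB
    }
}

fn main() {
    // MDTS == 10 caps transfers at 1024 pages (4MiB), matching the
    // IOV_MAX-derived limit described above.
    assert_eq!(max_transfer_bytes(10), Some(4 * 1024 * 1024));
}
```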