140 commits
ae91197
dma-mapping: introduce new DMA attribute to indicate MMIO memory
rleon Sep 9, 2025
2e63968
iommu/dma: implement DMA_ATTR_MMIO for dma_iova_link().
rleon Sep 9, 2025
98dfe06
dma-debug: refactor to use physical addresses for page mapping
rleon Sep 9, 2025
39efa7d
dma-mapping: rename trace_dma_*map_page to trace_dma_*map_phys
rleon Sep 9, 2025
b57d34c
iommu/dma: rename iommu_dma_*map_page to iommu_dma_*map_phys
rleon Sep 9, 2025
16e4aab
iommu/dma: implement DMA_ATTR_MMIO for iommu_dma_(un)map_phys()
rleon Sep 9, 2025
868d2de
dma-mapping: convert dma_direct_*map_page to be phys_addr_t based
rleon Sep 9, 2025
789bb4e
kmsan: convert kmsan_handle_dma to use physical addresses
rleon Sep 9, 2025
e3de449
dma-mapping: implement DMA_ATTR_MMIO for dma_(un)map_page_attrs()
rleon Sep 9, 2025
33c4b93
xen: swiotlb: Open code map_resource callback
rleon Sep 9, 2025
efedcf3
dma-mapping: export new dma_*map_phys() interface
rleon Sep 9, 2025
5ca28b5
mm/hmm: migrate to physical address-based DMA mapping API
rleon Sep 9, 2025
681d6fe
mm/hmm: properly take MMIO path
rleon Sep 9, 2025
e29dfd8
Revert "NVIDIA: SAUCE: Patch NVMe/NVMeoF driver to support GDS on Lin…
tdavenvidia Jan 27, 2026
8ad50b8
blk-mq-dma: create blk_map_iter type
keithbusch Aug 13, 2025
fc92bde
blk-mq-dma: provide the bio_vec array being iterated
keithbusch Aug 13, 2025
a2dc51c
blk-mq-dma: require unmap caller provide p2p map type
keithbusch Aug 13, 2025
f6909c1
blk-mq: remove REQ_P2PDMA flag
keithbusch Aug 13, 2025
cf3d1ee
blk-mq-dma: move common dma start code to a helper
keithbusch Aug 13, 2025
8fb77fc
blk-mq-dma: add scatter-less integrity data DMA mapping
keithbusch Aug 13, 2025
06d5919
blk-integrity: use iterator for mapping sg
keithbusch Aug 13, 2025
82605c5
nvme-pci: create common sgl unmapping helper
keithbusch Aug 13, 2025
4aee52d
nvme-pci: convert metadata mapping to dma iter
keithbusch Aug 13, 2025
35b5bc9
blk-mq-dma: bring back p2p request flags
keithbusch Sep 3, 2025
e2894e6
nvme-pci: migrate to dma_map_phys instead of map_page
rleon Nov 14, 2025
eb18307
block-dma: properly take MMIO path
rleon Nov 14, 2025
886fa00
NVIDIA: SAUCE: Patch NVMe/NVMeoF driver to support GDS on Linux 6.17 …
sourabgupta3 Jan 28, 2026
2118b8e
virtio_balloon: Remove redundant __GFP_NOWARN
qianfengrong Aug 7, 2025
3555063
virtio_ring: constify virtqueue pointer for DMA helpers
jasowang Aug 21, 2025
fc20a5f
virtio_ring: switch to use dma_{map|unmap}_page()
jasowang Aug 21, 2025
aeb87ae
virtio: rename dma helpers
jasowang Aug 21, 2025
4cb9e93
virtio: introduce virtio_map container union
jasowang Aug 21, 2025
a8700bd
virtio_ring: rename dma_handle to map_handle
jasowang Aug 21, 2025
90ce457
virtio: introduce map ops in virtio core
jasowang Aug 21, 2025
69eb8a6
vdpa: support virtio_map
jasowang Aug 21, 2025
25c06de
vdpa: introduce map ops
jasowang Sep 24, 2025
3742313
vduse: switch to use virtio map API instead of DMA API
jasowang Sep 24, 2025
2d0afad
vduse: Use fixed 4KB bounce pages for non-4KB page size
zhaoshane Sep 25, 2025
8a4dc50
virtio-vdpa: Drop redundant conversion to bool
Aug 18, 2025
fa948ef
dma-mapping: prepare dma_map_ops to conversion to physical address
rleon Oct 15, 2025
0352b42
dma-mapping: convert dummy ops to physical address mapping
rleon Oct 15, 2025
363f125
ARM: dma-mapping: Reduce struct page exposure in arch_sync_dma*()
rleon Oct 15, 2025
b6ae6cc
ARM: dma-mapping: Switch to physical address mapping callbacks
rleon Oct 15, 2025
95fa8c4
xen: swiotlb: Switch to physical address mapping callbacks
rleon Oct 15, 2025
c237dde
dma-mapping: remove unused mapping resource callbacks
rleon Oct 15, 2025
a8a9520
alpha: Convert mapping routine to rely on physical address
rleon Oct 15, 2025
d863e7c
MIPS/jazzdma: Provide physical address directly
rleon Oct 15, 2025
d707446
parisc: Convert DMA map_page to map_phys interface
rleon Oct 15, 2025
946a252
powerpc: Convert to physical address DMA mapping
rleon Oct 15, 2025
6a0c116
sparc: Use physical address DMA mapping
rleon Oct 15, 2025
648e809
x86: Use physical address for DMA mapping
rleon Oct 15, 2025
e4ecd65
xen: swiotlb: Convert mapping routine to rely on physical address
rleon Oct 15, 2025
9ade007
dma-mapping: remove unused map_page callback
rleon Oct 15, 2025
6b1060f
PCI/P2PDMA: Separate the mmap() support from the core logic
rleon Nov 20, 2025
d53e0a7
PCI/P2PDMA: Simplify bus address mapping API
rleon Nov 20, 2025
a8e76a9
PCI/P2PDMA: Refactor to separate core P2P functionality from memory a…
rleon Nov 20, 2025
a7060da
PCI/P2PDMA: Provide an access to pci_p2pdma_map_type() function
rleon Nov 20, 2025
7b8435a
PCI/P2PDMA: Document DMABUF model
jgunthorpe Nov 20, 2025
90954fd
dma-buf: provide phys_vec to scatter-gather mapping routine
rleon Nov 20, 2025
66c4d9e
vfio: Export vfio device get and put registration helpers
vivekkreddy Nov 20, 2025
7ed539a
vfio/pci: Share the core device pointer while invoking feature functions
vivekkreddy Nov 20, 2025
806d086
vfio/pci: Enable peer-to-peer DMA transactions by default
rleon Nov 20, 2025
d8b0d78
vfio/pci: Add dma-buf export support for MMIO regions
rleon Nov 20, 2025
c8303ac
vfio/nvgrace: Support get_dmabuf_phys
jgunthorpe Nov 20, 2025
5d8f2e0
iommu/dma: add missing support for DMA_ATTR_MMIO for dma_iova_unlink()
mszyprow Nov 24, 2025
e93044c
kmsan: fix missed kmsan_handle_dma() signature conversion
rleon Sep 17, 2025
84ec4b0
kmsan: fix kmsan_handle_dma() to avoid false positives
Oct 2, 2025
da264fe
nvme-pci: DMA unmap the correct regions in nvme_free_sgls
royger Jan 27, 2026
2828f1e
parisc: Set valid bit in high byte of 64-bit physical address
rleon Dec 18, 2025
323f40a
dma-buf: fix integer overflow in fill_sg_entry() for buffers >= 8GiB
Nov 26, 2025
a389c1b
vfio: Prevent from pinned DMABUF importers to attach to VFIO DMABUF
rleon Jan 21, 2026
4199f96
NVIDIA: SAUCE: [Config] Add CONFIG_VFIO_PCI_DMABUF to annotations
tdavenvidia Feb 9, 2026
052216f
cpufreq/amd-pstate: Call cppc_set_auto_sel() only for online CPUs
gautshen Nov 7, 2025
1b9d889
NVIDIA: VR: SAUCE: ACPI: CPPC: Fix remaining for_each_possible_cpu() …
Feb 11, 2026
b491166
kernel/kexec: change the prototype of kimage_map_segment()
Dec 16, 2025
f6f8dbe
kernel/kexec: fix IMA when allocation happens in CMA area
Dec 16, 2025
e1e8d04
NVIDIA: VR: SAUCE: kvm: arm64: Include kvm_emulate.h in kvm/arm_psci.h
Aug 20, 2025
0031881
NVIDIA: VR: SAUCE: arm64: RME: Handle Granule Protection Faults (GPFs)
stevenprice-arm Aug 20, 2025
3f3cf45
NVIDIA: VR: SAUCE: arm64: RME: Add SMC definitions for calling the RMM
stevenprice-arm Aug 20, 2025
f14d4e2
NVIDIA: VR: SAUCE: arm64: RME: Add wrappers for RMI calls
stevenprice-arm Aug 20, 2025
48d3554
NVIDIA: VR: SAUCE: arm64: RME: Check for RME support at KVM init
stevenprice-arm Aug 20, 2025
ab23b25
NVIDIA: VR: SAUCE: arm64: RME: Define the user ABI
stevenprice-arm Aug 20, 2025
1216fa8
NVIDIA: VR: SAUCE: arm64: RME: ioctls to create and configure realms
stevenprice-arm Aug 20, 2025
d68c93c
NVIDIA: VR: SAUCE: kvm: arm64: Don't expose debug capabilities for re…
Aug 20, 2025
ad38f33
NVIDIA: VR: SAUCE: KVM: arm64: Allow passing machine type in KVM crea…
stevenprice-arm Aug 20, 2025
b18e1d7
NVIDIA: VR: SAUCE: arm64: RME: RTT tear down
stevenprice-arm Aug 20, 2025
b73ff9a
NVIDIA: VR: SAUCE: arm64: RME: Allocate/free RECs to match vCPUs
stevenprice-arm Aug 20, 2025
ec660d7
NVIDIA: VR: SAUCE: KVM: arm64: vgic: Provide helper for number of lis…
stevenprice-arm Aug 20, 2025
774c58c
NVIDIA: VR: SAUCE: arm64: RME: Support for the VGIC in realms
stevenprice-arm Aug 20, 2025
0c28591
NVIDIA: VR: SAUCE: KVM: arm64: Support timers in realm RECs
stevenprice-arm Aug 20, 2025
7f3c232
NVIDIA: VR: SAUCE: arm64: RME: Allow VMM to set RIPAS
stevenprice-arm Aug 20, 2025
5f36be4
NVIDIA: VR: SAUCE: arm64: RME: Handle realm enter/exit
stevenprice-arm Aug 20, 2025
e83e668
NVIDIA: VR: SAUCE: arm64: RME: Handle RMI_EXIT_RIPAS_CHANGE
stevenprice-arm Aug 20, 2025
d248daf
NVIDIA: VR: SAUCE: KVM: arm64: Handle realm MMIO emulation
stevenprice-arm Aug 20, 2025
33d9de0
NVIDIA: VR: SAUCE: arm64: RME: Allow populating initial contents
stevenprice-arm Aug 20, 2025
b45ca1e
NVIDIA: VR: SAUCE: arm64: RME: Runtime faulting of memory
stevenprice-arm Aug 20, 2025
0b54738
NVIDIA: VR: SAUCE: KVM: arm64: Handle realm VCPU load
stevenprice-arm Aug 20, 2025
4111834
NVIDIA: VR: SAUCE: KVM: arm64: Validate register access for a Realm VM
stevenprice-arm Aug 20, 2025
a2b046a
NVIDIA: VR: SAUCE: KVM: arm64: Handle Realm PSCI requests
stevenprice-arm Aug 20, 2025
2aae1c0
NVIDIA: VR: SAUCE: KVM: arm64: WARN on injected undef exceptions
stevenprice-arm Aug 20, 2025
0f1e03a
NVIDIA: VR: SAUCE: arm64: Don't expose stolen time for realm guests
stevenprice-arm Aug 20, 2025
bccc26b
NVIDIA: VR: SAUCE: arm64: RME: allow userspace to inject aborts
jgouly Aug 20, 2025
7048984
NVIDIA: VR: SAUCE: arm64: RME: support RSI_HOST_CALL
jgouly Aug 20, 2025
a8cb1c3
NVIDIA: VR: SAUCE: arm64: RME: Allow checking SVE on VM instance
Aug 20, 2025
4aeb834
NVIDIA: VR: SAUCE: arm64: RME: Always use 4k pages for realms
stevenprice-arm Aug 20, 2025
6418079
NVIDIA: VR: SAUCE: arm64: RME: Prevent Device mappings for Realms
stevenprice-arm Aug 20, 2025
b016423
NVIDIA: VR: SAUCE: arm_pmu: Provide a mechanism for disabling the phy…
stevenprice-arm Aug 20, 2025
7670333
NVIDIA: VR: SAUCE: arm64: RME: Enable PMU support with a realm guest
stevenprice-arm Aug 20, 2025
8e12fe4
NVIDIA: VR: SAUCE: arm64: RME: Hide KVM_CAP_READONLY_MEM for realm gu…
stevenprice-arm Aug 20, 2025
e7815a4
NVIDIA: VR: SAUCE: arm64: RME: Propagate number of breakpoints and wa…
Aug 20, 2025
919486e
NVIDIA: VR: SAUCE: arm64: RME: Set breakpoint parameters through SET_…
Aug 20, 2025
5eb7cde
NVIDIA: VR: SAUCE: arm64: RME: Initialize PMCR.N with number counter …
Aug 20, 2025
c99c23a
NVIDIA: VR: SAUCE: arm64: RME: Propagate max SVE vector length from RMM
Aug 20, 2025
8464db7
NVIDIA: VR: SAUCE: arm64: RME: Configure max SVE vector length for a …
Aug 20, 2025
c0748cd
NVIDIA: VR: SAUCE: arm64: RME: Provide register list for unfinalized …
Aug 20, 2025
0a21dcf
NVIDIA: VR: SAUCE: arm64: RME: Provide accurate register list
Aug 20, 2025
00c0d48
NVIDIA: VR: SAUCE: KVM: arm64: Expose support for private memory
stevenprice-arm Aug 20, 2025
1156b72
NVIDIA: VR: SAUCE: KVM: arm64: Expose KVM_ARM_VCPU_REC to user space
stevenprice-arm Aug 20, 2025
90bf40a
NVIDIA: VR: SAUCE: KVM: arm64: Allow activating realms
stevenprice-arm Aug 20, 2025
e60824f
NVIDIA: VR: SAUCE: arm64: RME: Add MECID support
raghuncstate Oct 1, 2025
b2cb3f7
NVIDIA: VR: SAUCE: arm64: RME: Add bounds check
raghuncstate Nov 9, 2025
9476d83
NVIDIA: VR: SAUCE: KVM: arm64: Expose KVM_CAP_ARM_RME via module para…
ianm-nv Jan 30, 2026
f05301a
arm64: realm: ioremap: Allow mapping memory as encrypted
Sep 18, 2025
14fe8b7
arm64: Enable EFI secret area Securityfs support
Sep 18, 2025
3b73053
NVIDIA: VR: SAUCE: [Config] Update annotations for ARM CCA
ianm-nv Jan 20, 2026
092c9af
UBUNTU: [Packaging] Depend on 580 NVIDIA graphics driver components e…
jacobmartin0 Feb 13, 2026
bd231c6
UBUNTU: Start new release
jacobmartin0 Feb 13, 2026
b24d96e
UBUNTU: link-to-tracker: update tracking bug
jacobmartin0 Feb 13, 2026
c3d2374
UBUNTU: [Packaging] debian.nvidia-6.17/dkms-versions -- update from k…
jacobmartin0 Feb 13, 2026
7e69d1c
UBUNTU: [Packaging] Add libopencsd-dev as a build dependency
jacobmartin0 Feb 14, 2026
9c65dde
UBUNTU: Ubuntu-nvidia-6.17-6.17.0-1010.10
jacobmartin0 Feb 14, 2026
3be4f59
vfio/pci: Add vfio_pci_dma_buf_iommufd_map()
jgunthorpe Nov 21, 2025
c323831
iommufd: Add DMABUF to iopt_pages
jgunthorpe Nov 21, 2025
b672ff7
iommufd: Do not map/unmap revoked DMABUFs
jgunthorpe Nov 21, 2025
1455798
iommufd: Allow a DMABUF to be revoked
jgunthorpe Nov 21, 2025
fab38ee
iommufd: Allow MMIO pages in a batch
jgunthorpe Nov 21, 2025
18bbce5
iommufd: Have pfn_reader process DMABUF iopt_pages
jgunthorpe Nov 21, 2025
9b18f33
iommufd: Have iopt_map_file_pages convert the fd to a file
jgunthorpe Nov 21, 2025
0c23504
iommufd: Accept a DMABUF through IOMMU_IOAS_MAP_FILE
jgunthorpe Nov 21, 2025
bced406
iommufd/selftest: Add some tests for the dmabuf flow
jgunthorpe Nov 21, 2025
4 changes: 2 additions & 2 deletions Documentation/core-api/dma-api.rst
@@ -761,7 +761,7 @@ example warning message may look like this::
[<ffffffff80235177>] find_busiest_group+0x207/0x8a0
[<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
[<ffffffff803c7ea3>] check_unmap+0x203/0x490
[<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
[<ffffffff803c8259>] debug_dma_unmap_phys+0x49/0x50
[<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
[<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
[<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
@@ -855,7 +855,7 @@ that a driver may be leaking mappings.
dma-debug interface debug_dma_mapping_error() to debug drivers that fail
to check DMA mapping errors on addresses returned by dma_map_single() and
dma_map_page() interfaces. This interface clears a flag set by
debug_dma_map_page() to indicate that dma_mapping_error() has been called by
debug_dma_map_phys() to indicate that dma_mapping_error() has been called by
the driver. When the driver does the unmap, debug_dma_unmap() checks the flag
and, if it is still set, prints a warning message that includes the call trace
leading up to the unmap. This interface can be called from dma_mapping_error()
18 changes: 18 additions & 0 deletions Documentation/core-api/dma-attributes.rst
@@ -130,3 +130,21 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
subsystem that the buffer is fully accessible at the elevated privilege
level (and ideally inaccessible or at least read-only at the
lesser-privileged levels).

DMA_ATTR_MMIO
-------------

This attribute indicates that the physical address is not normal system
memory. It may not be used with the kmap*()/phys_to_virt()/phys_to_page()
functions; it may not be cacheable, and access using CPU load/store
instructions may not be allowed.

Usually this will be used to describe MMIO addresses or other non-cacheable
register addresses. When DMA mapping this sort of address the operation is
called Peer to Peer, as one device is DMA'ing to another device.
For PCI devices the p2pdma APIs must be used to determine if
DMA_ATTR_MMIO is appropriate.

For architectures that require cache flushing for DMA coherence,
DMA_ATTR_MMIO will not perform any cache flushing. The address
provided must never be mapped cacheable into the CPU.
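
As an illustrative, kernel-only sketch (not standalone-compilable): a driver
mapping a peer device's BAR might pass DMA_ATTR_MMIO through the
phys_addr_t-based interface introduced by this series. The
dma_map_phys()/dma_unmap_phys() names follow the commit titles above; treat the
exact signatures as assumptions.

```c
/* Sketch: map a peer device's BAR for DMA with DMA_ATTR_MMIO.
 * dma_map_phys()/dma_unmap_phys() are the phys_addr_t-based interfaces
 * from this series; exact signatures are assumed, not quoted. */
phys_addr_t bar = pci_resource_start(peer_pdev, 0);
dma_addr_t addr;

addr = dma_map_phys(dev, bar, size, DMA_TO_DEVICE, DMA_ATTR_MMIO);
if (dma_mapping_error(dev, addr))
        return -ENOMEM;

/* ... program the initiating device with 'addr' ... */

dma_unmap_phys(dev, addr, size, DMA_TO_DEVICE, DMA_ATTR_MMIO);
```

Because DMA_ATTR_MMIO suppresses cache maintenance and struct-page lookups, the
same call with normal RAM addresses would be incorrect.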
97 changes: 74 additions & 23 deletions Documentation/driver-api/pci/p2pdma.rst
@@ -9,22 +9,48 @@ between two devices on the bus. This type of transaction is henceforth
called Peer-to-Peer (or P2P). However, there are a number of issues that
make P2P transactions tricky to do in a perfectly safe way.

One of the biggest issues is that PCI doesn't require forwarding
transactions between hierarchy domains, and in PCIe, each Root Port
defines a separate hierarchy domain. To make things worse, there is no
simple way to determine if a given Root Complex supports this or not.
(See PCIe r4.0, sec 1.3.1). Therefore, as of this writing, the kernel
only supports doing P2P when the endpoints involved are all behind the
same PCI bridge, as such devices are all in the same PCI hierarchy
domain, and the spec guarantees that all transactions within the
hierarchy will be routable, but it does not require routing
between hierarchies.

The second issue is that to make use of existing interfaces in Linux,
memory that is used for P2P transactions needs to be backed by struct
pages. However, PCI BARs are not typically cache coherent so there are
a few corner case gotchas with these pages so developers need to
be careful about what they do with them.
For PCIe, the routing of Transaction Layer Packets (TLPs) is well-defined up
until they reach a host bridge or root port. If the path includes PCIe switches,
then depending on the ACS settings the transaction can be routed entirely within
the PCIe hierarchy and never reach the root port. The kernel will evaluate
the PCIe topology and always permit P2P in these well-defined cases.

However, if the P2P transaction reaches the host bridge then it might have to
hairpin back out of the same root port, be routed inside the CPU SoC to another
PCIe root port, or be routed internally within the SoC.

The PCIe specification doesn't define the forwarding of transactions between
hierarchy domains, and the kernel defaults to blocking such routing. There is an
allowlist for detecting known-good hardware, in which case P2P between any
two PCIe devices will be permitted.

Since P2P inherently involves transactions between two devices, it requires two
co-operating drivers inside the kernel. The providing driver has to convey
its MMIO to the consuming driver. To meet the driver model lifecycle rules, the
MMIO must have all DMA mappings removed, all CPU accesses prevented, and all
page table mappings undone before the providing driver completes remove().

This requires the providing and consuming driver to actively work together to
guarantee that the consuming driver has stopped using the MMIO during a removal
cycle. This is done by either a synchronous invalidation shutdown or waiting
for all usage refcounts to reach zero.

At the lowest level the P2P subsystem offers a naked struct p2p_provider that
delegates lifecycle management to the providing driver. It is expected that
drivers using this option will wrap their MMIO memory in DMABUF and use DMABUF
to provide an invalidation shutdown. These MMIO addresses have no struct page, and
if used with mmap() must create special PTEs. As such there are very few
kernel uAPIs that can accept pointers to them; in particular they cannot be used
with read()/write(), including O_DIRECT.

Building on this, the subsystem offers a layer to wrap the MMIO in a ZONE_DEVICE
pgmap of MEMORY_DEVICE_PCI_P2PDMA to create struct pages. The lifecycle of
pgmap ensures that when the pgmap is destroyed all other drivers have stopped
using the MMIO. This option works with O_DIRECT flows, in some cases, if the
underlying subsystem supports handling MEMORY_DEVICE_PCI_P2PDMA through
FOLL_PCI_P2PDMA. The use of FOLL_LONGTERM is prevented. As this relies on pgmap,
it also depends on architecture support and is subject to alignment and
minimum-size limitations.


Driver Writer's Guide
@@ -114,14 +140,39 @@ allocating scatter-gather lists with P2P memory.
Struct Page Caveats
-------------------

Driver writers should be very careful about not passing these special
struct pages to code that isn't prepared for it. At this time, the kernel
interfaces do not have any checks for ensuring this. This obviously
precludes passing these pages to userspace.
While the MEMORY_DEVICE_PCI_P2PDMA pages can be installed in VMAs,
pin_user_pages() and related functions will not return them unless FOLL_PCI_P2PDMA is set.

P2P memory is also technically IO memory but should never have any side
effects behind it. Thus, the order of loads and stores should not be important
and ioreadX(), iowriteX() and friends should not be necessary.
The MEMORY_DEVICE_PCI_P2PDMA pages require care to support in the kernel. The
KVA is still MMIO and must still be accessed through the normal
readX()/writeX()/etc helpers. Direct CPU access (e.g. memcpy) is forbidden, just
like any other MMIO mapping. While this will actually work on some
architectures, others will experience corruption or just crash in the kernel.
Supporting FOLL_PCI_P2PDMA in a subsystem requires scrubbing it to ensure no CPU
access happens.


Usage With DMABUF
=================

DMABUF provides an alternative to the above struct page-based
client/provider/orchestrator system and should be used when struct page
doesn't exist. In this mode the exporting driver will wrap
some of its MMIO in a DMABUF and give the DMABUF FD to userspace.

Userspace can then pass the FD to an importing driver which will ask the
exporting driver to map it to the importer.

In this case the initiator and target pci_devices are known and the P2P subsystem
is used to determine the mapping type. The phys_addr_t-based DMA API is used to
establish the dma_addr_t.

Lifecycle is controlled by DMABUF move_notify(). When the exporting driver wants
to remove() it must deliver an invalidation shutdown to all DMABUF importing
drivers through move_notify() and synchronously DMA unmap all the MMIO.

No importing driver can continue to have a DMA map to the MMIO after the
exporting driver has destroyed its p2p_provider.
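
The remove() sequence described above can be sketched as follows (kernel-only,
illustrative; dma_buf_move_notify() is the real DMABUF API, while the provider
teardown helper name here is hypothetical):

```c
/* Exporter teardown, per the lifecycle rules above. Illustrative only. */
dma_resv_lock(dmabuf->resv, NULL);
dma_buf_move_notify(dmabuf);    /* importers must DMA unmap synchronously */
dma_resv_unlock(dmabuf->resv);

/* Only after all importers have unmapped is it safe to tear down the
 * provider; the helper name below is hypothetical. */
p2p_provider_destroy(provider);
```

The key ordering constraint is that move_notify() completes before the
p2p_provider goes away, so no importer can hold a stale DMA mapping.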


P2P DMA Support Library
92 changes: 90 additions & 2 deletions Documentation/virt/kvm/api.rst
@@ -181,8 +181,20 @@ flag KVM_VM_MIPS_VZ.
ARM64:
^^^^^^

On arm64, the physical address size for a VM (IPA Size limit) is limited
to 40bits by default. The limit can be configured if the host supports the
On arm64, the machine type identifier is used to encode a machine type and
the physical address size for the VM. The lower byte (bits [7:0]) encodes the
address size and the upper bits [11:8] encode the machine type. The machine
types that might be available are:

====================== ============================================
KVM_VM_TYPE_ARM_NORMAL A standard VM
KVM_VM_TYPE_ARM_REALM A "Realm" VM using the Arm Confidential
Compute extensions; the VM's memory is
protected from the host.
====================== ============================================

The physical address size for a VM (IPA Size limit) is limited to 40bits
by default. The limit can be configured if the host supports the
extension KVM_CAP_ARM_VM_IPA_SIZE. When supported, use
KVM_VM_TYPE_ARM_IPA_SIZE(IPA_Bits) to set the size in the machine type
identifier, where IPA_Bits is the maximum width of any physical
@@ -1295,6 +1307,8 @@ User space may need to inject several types of events to the guest.
Set the pending SError exception state for this VCPU. It is not possible to
'cancel' an SError that has been made pending.

User space cannot inject SErrors into Realms.

If the guest performed an access to I/O memory which could not be handled by
userspace, for example because of missing instruction syndrome decode
information or because there is no device mapped at the accessed IPA, then
@@ -3550,6 +3564,11 @@ Possible features:
Depends on KVM_CAP_ARM_EL2_E2H0.
KVM_ARM_VCPU_HAS_EL2 must also be set.

- KVM_ARM_VCPU_REC: Allocate a REC (Realm Execution Context) for this
VCPU. This must be specified on all VCPUs created in a Realm VM.
Depends on KVM_CAP_ARM_RME.
Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_REC).

4.83 KVM_ARM_PREFERRED_TARGET
-----------------------------

@@ -5123,6 +5142,7 @@ Recognised values for feature:

===== ===========================================
arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE)
arm64 KVM_ARM_VCPU_REC (requires KVM_CAP_ARM_RME)
===== ===========================================

Finalizes the configuration of the specified vcpu feature.
@@ -6477,6 +6497,30 @@ the capability to be present.

`flags` must currently be zero.

4.144 KVM_ARM_VCPU_RMM_PSCI_COMPLETE
------------------------------------

:Capability: KVM_CAP_ARM_RME
:Architectures: arm64
:Type: vcpu ioctl
:Parameters: struct kvm_arm_rmm_psci_complete (in)
:Returns: 0 if successful, < 0 on error

::

struct kvm_arm_rmm_psci_complete {
__u64 target_mpidr;
__u32 psci_status;
__u32 padding[3];
};

Where PSCI functions are handled by user space, the RMM needs to be informed of
the target of the operation using `target_mpidr`, along with the status
(`psci_status`). The RMM v1.0 specification defines two functions that require
this call: PSCI_CPU_ON and PSCI_AFFINITY_INFO.

If the kernel is handling PSCI then this is done automatically and the VMM
doesn't need to call this ioctl.

.. _kvm_run:

@@ -8726,6 +8770,47 @@ This capability indicates to userspace whether a PFNMAP memory region
can be safely mapped as cacheable. This relies on the presence of
force write back (FWB) feature support on the hardware.

7.44 KVM_CAP_ARM_RME
--------------------

:Architectures: arm64
:Target: VM
:Parameters: args[0] provides an action, args[1] points to a structure in
memory for the action.
:Returns: 0 on success, negative value on error

Used to configure and set up the memory for a Realm. The available actions are:

================================= =============================================
KVM_CAP_ARM_RME_CONFIG_REALM Takes struct arm_rme_config as args[1] and
configures realm parameters prior to it being
created.

Options are ARM_RME_CONFIG_RPV to set the
"Realm Personalization Value" and
ARM_RME_CONFIG_HASH_ALGO to set the hash
algorithm.

KVM_CAP_ARM_RME_CREATE_REALM Request the RMM to create the realm. The
realm's configuration parameters must be set
first.

KVM_CAP_ARM_RME_INIT_RIPAS_REALM Takes struct arm_rme_init_ripas as args[1]
and sets the RIPAS (Realm IPA State) of a
specified area of the realm's IPA space to
RIPAS_RAM.

KVM_CAP_ARM_RME_POPULATE_REALM Takes struct arm_rme_populate_realm as
args[1] and populates a region of protected
address space by copying the data from the
shared alias.

KVM_CAP_ARM_RME_ACTIVATE_REALM Request the RMM to activate the realm. No
changes can be made to the Realm's populated
memory, IPA state, configuration parameters
or vCPU additions after this step.
================================= =============================================
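
A VMM invocation of one of these actions might look like the following
userspace sketch. It needs /dev/kvm and the RME UAPI headers, so it is
illustrative only; the constant names follow the table above:

```c
/* Sketch: ask the RMM to create the realm for an already-configured VM.
 * vm_fd is a VM created with KVM_VM_TYPE_ARM_REALM in its machine type. */
struct kvm_enable_cap cap = {
        .cap  = KVM_CAP_ARM_RME,
        .args = { KVM_CAP_ARM_RME_CREATE_REALM },
};

if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
        err(1, "KVM_CAP_ARM_RME_CREATE_REALM");
```

CONFIG_REALM actions must precede CREATE_REALM, and ACTIVATE_REALM must come
last, matching the ordering constraints stated in the table.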

7.45 KVM_CAP_ARM_SEA_TO_USER
----------------------------

@@ -9005,6 +9090,9 @@ is supported, then the other should as well and vice versa. For arm64
see Documentation/virt/kvm/devices/vcpu.rst "KVM_ARM_VCPU_PVTIME_CTRL".
For x86 see Documentation/virt/kvm/x86/msr.rst "MSR_KVM_STEAL_TIME".

Note that steal time accounting is not available when a guest is running
within an Arm CCA realm (machine type KVM_VM_TYPE_ARM_REALM).

8.25 KVM_CAP_S390_DIAG318
-------------------------
