pub struct RiscvCoherentDmaFence { /* private fields */ }
An implementation of DmaFence for RISC-V systems with cache-coherent DMA
memory accesses.
The provided release and acquire methods use opaque assembly blocks and
RISC-V FENCE instructions to make prior writes to DMA buffers visible to
DMA devices, and DMA writes visible to prior memory reads, respectively.
These primitives are sufficient to implement the release / acquire
semantics of DmaFence on cache-coherent platforms, where all memory
writes, once written back, are immediately visible to DMA devices, and
all DMA writes are immediately visible to CPU reads.
For platforms where explicit cache-flush instructions are required, this implementation alone is insufficient and must be extended with the necessary platform-specific instructions.
Implementations
impl RiscvCoherentDmaFence
pub unsafe fn new() -> Self
Construct a new RiscvCoherentDmaFence.
Refer to the type-level documentation and the documentation of
the DmaFence trait and its implementation for
RiscvCoherentDmaFence for more
details.
Safety
This RiscvCoherentDmaFence implementation is insufficient for
platforms with non-coherent DMA, where explicit cache-flush instructions
are required. By using unsafe, callers of this function promise that
the resulting instance is not used for non-coherent DMA mappings.
Trait Implementations
impl Clone for RiscvCoherentDmaFence
fn clone(&self) -> RiscvCoherentDmaFence
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for RiscvCoherentDmaFence
impl DmaFence for RiscvCoherentDmaFence
Available on RISC-V RV32 or RISC-V RV64 only.
fn release<T>(self, slice_ptr: *mut [T])
Expose prior writes to in-memory buffers to subsequent DMA operations.
This assembly block ensures that neither the compiler nor the CPU reorders any memory writes past the point at which a subsequent memory or I/O access is made (e.g., to start a DMA transaction).
Conventionally, we’d use the built-in core::sync::atomic::fence for
this, but that explicitly cannot be used to establish synchronization
among non-atomic accesses.
Instead, to deal with any potential compiler re-ordering, we use an
asm!() that does not have the nomem option set. This block is
opaque to the compiler. We further explicitly pass in a pointer
originating from, and thus carrying provenance of our DMA buffer. This
should be sufficient to make the compiler assume that this function may
read the entire DMA buffer, and thus cause it to commit all pending
writes before this asm!() block.
To deal with any hardware re-ordering, we manually issue a RISC-V FENCE instruction with a predecessor set including all memory writes (to make the buffer contents visible to hardware), and a successor set including all memory reads, memory writes, and I/O reads and writes. All updates to the buffer are then guaranteed to be written out to memory before a DMA operation is started by reading or writing an MMIO register. We include memory reads and writes in the successor set too, in case the memory containing the MMIO registers is incorrectly not mapped as I/O memory.
This is only sufficient for platforms or devices with coherent DMA. As per the RISC-V unprivileged spec (version 20250508), “[n]on-coherent DMA may need additional synchronization (such as cache flush or invalidate mechanisms); currently any such extra synchronization will be device-specific”.
fn acquire<T>(self, slice_ptr: *mut [T])
Expose prior writes by DMA peripherals to subsequent memory reads.
This assembly block ensures that neither the compiler nor the CPU reorders any subsequent memory reads to before the point at which a prior memory or I/O access is made (e.g., to confirm that a DMA transaction has completed).
Conventionally, we’d use the built-in core::sync::atomic::fence for
this, but that explicitly cannot be used to establish synchronization
among non-atomic accesses.
Instead, to deal with any potential compiler re-ordering, we use an
asm!() that does not have the nomem option set. This block
is opaque to the compiler. We further explicitly pass in a pointer
originating from, and thus carrying provenance of our DMA
buffer. This should be sufficient to make the compiler assume that
this function may write to the entire DMA buffer, and thus prevent it
from moving reads to before this asm!() block.
To deal with any hardware re-ordering, we manually issue a RISC-V FENCE instruction with a predecessor set including all memory reads and I/O reads (to ensure that the DMA data is only read after a prior status-register or in-memory-descriptor read indicates that the data is ready), and a successor set of all memory reads. This prevents the CPU from issuing read instructions to the DMA buffer before a prior read has confirmed that the data is ready.
This is only sufficient for platforms or devices with coherent DMA. As per the RISC-V unprivileged spec (version 20250508), “[n]on-coherent DMA may need additional synchronization (such as cache flush or invalidate mechanisms); currently any such extra synchronization will be device-specific”.