device_memory_resource derived class that uses cudaMallocAsync/cudaFreeAsync for allocation/deallocation.
#include <cuda_async_memory_resource.hpp>


Public Types
| enum class | allocation_handle_type : std::int32_t { none = cudaMemHandleTypeNone, posix_file_descriptor = cudaMemHandleTypePosixFileDescriptor, win32 = cudaMemHandleTypeWin32, win32_kmt = cudaMemHandleTypeWin32Kmt, fabric = 0x8 } |
| Flags for specifying memory allocation handle types. More... | |
| enum class | mempool_usage : unsigned short { hw_decompress = 0x2 } |
| Flags for specifying memory pool usage. More... | |
Public Member Functions
| cuda_async_memory_resource (std::optional< std::size_t > initial_pool_size={}, std::optional< std::size_t > release_threshold={}, std::optional< allocation_handle_type > export_handle_type={}) | |
| Constructs a cuda_async_memory_resource with the optionally specified initial pool size and release threshold. More... | |
| cudaMemPool_t | pool_handle () const noexcept |
| Returns the underlying native handle to the CUDA pool. More... | |
| cuda_async_memory_resource (cuda_async_memory_resource const &)=delete | |
| cuda_async_memory_resource (cuda_async_memory_resource &&)=delete | |
| cuda_async_memory_resource & | operator= (cuda_async_memory_resource const &)=delete |
| cuda_async_memory_resource & | operator= (cuda_async_memory_resource &&)=delete |
Public Member Functions inherited from rmm::mr::device_memory_resource
| device_memory_resource (device_memory_resource const &)=default | |
| Default copy constructor. | |
| device_memory_resource (device_memory_resource &&) noexcept=default | |
| Default move constructor. | |
| device_memory_resource & | operator= (device_memory_resource const &)=default |
| Default copy assignment operator. More... | |
| device_memory_resource & | operator= (device_memory_resource &&) noexcept=default |
| Default move assignment operator. More... | |
| void * | allocate_sync (std::size_t bytes, std::size_t alignment=rmm::CUDA_ALLOCATION_ALIGNMENT) |
| Allocates memory of size at least bytes. More... | |
| void | deallocate_sync (void *ptr, std::size_t bytes, [[maybe_unused]] std::size_t alignment=rmm::CUDA_ALLOCATION_ALIGNMENT) noexcept |
| Deallocate memory pointed to by ptr. More... | |
| void * | allocate (cuda_stream_view stream, std::size_t bytes, std::size_t alignment=rmm::CUDA_ALLOCATION_ALIGNMENT) |
| Allocates memory of size at least bytes on the specified stream. More... | |
| void | deallocate (cuda_stream_view stream, void *ptr, std::size_t bytes, [[maybe_unused]] std::size_t alignment=rmm::CUDA_ALLOCATION_ALIGNMENT) noexcept |
| Deallocate memory pointed to by ptr on the specified stream. More... | |
| bool | is_equal (device_memory_resource const &other) const noexcept |
| Compare this resource to another. More... | |
| bool | operator== (device_memory_resource const &other) const noexcept |
| Comparison operator with another device_memory_resource. More... | |
| bool | operator!= (device_memory_resource const &other) const noexcept |
| Comparison operator with another device_memory_resource. More... | |
Detailed Description

device_memory_resource derived class that uses cudaMallocAsync/cudaFreeAsync for allocation/deallocation.
allocation_handle_type
Flags for specifying memory allocation handle types.
These values mirror cudaMemAllocationHandleType: we need a placeholder that can be used consistently in the constructor of cuda_async_memory_resource with all supported versions of CUDA. See the cudaMemAllocationHandleType docs at https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html and ensure the enum values are kept in sync with the CUDA documentation.
mempool_usage
Flags for specifying memory pool usage.
See the cudaMemPoolProps docs at https://docs.nvidia.com/cuda/cuda-runtime-api/structcudaMemPoolProps.html and ensure the enum values are kept in sync with the CUDA documentation. cudaMemPoolCreateUsageHwDecompress is currently the only supported usage flag, introduced in CUDA 12.8 and documented at https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html.

| Enumerator | |
|---|---|
| hw_decompress | If set, indicates that the memory can be used as a buffer for hardware-accelerated decompression. |
cuda_async_memory_resource()
Constructs a cuda_async_memory_resource with the optionally specified initial pool size and release threshold.
If the pool size grows beyond the release threshold, unused memory held by the pool will be released at the next synchronization event.
Exceptions

| rmm::logic_error | If the CUDA version does not support cudaMallocAsync |

Parameters

| initial_pool_size | Optional initial size in bytes of the pool. If provided, the pool will be primed by allocating and immediately deallocating this amount of memory on the default CUDA stream. |
| release_threshold | Optional release threshold size in bytes of the pool. If no value is provided, the release threshold is set to the total amount of memory on the current device. |
| export_handle_type | Optional cudaMemAllocationHandleType specifying the handle type that allocations from this resource should support for interprocess communication (IPC). Default is cudaMemHandleTypeNone, i.e. no IPC support. |
pool_handle()
Returns the underlying native handle to the CUDA pool.