rmm::mr::cuda_async_memory_resource Class Reference (final)

device_memory_resource derived class that uses cudaMallocAsync/cudaFreeAsync for allocation/deallocation. More...

#include <cuda_async_memory_resource.hpp>

Inheritance diagram for rmm::mr::cuda_async_memory_resource: [diagram omitted]
Collaboration diagram for rmm::mr::cuda_async_memory_resource: [diagram omitted]

Public Member Functions

 cuda_async_memory_resource (thrust::optional< std::size_t > initial_pool_size={}, thrust::optional< std::size_t > release_threshold={})
 Constructs a cuda_async_memory_resource with the optionally specified initial pool size and release threshold. More...
 
 cuda_async_memory_resource (cuda_async_memory_resource const &)=delete
 
 cuda_async_memory_resource (cuda_async_memory_resource &&)=delete
 
cuda_async_memory_resource & 	operator= (cuda_async_memory_resource const &)=delete
 
cuda_async_memory_resource & 	operator= (cuda_async_memory_resource &&)=delete
 
bool supports_streams () const noexcept override
 Query whether the resource supports use of non-null CUDA streams for allocation/deallocation. cuda_async_memory_resource supports streams. More...
 
bool supports_get_mem_info () const noexcept override
 Query whether the resource supports the get_mem_info API. More...
 
- Public Member Functions inherited from rmm::mr::device_memory_resource
 device_memory_resource (device_memory_resource const &)=default
 
device_memory_resource & 	operator= (device_memory_resource const &)=default
 
 device_memory_resource (device_memory_resource &&) noexcept=default
 
device_memory_resource & 	operator= (device_memory_resource &&) noexcept=default
 
void * allocate (std::size_t bytes, cuda_stream_view stream=cuda_stream_view{})
 Allocates memory of size at least bytes. More...
 
void deallocate (void *ptr, std::size_t bytes, cuda_stream_view stream=cuda_stream_view{})
 Deallocate memory pointed to by ptr. More...
 
bool is_equal (device_memory_resource const &other) const noexcept
 Compare this resource to another. More...
 
std::pair< std::size_t, std::size_t > get_mem_info (cuda_stream_view stream) const
 Queries the amount of free and total memory for the resource. More...
 

Detailed Description

device_memory_resource derived class that uses cudaMallocAsync/cudaFreeAsync for allocation/deallocation.
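
A minimal usage sketch (not part of the original reference): it assumes the RMM header paths below and a CUDA-capable device, and shows stream-ordered allocation and deallocation through this resource.

    #include <rmm/cuda_stream.hpp>
    #include <rmm/mr/device/cuda_async_memory_resource.hpp>

    int main()
    {
      // Resource backed by cudaMallocAsync/cudaFreeAsync with default pool settings.
      rmm::mr::cuda_async_memory_resource mr{};

      rmm::cuda_stream stream;  // non-default stream for stream-ordered (de)allocation

      // Allocate and free 1 MiB in stream order.
      void* ptr = mr.allocate(1u << 20, stream.view());
      mr.deallocate(ptr, 1u << 20, stream.view());

      stream.synchronize();
      return 0;
    }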

Constructor & Destructor Documentation

◆ cuda_async_memory_resource()

rmm::mr::cuda_async_memory_resource::cuda_async_memory_resource (thrust::optional< std::size_t > initial_pool_size = {},
                                                                  thrust::optional< std::size_t > release_threshold = {})  [inline]

Constructs a cuda_async_memory_resource with the optionally specified initial pool size and release threshold.

If the pool size grows beyond the release threshold, unused memory held by the pool will be released at the next synchronization event.

Exceptions
    rmm::runtime_error	if the CUDA version does not support cudaMallocAsync

Parameters
    initial_pool_size	Optional initial size in bytes of the pool. If no value is provided, the initial pool size is half of the available GPU memory.
    release_threshold	Optional release threshold size in bytes of the pool. If no value is provided, the release threshold is set to the total amount of memory on the current device.
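
For illustration, a sketch of constructing the resource with explicit values; the 1 GiB and 2 GiB sizes are arbitrary choices, not defaults from this documentation.

    #include <rmm/mr/device/cuda_async_memory_resource.hpp>

    // Start the pool at 1 GiB; once the pool grows past 2 GiB, unused memory
    // is released back to the driver at the next synchronization event.
    rmm::mr::cuda_async_memory_resource mr{std::size_t{1} << 30,   // initial_pool_size
                                           std::size_t{2} << 30};  // release_threshold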

Member Function Documentation

◆ supports_get_mem_info()

bool rmm::mr::cuda_async_memory_resource::supports_get_mem_info ( ) const noexcept  [inline, override, virtual]

Query whether the resource supports the get_mem_info API.

Returns
false

Implements rmm::mr::device_memory_resource.

◆ supports_streams()

bool rmm::mr::cuda_async_memory_resource::supports_streams ( ) const noexcept  [inline, override, virtual]

Query whether the resource supports use of non-null CUDA streams for allocation/deallocation. cuda_async_memory_resource supports streams.

Returns
true

Implements rmm::mr::device_memory_resource.
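
A brief sketch of querying both capability flags on this resource; the expected values follow the documentation above.

    rmm::mr::cuda_async_memory_resource mr{};

    bool streams_ok  = mr.supports_streams();       // true: allocation is stream-ordered
    bool mem_info_ok = mr.supports_get_mem_info();  // false: get_mem_info is not supported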


The documentation for this class was generated from the following file:
    cuda_async_memory_resource.hpp