Files | Classes | Typedefs | Functions | Variables
Memory Resources

Files

file  arena_memory_resource.hpp
 
file  binning_memory_resource.hpp
 
file  cuda_async_managed_memory_resource.hpp
 
file  cuda_async_memory_resource.hpp
 
file  cuda_async_view_memory_resource.hpp
 
file  cuda_memory_resource.hpp
 
file  fixed_size_memory_resource.hpp
 
file  is_resource_adaptor.hpp
 
file  managed_memory_resource.hpp
 
file  pinned_host_memory_resource.hpp
 
file  polymorphic_allocator.hpp
 
file  pool_memory_resource.hpp
 
file  sam_headroom_memory_resource.hpp
 
file  system_memory_resource.hpp
 
file  process_is_exiting.hpp
 
file  resource_ref.hpp
 

Classes

class  rmm::mr::arena_memory_resource
 A suballocator that emphasizes fragmentation avoidance and scalable concurrency support. More...
 
class  rmm::mr::binning_memory_resource
 Allocates memory from upstream resources associated with bin sizes. More...
 
class  rmm::mr::callback_memory_resource
 A device memory resource that uses the provided callbacks for memory allocation and deallocation. More...
 
class  rmm::mr::cuda_async_managed_memory_resource
 Memory resource that uses cudaMallocFromPoolAsync/cudaFreeAsync with a managed memory pool for allocation/deallocation. More...
 
class  rmm::mr::cuda_async_memory_resource
 Memory resource that uses cudaMallocAsync/cudaFreeAsync for allocation/deallocation. More...
 
class  rmm::mr::cuda_async_view_memory_resource
 Memory resource that uses cudaMallocAsync/cudaFreeAsync for allocation/deallocation with an existing, user-provided CUDA memory pool. More...
 
class  rmm::mr::cuda_memory_resource
 Memory resource that uses cudaMalloc/cudaFree for allocation/deallocation. More...
 
class  rmm::mr::fixed_size_memory_resource
 A memory resource which allocates memory blocks of a single fixed size. More...
 
class  rmm::mr::managed_memory_resource
 Memory resource that uses cudaMallocManaged/cudaFree for allocation/deallocation. More...
 
class  rmm::mr::pinned_host_memory_resource
 Memory resource class for allocating pinned host memory. More...
 
class  rmm::mr::polymorphic_allocator< T >
 A stream ordered Allocator using a device_async_resource_ref to satisfy (de)allocations. More...
 
class  rmm::mr::stream_allocator_adaptor< Allocator >
 Adapts a stream ordered allocator to provide a standard Allocator interface. More...
 
class  rmm::mr::pool_memory_resource
 A coalescing best-fit suballocator which uses a pool of memory allocated from an upstream memory_resource. More...
 
class  rmm::mr::sam_headroom_memory_resource
 Resource that uses system memory resource to allocate memory with a headroom. More...
 
class  rmm::mr::system_memory_resource
 Memory resource that uses malloc/free for allocation/deallocation. More...
 

Typedefs

using rmm::mr::allocate_callback_t = std::function< void *(std::size_t, cuda_stream_view, void *)>
 Callback function type used by callback_memory_resource for allocation. More...
 
using rmm::mr::deallocate_callback_t = std::function< void(void *, std::size_t, cuda_stream_view, void *)>
 Callback function type used by callback_memory_resource for deallocation. More...
 
using rmm::device_resource_ref = cuda::mr::synchronous_resource_ref< cuda::mr::device_accessible >
 Alias for a cuda::mr::synchronous_resource_ref with the property cuda::mr::device_accessible.
 
using rmm::device_async_resource_ref = cuda::mr::resource_ref< cuda::mr::device_accessible >
 Alias for a cuda::mr::resource_ref with the property cuda::mr::device_accessible.
 
using rmm::host_resource_ref = cuda::mr::synchronous_resource_ref< cuda::mr::host_accessible >
 Alias for a cuda::mr::synchronous_resource_ref with the property cuda::mr::host_accessible.
 
using rmm::host_async_resource_ref = cuda::mr::resource_ref< cuda::mr::host_accessible >
 Alias for a cuda::mr::resource_ref with the property cuda::mr::host_accessible.
 
using rmm::host_device_resource_ref = cuda::mr::synchronous_resource_ref< cuda::mr::host_accessible, cuda::mr::device_accessible >
 Alias for a cuda::mr::synchronous_resource_ref with the properties cuda::mr::host_accessible and cuda::mr::device_accessible.
 
using rmm::host_device_async_resource_ref = cuda::mr::resource_ref< cuda::mr::host_accessible, cuda::mr::device_accessible >
 Alias for a cuda::mr::resource_ref with the properties cuda::mr::host_accessible and cuda::mr::device_accessible.
 

Functions

device_async_resource_ref rmm::mr::get_per_device_resource_ref (cuda_device_id device_id)
 Get the device_async_resource_ref for the specified device. More...
 
cuda::mr::any_resource< cuda::mr::device_accessible > rmm::mr::set_per_device_resource (cuda_device_id device_id, cuda::mr::any_resource< cuda::mr::device_accessible > new_resource)
 Set the memory resource for the specified device. More...
 
cuda::mr::any_resource< cuda::mr::device_accessible > rmm::mr::set_per_device_resource_ref (cuda_device_id device_id, device_async_resource_ref new_resource_ref)
 Set the device_async_resource_ref for the specified device to new_resource_ref. More...
 
device_async_resource_ref rmm::mr::get_current_device_resource_ref ()
 Get the device_async_resource_ref for the current device. More...
 
cuda::mr::any_resource< cuda::mr::device_accessible > rmm::mr::set_current_device_resource (cuda::mr::any_resource< cuda::mr::device_accessible > new_resource)
 Set the memory resource for the current device. More...
 
cuda::mr::any_resource< cuda::mr::device_accessible > rmm::mr::set_current_device_resource_ref (device_async_resource_ref new_resource_ref)
 Set the device_async_resource_ref for the current device. More...
 
cuda::mr::any_resource< cuda::mr::device_accessible > rmm::mr::reset_per_device_resource (cuda_device_id device_id)
 Reset the memory resource for the specified device to the initial resource. More...
 
cuda::mr::any_resource< cuda::mr::device_accessible > rmm::mr::reset_current_device_resource ()
 Reset the memory resource for the current device to the initial resource. More...
 
cuda::mr::any_resource< cuda::mr::device_accessible > rmm::mr::reset_per_device_resource_ref (cuda_device_id device_id)
 Reset the device_async_resource_ref for the specified device to the initial resource. More...
 
cuda::mr::any_resource< cuda::mr::device_accessible > rmm::mr::reset_current_device_resource_ref ()
 Reset the device_async_resource_ref for the current device to the initial resource. More...
 
template<typename T , typename U >
bool rmm::mr::operator== (polymorphic_allocator< T > const &lhs, polymorphic_allocator< U > const &rhs)
 Compare two polymorphic_allocators for equality. More...
 
template<typename T , typename U >
bool rmm::mr::operator!= (polymorphic_allocator< T > const &lhs, polymorphic_allocator< U > const &rhs)
 Compare two polymorphic_allocators for inequality. More...
 
template<typename A , typename O >
bool rmm::mr::operator== (stream_allocator_adaptor< A > const &lhs, stream_allocator_adaptor< O > const &rhs)
 Compare two stream_allocator_adaptors for equality. More...
 
template<typename A , typename O >
bool rmm::mr::operator!= (stream_allocator_adaptor< A > const &lhs, stream_allocator_adaptor< O > const &rhs)
 Compare two stream_allocator_adaptors for inequality. More...
 
bool rmm::process_is_exiting () noexcept
 Returns true if the process has entered exit() / atexit handler execution. More...
 
template<class Resource >
device_async_resource_ref rmm::to_device_async_resource_ref_checked (Resource *res)
 Convert pointer to memory resource into device_async_resource_ref, checking for nullptr More...
 

Variables

template<class Resource , class = void>
constexpr bool rmm::mr::is_resource_adaptor = false
 Concept to check whether a resource is a resource adaptor by checking for get_upstream_resource.
 

Detailed Description

Typedef Documentation

◆ allocate_callback_t

using rmm::mr::allocate_callback_t = typedef std::function<void*(std::size_t, cuda_stream_view, void*)>

Callback function type used by callback_memory_resource for allocation.

The signature of the callback function is: void* allocate_callback_t(std::size_t bytes, cuda_stream_view stream, void* arg);

  • Returns a pointer to an allocation of at least bytes usable immediately on stream. The stream-ordered behavior requirements are identical to allocate.
  • The arg is provided to the constructor of the callback_memory_resource and will be forwarded along to every invocation of the callback function.

◆ deallocate_callback_t

using rmm::mr::deallocate_callback_t = typedef std::function<void(void*, std::size_t, cuda_stream_view, void*)>

Callback function type used by callback_memory_resource for deallocation.

The signature of the callback function is: void deallocate_callback_t(void* ptr, std::size_t bytes, cuda_stream_view stream, void* arg);

  • Deallocates memory pointed to by ptr. bytes specifies the size of the allocation in bytes, and must equal the value of bytes that was passed to the allocate callback function. The stream-ordered behavior requirements are identical to deallocate.
  • The arg is provided to the constructor of the callback_memory_resource and will be forwarded along to every invocation of the callback function.
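A sketch of a logging pass-through built on callback_memory_resource. The constructor argument order (allocate callback, deallocate callback, then their respective arg pointers) and the upstream allocate/deallocate signatures are assumed from the device_memory_resource interface; verify against your RMM version:

```cpp
#include <cstddef>
#include <iostream>

#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/callback_memory_resource.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>

int main()
{
  rmm::mr::cuda_memory_resource upstream;

  // Each callback receives the user `arg` given at construction; here it is
  // a pointer to the upstream resource that actually performs the work.
  auto allocate_cb = [](std::size_t bytes, rmm::cuda_stream_view stream, void* arg) -> void* {
    std::cout << "alloc " << bytes << " B\n";
    return static_cast<rmm::mr::cuda_memory_resource*>(arg)->allocate(bytes, stream);
  };
  auto deallocate_cb = [](void* ptr, std::size_t bytes, rmm::cuda_stream_view stream, void* arg) {
    std::cout << "free  " << bytes << " B\n";
    static_cast<rmm::mr::cuda_memory_resource*>(arg)->deallocate(ptr, bytes, stream);
  };

  rmm::mr::callback_memory_resource mr{allocate_cb, deallocate_cb, &upstream, &upstream};

  auto stream = rmm::cuda_stream_view{};
  void* ptr   = mr.allocate(1024, stream);  // logs, then forwards to upstream
  mr.deallocate(ptr, 1024, stream);
}
```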

Function Documentation

◆ get_current_device_resource_ref()

device_async_resource_ref rmm::mr::get_current_device_resource_ref ( )
inline

Get the device_async_resource_ref for the current device.

Returns the device_async_resource_ref set for the current device. The initial resource_ref references a cuda_memory_resource.

The "current device" is the device returned by cudaGetDevice.

This function is thread-safe with respect to concurrent calls to set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Note
The returned device_async_resource_ref should only be used with the current CUDA device. Changing the current device (e.g. using cudaSetDevice()) and then using the returned resource_ref can result in undefined behavior. The behavior of a device_async_resource_ref is undefined if used while the active CUDA device is a different device from the one that was active when the memory resource was created.
Returns
device_async_resource_ref active for the current device
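A minimal sketch of allocating through whatever resource is currently installed for the active device, without knowing its concrete type (the device_buffer constructor taking a resource ref is assumed):

```cpp
#include <rmm/cuda_stream_view.hpp>
#include <rmm/device_buffer.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

void fill_scratch(rmm::cuda_stream_view stream)
{
  // Whatever resource was last set for the current device (initially a
  // cuda_memory_resource) backs this allocation.
  auto mr = rmm::mr::get_current_device_resource_ref();
  rmm::device_buffer scratch{4u << 20, stream, mr};  // 4 MiB, stream-ordered
  // ... launch kernels on `stream` using scratch.data() ...
}
```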

◆ get_per_device_resource_ref()

device_async_resource_ref rmm::mr::get_per_device_resource_ref ( cuda_device_id  device_id)
inline

Get the device_async_resource_ref for the specified device.

Returns a device_async_resource_ref for the specified device. The initial resource_ref references a cuda_memory_resource.

device_id.value() must be in the range [0, cudaGetDeviceCount()), otherwise behavior is undefined.

This function is thread-safe with respect to concurrent calls to set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Note
The returned device_async_resource_ref should only be used when CUDA device device_id is the current device (e.g. set using cudaSetDevice()). The behavior of a device_async_resource_ref is undefined if used while the active CUDA device is a different device from the one that was active when the memory resource was created.
Parameters
device_id: The id of the target device
Returns
The current device_async_resource_ref for device device_id

◆ operator!=() [1/2]

template<typename T , typename U >
bool rmm::mr::operator!= ( polymorphic_allocator< T > const &  lhs,
polymorphic_allocator< U > const &  rhs 
)

Compare two polymorphic_allocators for inequality.

Two polymorphic_allocators are not equal if their underlying memory resources compare not equal.

Template Parameters
T: Type of the first allocator
U: Type of the second allocator
Parameters
lhs: The first allocator to compare
rhs: The second allocator to compare
Returns
true if the two allocators are not equal, false otherwise

◆ operator!=() [2/2]

template<typename A , typename O >
bool rmm::mr::operator!= ( stream_allocator_adaptor< A > const &  lhs,
stream_allocator_adaptor< O > const &  rhs 
)

Compare two stream_allocator_adaptors for inequality.

Two stream_allocator_adaptors are not equal if their underlying allocators compare not equal.

Template Parameters
A: Type of the first allocator
O: Type of the second allocator
Parameters
lhs: The first allocator to compare
rhs: The second allocator to compare
Returns
true if the two allocators are not equal, false otherwise

◆ operator==() [1/2]

template<typename T , typename U >
bool rmm::mr::operator== ( polymorphic_allocator< T > const &  lhs,
polymorphic_allocator< U > const &  rhs 
)

Compare two polymorphic_allocators for equality.

Two polymorphic_allocators are equal if their underlying memory resources compare equal.

Template Parameters
T: Type of the first allocator
U: Type of the second allocator
Parameters
lhs: The first allocator to compare
rhs: The second allocator to compare
Returns
true if the two allocators are equal, false otherwise

◆ operator==() [2/2]

template<typename A , typename O >
bool rmm::mr::operator== ( stream_allocator_adaptor< A > const &  lhs,
stream_allocator_adaptor< O > const &  rhs 
)

Compare two stream_allocator_adaptors for equality.

Two stream_allocator_adaptors are equal if their underlying allocators compare equal.

Template Parameters
A: Type of the first allocator
O: Type of the second allocator
Parameters
lhs: The first allocator to compare
rhs: The second allocator to compare
Returns
true if the two allocators are equal, false otherwise

◆ process_is_exiting()

bool rmm::process_is_exiting ( )
noexcept

Returns true if the process has entered exit() / atexit handler execution.

Destructors of static objects, as well as atexit handlers registered by other DSOs, run during process termination after main() has returned. At that point calling into the CUDA runtime or driver is undefined behavior: the primary context may already be destroyed, and CUDA API calls may dereference released state and crash inside libcuda rather than returning an error.

Use this function from a memory resource destructor (or a helper invoked by a destructor, such as a release() method) when the resource may be held in RMM's internal per-device resource map and therefore destroyed during process termination, after the CUDA primary context may already have been destroyed. Destructors can avoid undefined behavior by:

  1. Never calling CUDA APIs from the destructor at all, or
  2. Consulting rmm::process_is_exiting() in the destructor (and in any helper invoked by the destructor, such as a release() method) and skipping CUDA API calls when it returns true. In that case, resources that would have been explicitly released should be leaked; the OS reclaims them when the process exits.

Storing RMM objects with static or thread-local scope is unsupported. Users should not create their own static containers of RMM objects and rely on rmm::process_is_exiting() to make those destructors safe.

Calling rmm::process_is_exiting() from a resource destructor is always safe: it performs a single atomic load (acquire semantics) and never calls into CUDA.

Example:

class my_resource final : public ... {
  ~my_resource() override
  {
    if (rmm::process_is_exiting()) {
      return;  // Skip CUDA calls during teardown; the OS reclaims the memory.
    }
    RMM_ASSERT_CUDA_SUCCESS_SAFE_SHUTDOWN(cudaFree(ptr_));
  }
};
Returns
true if exit() has begun; false otherwise.

◆ reset_current_device_resource()

cuda::mr::any_resource<cuda::mr::device_accessible> rmm::mr::reset_current_device_resource ( )
inline

Reset the memory resource for the current device to the initial resource.

Resets to the initial cuda_memory_resource. The "current device" is the device returned by cudaGetDevice.

This function is thread-safe with respect to concurrent calls to set_per_device_resource, set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Returns
An owning any_resource holding the previous resource for the current device

◆ reset_current_device_resource_ref()

cuda::mr::any_resource<cuda::mr::device_accessible> rmm::mr::reset_current_device_resource_ref ( )
inline

Reset the device_async_resource_ref for the current device to the initial resource.

Deprecated:
Use reset_current_device_resource instead.

Resets to a reference to the initial cuda_memory_resource. The "current device" is the device returned by cudaGetDevice.

This function is thread-safe with respect to concurrent calls to set_per_device_resource, set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Returns
An owning any_resource holding the previous resource for the current device

◆ reset_per_device_resource()

cuda::mr::any_resource<cuda::mr::device_accessible> rmm::mr::reset_per_device_resource ( cuda_device_id  device_id)
inline

Reset the memory resource for the specified device to the initial resource.

Resets to the initial cuda_memory_resource.

device_id.value() must be in the range [0, cudaGetDeviceCount()), otherwise behavior is undefined.

This function is thread-safe with respect to concurrent calls to set_per_device_resource, set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Parameters
device_id: The id of the target device
Returns
An owning any_resource holding the previous resource for device_id

◆ reset_per_device_resource_ref()

cuda::mr::any_resource<cuda::mr::device_accessible> rmm::mr::reset_per_device_resource_ref ( cuda_device_id  device_id)
inline

Reset the device_async_resource_ref for the specified device to the initial resource.

Deprecated:
Use reset_per_device_resource instead.

Resets to a reference to the initial cuda_memory_resource.

device_id.value() must be in the range [0, cudaGetDeviceCount()), otherwise behavior is undefined.

This function is thread-safe with respect to concurrent calls to set_per_device_resource, set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Parameters
device_id: The id of the target device
Returns
An owning any_resource holding the previous resource for device_id

◆ set_current_device_resource()

cuda::mr::any_resource<cuda::mr::device_accessible> rmm::mr::set_current_device_resource ( cuda::mr::any_resource< cuda::mr::device_accessible >  new_resource)
inline

Set the memory resource for the current device.

Takes ownership of the provided resource by value. The "current device" is the device returned by cudaGetDevice.

This function is thread-safe with respect to concurrent calls to set_per_device_resource, set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Note
The resource passed in new_resource must have been created for the current CUDA device. The behavior of a memory resource is undefined if used while the active CUDA device is a different device from the one that was active when the memory resource was created.
The per-device resource map keeps the provided resource alive until process exit. Its destructor may therefore run during process termination. If the destructor may call CUDA APIs, it must consult rmm::process_is_exiting() and skip those calls when it returns true.
Parameters
new_resource: New resource to use for the current device
Returns
An owning any_resource holding the previous resource for the current device
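A hedged sketch of installing a pool suballocator as the default resource for the current device. The initial-size helper (rmm::percent_of_free_device_memory) and the pool's movability into the owning any_resource are assumptions; check them against your RMM version:

```cpp
#include <utility>

#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

int main()
{
  // The upstream resource must outlive the pool; static storage keeps it
  // alive until process exit.
  static rmm::mr::cuda_memory_resource upstream;
  rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool{
    &upstream, rmm::percent_of_free_device_memory(50)};

  // Moves the pool into RMM's per-device map; returns the previous resource.
  auto previous = rmm::mr::set_current_device_resource(std::move(pool));

  // All subsequent default-resource allocations on this device use the pool.
}
```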

◆ set_current_device_resource_ref()

cuda::mr::any_resource<cuda::mr::device_accessible> rmm::mr::set_current_device_resource_ref ( device_async_resource_ref  new_resource_ref)
inline

Set the device_async_resource_ref for the current device.

Deprecated:
Use set_current_device_resource instead.

The "current device" is the device returned by cudaGetDevice.

The referenced resource is copied into an owning any_resource and moved into the per-device resource map.

This function is thread-safe with respect to concurrent calls to set_per_device_resource, set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Note
The resource passed in new_resource_ref must have been created for the current CUDA device. The behavior of a device_async_resource_ref is undefined if used while the active CUDA device is a different device from the one that was active when the memory resource was created.
The per-device resource map keeps the underlying resource alive until process exit. Its destructor may therefore run during process termination. If it may call CUDA APIs, it must consult rmm::process_is_exiting() and skip those calls when it returns true.
Parameters
new_resource_ref: New device_async_resource_ref to use for the current device
Returns
An owning any_resource holding the previous resource for the current device

◆ set_per_device_resource()

cuda::mr::any_resource<cuda::mr::device_accessible> rmm::mr::set_per_device_resource ( cuda_device_id  device_id,
cuda::mr::any_resource< cuda::mr::device_accessible >  new_resource 
)
inline

Set the memory resource for the specified device.

Takes ownership of the provided resource by value. The resource is moved into the per-device resource map.

device_id.value() must be in the range [0, cudaGetDeviceCount()), otherwise behavior is undefined.

This function is thread-safe with respect to concurrent calls to set_per_device_resource, set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Note
The resource passed in new_resource must have been created when device device_id was the current CUDA device (e.g. set using cudaSetDevice()). The behavior of a memory resource is undefined if used while the active CUDA device is a different device from the one that was active when the memory resource was created.
The per-device resource map keeps the provided resource alive until process exit. Its destructor may therefore run during process termination. If the destructor may call CUDA APIs, it must consult rmm::process_is_exiting() and skip those calls when it returns true.
Parameters
device_id: The id of the target device
new_resource: New resource to use for device_id
Returns
An owning any_resource holding the previous resource for device_id
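A sketch of giving every visible device its own default resource, respecting the rule that a resource must be created while its device is current. rmm::cuda_set_device_raii is assumed to be available in rmm/cuda_device.hpp:

```cpp
#include <cuda_runtime_api.h>

#include <rmm/cuda_device.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

int main()
{
  int num_devices{};
  cudaGetDeviceCount(&num_devices);

  for (int i = 0; i < num_devices; ++i) {
    rmm::cuda_device_id const id{i};
    // Make device `id` current while constructing its resource;
    // the guard restores the previous device on scope exit.
    rmm::cuda_set_device_raii const guard{id};
    rmm::mr::set_per_device_resource(id, rmm::mr::cuda_memory_resource{});
  }
}
```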

◆ set_per_device_resource_ref()

cuda::mr::any_resource<cuda::mr::device_accessible> rmm::mr::set_per_device_resource_ref ( cuda_device_id  device_id,
device_async_resource_ref  new_resource_ref 
)
inline

Set the device_async_resource_ref for the specified device to new_resource_ref.

Deprecated:
Use set_per_device_resource instead.

device_id.value() must be in the range [0, cudaGetDeviceCount()), otherwise behavior is undefined.

The referenced resource is copied into an owning any_resource and moved into the per-device resource map.

This function is thread-safe with respect to concurrent calls to set_per_device_resource, set_per_device_resource_ref, get_per_device_resource_ref, get_current_device_resource_ref, set_current_device_resource, set_current_device_resource_ref and reset_current_device_resource_ref. Concurrent calls to any of these functions will result in a valid state, but the order of execution is undefined.

Note
The resource passed in new_resource_ref must have been created when device device_id was the current CUDA device (e.g. set using cudaSetDevice()). The behavior of a device_async_resource_ref is undefined if used while the active CUDA device is a different device from the one that was active when the memory resource was created.
The per-device resource map keeps the underlying resource alive until process exit. Its destructor may therefore run during process termination. If it may call CUDA APIs, it must consult rmm::process_is_exiting() and skip those calls when it returns true.
Parameters
device_id: The id of the target device
new_resource_ref: New device_async_resource_ref to use for device_id
Returns
An owning any_resource holding the previous resource for device_id

◆ to_device_async_resource_ref_checked()

template<class Resource >
device_async_resource_ref rmm::to_device_async_resource_ref_checked ( Resource *  res)

Convert pointer to memory resource into device_async_resource_ref, checking for nullptr

Template Parameters
Resource: The type of the memory resource.
Parameters
res: A pointer to the memory resource.
Returns
A device_async_resource_ref to the memory resource.
Exceptions
std::logic_error: If the memory resource pointer is null.
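A sketch of guarding a hypothetical entry point that still receives raw resource pointers, so a null pointer fails loudly instead of producing a dangling ref:

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/resource_ref.hpp>

// Hypothetical legacy entry point that still takes a raw pointer.
void legacy_entry_point(rmm::mr::cuda_memory_resource* raw)
{
  // Throws std::logic_error if `raw` is null.
  rmm::device_async_resource_ref mr = rmm::to_device_async_resource_ref_checked(raw);
  // ... pass `mr` on to APIs taking device_async_resource_ref ...
}
```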