API Reference

High-level API

exception rmm.rmm.RMMError(errcode, msg)

Bases: Exception

class rmm.rmm.RMMNumbaManager(*args, **kwargs)

Bases: numba.cuda.cudadrv.driver.HostOnlyCUDAMemoryManager

External Memory Management Plugin implementation for Numba. Provides on-device allocation only.

See http://numba.pydata.org/numba-doc/latest/cuda/external-memory.html for details of the interface being implemented here.

Attributes
interface_version

Returns an integer specifying the version of the EMM Plugin interface supported by the plugin implementation.

Methods

defer_cleanup()

Returns a context manager that disables cleanup of mapped or pinned host memory in the current context whilst it is active.

get_ipc_handle(memory)

Get an IPC handle for the MemoryPointer memory with offset modified by the RMM memory pool.

get_memory_info()

Returns (free, total) memory in bytes in the context.

initialize()

Perform any initialization required for the EMM plugin instance to be ready to use.

memalloc(size)

Allocate an on-device array from the RMM pool.

memhostalloc(size[, mapped, portable, wc])

Implements the allocation of pinned host memory.

mempin(owner, pointer, size[, mapped])

Implements the pinning of host memory.

reset()

Clears up all host memory (mapped and/or pinned) in the current context.

memallocmanaged

get_ipc_handle(memory)

Get an IPC handle for the MemoryPointer memory with offset modified by the RMM memory pool.

get_memory_info()

Returns (free, total) memory in bytes in the context. May raise NotImplementedError, if returning such information is not practical (e.g. for a pool allocator).

Returns

Memory info

Return type

MemoryInfo

initialize()

Perform any initialization required for the EMM plugin instance to be ready to use.

Returns

None

property interface_version

Returns an integer specifying the version of the EMM Plugin interface supported by the plugin implementation. Should always return 1 for implementations of this version of the specification.

memalloc(size)

Allocate an on-device array from the RMM pool.

rmm.rmm.is_initialized()

Returns True if RMM has been initialized, False otherwise.

rmm.rmm.reinitialize(pool_allocator=False, managed_memory=False, initial_pool_size=None, maximum_pool_size=None, devices=0, logging=False, log_file_name=None)

Finalizes and then initializes RMM using the options passed. Using memory from a previous initialization of RMM is undefined behavior and should be avoided.

Parameters
pool_allocator : bool, default False

If True, use a pool allocation strategy, which can greatly improve performance.

managed_memory : bool, default False

If True, use managed memory for device memory allocation.

initial_pool_size : int, default None

When pool_allocator is True, this indicates the initial pool size in bytes. By default, 1/2 of the total GPU memory is used. When pool_allocator is False, this argument is ignored if provided.

maximum_pool_size : int, default None

When pool_allocator is True, this indicates the maximum pool size in bytes. By default, the total available memory on the GPU is used. When pool_allocator is False, this argument is ignored if provided.

devices : int or List[int], default 0

GPU device IDs to register. By default registers only GPU 0.

logging : bool, default False

If True, enable run-time logging of all memory events (alloc, free, realloc). This has a significant performance impact.

log_file_name : str

Name of the log file. If not specified, the environment variable RMM_LOG_FILE is used. A TypeError is raised if neither is available.

rmm.rmm.rmm_cupy_allocator(nbytes)

A CuPy allocator that makes use of RMM.

Examples

>>> import rmm
>>> import cupy
>>> cupy.cuda.set_allocator(rmm.rmm_cupy_allocator)

Memory Resources

class rmm.mr.BinningMemoryResource(MemoryResource upstream_mr, int8_t min_size_exponent=-1, int8_t max_size_exponent=-1)

Bases: rmm._lib.memory_resource.MemoryResource

Allocates memory from a set of specified “bin” sizes based on a specified allocation size.

If min_size_exponent and max_size_exponent are specified, initializes with one or more FixedSizeMemoryResource bins in the range [2^min_size_exponent, 2^max_size_exponent].

Call add_bin to add additional bin allocators.

Parameters
upstream_mr : MemoryResource

The memory resource to use for allocations larger than any of the bins.

min_size_exponent : size_t

The base-2 exponent of the minimum size FixedSizeMemoryResource bin to create.

max_size_exponent : size_t

The base-2 exponent of the maximum size FixedSizeMemoryResource bin to create.

Methods

add_bin(self, size_t allocation_size[, …])

Adds a bin of the specified maximum allocation size to this memory resource.

add_bin(self, size_t allocation_size, bin_resource=None)

Adds a bin of the specified maximum allocation size to this memory resource. If specified, uses bin_resource for allocation for this bin. If not specified, creates and uses a FixedSizeMemoryResource for allocation for this bin.

Allocations smaller than allocation_size and larger than the next smaller bin size will use this fixed-size memory resource.

Parameters
allocation_size : size_t

The maximum allocation size in bytes for the created bin.

bin_resource : MemoryResource

The resource to use for this bin (optional).

class rmm.mr.CudaMemoryResource(device=None)

Bases: rmm._lib.memory_resource.MemoryResource

Memory resource that uses cudaMalloc/Free for allocation/deallocation

class rmm.mr.FixedSizeMemoryResource(MemoryResource upstream, size_t block_size=1048576, size_t blocks_to_preallocate=128)

Bases: rmm._lib.memory_resource.MemoryResource

Memory resource which allocates memory blocks of a single fixed size.

Parameters
upstream : MemoryResource

The MemoryResource from which to allocate blocks for the pool.

block_size : int, optional

The size of blocks to allocate (default is 1 MiB).

blocks_to_preallocate : int, optional

The number of blocks to allocate to initialize the pool.

Notes

Supports only allocations of size smaller than the configured block_size.

class rmm.mr.LoggingResourceAdaptor(MemoryResource upstream, log_file_name=None)

Bases: rmm._lib.memory_resource.MemoryResource

Memory resource that logs information about allocations/deallocations performed by an upstream memory resource.

Parameters
upstream : MemoryResource

The upstream memory resource.

log_file_name : str

Path to the file to which logs are written.

Methods

flush(self)

get_file_name(self)

get_upstream(self)

flush(self)

get_file_name(self)

get_upstream(self) → MemoryResource

class rmm.mr.ManagedMemoryResource

Bases: rmm._lib.memory_resource.MemoryResource

Memory resource that uses cudaMallocManaged/Free for allocation/deallocation.

class rmm.mr.MemoryResource

Bases: object

class rmm.mr.PoolMemoryResource(MemoryResource upstream, initial_pool_size=None, maximum_pool_size=None)

Bases: rmm._lib.memory_resource.MemoryResource

Coalescing best-fit suballocator which uses a pool of memory allocated from an upstream memory resource.

Parameters
upstream : MemoryResource

The MemoryResource from which to allocate blocks for the pool.

initial_pool_size : int, optional

Initial pool size in bytes. By default, an implementation-defined pool size is used.

maximum_pool_size : int, optional

Maximum size in bytes to which the pool can grow.

rmm.mr.disable_logging()

Disable logging if it was enabled previously using rmm.initialize() or rmm.enable_logging().

rmm.mr.enable_logging(log_file_name=None)

Enable logging of run-time events.

rmm.mr.get_current_device_resource() → MemoryResource

Get the memory resource used for RMM device allocations on the current device.

If the returned memory resource is used when a different device is the active CUDA device, behavior is undefined.

rmm.mr.get_current_device_resource_type()

Get the memory resource type used for RMM device allocations on the current device.

rmm.mr.get_per_device_resource(int device)

Get the default memory resource for the specified device.

If the returned memory resource is used when a different device is the active CUDA device, behavior is undefined.

Parameters
device : int

The ID of the device for which to get the memory resource.

rmm.mr.get_per_device_resource_type(int device)

Get the memory resource type used for RMM device allocations on the specified device.

Parameters
device : int

The device ID

rmm.mr.is_initialized()

Check whether RMM is initialized

rmm.mr.set_current_device_resource(MemoryResource mr)

Set the default memory resource for the current device.

Parameters
mr : MemoryResource

The memory resource to set. Must have been created while the current device is the active CUDA device.

rmm.mr.set_per_device_resource(int device, MemoryResource mr)

Set the default memory resource for the specified device.

Parameters
device : int

The ID of the device for which to set the memory resource.

mr : MemoryResource

The memory resource to set. Must have been created while device was the active CUDA device.

Module contents