rmm (top-level)#
- class rmm.DeviceBuffer#
Bases: object
- Attributes:
nbytes: Gets the size of the buffer in bytes.
ptr: Gets a pointer to the underlying data.
size: Gets the size of the buffer in bytes.
Methods
capacity(self)
copy(self): Returns a copy of this DeviceBuffer.
copy_from_device(self, cuda_ary, ...): Copy from a buffer on device to self.
copy_from_host(self, ary, ...): Copy from a buffer on host to self.
copy_to_host(self[, ary]): Copy from a DeviceBuffer to a buffer on host.
prefetch(self[, device, stream]): Prefetch buffer data to the specified device on the specified stream.
reserve(self, size_t new_capacity, ...)
resize(self, size_t new_size, ...)
to_device(const unsigned char[, ...): Calls the to_device function on the arguments provided.
tobytes(self, Stream stream=DEFAULT_STREAM)
- capacity(self) -> size_t#
- copy(self)#
Returns a copy of this DeviceBuffer.
- Returns:
- DeviceBuffer
A deep copy of this DeviceBuffer.
Examples
>>> import rmm
>>> db = rmm.DeviceBuffer.to_device(b"abc")
>>> db_copy = db.copy()
>>> db.copy_to_host()
array([97, 98, 99], dtype=uint8)
>>> db_copy.copy_to_host()
array([97, 98, 99], dtype=uint8)
>>> assert db is not db_copy
>>> assert db.ptr != db_copy.ptr
- copy_from_device(self, cuda_ary, Stream stream=DEFAULT_STREAM)#
Copy from a buffer on device to self.
- Parameters:
- cuda_ary
Object to copy from that exposes __cuda_array_interface__
- stream, optional
CUDA stream to use for copying, defaults to the default stream
Examples
>>> import rmm
>>> db = rmm.DeviceBuffer(size=5)
>>> db2 = rmm.DeviceBuffer.to_device(b"abc")
>>> db.copy_from_device(db2)
>>> hb = db.copy_to_host()
>>> hb
array([97, 98, 99,  0,  0], dtype=uint8)
- copy_from_host(self, ary, Stream stream=DEFAULT_STREAM)#
Copy from a buffer on host to self.
- Parameters:
- ary
bytes-like buffer to copy from
- stream, optional
CUDA stream to use for copying, defaults to the default stream
Examples
>>> import rmm
>>> db = rmm.DeviceBuffer(size=10)
>>> hb = b"abcdef"
>>> db.copy_from_host(hb)
>>> hb = db.copy_to_host()
>>> hb
array([ 97,  98,  99, 100, 101, 102,   0,   0,   0,   0], dtype=uint8)
- copy_to_host(self, ary=None, Stream stream=DEFAULT_STREAM)#
Copy from a DeviceBuffer to a buffer on host.
- Parameters:
- ary, optional
bytes-like buffer to write into; if not provided, a new host buffer is allocated and returned
- stream, optional
CUDA stream to use for copying, defaults to the default stream
Examples
>>> import rmm
>>> db = rmm.DeviceBuffer.to_device(b"abc")
>>> hb = bytearray(db.nbytes)
>>> db.copy_to_host(hb)
>>> print(hb)
bytearray(b'abc')
>>> hb = db.copy_to_host()
>>> hb
array([97, 98, 99], dtype=uint8)
- nbytes#
Gets the size of the buffer in bytes.
- prefetch(self, device=None, stream=None)#
Prefetch buffer data to the specified device on the specified stream.
Assumes the storage for this DeviceBuffer is CUDA managed memory (unified memory). If it is not, this function is a no-op.
- Parameters:
- device, optional
The CUDA device to which to prefetch the memory for this buffer. Defaults to the current CUDA device. To prefetch to the CPU, pass cudaCpuDeviceId as the device.
- stream, optional
CUDA stream to use for prefetching. Defaults to self.stream.
- ptr#
Gets a pointer to the underlying data.
- reserve(self, size_t new_capacity, Stream stream=DEFAULT_STREAM) -> void#
- resize(self, size_t new_size, Stream stream=DEFAULT_STREAM) -> void#
- size#
Gets the size of the buffer in bytes.
- static to_device(const unsigned char[::1] b, Stream stream=DEFAULT_STREAM)#
Calls the to_device function on the arguments provided.
- rmm.disable_logging()#
Disable logging if it was enabled previously using rmm.reinitialize() or rmm.enable_logging().
- rmm.enable_logging(log_file_name=None)#
Enable logging of runtime events for all devices.
- Parameters:
- log_file_name str, optional
Name of the log file. If not specified, the environment variable RMM_LOG_FILE is used. A ValueError is thrown if neither is available. A separate log file is produced for each device, and the suffix ".dev{id}" is automatically added to the log file name.
Notes
Note that if you use the environment variable CUDA_VISIBLE_DEVICES with logging enabled, the suffix may not be what you expect. For example, if you set CUDA_VISIBLE_DEVICES=1, the log file produced will still have suffix 0. Similarly, if you set CUDA_VISIBLE_DEVICES=1,0 and use devices 0 and 1, the log file with suffix 0 will correspond to the GPU with device ID 1. Use rmm.get_log_filenames() to get the log file names corresponding to each device.
- rmm.flush_logger()#
Flush the debug logger. This will cause any buffered log messages to be written to the log file.
Debug logging prints messages to a log file. See Debug Logging for more information.
See also
- set_flush_level: Set the flush level for the debug logger.
- get_flush_level: Get the current debug logging flush level.
Examples
>>> import rmm
>>> rmm.flush_logger()  # flush the logger
- rmm.get_flush_level()#
Get the current debug logging flush level for the RMM logger. Messages of this level or higher will automatically flush to the file.
Debug logging prints messages to a log file. See Debug Logging for more information.
- Returns:
- logging_level
The current flush level, an instance of the logging_level enum.
See also
- set_flush_level: Set the flush level for the logger.
- flush_logger: Flush the logger.
Examples
>>> import rmm
>>> rmm.get_flush_level()  # get current flush level
<logging_level.INFO: 2>
- rmm.get_log_filenames()#
Returns the log filename (or None if not writing logs) for each device in use.
Examples
>>> import rmm
>>> rmm.reinitialize(devices=[0, 1], logging=True, log_file_name="rmm.log")
>>> rmm.get_log_filenames()
{0: '/home/user/workspace/rapids/rmm/python/rmm.dev0.log',
 1: '/home/user/workspace/rapids/rmm/python/rmm.dev1.log'}
- rmm.get_logging_level()#
Get the current debug logging level.
Debug logging prints messages to a log file. See Debug Logging for more information.
- Returns:
- level logging_level
The current debug logging level, an instance of the logging_level enum.
See also
- set_logging_level: Set the debug logging level.
Examples
>>> import rmm
>>> rmm.get_logging_level()  # get current logging level
<logging_level.INFO: 2>
- rmm.is_initialized()#
Returns True if RMM has been initialized, False otherwise.
- class rmm.level_enum(*values)#
Bases: IntEnum
- Attributes:
denominator: the denominator of a rational number in lowest terms
imag: the imaginary part of a complex number
numerator: the numerator of a rational number in lowest terms
real: the real part of a complex number
Methods
as_integer_ratio(/): Return a pair of integers, whose ratio is equal to the original int.
bit_count(/): Number of ones in the binary representation of the absolute value of self.
bit_length(/): Number of bits necessary to represent self in binary.
conjugate(/): Returns self, the complex conjugate of any int.
from_bytes(/, bytes[, byteorder, signed]): Return the integer represented by the given array of bytes.
is_integer(/): Returns True.
to_bytes(/[, length, byteorder, signed]): Return an array of bytes representing an integer.
- trace = 0#
- debug = 1#
- info = 2#
- warn = 3#
- error = 4#
- critical = 5#
- off = 6#
- n_levels = 7#
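Because level_enum derives from IntEnum, its members behave as plain integers, so threshold comparisons between levels work directly. A minimal stand-alone sketch, re-declaring the documented member values rather than importing rmm:

```python
from enum import IntEnum

class level_enum(IntEnum):
    # Mirrors the documented rmm.level_enum values; lower is more verbose.
    trace = 0
    debug = 1
    info = 2
    warn = 3
    error = 4
    critical = 5
    off = 6
    n_levels = 7

# IntEnum members compare and convert like ordinary integers.
assert level_enum.debug < level_enum.warn
assert int(level_enum.critical) == 5
assert level_enum.info + 1 == level_enum.warn
```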
- rmm.register_reinitialize_hook(func, *args, **kwargs)#
Add a function to the list of functions ("hooks") that will be called before reinitialize().
A user or library may register hooks to perform any necessary cleanup before RMM is reinitialized. For example, a library with an internal cache of objects that use device memory allocated by RMM can register a hook to release those references before RMM is reinitialized, thus ensuring that the relevant device memory resource can be deallocated.
Hooks are called in the reverse order they are registered. This is useful, for example, when a library registers multiple hooks and needs them to run in a specific order for cleanup to be safe. Hooks cannot rely on being registered in a particular order relative to hooks registered by other packages, since that is determined by package import ordering.
- Parameters:
- func callable
Function to be called before reinitialize()
- args, kwargs
Positional and keyword arguments to be passed to func
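The reverse (LIFO) ordering can be illustrated with a small stand-alone registry that mimics the documented behavior; the functions below are a hypothetical re-implementation for illustration, not RMM's internal code:

```python
# Toy hook registry mimicking the documented semantics: hooks are
# stored at registration time and invoked in reverse (LIFO) order
# when reinitialization begins.
_hooks = []

def register_reinitialize_hook(func, *args, **kwargs):
    _hooks.append((func, args, kwargs))

def _call_reinitialize_hooks():
    # Conceptually what happens at the start of reinitialize().
    for func, args, kwargs in reversed(_hooks):
        func(*args, **kwargs)

calls = []
register_reinitialize_hook(calls.append, "registered first")
register_reinitialize_hook(calls.append, "registered second")
_call_reinitialize_hooks()
print(calls)  # ['registered second', 'registered first']
```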
- rmm.reinitialize(pool_allocator=False, managed_memory=False, initial_pool_size=None, maximum_pool_size=None, devices=0, logging=False, log_file_name=None)#
Finalizes and then initializes RMM using the options passed. Using memory from a previous initialization of RMM is undefined behavior and should be avoided.
- Parameters:
- pool_allocatorbool, default False
If True, use a pool allocation strategy which can greatly improve performance.
- managed_memorybool, default False
If True, use managed memory for device memory allocation.
- initial_pool_sizeint | str, default None
When pool_allocator is True, this indicates the initial pool size in bytes. By default, 1/2 of the total GPU memory is used. When pool_allocator is False, this argument is ignored if provided. A string argument is parsed using parse_bytes.
- maximum_pool_sizeint | str, default None
When pool_allocator is True, this indicates the maximum pool size in bytes. By default, the total available memory on the GPU is used. When pool_allocator is False, this argument is ignored if provided. A string argument is parsed using parse_bytes.
- devicesint or List[int], default 0
GPU device IDs to register. By default registers only GPU 0.
- loggingbool, default False
If True, enable run-time logging of all memory events (alloc, free, realloc). This has a significant performance impact.
- log_file_name str
Name of the log file. If not specified, the environment variable RMM_LOG_FILE is used. A ValueError is thrown if neither is available. A separate log file is produced for each device, and the suffix ".dev{id}" is automatically added to the log file name.
Notes
Note that if you use the environment variable CUDA_VISIBLE_DEVICES with logging enabled, the suffix may not be what you expect. For example, if you set CUDA_VISIBLE_DEVICES=1, the log file produced will still have suffix 0. Similarly, if you set CUDA_VISIBLE_DEVICES=1,0 and use devices 0 and 1, the log file with suffix 0 will correspond to the GPU with device ID 1. Use rmm.get_log_filenames() to get the log file names corresponding to each device.
- rmm.set_flush_level(level)#
Set the flush level for the debug logger. Messages of this level or higher will automatically flush to the file.
Debug logging prints messages to a log file. See Debug Logging for more information.
- Parameters:
- level logging_level
The debug logging level. Valid values are instances of the logging_level enum.
- Raises:
- TypeError
If the logging level is not an instance of the logging_level enum.
See also
- get_flush_level: Get the current debug logging flush level.
- flush_logger: Flush the logger.
Examples
>>> import rmm
>>> rmm.set_flush_level(rmm.logging_level.WARN)  # set flush level to warn
- rmm.set_logging_level(level)#
Set the debug logging level.
Debug logging prints messages to a log file. See Debug Logging for more information.
- Parameters:
- level logging_level
The debug logging level. Valid values are instances of the logging_level enum.
- Raises:
- TypeError
If the logging level is not an instance of the logging_level enum.
See also
- get_logging_level: Get the current debug logging level.
Examples
>>> import rmm
>>> rmm.set_logging_level(rmm.logging_level.WARN)  # set logging level to warn
- rmm.should_log(level)#
Check if a message at the given level would be logged.
A message at the given level would be logged if the current debug logging level is set to a level that is at least as verbose as the given level, and the RMM module is compiled for a logging level at least as verbose. If these conditions are not both met, this function will return False.
Debug logging prints messages to a log file. See Debug Logging for more information.
- Parameters:
- levellogging_level
The debug logging level. Valid values are instances of the
logging_levelenum.
- Returns:
- should_logbool
True if a message at the given level would be logged, False otherwise.
- Raises:
- TypeError
If the logging level is not an instance of the logging_level enum.
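Ignoring the compile-time check, the runtime condition above reduces to an integer comparison on the enum values, since lower values are more verbose. A stand-alone sketch, with the current level passed explicitly rather than read from RMM's state:

```python
from enum import IntEnum

class logging_level(IntEnum):
    # Mirrors the documented level values; lower is more verbose.
    trace = 0
    debug = 1
    info = 2
    warn = 3
    error = 4
    critical = 5
    off = 6

def should_log(level, current_level):
    # A message is logged when the current level is at least as verbose
    # as (numerically no greater than) the message's level. This models
    # only the runtime check, not the compile-time logging level.
    return current_level <= level

assert should_log(logging_level.error, current_level=logging_level.info)
assert not should_log(logging_level.debug, current_level=logging_level.warn)
```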
- rmm.unregister_reinitialize_hook(func)#
Remove func from the list of hooks that will be called before reinitialize().
If func was registered more than once, every instance of it will be removed from the list of hooks.