Package ai.rapids.cudf
Class CudaMemoryBuffer
java.lang.Object
ai.rapids.cudf.MemoryBuffer
ai.rapids.cudf.BaseDeviceMemoryBuffer
ai.rapids.cudf.CudaMemoryBuffer
- All Implemented Interfaces:
AutoCloseable
This class represents data allocated using `cudaMalloc` directly instead of the default RMM
memory resource. Closing this object will effectively release the memory held by the buffer.
Note that, because of reference counting, closing a buffer that has been sliced may not actually
release the memory until all of its slices have also been closed.
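A minimal usage sketch, assuming the cuDF Java native libraries are loaded and a CUDA-capable device is present. Because the buffer is AutoCloseable, try-with-resources releases it deterministically:
```java
import ai.rapids.cudf.CudaMemoryBuffer;

public class AllocateExample {
  public static void main(String[] args) {
    // Allocate 1 MiB with cudaMalloc (not the default RMM resource).
    // try-with-resources calls close(), which releases the memory.
    try (CudaMemoryBuffer buffer = CudaMemoryBuffer.allocate(1024 * 1024)) {
      System.out.println("Allocated " + buffer.getLength() + " bytes at 0x"
          + Long.toHexString(buffer.getAddress()));
    }
  }
}
```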
-
Nested Class Summary
Nested classes/interfaces inherited from class ai.rapids.cudf.MemoryBuffer
MemoryBuffer.EventHandler, MemoryBuffer.MemoryBufferCleaner
-
Field Summary
-
Constructor Summary
Constructors
CudaMemoryBuffer(long address, long lengthInBytes, Cuda.Stream stream)
    Wrap an existing CUDA allocation in a device memory buffer.
-
Method Summary
static CudaMemoryBuffer allocate(long bytes)
    Allocate memory for use on the GPU.
static CudaMemoryBuffer allocate(long bytes, Cuda.Stream stream)
    Allocate memory for use on the GPU.
final CudaMemoryBuffer slice(long offset, long len)
    Slice off a part of the device buffer.
Methods inherited from class ai.rapids.cudf.BaseDeviceMemoryBuffer
copyFromDeviceBufferAsync, copyFromHostBuffer, copyFromHostBuffer, copyFromHostBuffer, copyFromHostBuffer, copyFromHostBuffer, copyFromHostBufferAsync, copyFromHostBufferAsync, sliceWithCopy
Methods inherited from class ai.rapids.cudf.MemoryBuffer
addressOutOfBoundsCheck, close, copyFromMemoryBuffer, copyFromMemoryBufferAsync, getAddress, getEventHandler, getLength, getRefCount, incRefCount, noWarnLeakExpected, setEventHandler, toString
-
Constructor Details
-
CudaMemoryBuffer
CudaMemoryBuffer(long address, long lengthInBytes, Cuda.Stream stream)
Wrap an existing CUDA allocation in a device memory buffer. The CUDA allocation will be freed when the resulting device memory buffer instance frees its memory resource (i.e., when its reference count goes to zero).
Parameters:
    address - device address of the CUDA memory allocation
    lengthInBytes - length of the CUDA allocation in bytes
    stream - CUDA stream to use for synchronization when freeing the allocation
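A sketch of wrapping an allocation produced elsewhere. `nativeCudaMalloc` is a hypothetical JNI helper standing in for whatever code called `cudaMalloc`; `Cuda.DEFAULT_STREAM` is assumed to be the default stream exposed by the cuDF Java API:
```java
import ai.rapids.cudf.Cuda;
import ai.rapids.cudf.CudaMemoryBuffer;

class WrapExistingAllocation {
  // Hypothetical JNI helper that calls cudaMalloc and returns the device pointer.
  static native long nativeCudaMalloc(long bytes);

  static void wrap() {
    long lengthInBytes = 4096;
    long deviceAddress = nativeCudaMalloc(lengthInBytes);
    // Ownership transfers to the buffer: once its reference count reaches zero,
    // the underlying cudaMalloc allocation is freed using the given stream.
    try (CudaMemoryBuffer wrapped =
             new CudaMemoryBuffer(deviceAddress, lengthInBytes, Cuda.DEFAULT_STREAM)) {
      // ... use wrapped on the device ...
    }
  }
}
```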
-
-
Method Details
-
allocate
static CudaMemoryBuffer allocate(long bytes)
Allocate memory for use on the GPU. You must close it when done.
Parameters:
    bytes - size in bytes to allocate
Returns:
    the buffer
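A short sketch of filling the allocation from host memory, assuming `HostMemoryBuffer.allocate` and `setByte` behave as in the rest of the cuDF Java API; `copyFromHostBuffer` is inherited from BaseDeviceMemoryBuffer:
```java
import ai.rapids.cudf.CudaMemoryBuffer;
import ai.rapids.cudf.HostMemoryBuffer;

class HostToDeviceCopy {
  static void copy() {
    // Fill a host buffer and copy it into a cudaMalloc-backed device buffer.
    try (HostMemoryBuffer host = HostMemoryBuffer.allocate(256);
         CudaMemoryBuffer device = CudaMemoryBuffer.allocate(256)) {
      for (int i = 0; i < 256; i++) {
        host.setByte(i, (byte) i);
      }
      device.copyFromHostBuffer(host); // synchronous copy from host to device
    }
  }
}
```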
-
allocate
static CudaMemoryBuffer allocate(long bytes, Cuda.Stream stream)
Allocate memory for use on the GPU. You must close it when done.
Parameters:
    bytes - size in bytes to allocate
    stream - the stream on which to synchronize this command
Returns:
    the buffer
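A sketch of the stream-aware overload. `Cuda.DEFAULT_STREAM` is assumed here as the stream handle exposed by the cuDF Java API; any other Cuda.Stream could be passed instead:
```java
import ai.rapids.cudf.Cuda;
import ai.rapids.cudf.CudaMemoryBuffer;

class StreamAllocate {
  static void allocateOnStream() {
    // Allocate 1 KiB, associating the allocation (and its eventual free)
    // with an explicit stream.
    try (CudaMemoryBuffer buffer = CudaMemoryBuffer.allocate(1024, Cuda.DEFAULT_STREAM)) {
      // ... enqueue work that uses the buffer on the same stream ...
    }
  }
}
```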
-
slice
final CudaMemoryBuffer slice(long offset, long len)
Slice off a part of the device buffer. Note that this is a zero-copy operation and all slices must be closed along with the original buffer before the memory is released to RMM, so use this with some caution.
Specified by:
    slice in class MemoryBuffer
Parameters:
    offset - where to start the slice
    len - how many bytes to slice
Returns:
    a device buffer that will need to be closed independently from this buffer
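A sketch of slicing, using only methods documented on this page. The slice is a zero-copy view, so the underlying allocation lives until the parent and every slice have been closed:
```java
import ai.rapids.cudf.CudaMemoryBuffer;

class SliceExample {
  static void sliceBuffer() {
    try (CudaMemoryBuffer parent = CudaMemoryBuffer.allocate(1024)) {
      // Zero-copy view of bytes [256, 768) of the parent allocation.
      try (CudaMemoryBuffer view = parent.slice(256, 512)) {
        // The slice bumps the parent's reference count, so the underlying
        // allocation stays alive until both parent and view are closed.
      }
    } // memory is actually released once the parent and all slices are closed
  }
}
```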
-