rmm::mr::arena_memory_resource< Upstream > Class Template Reference (final)

A suballocator that emphasizes fragmentation avoidance and scalable concurrency support. More...

#include <arena_memory_resource.hpp>

Inheritance diagram for rmm::mr::arena_memory_resource< Upstream >: inherits rmm::mr::device_memory_resource.

Public Member Functions

 arena_memory_resource (Upstream *upstream_mr, std::size_t initial_size=global_arena::default_initial_size, std::size_t maximum_size=global_arena::default_maximum_size)
 Construct an arena_memory_resource. More...
 
 arena_memory_resource (arena_memory_resource const &)=delete
 
arena_memory_resource & operator= (arena_memory_resource const &)=delete
 
bool supports_streams () const noexcept override
 Queries whether the resource supports use of non-null CUDA streams for allocation/deallocation. More...
 
bool supports_get_mem_info () const noexcept override
 Query whether the resource supports the get_mem_info API. More...
 
- Public Member Functions inherited from rmm::mr::device_memory_resource
void * allocate (std::size_t bytes, cuda_stream_view stream=cuda_stream_view{})
 Allocates memory of size at least bytes. More...
 
void deallocate (void *p, std::size_t bytes, cuda_stream_view stream=cuda_stream_view{})
 Deallocate memory pointed to by p. More...
 
bool is_equal (device_memory_resource const &other) const noexcept
 Compare this resource to another. More...
 
std::pair< std::size_t, std::size_t > get_mem_info (cuda_stream_view stream) const
 Queries the amount of free and total memory for the resource. More...
 

Detailed Description

template<typename Upstream>
class rmm::mr::arena_memory_resource< Upstream >

A suballocator that emphasizes fragmentation avoidance and scalable concurrency support.

Allocation (do_allocate()) and deallocation (do_deallocate()) are thread-safe. This class is also compatible with CUDA's per-thread default stream.

GPU memory is divided into a global arena, per-thread arenas for default streams, and per-stream arenas for non-default streams. Each arena allocates memory from the global arena in chunks called superblocks.

Blocks in each arena are allocated using address-ordered first fit. When a block is freed, it is coalesced with neighbouring free blocks if the addresses are contiguous. Free superblocks are returned to the global arena.

In real-world applications, allocation sizes tend to follow a power-law distribution in which large allocations are rare but small ones are quite common. By handling small allocations in per-thread arenas, adequate performance can be achieved without introducing excessive memory fragmentation under high concurrency.

This design is inspired by several existing CPU memory allocators targeting multi-threaded applications (glibc malloc, Hoard, jemalloc, TCMalloc), albeit in a simpler form. Possible future improvements include using size classes, allocation caches, and more fine-grained locking or lock-free approaches.

See also
Wilson, P. R., Johnstone, M. S., Neely, M., & Boles, D. (1995, September). Dynamic storage allocation: A survey and critical review. In International Workshop on Memory Management (pp. 1-116). Springer, Berlin, Heidelberg.
Berger, E. D., McKinley, K. S., Blumofe, R. D., & Wilson, P. R. (2000). Hoard: A scalable memory allocator for multithreaded applications. ACM Sigplan Notices, 35(11), 117-128.
Evans, J. (2006, April). A scalable concurrent malloc(3) implementation for FreeBSD. In Proc. of the BSDCan Conference, Ottawa, Canada.
https://sourceware.org/glibc/wiki/MallocInternals
http://hoard.org/
http://jemalloc.net/
https://github.com/google/tcmalloc
Template Parameters
Upstream	Memory resource to use for allocating memory for the global arena. Implements the rmm::mr::device_memory_resource interface.

Constructor & Destructor Documentation

◆ arena_memory_resource()

template<typename Upstream >
rmm::mr::arena_memory_resource< Upstream >::arena_memory_resource ( Upstream *   upstream_mr,
                                                                    std::size_t  initial_size = global_arena::default_initial_size,
                                                                    std::size_t  maximum_size = global_arena::default_maximum_size )
inline, explicit

Construct an arena_memory_resource.

Exceptions
rmm::logic_error	if upstream_mr == nullptr.
rmm::logic_error	if initial_size is neither the default nor aligned to a multiple of 256 bytes.
rmm::logic_error	if maximum_size is neither the default nor aligned to a multiple of 256 bytes.
Parameters
upstream_mr	The memory resource from which to allocate blocks for the pool.
initial_size	Minimum size, in bytes, of the initial global arena. Defaults to half of the available memory on the current device.
maximum_size	Maximum size, in bytes, that the global arena can grow to. Defaults to all of the available memory on the current device.
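A minimal usage sketch of the constructor, assuming RMM's standard header layout (rmm/mr/device/...) and a CUDA-capable device; the 1 GiB and 4 GiB sizes are illustrative choices (both are multiples of 256 bytes, as required):

```cpp
#include <rmm/mr/device/arena_memory_resource.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

int main()
{
  rmm::mr::cuda_memory_resource upstream;  // upstream for the global arena

  // Arena starting at 1 GiB, allowed to grow to 4 GiB.
  rmm::mr::arena_memory_resource<rmm::mr::cuda_memory_resource> arena{
    &upstream, 1u << 30, 4ull << 30};

  // Optionally route all RMM allocations on this device through the arena.
  rmm::mr::set_current_device_resource(&arena);

  void* p = arena.allocate(256);  // thread-safe; PTDS-compatible
  arena.deallocate(p, 256);
}
```

Omitting both size arguments uses the defaults described above (half, and all, of the device's available memory).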

Member Function Documentation

◆ supports_get_mem_info()

template<typename Upstream >
bool rmm::mr::arena_memory_resource< Upstream >::supports_get_mem_info ( ) const
inline, override, virtual, noexcept

Query whether the resource supports the get_mem_info API.

Returns
false.

Implements rmm::mr::device_memory_resource.

◆ supports_streams()

template<typename Upstream >
bool rmm::mr::arena_memory_resource< Upstream >::supports_streams ( ) const
inline, override, virtual, noexcept

Queries whether the resource supports use of non-null CUDA streams for allocation/deallocation.

Returns
true.

Implements rmm::mr::device_memory_resource.


The documentation for this class was generated from the following file:
arena_memory_resource.hpp