pylibwholegraph.torch.embedding.create_wholememory_cache_policy

pylibwholegraph.torch.embedding.create_wholememory_cache_policy(cache_comm: WholeMemoryCommunicator, *, memory_type: str = 'chunked', memory_location: str = 'cuda', access_type: str = 'readonly', ratio: float = 0.5)
Create a WholeMemoryCachePolicy.

NOTE: in most cases create_builtin_cache_policy() is sufficient; this function is a more flexible interface.

:param cache_comm: WholeMemory communicator of the cache
:param memory_type: WholeMemory type of the cache
:param memory_location: WholeMemory location of the cache
:param access_type: access type needed for the cache
:param ratio: ratio of the cache size to the full embedding
:return: WholeMemoryCachePolicy
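Example (a minimal usage sketch; it assumes the WholeGraph environment has already been initialized elsewhere, e.g. via pylibwholegraph.torch.init_torch_env_and_create_wm_comm, so that get_global_communicator() can return a communicator)::

    from pylibwholegraph.torch.comm import get_global_communicator
    from pylibwholegraph.torch.embedding import create_wholememory_cache_policy

    # Communicator spanning the processes that will share the cache
    # (assumes prior environment initialization, see note above).
    cache_comm = get_global_communicator()

    # Cache half of the embedding entries in chunked device memory, read-only.
    cache_policy = create_wholememory_cache_policy(
        cache_comm,
        memory_type="chunked",
        memory_location="cuda",
        access_type="readonly",
        ratio=0.5,
    )

The resulting policy can then be passed to the embedding-creation APIs that accept a cache policy.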