pylibwholegraph.torch.embedding.create_wholememory_cache_policy#

pylibwholegraph.torch.embedding.create_wholememory_cache_policy(cache_comm: WholeMemoryCommunicator, *, memory_type: str = 'chunked', memory_location: str = 'cuda', access_type: str = 'readonly', ratio: float = 0.5)#

Create a WholeMemoryCachePolicy.

NOTE: in most cases create_builtin_cache_policy() is sufficient; this function is a more flexible, lower-level interface.

Parameters:
  • cache_comm – WholeMemory communicator used to allocate the cache

  • memory_type – WholeMemory type of the cache, e.g. 'continuous', 'chunked' or 'distributed'

  • memory_location – WholeMemory location of the cache, 'cpu' or 'cuda'

  • access_type – access type needed, 'readonly' or 'readwrite'

  • ratio – ratio of the cache size to the size of the cached embedding table

Returns:

The created WholeMemoryCachePolicy.
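As a sketch, a policy for a read-only, GPU-resident cache might be created as below. This assumes a WholeGraph multi-process environment has already been initialized (for example via pylibwholegraph.torch.initialize), and that the global communicator is obtained with get_global_communicator(); the embedding shown at the end is illustrative, not part of this function.

```python
# Sketch only: requires an initialized multi-process WholeGraph environment
# and CUDA devices; it is not runnable standalone.
import pylibwholegraph.torch as wgth

# Assumption: the global communicator has already been created during
# environment initialization and is retrieved here.
global_comm = wgth.comm.get_global_communicator()

cache_policy = wgth.embedding.create_wholememory_cache_policy(
    global_comm,
    memory_type="chunked",    # storage layout of the cache
    memory_location="cuda",   # keep cached entries in GPU memory
    access_type="readonly",   # cache entries are never written back
    ratio=0.2,                # cache roughly 20% of the embedding entries
)

# The policy is then typically passed when creating an embedding, e.g.:
# embedding = wgth.create_embedding(..., cache_policy=cache_policy)
```

For the common configurations (local device cache, local node cache, and so on), create_builtin_cache_policy() builds an equivalent policy from a builtin cache name; this function is only needed when those presets do not fit.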