pylibwholegraph API doc

APIs

torch.initialize.init_torch_env(world_rank, ...)

Initialize the WholeGraph environment for PyTorch.

torch.initialize.init_torch_env_and_create_wm_comm(...)

Initialize the WholeGraph environment for PyTorch and create a single communicator for all ranks.

torch.initialize.finalize()

Finalize WholeGraph.
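
For example, a typical single-process-per-GPU setup initializes the environment once at startup and finalizes it before exit. A minimal sketch follows; the environment variable names and the argument order after world_rank are assumptions, since the full signatures are abbreviated above:

    import os
    from pylibwholegraph.torch.initialize import init_torch_env, finalize

    # Rank/size values typically come from the launcher (e.g. torchrun);
    # the variable names used here are assumptions.
    world_rank = int(os.environ.get("RANK", "0"))
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    local_size = int(os.environ.get("LOCAL_WORLD_SIZE", "1"))

    # Assumed argument order: (world_rank, world_size, local_rank, local_size);
    # only world_rank is shown explicitly in the abbreviated signature above.
    init_torch_env(world_rank, world_size, local_rank, local_size)

    # ... create communicators, tensors and embeddings here ...

    finalize()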

torch.comm.WholeMemoryCommunicator(wmb_comm)

WholeMemory Communicator.

torch.comm.set_world_info(world_rank, ...)

Set the global world information (world rank, world size, etc.).

torch.comm.create_group_communicator([...])

Create a WholeMemory Communicator.

torch.comm.destroy_communicator(wm_comm)

Destroy a WholeMemoryCommunicator. Parameters: wm_comm (the WholeMemoryCommunicator to destroy). Returns: None.

torch.comm.get_global_communicator([...])

Get the global communicator of this job. Returns: WholeMemoryCommunicator that has all GPUs in it.

torch.comm.get_local_node_communicator()

Get the local node communicator of this job. Returns: WholeMemoryCommunicator that has the GPUs in the same node.

torch.comm.get_local_device_communicator()

Get the local device communicator of this job. Returns: WholeMemoryCommunicator that has only the GPU belonging to the current process.
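
The three getters above cover the common topologies. A minimal sketch, assuming the environment has already been initialized as shown earlier:

    from pylibwholegraph.torch.comm import (
        get_global_communicator,
        get_local_node_communicator,
        get_local_device_communicator,
    )

    global_comm = get_global_communicator()        # all GPUs in the job
    node_comm = get_local_node_communicator()      # GPUs on the same node
    device_comm = get_local_device_communicator()  # only this process's GPU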

torch.tensor.WholeMemoryTensor(wmb_tensor)

WholeMemory Tensor

torch.tensor.create_wholememory_tensor(comm, ...)

Create an empty WholeMemory Tensor.

torch.tensor.create_wholememory_tensor_from_filelist(...)

Create a WholeMemory Tensor from a list of binary files.

torch.tensor.destroy_wholememory_tensor(...)

Destroy an allocated WholeMemory Tensor. Parameters: wm_tensor (the WholeMemory Tensor to destroy). Returns: None.
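
A minimal sketch of the tensor lifecycle. The argument names and order after comm are assumptions, modeled on the parameters documented for the embedding functions below:

    import torch
    from pylibwholegraph.torch.comm import get_global_communicator
    from pylibwholegraph.torch.tensor import (
        create_wholememory_tensor,
        destroy_wholememory_tensor,
    )

    comm = get_global_communicator()
    # Allocate an empty 2D tensor shared by all ranks in the communicator.
    wm_tensor = create_wholememory_tensor(
        comm,               # WholeMemoryCommunicator
        "distributed",      # memory type (assumed: continuous/chunked/distributed)
        "cuda",             # memory location (assumed: cpu or cuda)
        [1_000_000, 128],   # sizes
        torch.float32,      # dtype
    )
    # ... read and write through the tensor's APIs ...
    destroy_wholememory_tensor(wm_tensor)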

torch.embedding.WholeMemoryOptimizer(global_comm)

Sparse Optimizer for WholeMemoryEmbedding.

torch.embedding.create_wholememory_optimizer(...)

Create a WholeMemoryOptimizer.

torch.embedding.destroy_wholememory_optimizer(...)

Destroy a WholeMemoryOptimizer. Parameters: optimizer (the WholeMemoryOptimizer to destroy). Returns: None.

torch.embedding.WholeMemoryCachePolicy(...)

Cache policy used when creating a WholeMemoryEmbedding.

torch.embedding.create_wholememory_cache_policy(...)

Create a WholeMemoryCachePolicy. NOTE: in most cases, create_builtin_cache_policy() is sufficient.

torch.embedding.create_builtin_cache_policy(...)

Create a builtin cache policy.

torch.embedding.destroy_wholememory_cache_policy(...)

Destroy a WholeMemoryCachePolicy. Parameters: cache_policy (the WholeMemoryCachePolicy to destroy). Returns: None.

torch.embedding.WholeMemoryEmbedding(...)

WholeMemory Embedding

torch.embedding.create_embedding(comm, ...)

Create an embedding.
Parameters: comm (WholeMemoryCommunicator), memory_type (WholeMemory type, should be continuous, chunked or distributed), memory_location (WholeMemory location, should be cpu or cuda), dtype (data type), sizes (size of the embedding, must be 2D), optimizer (optimizer), cache_policy (cache policy), gather_sms (number of SMs used in the gather process).
Returns: WholeMemoryEmbedding.

torch.embedding.create_embedding_from_filelist(...)

Create an embedding from a file list.
Parameters: comm (WholeMemoryCommunicator), memory_type (WholeMemory type, should be continuous, chunked or distributed), memory_location (WholeMemory location, should be cpu or cuda), filelist (list of files), dtype (data type), last_dim_size (size of the last dimension), optimizer (optimizer), cache_policy (cache policy), gather_sms (number of SMs used in the gather process).

torch.embedding.destroy_embedding(wm_embedding)

Destroy a WholeMemoryEmbedding. Parameters: wm_embedding (the WholeMemoryEmbedding to destroy). Returns: None.
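
Putting the embedding pieces together, creating an embedding with an attached sparse optimizer might look like the sketch below. Whether the documented parameters are positional or keyword, and the arguments accepted by create_wholememory_optimizer (optimizer name and hyper-parameter dict), are assumptions:

    import torch
    from pylibwholegraph.torch.comm import get_global_communicator
    from pylibwholegraph.torch.embedding import (
        create_embedding,
        create_wholememory_optimizer,
        destroy_embedding,
    )

    comm = get_global_communicator()
    num_nodes, feature_dim = 1_000_000, 128

    # Sparse optimizer to attach to the embedding (the "adam" name and the
    # empty hyper-parameter dict are assumptions).
    optimizer = create_wholememory_optimizer("adam", {})

    wm_embedding = create_embedding(
        comm,                      # WholeMemoryCommunicator
        "distributed",             # memory_type: continuous, chunked or distributed
        "cuda",                    # memory_location: cpu or cuda
        torch.float32,             # dtype
        [num_nodes, feature_dim],  # sizes: must be 2D
        optimizer=optimizer,
        cache_policy=None,         # or a policy from create_builtin_cache_policy(...)
    )
    # ... gather / train ...
    destroy_embedding(wm_embedding)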

torch.embedding.WholeMemoryEmbeddingModule(...)

torch.nn.Module wrapper for WholeMemoryEmbedding.
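
A minimal usage sketch, assuming wm_embedding, num_nodes and feature_dim were defined as in the previous example, and assuming the module's forward pass gathers the rows selected by an index tensor:

    import torch
    from pylibwholegraph.torch.embedding import WholeMemoryEmbeddingModule

    # Wrap the WholeMemoryEmbedding so it can be used like a regular module.
    embedding_module = WholeMemoryEmbeddingModule(wm_embedding)

    # Gather a batch of embedding rows by index.
    indices = torch.randint(0, num_nodes, (1024,), device="cuda")
    features = embedding_module(indices)  # expected shape: (1024, feature_dim)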

torch.graph_structure.GraphStructure()

Graph structure storage. It stores the graph structure of one relation, represented in CSR format.