pylibwholegraph.torch.tensor.create_wholememory_tensor#
- pylibwholegraph.torch.tensor.create_wholememory_tensor(comm: WholeMemoryCommunicator, memory_type: str, memory_location: str, sizes: List[int], dtype: dtype, strides: List[int], tensor_entry_partition: Optional[List[int]] = None)#
Create an empty WholeMemory Tensor. Currently only dim = 1 or 2 is supported.
- Parameters:
  - comm – WholeMemoryCommunicator
  - memory_type – WholeMemory type, should be continuous, chunked or distributed
  - memory_location – WholeMemory location, should be cpu or cuda
  - sizes – size of the tensor
  - dtype – data type of the tensor
  - strides – strides of the tensor
  - tensor_entry_partition – rank partition based on entry; tensor_entry_partition[i] determines the entry count of rank i and should be a positive integer; the sum of tensor_entry_partition should equal the total entry count; entries are partitioned equally if None
- Returns:
Allocated WholeMemoryTensor
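The rules for tensor_entry_partition can be illustrated with a small, plain-Python sketch of the documented contract (this helper is not part of pylibwholegraph; the tie-breaking detail of giving earlier ranks one extra entry in the default equal split is an assumption, as the documentation only says entries are partitioned equally when None):

```python
from typing import List, Optional

def validate_entry_partition(total_entries: int, world_size: int,
                             partition: Optional[List[int]]) -> List[int]:
    """Return an explicit per-rank entry partition following the
    documented rules: positive counts summing to total_entries, or an
    (approximately) equal split when partition is None."""
    if partition is None:
        # Assumed tie-breaking: earlier ranks take one extra entry
        # when total_entries is not divisible by world_size.
        base, rem = divmod(total_entries, world_size)
        return [base + (1 if r < rem else 0) for r in range(world_size)]
    if len(partition) != world_size:
        raise ValueError("partition must have one entry per rank")
    if any(count <= 0 for count in partition):
        raise ValueError("each rank's entry count must be positive")
    if sum(partition) != total_entries:
        raise ValueError("partition must sum to the total entry count")
    return partition

# Default equal split: 10 entries across 4 ranks.
print(validate_entry_partition(10, 4, None))          # [3, 3, 2, 2]
# Explicit partition: must sum to sizes[0] (here 10).
print(validate_entry_partition(10, 4, [4, 3, 2, 1]))  # [4, 3, 2, 1]
```

For a 2-dimensional tensor, the partition applies along the first dimension, so total_entries corresponds to sizes[0].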