pylibwholegraph.torch.tensor.WholeMemoryTensor#

class pylibwholegraph.torch.tensor.WholeMemoryTensor(wmb_tensor: PyWholeMemoryTensor)#

WholeMemory Tensor: a tensor stored in WholeMemory and shared by all ranks of a WholeMemoryCommunicator.
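A WholeMemoryTensor is normally obtained from the module-level factory functions rather than by calling the constructor directly. Below is a minimal sketch, assuming the pylibwholegraph.torch helpers init_torch_env_and_create_wm_comm and create_wholememory_tensor (check the names and argument order against your installed version) and one process per GPU launched with torchrun:

    import os
    import torch
    import pylibwholegraph.torch as wgth

    # Set up the distributed environment and WholeMemory communicators.
    # Assumed to return (global_comm, local_comm).
    global_comm, local_comm = wgth.init_torch_env_and_create_wm_comm(
        int(os.environ["RANK"]),
        int(os.environ["WORLD_SIZE"]),
        int(os.environ["LOCAL_RANK"]),
        int(os.environ["LOCAL_WORLD_SIZE"]),
    )

    # A 2-D tensor shared by all ranks of the communicator.
    wm_tensor = wgth.create_wholememory_tensor(
        global_comm,
        "chunked",        # memory type: "continuous", "chunked" or "distributed"
        "cuda",           # memory location: "cpu" or "cuda"
        [1000000, 128],   # shape
        torch.float32,    # dtype
        None,             # strides; None means dense row-major
    )
    print(wm_tensor.shape, wm_tensor.dtype)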

Attributes:
dtype
shape

Methods

from_file_prefix(file_prefix[, part_count])

Load the WholeMemory Tensor from part files that share the same prefix.

from_filelist(filelist[, round_robin_size])

Load the WholeMemory Tensor from a list of files (see the sketch below).
:param filelist: list of files to load from
:return: None
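For example, a pre-sharded binary table can be loaded into the tensor created in the sketch above. The shard names here are hypothetical placeholders, and every rank is assumed to make the call with the same list:

    # Hypothetical shard names; every rank passes the same list.
    filelist = [f"embedding_part_{i}.bin" for i in range(8)]
    wm_tensor.from_filelist(filelist)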

get_all_chunked_tensor([host_view])

Get all chunked tensors of the WholeMemory Tensor.
:param host_view: if True, return host tensors; otherwise return device tensors
:return: tuple of DLPack tensors and element offsets

get_global_tensor([host_view])

Get the global tensor of the WholeMemory Tensor (see the sketch after get_local_tensor).
:param host_view: if True, return a host tensor; otherwise return a device tensor
:return: tuple of DLPack tensor and element offset (0 for the global tensor)

get_local_tensor([host_view])

Get the local tensor of the WholeMemory Tensor, i.e. the part stored on the current rank.
:param host_view: if True, return a host tensor; otherwise return a device tensor
:return: tuple of DLPack tensor and element offset
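The view accessors differ only in scope. A sketch of typical use, assuming the returned DLPack tensors convert to torch tensors in the usual way, and assuming get_global_tensor is only available for memory layouts that can be mapped as a single view (e.g. "continuous"):

    # This rank's slice of the rows, plus the element offset of that slice
    # within the global tensor.
    local_view, local_offset = wm_tensor.get_local_tensor(host_view=False)
    print("local rows start at element offset", local_offset)

    # One view over the entire tensor; the offset is always 0 here.
    global_view, zero_offset = wm_tensor.get_global_tensor(host_view=False)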

get_sub_tensor(starts, ends)

Get a sub-tensor of the WholeMemory Tensor.
:param starts: array of start indices, one per dimension
:param ends: array of end indices, one per dimension; -1 means to the last element
:return: WholeMemory Tensor
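For instance, to take rows 100 through the end and the first 64 columns of the 2-D tensor from the sketches above (based only on the signature documented here):

    # starts/ends have one entry per dimension; -1 in ends means
    # "to the last element" of that dimension.
    sub = wm_tensor.get_sub_tensor([100, 0], [-1, 64])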

local_to_file(filename)

Store local tensor of WholeMemory Tensor to file, all ranks should call this together with different filename :param filename: file name of local tensor file.

to_file_prefix(file_prefix)

Store the WholeMemory Tensor to files sharing the same prefix (see the round-trip sketch below).
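A save/load round trip might look like the following sketch, continuing the creation example above. The paths are placeholders, the on-disk part-file naming is defined by the library, and global_comm.get_rank() is assumed to return this rank's index:

    # Every rank participates; one part file per rank is written.
    wm_tensor.to_file_prefix("/tmp/wm_embedding")

    # Each rank can also dump just its own slice to a distinct file.
    wm_tensor.local_to_file(f"/tmp/wm_embedding_local_{global_comm.get_rank()}.bin")

    # Later, restore into a tensor of the same shape and dtype.
    restored = wgth.create_wholememory_tensor(
        global_comm, "chunked", "cuda", [1000000, 128], torch.float32, None
    )
    restored.from_file_prefix("/tmp/wm_embedding")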

dim()

Return the number of dimensions of the WholeMemory Tensor.

gather(indice, *[, force_dtype])

Gather the rows at the given indices into an ordinary tensor (see the sketch after this list).

get_comm()

Return the WholeMemoryCommunicator this tensor was created with.

scatter(input_tensor, indice)

Scatter the rows of input_tensor into the WholeMemory Tensor at the given indices (see the sketch after this list).

storage_offset()

Return the tensor's offset, in elements, within its underlying storage.

stride()

Return the stride of each dimension.
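gather and scatter give random row access without materializing a view. A sketch, assuming a 2-D tensor (as created above) and int64 indices on the current CUDA device:

    # Pull 1024 rows into an ordinary torch tensor; force_dtype optionally
    # converts the result (keyword-only, per the signature above).
    idx = torch.randint(0, wm_tensor.shape[0], (1024,),
                        dtype=torch.int64, device="cuda")
    rows = wm_tensor.gather(idx, force_dtype=torch.float32)

    # Write rows back at the same indices.
    wm_tensor.scatter(torch.zeros_like(rows), idx)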

__init__(wmb_tensor: PyWholeMemoryTensor)#
