pylibwholegraph.torch.embedding.create_embedding_from_filelist#
- pylibwholegraph.torch.embedding.create_embedding_from_filelist(comm: WholeMemoryCommunicator, memory_type: str, memory_location: str, filelist: Union[List[str], str], dtype: dtype, last_dim_size: int, *, cache_policy: Optional[WholeMemoryCachePolicy] = None, embedding_entry_partition: Optional[List[int]] = None, gather_sms: int = -1, round_robin_size: int = 0)#
Create an embedding from a list of files.

Parameters:
- comm – WholeMemoryCommunicator
- memory_type – WholeMemory type, should be continuous, chunked or distributed
- memory_location – WholeMemory location, should be cpu or cuda
- filelist – list of files, or a single file path
- dtype – data type of the embedding
- last_dim_size – size of the last dimension (the embedding dimension)
- cache_policy – cache policy
- embedding_entry_partition – rank partition based on entries; embedding_entry_partition[i] determines the entry count of rank i and should be a positive integer; the sum of embedding_entry_partition should equal the total entry count; entries are partitioned equally if None
- gather_sms – number of SMs used in the gather process
- round_robin_size – continuous embedding size per rank when using the round-robin shard strategy

Returns:
- the created WholeMemoryEmbedding
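A minimal usage sketch follows. It assumes a WholeMemoryCommunicator `comm` has already been obtained through the library's initialization helpers, and the file names and embedding dimension are hypothetical placeholders for illustration only.

```python
import torch
from pylibwholegraph.torch.embedding import create_embedding_from_filelist

# Hypothetical binary files, each holding rows of 128 float32 values.
# `comm` is assumed to be a WholeMemoryCommunicator created during
# WholeGraph/torch initialization (not shown here).
filelist = ["emb_part_0.bin", "emb_part_1.bin"]

embedding = create_embedding_from_filelist(
    comm,            # WholeMemoryCommunicator
    "distributed",   # memory_type: continuous, chunked or distributed
    "cuda",          # memory_location: cpu or cuda
    filelist,        # list of files (or a single file path)
    torch.float32,   # dtype of the stored embedding values
    128,             # last_dim_size: embedding dimension per entry
)
```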