pylibwholegraph.torch.comm.get_global_communicator(distributed_backend='nccl')

Get the global communicator of this job.

Returns: a WholeMemoryCommunicator that contains all GPUs in the job.
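A minimal usage sketch. It assumes `pylibwholegraph` is installed and the job runs on multiple GPUs with WholeGraph's torch environment already initialized (e.g. under `torchrun`); the import guard is only there so the snippet degrades gracefully on machines without the library.

```python
# Hedged sketch: the call itself requires an initialized multi-GPU
# WholeGraph job; on machines without pylibwholegraph, comm stays None.
comm = None
try:
    from pylibwholegraph.torch.comm import get_global_communicator

    # Fetch the job-wide communicator spanning all GPUs (NCCL backend).
    comm = get_global_communicator(distributed_backend="nccl")
except ImportError:
    pass  # pylibwholegraph not installed in this environment

if comm is not None:
    # comm is a WholeMemoryCommunicator covering every GPU in the job
    print("global communicator ready:", type(comm).__name__)
```

The returned communicator can then be passed to WholeMemory creation routines that need a group of GPUs to distribute memory across.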