pylibwholegraph.torch.embedding.WholeMemoryOptimizer#

class pylibwholegraph.torch.embedding.WholeMemoryOptimizer(global_comm: WholeMemoryCommunicator)#

Sparse optimizer for WholeMemoryEmbedding. Multiple WholeMemoryEmbedding objects can share the same WholeMemoryOptimizer. You should not create a WholeMemoryOptimizer object directly; use create_wholememory_optimizer() instead.

Methods

add_embedding(wm_embedding)

Add a WholeMemory Embedding to this optimizer. NOTE: you don't need to call this method; it is called automatically when the WholeMemory Embedding is created.

step(lr)

Apply gradients to all WholeMemory Embeddings that use this optimizer.

__init__(global_comm: WholeMemoryCommunicator)#

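The typical flow can be sketched as follows. This is a minimal, hedged example: it assumes a multi-GPU environment where torch.distributed and WholeGraph are already initialized, and `comm`, `num_nodes`, and the embedding dimension `128` are placeholders, not values from this page. Argument names and defaults may differ between pylibwholegraph versions; consult the create_wholememory_optimizer() and create_embedding() references for your release.

```python
import torch
import pylibwholegraph.torch as wgth

# Assumption: `comm` is a WholeMemoryCommunicator obtained during
# WholeGraph initialization for the participating GPUs.

# Create the shared sparse optimizer (never construct WholeMemoryOptimizer
# directly). "adam" and the empty parameter dict are illustrative.
optimizer = wgth.create_wholememory_optimizer("adam", {})

# Embeddings created with `optimizer=` register themselves with the
# optimizer automatically via add_embedding(); no manual call is needed.
embedding = wgth.create_embedding(
    comm,
    "distributed",          # memory type (placeholder choice)
    "cuda",                 # memory location (placeholder choice)
    torch.float32,
    [num_nodes, 128],       # placeholder shape
    optimizer=optimizer,
)

# ... forward pass gathers rows from `embedding`, backward pass
# accumulates sparse gradients on the gathered rows ...

# Apply the accumulated gradients to every embedding sharing this optimizer.
optimizer.step(1e-3)
```

Because step() updates all registered embeddings at once, several embedding tables trained together can share a single optimizer instance and a single per-iteration step(lr) call.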