pylibcugraphops.operators.agg_simple_n2n_e2n_bwd

pylibcugraphops.operators.agg_simple_n2n_e2n_bwd = <nanobind.nb_func object>

Computes the backward pass for both a simple aggregation (agg_simple) using node features in a node-to-node reduction (n2n) and a simple aggregation using edge features in an edge-to-node reduction (e2n) on a graph, with the two results concatenated.

agg_simple_n2n_e2n_bwd(
    grad_input_node: device array, grad_input_edge: device array,
    grad_output: device array,
    graph: Union[pylibcugraphops.csc_int[32|64], pylibcugraphops.bipartite_csc_int[32|64]],
    aggregation_operation: pylibcugraphops.operators.AggOp = pylibcugraphops.operators.AggOp.Sum,
    output_extrema_location: Optional[device array] = None, stream_id: int = 0
) -> None
Parameters:
grad_input_node : device array type

Device array containing the computed gradient with respect to the input node embeddings of the forward pass. Shape: (graph.n_src_nodes, dim_in_node).

grad_input_edge : device array type

Device array containing the computed gradient with respect to the input edge embeddings of the forward pass. Shape: (n_edges, dim_in_edge).

grad_output : device array type

Device array containing the incoming gradient with respect to the output embeddings of the forward pass. Shape: (graph.n_dst_nodes, dim_in_node + dim_in_edge).

graph : opaque graph type

The graph used for the operation.

aggregation_operation : AggOp, default=AggOp.Sum

The kind of aggregation operation.

output_extrema_location : device array type | None

Device array containing the locations of the min/max embeddings recorded in the forward pass. This is required for min/max aggregation only and may be None otherwise. Shape: (graph.n_dst_nodes, dim_in_node + dim_in_edge) if set.

stream_id : int, default=0

CUDA stream pointer as a Python int.
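Since the operator itself requires a GPU, the NumPy sketch below only illustrates the gradient routing that a sum-reduction backward pass of this kind performs on the host: for each edge (src → dst), the n2n part scatters the first dim_in_node columns of grad_output back to the source node, and the e2n part copies the remaining dim_in_edge columns to the edge. The CSC variable names (offsets, indices) and the toy graph are illustrative assumptions, not the pylibcugraphops graph API.

```python
import numpy as np

# Toy CSC graph (assumption, not the library's graph object):
# offsets[d]..offsets[d+1] index the edges incoming to destination d;
# indices[e] is the source node of edge e.
offsets = np.array([0, 2, 3])   # dst 0 receives edges 0-1, dst 1 receives edge 2
indices = np.array([0, 1, 2])   # source node of each edge
n_src, n_edges = 3, 3
dim_in_node, dim_in_edge = 2, 1

# Incoming gradient on the concatenated output: (n_dst, dim_in_node + dim_in_edge)
grad_output = np.array([[1.0, 2.0, 10.0],
                        [3.0, 4.0, 20.0]])

grad_input_node = np.zeros((n_src, dim_in_node))
grad_input_edge = np.zeros((n_edges, dim_in_edge))

for dst in range(len(offsets) - 1):
    for e in range(offsets[dst], offsets[dst + 1]):
        src = indices[e]
        # n2n part: the forward sum over neighbors routes the node-feature
        # columns of grad_output back to every contributing source node.
        grad_input_node[src] += grad_output[dst, :dim_in_node]
        # e2n part: each edge receives the edge-feature columns of the
        # gradient of its destination node.
        grad_input_edge[e] = grad_output[dst, dim_in_node:]
```

With min/max aggregation the routing differs: the gradient flows only to the single location recorded in output_extrema_location, which is why that array is mandatory in those cases.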
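For min/max aggregation, the backward pass routes each output gradient entry only to the argmin/argmax recorded during the forward pass. The host-side sketch below assumes output_extrema_location stores, per (destination, feature) entry, the index of the winning edge; that layout is an assumption for illustration, not a statement of the library's actual encoding.

```python
import numpy as np

# Same toy graph as above (illustrative, not the library's API).
indices = np.array([0, 1, 2])   # source node of each edge
n_src, n_edges = 3, 3
dim_in_node, dim_in_edge = 2, 1

grad_output = np.array([[1.0, 2.0, 10.0],
                        [3.0, 4.0, 20.0]])

# Hypothetical extrema array: for each (dst, feature) pair, the edge whose
# input attained the max in the forward pass.
extrema = np.array([[1, 0, 0],
                    [2, 2, 2]])

grad_input_node = np.zeros((n_src, dim_in_node))
grad_input_edge = np.zeros((n_edges, dim_in_edge))

for dst in range(extrema.shape[0]):
    for j in range(dim_in_node):
        # n2n part: only the source node of the winning edge gets gradient.
        e = extrema[dst, j]
        grad_input_node[indices[e], j] += grad_output[dst, j]
    for j in range(dim_in_edge):
        # e2n part: only the winning edge itself gets gradient.
        e = extrema[dst, dim_in_node + j]
        grad_input_edge[e, j] += grad_output[dst, dim_in_node + j]
```

This is why non-extremal inputs receive a zero gradient under min/max aggregation, in contrast to the sum case where every incident edge contributes.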