pylibcugraphops.operators.agg_concat_e2n_bwd

pylibcugraphops.operators.agg_concat_e2n_bwd = <nanobind.nb_func object>
Computes the backward pass for a simple aggregation using edge features in an edge-to-node reduction (e2n) on a graph, while concatenating the original features of the output nodes at the end (agg_concat).

agg_concat_e2n_bwd(
    grad_input_edge: device array, grad_input_node: device array,
    grad_output: device array, graph: pylibcugraphops.csc_int[32|64],
    dim_edge: int, dim_node: int,
    aggregation_operation: pylibcugraphops.operators.AggOp = pylibcugraphops.operators.AggOp.Sum,
    output_extrema_location: Optional[device array] = None, stream_id: int = 0
) -> None
Parameters:
grad_input_edge : device array type

Device array containing the output gradient on input edge embeddings of forward. Shape: (n_edges, dim_edge).

grad_input_node : device array type

Device array containing the output gradient on input node embeddings of forward. Shape: (graph.n_dst_nodes, dim_node).

grad_output : device array type

Device array containing the input gradient on output embeddings of forward. Shape: (graph.n_dst_nodes, dim_node + dim_edge).

graph : opaque graph type

The graph used for the operation.

dim_edge : int

Edge feature dimensionality.

dim_node : int

Node feature dimensionality.

aggregation_operation : AggOp, default=AggOp.Sum

The kind of aggregation operation.

output_extrema_location : device array type | None

Device array containing the location of the min/max embeddings. This is required for min/max aggregation only, and can be None otherwise. Shape: (graph.n_dst_nodes, dim_edge) if set.

stream_id : int, default=0

CUDA stream pointer as a Python int.
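
Example:

A minimal sketch, not part of the original documentation, showing how the gradient buffers might be allocated and the backward kernel invoked. It assumes CuPy arrays are used as the device arrays and that `graph` is an already-constructed pylibcugraphops.csc_int[32|64] object whose edge and destination-node counts match the illustrative sizes below.

import cupy as cp
from pylibcugraphops.operators import agg_concat_e2n_bwd, AggOp

# Illustrative sizes; in practice these come from the graph and the model.
n_edges, n_dst_nodes = 1024, 256
dim_edge, dim_node = 32, 64

# Incoming gradient w.r.t. the concatenated forward output.
grad_output = cp.random.rand(n_dst_nodes, dim_node + dim_edge, dtype=cp.float32)

# Output buffers for the gradients w.r.t. the forward inputs.
grad_input_edge = cp.empty((n_edges, dim_edge), dtype=cp.float32)
grad_input_node = cp.empty((n_dst_nodes, dim_node), dtype=cp.float32)

# `graph` is assumed to be a pre-built pylibcugraphops.csc_int[32|64] object.
agg_concat_e2n_bwd(
    grad_input_edge, grad_input_node, grad_output, graph,
    dim_edge, dim_node,
    aggregation_operation=AggOp.Sum,
    output_extrema_location=None,  # only required for min/max aggregation
    stream_id=0,
)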