pylibcugraphops.operators.agg_concat_n2n_e2n_bwd = <nanobind.nb_func object>
Computes the backward pass for both a simple aggregation (agg_simple) using node features in a node-to-node reduction (n2n) and a simple aggregation using edge features in an edge-to-node reduction (e2n) on a full graph, where the results are concatenated. Moreover, the original features of the output nodes are concatenated at the end (agg_concat).

agg_concat_n2n_e2n_bwd(
    grad_input_node: device array, grad_input_edge: device array,
    grad_output: device array, graph: pylibcugraphops.csc_int[32|64],
    dim_node: int, dim_edge: int,
    aggregation_operation: pylibcugraphops.operators.AggOp = pylibcugraphops.operators.AggOp.Sum,
    output_extrema_location: Optional[device array] = None, stream_id: int = 0
) -> None
grad_input_node : device array type

Device array containing the output gradient with respect to the input node embeddings of the forward pass. Shape: (graph.n_dst_nodes, dim_node).

grad_input_edge : device array type

Device array containing the output gradient with respect to the input edge embeddings of the forward pass. Shape: (n_edges, dim_edge).

grad_output : device array type

Device array containing the incoming gradient with respect to the output embeddings of the forward pass. Shape: (graph.n_dst_nodes, dim_node + dim_edge + dim_node).

graph : opaque graph type

The graph used for the operation.


Node feature dimensionality.


Edge Feature dimensionality.

aggregation_operation : AggOp, default=AggOp.Sum

The kind of aggregation operation.

output_extrema_location : device array type | None, default=None

Device array containing the location of the min/max embeddings. This is required for min/max aggregation only, and can be None otherwise. Shape: (graph.n_dst_nodes, dim_node + dim_edge) if set.

stream_id : int, default=0

CUDA stream pointer as a Python int.
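As a sanity check on the concatenated layout described above, grad_output must be as wide as the n2n block (dim_node) plus the e2n block (dim_edge) plus the original output-node features (dim_node). A minimal shape-bookkeeping sketch using NumPy stand-ins with hypothetical sizes (the real call takes device arrays and a pylibcugraphops graph object):

```python
import numpy as np

# Hypothetical sizes, for illustration only.
n_dst_nodes, n_edges = 4, 10
dim_node, dim_edge = 8, 3

# Buffers the backward pass writes into (gradients w.r.t. the forward inputs).
grad_input_node = np.zeros((n_dst_nodes, dim_node), dtype=np.float32)
grad_input_edge = np.zeros((n_edges, dim_edge), dtype=np.float32)

# Incoming gradient w.r.t. the forward output, laid out as
# [ n2n aggregation | e2n aggregation | original output-node features ].
grad_output = np.zeros((n_dst_nodes, dim_node + dim_edge + dim_node),
                       dtype=np.float32)

# The width of grad_output must match the concatenated layout.
assert grad_output.shape[1] == dim_node + dim_edge + dim_node
```

On the device, the same shape contract applies to the CuPy or PyTorch CUDA tensors passed in; a shape mismatch here is the most common source of errors when wiring up the backward call.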