cugraph.dask.louvain(input_graph: Graph, max_level: int = None, max_iter: int = None, resolution: float = 1.0, threshold: float = 1e-07) -> Tuple[dask_cudf.DataFrame, float]

Compute the modularity-optimizing partition of the input graph using the Louvain method.

It uses the Louvain method described in:

VD Blondel, J-L Guillaume, R Lambiotte and E Lefebvre: Fast unfolding of communities in large networks, J. Stat. Mech. P10008 (2008).


The graph descriptor should contain the connectivity information and weights. The adjacency list will be computed if not already present. The current implementation only supports undirected graphs.
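Because only undirected graphs are supported, an edge list that stores both directions of each edge can first be collapsed to one canonical entry per edge. A minimal pure-Python sketch of that deduplication (illustrative only, not part of the cuGraph API):

```python
def to_undirected(edges):
    """Collapse a directed edge list to one canonical (min, max) tuple
    per undirected edge, dropping self-loops."""
    return sorted({(min(u, v), max(u, v)) for u, v in edges if u != v})

# Both directions of (0, 1) and (1, 2) collapse to a single undirected edge.
print(to_undirected([(0, 1), (1, 0), (1, 2), (2, 1), (3, 0)]))
# -> [(0, 1), (0, 3), (1, 2)]
```

In practice, constructing a `cugraph.Graph()` (which is undirected by default) handles this; the sketch only illustrates the semantics.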

Parameters:

max_level : integer, optional (default=100)

This controls the maximum number of levels of the Louvain algorithm. When specified, the algorithm terminates after no more than the specified number of levels. No error occurs when the algorithm terminates early in this manner.

max_iter : integer, optional (default=None)

This parameter is deprecated in favor of max_level. Previously it was used to control the maximum number of levels of the Louvain algorithm.

resolution : float, optional (default=1.0)

Called gamma in the modularity formula, this changes the size of the communities. Higher resolutions lead to more, smaller communities; lower resolutions lead to fewer, larger communities.

threshold : float, optional (default=1e-7)

Modularity gain threshold for each level. If the gain in modularity between two levels of the algorithm is less than the given threshold, the algorithm stops and returns the resulting communities.
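The effect of `resolution` (gamma) can be seen in the generalized modularity formula that Louvain optimizes, Q = (1/2m) * sum over i,j of [A_ij - gamma * k_i * k_j / 2m] * delta(c_i, c_j). A small pure-Python sketch (not cuGraph code) computing Q for two triangles joined by a bridge edge:

```python
from itertools import product

def modularity(edges, communities, gamma=1.0):
    """Generalized modularity:
    Q = (1/2m) * sum_ij (A_ij - gamma * k_i * k_j / 2m) * delta(c_i, c_j)."""
    adj, deg = {}, {}
    for u, v in edges:
        adj[(u, v)] = adj[(v, u)] = 1
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    two_m = 2 * len(edges)
    q = 0.0
    for i, j in product(deg, deg):
        if communities[i] == communities[j]:
            q += adj.get((i, j), 0) - gamma * deg[i] * deg[j] / two_m
    return q / two_m

# Two triangles {0, 1, 2} and {3, 4, 5} joined by the bridge edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
comms = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(round(modularity(edges, comms), 4))  # -> 0.3571 (= 5/14 with gamma=1)
# A higher gamma penalizes large communities, lowering Q for this partition.
print(modularity(edges, comms, gamma=2.0) < modularity(edges, comms))  # -> True
```

With gamma = 1 this partition scores Q = 5/14; raising gamma shifts the optimum toward splitting into more, smaller communities, which is why higher `resolution` values yield more communities.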


Returns:

parts : dask_cudf.DataFrame

GPU distributed data frame of size V containing two columns: the vertex id and the partition id it is assigned to.

ddf['vertex'] : cudf.Series

Contains the vertex identifiers

ddf['partition'] : cudf.Series

Contains the partition assigned to the vertices

modularity_score : float

A floating point number containing the global modularity score of the partitioning.
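The returned frame pairs each vertex with its community label, and downstream code typically groups on the partition column (after a `compute()` in the Dask setting). A stdlib-only mock of that post-processing, using the toy values below as illustrative output, not real results:

```python
from collections import Counter

# Mock of the (vertex, partition) pairs a Louvain run might return.
# Values are illustrative only; real output is a dask_cudf.DataFrame
# with 'vertex' and 'partition' columns.
parts = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 1), (5, 1)]

community_sizes = Counter(partition for _vertex, partition in parts)
print(dict(community_sizes))           # -> {0: 3, 1: 3}
print(len(community_sizes), "communities")  # -> 2 communities
```

With the real return value, the equivalent operation is a groupby-count on the `'partition'` column of the computed DataFrame.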


Examples:

>>> import cugraph
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> #     see the RAPIDS documentation on multi-GPU (Dask) setup
>>> # Download the karate dataset from the cugraph datasets repository
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
...                          chunksize=chunksize, delimiter=" ",
...                          names=["src", "dst", "value"],
...                          dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph()
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst',
...                            edge_attr='value')
>>> parts, modularity_score = dcg.louvain(dg)