cudf.Series.groupby

Series.groupby(by=None, axis=0, level=None, as_index=True, sort=_NoDefault.no_default, group_keys=False, squeeze=False, observed=True, dropna=True)

Group using a mapper or by a Series of columns.

A groupby operation involves some combination of splitting the object, applying a function, and combining the results. This can be used to group large amounts of data and compute operations on these groups.

Parameters:
by : mapping, function, label, or list of labels

Used to determine the groups for the groupby. If by is a function, it's called on each value of the object's index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series' values are first aligned; see the .align() method). If a cupy array is passed, the values are used as-is to determine the groups. A label or list of labels may be passed to group by the columns in self. Notice that a tuple is interpreted as a (single) key. See the sketch after this parameter list for the function and dict forms.

level : int, level name, or sequence of such, default None

If the axis is a MultiIndex (hierarchical), group by a particular level or levels.

as_index : bool, default True

For aggregated output, return object with group labels as the index. Only relevant for DataFrame input. as_index=False is effectively "SQL-style" grouped output; see the as_index=False example in the DataFrame examples below.

sort : bool, default False

Sort result by group key. Unlike pandas, cuDF defaults to False for better performance. Note this does not influence the order of observations within each group; groupby preserves the order of rows within each group. See the unsorted example in the Examples section below.

group_keys : bool, optional

When calling apply and the by argument produces a like-indexed result, add group keys to the index to identify pieces. By default group keys are not included when the result's index (and column) labels match the inputs, and are included otherwise. This argument has no effect if the result produced is not like-indexed with respect to the input.
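
The function and dict forms of by can be sketched as follows. This is a minimal, hedged example that assumes these forms behave as documented above; the expected results are described in comments rather than shown, since the exact output formatting may differ.

>>> import cudf
>>> ser = cudf.Series([390., 350., 30., 20.],
...                   index=['Falcon', 'Falcon', 'Parrot', 'Parrot'],
...                   name="Max Speed")
>>> # by as a function: called on each index label; grouping by the first
>>> # letter yields groups 'F' (mean 370.0) and 'P' (mean 25.0).
>>> ser.groupby(lambda label: label[0], sort=True).mean()
>>> # by as a dict: the dict VALUES define the groups, so both index labels
>>> # fall into a single 'bird' group (mean 197.5).
>>> ser.groupby({'Falcon': 'bird', 'Parrot': 'bird'}).mean()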

Returns:
SeriesGroupBy

Returns a SeriesGroupBy object that contains information about the groups.

Examples

Series

>>> ser = cudf.Series([390., 350., 30., 20.],
...                 index=['Falcon', 'Falcon', 'Parrot', 'Parrot'],
...                 name="Max Speed")
>>> ser
Falcon    390.0
Falcon    350.0
Parrot     30.0
Parrot     20.0
Name: Max Speed, dtype: float64
>>> ser.groupby(level=0, sort=True).mean()
Falcon    370.0
Parrot     25.0
Name: Max Speed, dtype: float64
>>> ser.groupby(ser > 100, sort=True).mean()
Max Speed
False     25.0
True     370.0
Name: Max Speed, dtype: float64
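
Because cuDF defaults to sort=False (see the sort parameter above), the order of the group keys in an unsorted result is not guaranteed. A brief, hedged sketch reusing ser from above; no output is shown because it may vary between runs:

>>> # Default sort=False: faster, but 'Falcon' and 'Parrot' may appear in
>>> # either order; pass sort=True (as above) for deterministic, sorted keys.
>>> ser.groupby(level=0).mean()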

DataFrame

>>> import cudf
>>> import pandas as pd
>>> df = cudf.DataFrame({
...     'Animal': ['Falcon', 'Falcon', 'Parrot', 'Parrot'],
...     'Max Speed': [380., 370., 24., 26.],
... })
>>> df
   Animal  Max Speed
0  Falcon      380.0
1  Falcon      370.0
2  Parrot       24.0
3  Parrot       26.0
>>> df.groupby(['Animal'], sort=True).mean()
        Max Speed
Animal
Falcon      375.0
Parrot       25.0
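
as_index=False gives the "SQL-style" output mentioned in the as_index parameter notes: the grouping column stays a regular column rather than becoming the index. A hedged sketch using the same df; the expected result is described in the comment rather than shown.

>>> # 'Animal' remains a column and the result gets a default integer index,
>>> # with one row per group (Falcon 375.0, Parrot 25.0).
>>> df.groupby('Animal', as_index=False, sort=True).mean()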
>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'],
...           ['Captive', 'Wild', 'Captive', 'Wild']]
>>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type'))
>>> df = cudf.DataFrame({'Max Speed': [390., 350., 30., 20.]},
...     index=index)
>>> df
                Max Speed
Animal Type
Falcon Captive      390.0
       Wild         350.0
Parrot Captive       30.0
       Wild          20.0
>>> df.groupby(level=0, sort=True).mean()
        Max Speed
Animal
Falcon      370.0
Parrot       25.0
>>> df.groupby(level="Type", sort=True).mean()
         Max Speed
Type
Captive      210.0
Wild         185.0
>>> df = cudf.DataFrame({'A': 'a a b'.split(),
...                      'B': [1,2,3],
...                      'C': [4,6,5]})
>>> g1 = df.groupby('A', group_keys=False, sort=True)
>>> g2 = df.groupby('A', group_keys=True, sort=True)

Notice that g1 and g2 have two groups, a and b, and only differ in their group_keys argument. Calling apply in various ways, we can get different grouping results:

>>> g1[['B', 'C']].apply(lambda x: x / x.sum())
          B    C
0  0.333333  0.4
1  0.666667  0.6
2  1.000000  1.0

In the above, the groups are not part of the index. We can have them included by using g2 where group_keys=True:

>>> g2[['B', 'C']].apply(lambda x: x / x.sum())
            B    C
A
a 0  0.333333  0.4
  1  0.666667  0.6
b 2  1.000000  1.0