API reference#

This page provides a list of all publicly accessible modules, methods, and classes in the dask_cudf namespace.

Creating and storing DataFrames#

Like Dask, Dask-cuDF supports creation of DataFrames from a variety of storage formats. For on-disk data that are not supported directly in Dask-cuDF, we recommend using Dask’s data reading facilities, followed by calling .to_backend("cudf") to obtain a Dask-cuDF object.
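
For example, a minimal sketch (the reader and glob path are illustrative assumptions):

>>> import dask.dataframe as dd
>>> ddf = dd.read_json("data/records-*.json")  # hypothetical input files
>>> gdf = ddf.to_backend("cudf")  # move the partitions to cuDF on the GPU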

dask_cudf.from_cudf(data, npartitions=None, chunksize=None, sort=True, name=None)#

Create a DataFrame from a cudf.DataFrame.

This function is a thin wrapper around dask.dataframe.from_pandas(), accepting the same arguments (described below), except that it operates on cuDF rather than pandas objects.

Construct a Dask DataFrame from a Pandas DataFrame

This splits an in-memory Pandas dataframe into several parts and constructs a dask.dataframe from those parts on which Dask.dataframe can operate in parallel. By default, the input dataframe will be sorted by the index to produce cleanly-divided partitions (with known divisions). To preserve the input ordering, make sure the input index is monotonically-increasing. The sort=False option will also avoid reordering, but will not result in known divisions.

Parameters:
data : pandas.DataFrame or pandas.Series

The DataFrame/Series with which to construct a Dask DataFrame/Series

npartitions : int, optional, default 1

The number of partitions of the index to create. Note that if there are duplicate values or insufficient elements in data.index, the output may have fewer partitions than requested.

chunksize : int, optional

The desired number of rows per index partition to use. Note that depending on the size and index of the dataframe, actual partition sizes may vary.

sort: bool

Sort the input by index first to obtain cleanly divided partitions (with known divisions). If False, the input will not be sorted, and all divisions will be set to None. Default is True.

name: string, optional

An optional keyname for the dataframe. Defaults to hashing the input.

Returns:
dask.DataFrame or dask.Series

A dask DataFrame/Series partitioned along the index

Raises:
TypeError

If something other than a pandas.DataFrame or pandas.Series is passed in.

See also

from_array

Construct a dask.DataFrame from an array that has record dtype

read_csv

Construct a dask.DataFrame from a CSV file

Examples

>>> from dask.dataframe import from_pandas
>>> df = pd.DataFrame(dict(a=list('aabbcc'), b=list(range(6))),
...                   index=pd.date_range(start='20100101', periods=6))
>>> ddf = from_pandas(df, npartitions=3)
>>> ddf.divisions  
(Timestamp('2010-01-01 00:00:00', freq='D'),
 Timestamp('2010-01-03 00:00:00', freq='D'),
 Timestamp('2010-01-05 00:00:00', freq='D'),
 Timestamp('2010-01-06 00:00:00', freq='D'))
>>> ddf = from_pandas(df.a, npartitions=3)  # Works with Series too!
>>> ddf.divisions  
(Timestamp('2010-01-01 00:00:00', freq='D'),
 Timestamp('2010-01-03 00:00:00', freq='D'),
 Timestamp('2010-01-05 00:00:00', freq='D'),
 Timestamp('2010-01-06 00:00:00', freq='D'))
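
The cuDF analogue follows the same pattern; a minimal sketch:

>>> import cudf
>>> import dask_cudf
>>> gdf = cudf.DataFrame(dict(a=list('aabbcc'), b=list(range(6))))
>>> dgdf = dask_cudf.from_cudf(gdf, npartitions=2)
>>> dgdf.npartitions
2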
dask_cudf.from_delayed(dfs: Delayed | distributed.Future | Iterable[Delayed | distributed.Future], meta=None, divisions: tuple | None = None, prefix: str | None = None, verify_meta: bool = True)#

Create Dask DataFrame from many Dask Delayed objects

Warning

from_delayed should only be used if the objects that create the data are complex and cannot be easily represented as a single function in an embarrassingly parallel fashion.

from_map is recommended if the query can be expressed as a single function like:

def read_xml(path):
    return pd.read_xml(path)

ddf = dd.from_map(read_xml, paths)

from_delayed might be deprecated in the future.

Parameters:
dfs

A dask.delayed.Delayed, a distributed.Future, or an iterable of either of these objects, e.g. returned by client.submit. These comprise the individual partitions of the resulting dataframe. If a single object is provided (not an iterable), then the resulting dataframe will have only one partition.

meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional

An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.

divisions

Partition boundaries along the index. See https://docs.dask.org/en/latest/dataframe-design.html#partitions for the tuple form. If None, index information will not be used.

prefix

Prefix to prepend to the keys.

verify_meta

If True, check that the partitions have consistent metadata. Defaults to True.
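
A minimal sketch of building a Dask-cuDF frame from delayed cuDF partitions (the builder function and column name are illustrative assumptions):

>>> import cudf
>>> import dask_cudf
>>> from dask import delayed
>>> @delayed
... def make_part(n):  # each call produces one cuDF partition
...     return cudf.DataFrame({"x": list(range(n))})
>>> ddf = dask_cudf.from_delayed([make_part(3), make_part(4)])
>>> ddf.npartitions
2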

Grouping#

As discussed in the Dask documentation for groupby, operations such as groupby, join, and merge that require matching up rows of a DataFrame become significantly more challenging in a parallel setting than they are in serial. Dask-cuDF faces the same challenges; however, for certain groupby operations we can take advantage of functionality in cuDF that allows us to compute multiple aggregations at once. There are therefore two interfaces to grouping in Dask-cuDF: the general DataFrame.groupby(), which returns a CudfDataFrameGroupBy object, and a specialized groupby_agg(). Generally speaking, you should not need to call groupby_agg() directly, since Dask-cuDF will arrange to call it if possible.
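
For example, a minimal sketch of the general interface (the column names are illustrative):

>>> import cudf
>>> import dask_cudf
>>> df = cudf.DataFrame({"key": [1, 1, 2, 2], "x": [1.0, 2.0, 3.0, 4.0]})
>>> ddf = dask_cudf.from_cudf(df, npartitions=2)
>>> result = ddf.groupby("key").agg({"x": ["mean", "max"]})
>>> result.compute()  # where possible, Dask-cuDF dispatches this to groupby_agg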

class dask_cudf.groupby.CudfDataFrameGroupBy(*args, sort=None, **kwargs)#

Bases: DataFrameGroupBy

Attributes

index

Methods

agg([arg, split_every, split_out, ...])

Aggregate using one or more specified operations

aggregate(arg[, split_every, split_out, ...])

Aggregate using one or more specified operations

apply(func, *args, **kwargs)

Parallel version of pandas GroupBy.apply

bfill([limit])

Backward fill the values.

corr([ddof, split_every, split_out, ...])

Compute pairwise correlation of columns, excluding NA/null values.

count([split_every, split_out])

Compute count of group, excluding missing values.

cov([ddof, split_every, split_out, std, ...])

Compute pairwise covariance of columns, excluding NA/null values.

cumcount([axis])

Number each item in each group from 0 to the length of that group - 1.

cumprod([axis, numeric_only])

Cumulative product for each group.

cumsum([axis, numeric_only])

Cumulative sum for each group.

ffill([limit])

Forward fill the values.

fillna([value, method, limit, axis])

Fill NA/NaN values using the specified method.

first([split_every, split_out])

Compute the first entry of each column within each group.

get_group(key)

Construct DataFrame from group with provided name.

idxmax([split_every, split_out, ...])

Return index of first occurrence of maximum over requested axis.

idxmin([split_every, split_out, ...])

Return index of first occurrence of minimum over requested axis.

last([split_every, split_out])

Compute the last entry of each column within each group.

max([split_every, split_out])

Compute max of group values.

mean([split_every, split_out])

Compute mean of groups, excluding missing values.

median([split_every, split_out, ...])

Compute median of groups, excluding missing values.

min([split_every, split_out])

Compute min of group values.

prod([split_every, split_out, ...])

Compute prod of group values.

rolling(window[, min_periods, center, ...])

Provides rolling transformations.

shift([periods, freq, axis, fill_value, meta])

Parallel version of pandas GroupBy.shift

size([split_every, split_out, shuffle_method])

Compute group sizes.

std([split_every, split_out])

Compute standard deviation of groups, excluding missing values.

sum([split_every, split_out])

Compute sum of group values.

transform(func, *args, **kwargs)

Parallel version of pandas GroupBy.transform

var([split_every, split_out])

Compute variance of groups, excluding missing values.

collect

compute

agg(arg=None, split_every=None, split_out=1, shuffle_method=None, **kwargs)#

Aggregate using one or more specified operations

Based on pd.core.groupby.DataFrameGroupBy.agg

Parameters:
arg : callable, str, list or dict, optional

Aggregation spec. Accepted combinations are:

  • callable function

  • string function name

  • list of functions and/or function names, e.g. [np.sum, 'mean']

  • dict of column names -> function, function name or list of such.

  • None only if named aggregation syntax is used

split_every : int, optional

Number of intermediate partitions that may be aggregated at once. This defaults to 8. If your intermediate partitions are likely to be small (either due to a small number of groups or a small initial partition size), consider increasing this number for better performance.

split_out : int, optional

Number of output partitions. Default is 1.

shuffle : bool or str, optional

Whether a shuffle-based algorithm should be used. A specific algorithm name may also be specified (e.g. "tasks" or "p2p"). The shuffle-based algorithm is likely to be more efficient than shuffle=False when split_out>1 and the number of unique groups is large (high cardinality). Default is False when split_out = 1. When split_out > 1, it chooses the algorithm set by the shuffle option in the dask config system, or "tasks" if nothing is set.

kwargs: tuple or pd.NamedAgg, optional

Used for named aggregations where the keywords are the output column names and the values are tuples where the first element is the input column name and the second element is the aggregation function. pandas.NamedAgg can also be used as the value. To use the named aggregation syntax, arg must be set to None.
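
A minimal sketch of the named aggregation syntax (assuming a frame ddf with columns key and x, as in the grouping example above):

>>> import pandas as pd
>>> ddf.groupby("key").agg(
...     x_mean=("x", "mean"),
...     x_max=pd.NamedAgg(column="x", aggfunc="max"),
... )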

aggregate(arg, split_every=None, split_out=1, shuffle_method=None)#

Aggregate using one or more specified operations

Based on pd.core.groupby.DataFrameGroupBy.aggregate

Parameters:
arg : callable, str, list or dict, optional

Aggregation spec. Accepted combinations are:

  • callable function

  • string function name

  • list of functions and/or function names, e.g. [np.sum, 'mean']

  • dict of column names -> function, function name or list of such.

  • None only if named aggregation syntax is used

split_every : int, optional

Number of intermediate partitions that may be aggregated at once. This defaults to 8. If your intermediate partitions are likely to be small (either due to a small number of groups or a small initial partition size), consider increasing this number for better performance.

split_out : int, optional

Number of output partitions. Default is 1.

shuffle : bool or str, optional

Whether a shuffle-based algorithm should be used. A specific algorithm name may also be specified (e.g. "tasks" or "p2p"). The shuffle-based algorithm is likely to be more efficient than shuffle=False when split_out>1 and the number of unique groups is large (high cardinality). Default is False when split_out = 1. When split_out > 1, it chooses the algorithm set by the shuffle option in the dask config system, or "tasks" if nothing is set.

kwargs: tuple or pd.NamedAgg, optional

Used for named aggregations where the keywords are the output column names and the values are tuples where the first element is the input column name and the second element is the aggregation function. pandas.NamedAgg can also be used as the value. To use the named aggregation syntax, arg must be set to None.

apply(func, *args, **kwargs)#

Parallel version of pandas GroupBy.apply

This mimics the pandas version except for the following:

  1. If the grouper does not align with the index then this causes a full shuffle. The order of rows within each group may not be preserved.

  2. Dask’s GroupBy.apply is not appropriate for aggregations. For custom aggregations, use dask.dataframe.groupby.Aggregation.

Warning

Pandas’ groupby-apply can be used to apply arbitrary functions, including aggregations that result in one row per group. Dask’s groupby-apply will apply func once on each group, doing a shuffle if needed, such that each group is contained in one partition. When func is a reduction, you’ll end up with one row per group. To apply a custom aggregation with Dask, use dask.dataframe.groupby.Aggregation.

Parameters:
func: function

Function to apply

args, kwargs : Scalar, Delayed or object

Arguments and keywords to pass to the function.

meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional

An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.

Returns:
applied : Series or DataFrame depending on columns keyword
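
A minimal sketch of apply with an explicit meta (the frame ddf, its columns, and the dtypes are illustrative assumptions):

>>> def demean(g):  # subtract the group mean of column x
...     g = g.copy()
...     g["x"] = g["x"] - g["x"].mean()
...     return g
>>> ddf.groupby("key").apply(demean, meta={"key": "int64", "x": "float64"})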
bfill(limit=None)#

Backward fill the values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.bfill.

Some inconsistencies with the Dask version may exist.

Parameters:
limit : int, optional

Limit of how many values to fill.

Returns:
Series or DataFrame

Object with missing values filled.

See also

Series.bfill

Backward fill the missing values in the dataset.

DataFrame.bfill

Backward fill the missing values in the dataset.

Series.fillna

Fill NaN values of a Series.

DataFrame.fillna

Fill NaN values of a DataFrame.

Examples

With Series:

>>> index = ['Falcon', 'Falcon', 'Parrot', 'Parrot', 'Parrot']  
>>> s = pd.Series([None, 1, None, None, 3], index=index)  
>>> s  
Falcon    NaN
Falcon    1.0
Parrot    NaN
Parrot    NaN
Parrot    3.0
dtype: float64
>>> s.groupby(level=0).bfill()  
Falcon    1.0
Falcon    1.0
Parrot    3.0
Parrot    3.0
Parrot    3.0
dtype: float64
>>> s.groupby(level=0).bfill(limit=1)  
Falcon    1.0
Falcon    1.0
Parrot    NaN
Parrot    3.0
Parrot    3.0
dtype: float64

With DataFrame:

>>> df = pd.DataFrame({'A': [1, None, None, None, 4],  
...                    'B': [None, None, 5, None, 7]}, index=index)
>>> df  
          A         B
Falcon  1.0       NaN
Falcon  NaN       NaN
Parrot  NaN       5.0
Parrot  NaN       NaN
Parrot  4.0       7.0
>>> df.groupby(level=0).bfill()  
          A         B
Falcon  1.0       NaN
Falcon  NaN       NaN
Parrot  4.0       5.0
Parrot  4.0       7.0
Parrot  4.0       7.0
>>> df.groupby(level=0).bfill(limit=1)  
          A         B
Falcon  1.0       NaN
Falcon  NaN       NaN
Parrot  NaN       5.0
Parrot  4.0       7.0
Parrot  4.0       7.0
corr(ddof=1, split_every=None, split_out=1, numeric_only=_NoDefault.no_default)#

Compute pairwise correlation of columns, excluding NA/null values.

This docstring was copied from pandas.core.frame.DataFrame.corr.

Some inconsistencies with the Dask version may exist.

Groupby correlation: corr(X, Y) = cov(X, Y) / (std_x * std_y)

Parameters:
method : {‘pearson’, ‘kendall’, ‘spearman’} or callable (Not supported in Dask)

Method of correlation:

  • pearson : standard correlation coefficient

  • kendall : Kendall Tau correlation coefficient

  • spearman : Spearman rank correlation

  • callable : callable with input two 1d ndarrays and returning a float. Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior.

min_periods : int, optional (Not supported in Dask)

Minimum number of observations required per pair of columns to have a valid result. Currently only available for Pearson and Spearman correlation.

numeric_only : bool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Changed in version 2.0.0: The default value of numeric_only is now False.

Returns:
DataFrame

Correlation matrix.

See also

DataFrame.corrwith

Compute pairwise correlation with another DataFrame or Series.

Series.corr

Compute the correlation between two Series.

Notes

Pearson, Kendall and Spearman correlation are currently computed using pairwise complete observations.

Examples

>>> def histogram_intersection(a, b):  
...     v = np.minimum(a, b).sum().round(decimals=1)
...     return v
>>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],  
...                   columns=['dogs', 'cats'])
>>> df.corr(method=histogram_intersection)  
      dogs  cats
dogs   1.0   0.3
cats   0.3   1.0
>>> df = pd.DataFrame([(1, 1), (2, np.nan), (np.nan, 3), (4, 4)],  
...                   columns=['dogs', 'cats'])
>>> df.corr(min_periods=3)  
      dogs  cats
dogs   1.0   NaN
cats   NaN   1.0
count(split_every=None, split_out=1)#

Compute count of group, excluding missing values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.count.

Some inconsistencies with the Dask version may exist.

Returns:
Series or DataFrame

Count of values within each group.

See also

Series.groupby

Apply a function groupby to a Series.

DataFrame.groupby

Apply a function groupby to each row or column of a DataFrame.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'b']  
>>> ser = pd.Series([1, 2, np.nan], index=lst)  
>>> ser  
a    1.0
a    2.0
b    NaN
dtype: float64
>>> ser.groupby(level=0).count()  
a    2
b    0
dtype: int64

For DataFrameGroupBy:

>>> data = [[1, np.nan, 3], [1, np.nan, 6], [7, 8, 9]]  
>>> df = pd.DataFrame(data, columns=["a", "b", "c"],  
...                   index=["cow", "horse", "bull"])
>>> df  
        a         b     c
cow     1       NaN     3
horse   1       NaN     6
bull    7       8.0     9
>>> df.groupby("a").count()  
    b   c
a
1   0   2
7   1   1

For Resampler:

>>> ser = pd.Series([1, 2, 3, 4], index=pd.DatetimeIndex(  
...                 ['2023-01-01', '2023-01-15', '2023-02-01', '2023-02-15']))
>>> ser  
2023-01-01    1
2023-01-15    2
2023-02-01    3
2023-02-15    4
dtype: int64
>>> ser.resample('MS').count()  
2023-01-01    2
2023-02-01    2
Freq: MS, dtype: int64
cov(ddof=1, split_every=None, split_out=1, std=False, numeric_only=_NoDefault.no_default)#

Compute pairwise covariance of columns, excluding NA/null values.

This docstring was copied from pandas.core.frame.DataFrame.cov.

Some inconsistencies with the Dask version may exist.

Groupby covariance is accomplished by

  1. Computing intermediate values for sum, count, and the product of all columns: a b c -> a*a, a*b, b*b, b*c, c*c.

  2. The values are then aggregated and the final covariance value is calculated: cov(X, Y) = mean(X * Y) - mean(X) * mean(Y)

When std is True, the correlation is calculated instead of the covariance.

Compute the pairwise covariance among the series of a DataFrame. The returned data frame is the covariance matrix of the columns of the DataFrame.

Both NA and null values are automatically excluded from the calculation. (See the note below about bias from missing values.) A threshold can be set for the minimum number of observations for each value created. Comparisons with observations below this threshold will be returned as NaN.

This method is generally used for the analysis of time series data to understand the relationship between different measures across time.

Parameters:
min_periods : int, optional (Not supported in Dask)

Minimum number of observations required per pair of columns to have a valid result.

ddof : int, default 1

Delta degrees of freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. This argument is applicable only when no nan is in the dataframe.

numeric_only : bool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Changed in version 2.0.0: The default value of numeric_only is now False.

Returns:
DataFrame

The covariance matrix of the series of the DataFrame.

See also

Series.cov

Compute covariance with another Series.

core.window.ewm.ExponentialMovingWindow.cov

Exponential weighted sample covariance.

core.window.expanding.Expanding.cov

Expanding sample covariance.

core.window.rolling.Rolling.cov

Rolling sample covariance.

Notes

Returns the covariance matrix of the DataFrame’s time series. The covariance is normalized by N-ddof.

For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.

However, for many applications this estimate may not be acceptable because the estimate covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimate correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more details.

Examples

>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],  
...                   columns=['dogs', 'cats'])
>>> df.cov()  
          dogs      cats
dogs  0.666667 -1.000000
cats -1.000000  1.666667
>>> np.random.seed(42)  
>>> df = pd.DataFrame(np.random.randn(1000, 5),  
...                   columns=['a', 'b', 'c', 'd', 'e'])
>>> df.cov()  
          a         b         c         d         e
a  0.998438 -0.020161  0.059277 -0.008943  0.014144
b -0.020161  1.059352 -0.008543 -0.024738  0.009826
c  0.059277 -0.008543  1.010670 -0.001486 -0.000271
d -0.008943 -0.024738 -0.001486  0.921297 -0.013692
e  0.014144  0.009826 -0.000271 -0.013692  0.977795

Minimum number of periods

This method also supports an optional min_periods keyword that specifies the required minimum number of non-NA observations for each column pair in order to have a valid result:

>>> np.random.seed(42)  
>>> df = pd.DataFrame(np.random.randn(20, 3),  
...                   columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan  
>>> df.loc[df.index[5:10], 'b'] = np.nan  
>>> df.cov(min_periods=12)  
          a         b         c
a  0.316741       NaN -0.150812
b       NaN  1.248003  0.191417
c -0.150812  0.191417  0.895202
cumcount(axis=_NoDefault.no_default)#

Number each item in each group from 0 to the length of that group - 1.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.cumcount.

Some inconsistencies with the Dask version may exist.

Essentially this is equivalent to

self.apply(lambda x: pd.Series(np.arange(len(x)), x.index))
Parameters:
ascending : bool, default True (Not supported in Dask)

If False, number in reverse, from length of group - 1 to 0.

Returns:
Series

Sequence number of each element within each group.

See also

ngroup

Number the groups themselves.

Examples

>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],  
...                   columns=['A'])
>>> df  
   A
0  a
1  a
2  a
3  b
4  b
5  a
>>> df.groupby('A').cumcount()  
0    0
1    1
2    2
3    0
4    1
5    3
dtype: int64
>>> df.groupby('A').cumcount(ascending=False)  
0    3
1    2
2    1
3    1
4    0
5    0
dtype: int64
cumprod(axis=_NoDefault.no_default, numeric_only=_NoDefault.no_default)#

Cumulative product for each group.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.cumprod.

Some inconsistencies with the Dask version may exist.

Returns:
Series or DataFrame

See also

Series.groupby

Apply a function groupby to a Series.

DataFrame.groupby

Apply a function groupby to each row or column of a DataFrame.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'b']  
>>> ser = pd.Series([6, 2, 0], index=lst)  
>>> ser  
a    6
a    2
b    0
dtype: int64
>>> ser.groupby(level=0).cumprod()  
a    6
a   12
b    0
dtype: int64

For DataFrameGroupBy:

>>> data = [[1, 8, 2], [1, 2, 5], [2, 6, 9]]  
>>> df = pd.DataFrame(data, columns=["a", "b", "c"],  
...                   index=["cow", "horse", "bull"])
>>> df  
        a   b   c
cow     1   8   2
horse   1   2   5
bull    2   6   9
>>> df.groupby("a").groups  
{1: ['cow', 'horse'], 2: ['bull']}
>>> df.groupby("a").cumprod()  
        b   c
cow     8   2
horse  16  10
bull    6   9
cumsum(axis=_NoDefault.no_default, numeric_only=_NoDefault.no_default)#

Cumulative sum for each group.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.cumsum.

Some inconsistencies with the Dask version may exist.

Returns:
Series or DataFrame

See also

Series.groupby

Apply a function groupby to a Series.

DataFrame.groupby

Apply a function groupby to each row or column of a DataFrame.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'b']  
>>> ser = pd.Series([6, 2, 0], index=lst)  
>>> ser  
a    6
a    2
b    0
dtype: int64
>>> ser.groupby(level=0).cumsum()  
a    6
a    8
b    0
dtype: int64

For DataFrameGroupBy:

>>> data = [[1, 8, 2], [1, 2, 5], [2, 6, 9]]  
>>> df = pd.DataFrame(data, columns=["a", "b", "c"],  
...                   index=["fox", "gorilla", "lion"])
>>> df  
          a   b   c
fox       1   8   2
gorilla   1   2   5
lion      2   6   9
>>> df.groupby("a").groups  
{1: ['fox', 'gorilla'], 2: ['lion']}
>>> df.groupby("a").cumsum()  
          b   c
fox       8   2
gorilla  10   7
lion      6   9
ffill(limit=None)#

Forward fill the values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.ffill.

Some inconsistencies with the Dask version may exist.

Parameters:
limit : int, optional

Limit of how many values to fill.

Returns:
Series or DataFrame

Object with missing values filled.

See also

Series.ffill

Forward fill the missing values in the dataset.

DataFrame.ffill

Object with missing values filled or None if inplace=True.

Series.fillna

Fill NaN values of a Series.

DataFrame.fillna

Fill NaN values of a DataFrame.

Examples

For SeriesGroupBy:

>>> key = [0, 0, 1, 1]  
>>> ser = pd.Series([np.nan, 2, 3, np.nan], index=key)  
>>> ser  
0    NaN
0    2.0
1    3.0
1    NaN
dtype: float64
>>> ser.groupby(level=0).ffill()  
0    NaN
0    2.0
1    3.0
1    3.0
dtype: float64

For DataFrameGroupBy:

>>> df = pd.DataFrame(  
...     {
...         "key": [0, 0, 1, 1, 1],
...         "A": [np.nan, 2, np.nan, 3, np.nan],
...         "B": [2, 3, np.nan, np.nan, np.nan],
...         "C": [np.nan, np.nan, 2, np.nan, np.nan],
...     }
... )
>>> df  
   key    A    B   C
0    0  NaN  2.0 NaN
1    0  2.0  3.0 NaN
2    1  NaN  NaN 2.0
3    1  3.0  NaN NaN
4    1  NaN  NaN NaN

Propagate non-null values forward or backward within each group along columns.

>>> df.groupby("key").ffill()  
     A    B   C
0  NaN  2.0 NaN
1  2.0  3.0 NaN
2  NaN  NaN 2.0
3  3.0  NaN 2.0
4  3.0  NaN 2.0

Propagate non-null values forward or backward within each group along rows.

>>> df.T.groupby(np.array([0, 0, 1, 1])).ffill().T  
   key    A    B    C
0  0.0  0.0  2.0  2.0
1  0.0  2.0  3.0  3.0
2  1.0  1.0  NaN  2.0
3  1.0  3.0  NaN  NaN
4  1.0  1.0  NaN  NaN

Only replace the first NaN element within a group along rows.

>>> df.groupby("key").ffill(limit=1)  
     A    B    C
0  NaN  2.0  NaN
1  2.0  3.0  NaN
2  NaN  NaN  2.0
3  3.0  NaN  2.0
4  3.0  NaN  NaN
fillna(value=None, method=None, limit=None, axis=_NoDefault.no_default)#

Fill NA/NaN values using the specified method.

Parameters:
value : scalar, default None

Value to use to fill holes (e.g. 0).

method : {‘bfill’, ‘ffill’, None}, default None

Method to use for filling holes in reindexed Series. ffill: propagate last valid observation forward to next valid. bfill: use next valid observation to fill gap.

axis : {0 or ‘index’, 1 or ‘columns’}

Axis along which to fill missing values.

limit : int, default None

If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.

Returns:
Series or DataFrame

Object with missing values filled
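
A minimal sketch, assuming a frame ddf with a grouping column key and null-containing value columns:

>>> ddf.groupby("key").fillna(method="ffill")  # forward fill within each group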

first(split_every=None, split_out=1)#

Compute the first entry of each column within each group.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.first.

Some inconsistencies with the Dask version may exist.

Defaults to skipping NA elements.

Parameters:
numeric_only : bool, default False

Include only float, int, boolean columns.

min_count : int, default -1 (Not supported in Dask)

The required number of valid values to perform the operation. If fewer than min_count valid values are present the result will be NA.

skipna : bool, default True (Not supported in Dask)

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

New in version 2.2.1.

Returns:
Series or DataFrame

First values within each group.

See also

DataFrame.groupby

Apply a function groupby to each row or column of a DataFrame.

pandas.core.groupby.DataFrameGroupBy.last

Compute the last non-null entry of each column.

pandas.core.groupby.DataFrameGroupBy.nth

Take the nth row from each group.

Examples

>>> df = pd.DataFrame(dict(A=[1, 1, 3], B=[None, 5, 6], C=[1, 2, 3],  
...                        D=['3/11/2000', '3/12/2000', '3/13/2000']))
>>> df['D'] = pd.to_datetime(df['D'])  
>>> df.groupby("A").first()  
     B  C          D
A
1  5.0  1 2000-03-11
3  6.0  3 2000-03-13
>>> df.groupby("A").first(min_count=2)  
    B    C          D
A
1 NaN  1.0 2000-03-11
3 NaN  NaN        NaT
>>> df.groupby("A").first(numeric_only=True)  
     B  C
A
1  5.0  1
3  6.0  3
get_group(key)#

Construct DataFrame from group with provided name.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.get_group.

Some inconsistencies with the Dask version may exist.

Known inconsistencies:

If the group is not present, Dask will return an empty Series/DataFrame.

Parameters:
name : object (Not supported in Dask)

The name of the group to get as a DataFrame.

obj : DataFrame, default None (Not supported in Dask)

The DataFrame to take the DataFrame out of. If it is None, the object groupby was called on will be used.

Deprecated since version 2.1.0: The obj is deprecated and will be removed in a future version. Do df.iloc[gb.indices.get(name)] instead of gb.get_group(name, obj=df).

Returns:
same type as obj

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'b']  
>>> ser = pd.Series([1, 2, 3], index=lst)  
>>> ser  
a    1
a    2
b    3
dtype: int64
>>> ser.groupby(level=0).get_group("a")  
a    1
a    2
dtype: int64

For DataFrameGroupBy:

>>> data = [[1, 2, 3], [1, 5, 6], [7, 8, 9]]  
>>> df = pd.DataFrame(data, columns=["a", "b", "c"],  
...                   index=["owl", "toucan", "eagle"])
>>> df  
        a  b  c
owl     1  2  3
toucan  1  5  6
eagle   7  8  9
>>> df.groupby(by=["a"]).get_group((1,))  
        a  b  c
owl     1  2  3
toucan  1  5  6

For Resampler:

>>> ser = pd.Series([1, 2, 3, 4], index=pd.DatetimeIndex(  
...                 ['2023-01-01', '2023-01-15', '2023-02-01', '2023-02-15']))
>>> ser  
2023-01-01    1
2023-01-15    2
2023-02-01    3
2023-02-15    4
dtype: int64
>>> ser.resample('MS').get_group('2023-01-01')  
2023-01-01    1
2023-01-15    2
dtype: int64
idxmax(split_every=None, split_out=1, shuffle_method=None, axis=_NoDefault.no_default, skipna=True, numeric_only=_NoDefault.no_default)#

Return index of first occurrence of maximum over requested axis.

This docstring was copied from pandas.core.frame.DataFrame.idxmax.

Some inconsistencies with the Dask version may exist.

NA/null values are excluded.

Parameters:
axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

numeric_only : bool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Returns:
Series

Indexes of maxima along the specified axis.

Raises:
ValueError
  • If the row/column is empty

See also

Series.idxmax

Return index of the maximum element.

Notes

This method is the DataFrame version of ndarray.argmax.

Examples

Consider a dataset containing food consumption in Argentina.

>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],  
...                     'co2_emissions': [37.2, 19.66, 1712]},
...                   index=['Pork', 'Wheat Products', 'Beef'])
>>> df  
                consumption  co2_emissions
Pork                  10.51         37.20
Wheat Products       103.11         19.66
Beef                  55.48       1712.00

By default, it returns the index for the maximum value in each column.

>>> df.idxmax()  
consumption     Wheat Products
co2_emissions             Beef
dtype: object

To return the index for the maximum value in each row, use axis="columns".

>>> df.idxmax(axis="columns")  
Pork              co2_emissions
Wheat Products     consumption
Beef              co2_emissions
dtype: object
idxmin(split_every=None, split_out=1, shuffle_method=None, axis=_NoDefault.no_default, skipna=True, numeric_only=_NoDefault.no_default)#

Return index of first occurrence of minimum over requested axis.

This docstring was copied from pandas.core.frame.DataFrame.idxmin.

Some inconsistencies with the Dask version may exist.

NA/null values are excluded.

Parameters:
axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

numeric_only : bool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Returns:
Series

Indexes of minima along the specified axis.

Raises:
ValueError
  • If the row/column is empty

See also

Series.idxmin

Return index of the minimum element.

Notes

This method is the DataFrame version of ndarray.argmin.

Examples

Consider a dataset containing food consumption in Argentina.

>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],  
...                     'co2_emissions': [37.2, 19.66, 1712]},
...                   index=['Pork', 'Wheat Products', 'Beef'])
>>> df  
                consumption  co2_emissions
Pork                  10.51         37.20
Wheat Products       103.11         19.66
Beef                  55.48       1712.00

By default, it returns the index for the minimum value in each column.

>>> df.idxmin()  
consumption                Pork
co2_emissions    Wheat Products
dtype: object

To return the index for the minimum value in each row, use axis="columns".

>>> df.idxmin(axis="columns")  
Pork                consumption
Wheat Products    co2_emissions
Beef                consumption
dtype: object
last(split_every=None, split_out=1)#

Compute the last entry of each column within each group.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.last.

Some inconsistencies with the Dask version may exist.

Defaults to skipping NA elements.

Parameters:
numeric_only : bool, default False

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.

min_count : int, default -1 (Not supported in Dask)

The required number of valid values to perform the operation. If fewer than min_count valid values are present the result will be NA.

skipna : bool, default True (Not supported in Dask)

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

New in version 2.2.1.

Returns:
Series or DataFrame

Last of values within each group.

See also

DataFrame.groupby

Apply a function groupby to each row or column of a DataFrame.

pandas.core.groupby.DataFrameGroupBy.first

Compute the first non-null entry of each column.

pandas.core.groupby.DataFrameGroupBy.nth

Take the nth row from each group.

Examples

>>> df = pd.DataFrame(dict(A=[1, 1, 3], B=[5, None, 6], C=[1, 2, 3]))  
>>> df.groupby("A").last()  
     B  C
A
1  5.0  2
3  6.0  3
max(split_every=None, split_out=1)#

Compute max of group values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.max.

Some inconsistencies with the Dask version may exist.

Parameters:
numeric_only : bool, default False

Include only float, int, boolean columns.

Changed in version 2.0.0: numeric_only no longer accepts None.

min_count : int, default -1 (Not supported in Dask)

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.

engine : str, default None (Not supported in Dask)
  • 'cython' : Runs rolling apply through C-extensions from cython.

  • 'numba' : Runs rolling apply through JIT compiled code from numba.

    Only available when raw is set to True.

  • None : Defaults to 'cython' or globally setting compute.use_numba

engine_kwargs : dict, default None (Not supported in Dask)
  • For 'cython' engine, there are no accepted engine_kwargs

  • For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to both the func and the apply groupby aggregation.

Returns:
Series or DataFrame

Computed max of values within each group.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'b', 'b']  
>>> ser = pd.Series([1, 2, 3, 4], index=lst)  
>>> ser  
a    1
a    2
b    3
b    4
dtype: int64
>>> ser.groupby(level=0).max()  
a    2
b    4
dtype: int64

For DataFrameGroupBy:

>>> data = [[1, 8, 2], [1, 2, 5], [2, 5, 8], [2, 6, 9]]  
>>> df = pd.DataFrame(data, columns=["a", "b", "c"],  
...                   index=["tiger", "leopard", "cheetah", "lion"])
>>> df  
          a  b  c
  tiger   1  8  2
leopard   1  2  5
cheetah   2  5  8
   lion   2  6  9
>>> df.groupby("a").max()  
    b  c
a
1   8  5
2   6  9
mean(split_every=None, split_out=1)#

Compute mean of groups, excluding missing values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.mean.

Some inconsistencies with the Dask version may exist.

Parameters:
numeric_only : bool, default False

Include only float, int, boolean columns.

Changed in version 2.0.0: numeric_only no longer accepts None and defaults to False.

engine : str, default None (Not supported in Dask)
  • 'cython' : Runs the operation through C-extensions from cython.

  • 'numba' : Runs the operation through JIT compiled code from numba.

  • None : Defaults to 'cython' or globally setting compute.use_numba

New in version 1.4.0.

engine_kwargs : dict, default None (Not supported in Dask)
  • For 'cython' engine, there are no accepted engine_kwargs

  • For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False}

New in version 1.4.0.

Returns:
pandas.Series or pandas.DataFrame

See also

Series.groupby

Apply a function groupby to a Series.

DataFrame.groupby

Apply a function groupby to each row or column of a DataFrame.

Examples

>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2],  
...                    'B': [np.nan, 2, 3, 4, 5],
...                    'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])

Groupby one column and return the mean of the remaining columns in each group.

>>> df.groupby('A').mean()  
     B         C
A
1  3.0  1.333333
2  4.0  1.500000

Groupby two columns and return the mean of the remaining column.

>>> df.groupby(['A', 'B']).mean()  
         C
A B
1 2.0  2.0
  4.0  1.0
2 3.0  1.0
  5.0  2.0

Groupby one column and return the mean of only particular column in the group.

>>> df.groupby('A')['B'].mean()  
A
1    3.0
2    4.0
Name: B, dtype: float64
median(split_every=None, split_out=1, shuffle_method=None, numeric_only=_NoDefault.no_default)#

Compute median of groups, excluding missing values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.median.

Some inconsistencies with the Dask version may exist.

For multiple groupings, the result index will be a MultiIndex.

Parameters:
numeric_only : bool, default False

Include only float, int, boolean columns.

Changed in version 2.0.0: numeric_only no longer accepts None and defaults to False.

Returns:
Series or DataFrame

Median of values within each group.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'a', 'b', 'b', 'b']  
>>> ser = pd.Series([7, 2, 8, 4, 3, 3], index=lst)  
>>> ser  
a     7
a     2
a     8
b     4
b     3
b     3
dtype: int64
>>> ser.groupby(level=0).median()  
a    7.0
b    3.0
dtype: float64

For DataFrameGroupBy:

>>> data = {'a': [1, 3, 5, 7, 7, 8, 3], 'b': [1, 4, 8, 4, 4, 2, 1]}  
>>> df = pd.DataFrame(data, index=['dog', 'dog', 'dog',  
...                   'mouse', 'mouse', 'mouse', 'mouse'])
>>> df  
         a  b
  dog    1  1
  dog    3  4
  dog    5  8
mouse    7  4
mouse    7  4
mouse    8  2
mouse    3  1
>>> df.groupby(level=0).median()  
         a    b
dog    3.0  4.0
mouse  7.0  3.0

For Resampler:

>>> ser = pd.Series([1, 2, 3, 3, 4, 5],  
...                 index=pd.DatetimeIndex(['2023-01-01',
...                                         '2023-01-10',
...                                         '2023-01-15',
...                                         '2023-02-01',
...                                         '2023-02-10',
...                                         '2023-02-15']))
>>> ser.resample('MS').median()  
2023-01-01    2.0
2023-02-01    4.0
Freq: MS, dtype: float64
min(split_every=None, split_out=1)#

Compute min of group values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.min.

Some inconsistencies with the Dask version may exist.

Parameters:
numeric_only : bool, default False

Include only float, int, boolean columns.

Changed in version 2.0.0: numeric_only no longer accepts None.

min_count : int, default -1 (Not supported in Dask)

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.

engine : str, default None (Not supported in Dask)
  • 'cython' : Runs rolling apply through C-extensions from cython.

  • 'numba' : Runs rolling apply through JIT compiled code from numba.

    Only available when raw is set to True.

  • None : Defaults to 'cython' or globally setting compute.use_numba

engine_kwargs : dict, default None (Not supported in Dask)
  • For 'cython' engine, there are no accepted engine_kwargs

  • For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to both the func and the apply groupby aggregation.

Returns:
Series or DataFrame

Computed min of values within each group.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'b', 'b']  
>>> ser = pd.Series([1, 2, 3, 4], index=lst)  
>>> ser  
a    1
a    2
b    3
b    4
dtype: int64
>>> ser.groupby(level=0).min()  
a    1
b    3
dtype: int64

For DataFrameGroupBy:

>>> data = [[1, 8, 2], [1, 2, 5], [2, 5, 8], [2, 6, 9]]  
>>> df = pd.DataFrame(data, columns=["a", "b", "c"],  
...                   index=["tiger", "leopard", "cheetah", "lion"])
>>> df  
          a  b  c
  tiger   1  8  2
leopard   1  2  5
cheetah   2  5  8
   lion   2  6  9
>>> df.groupby("a").min()  
    b  c
a
1   2  2
2   5  8
prod(split_every=None, split_out=1, shuffle_method=None, min_count=None, numeric_only=_NoDefault.no_default)#

Compute prod of group values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.prod.

Some inconsistencies with the Dask version may exist.

Parameters:
numeric_only : bool, default False

Include only float, int, boolean columns.

Changed in version 2.0.0: numeric_only no longer accepts None.

min_count : int, default 0

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.

Returns:
Series or DataFrame

Computed prod of values within each group.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'b', 'b']  
>>> ser = pd.Series([1, 2, 3, 4], index=lst)  
>>> ser  
a    1
a    2
b    3
b    4
dtype: int64
>>> ser.groupby(level=0).prod()  
a    2
b   12
dtype: int64

For DataFrameGroupBy:

>>> data = [[1, 8, 2], [1, 2, 5], [2, 5, 8], [2, 6, 9]]  
>>> df = pd.DataFrame(data, columns=["a", "b", "c"],  
...                   index=["tiger", "leopard", "cheetah", "lion"])
>>> df  
          a  b  c
  tiger   1  8  2
leopard   1  2  5
cheetah   2  5  8
   lion   2  6  9
>>> df.groupby("a").prod()  
     b    c
a
1   16   10
2   30   72
rolling(window, min_periods=None, center=False, win_type=None, axis=0)#

Provides rolling transformations.

Note

Since MultiIndexes are not well supported in Dask, this method returns a dataframe with the same index as the original data. The groupby column is not added as the first level of the index like pandas does.

This method works differently from other groupby methods. It does a groupby on each partition (plus some overlap). This means that the output has the same shape and number of partitions as the original.

Parameters:
window : str, offset

Size of the moving window. This is the number of observations used for calculating the statistic. Data must have a DatetimeIndex

min_periods : int, default None

Minimum number of observations in window required to have a value (otherwise result is NA).

center : boolean, default False

Set the labels at the center of the window.

win_type : string, default None

Provide a window type. The recognized window types are identical to pandas.

axis : int, default 0
Returns:
a Rolling object on which to call a method to compute a statistic

Examples

>>> import dask
>>> ddf = dask.datasets.timeseries(freq="1h")
>>> result = ddf.groupby("name").x.rolling('1D').max()
shift(periods=1, freq=_NoDefault.no_default, axis=_NoDefault.no_default, fill_value=_NoDefault.no_default, meta=_NoDefault.no_default)#

Parallel version of pandas GroupBy.shift

This mimics the pandas version except for the following:

If the grouper does not align with the index then this causes a full shuffle. The order of rows within each group may not be preserved.

Parameters:
periods : Delayed, Scalar or int, default 1

Number of periods to shift.

freq : Delayed, Scalar or str, optional

Frequency string.

axis : axis to shift, default 0

Shift direction.

fill_value : Scalar, Delayed or object, optional

The scalar value to use for newly introduced missing values.

meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional

An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.

Returns:
shifted : Series or DataFrame shifted within each group.

Examples

>>> import dask
>>> ddf = dask.datasets.timeseries(freq="1h")
>>> result = ddf.groupby("name").shift(1, meta={"id": int, "x": float, "y": float})
size(split_every=None, split_out=1, shuffle_method=None)#

Compute group sizes.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.size.

Some inconsistencies with the Dask version may exist.

Returns:
DataFrame or Series

Number of rows in each group as a Series if as_index is True or a DataFrame if as_index is False.

See also

Series.groupby

Apply a function groupby to a Series.

DataFrame.groupby

Apply a function groupby to each row or column of a DataFrame.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'b']  
>>> ser = pd.Series([1, 2, 3], index=lst)  
>>> ser  
a     1
a     2
b     3
dtype: int64
>>> ser.groupby(level=0).size()  
a    2
b    1
dtype: int64
>>> data = [[1, 2, 3], [1, 5, 6], [7, 8, 9]]  
>>> df = pd.DataFrame(data, columns=["a", "b", "c"],  
...                   index=["owl", "toucan", "eagle"])
>>> df  
        a  b  c
owl     1  2  3
toucan  1  5  6
eagle   7  8  9
>>> df.groupby("a").size()  
a
1    2
7    1
dtype: int64

For Resampler:

>>> ser = pd.Series([1, 2, 3], index=pd.DatetimeIndex(  
...                 ['2023-01-01', '2023-01-15', '2023-02-01']))
>>> ser  
2023-01-01    1
2023-01-15    2
2023-02-01    3
dtype: int64
>>> ser.resample('MS').size()  
2023-01-01    2
2023-02-01    1
Freq: MS, dtype: int64
std(split_every=None, split_out=1)#

Compute standard deviation of groups, excluding missing values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.std.

Some inconsistencies with the Dask version may exist.

For multiple groupings, the result index will be a MultiIndex.

Parameters:
ddof : int, default 1

Degrees of freedom.

engine : str, default None (Not supported in Dask)
  • 'cython' : Runs the operation through C-extensions from cython.

  • 'numba' : Runs the operation through JIT compiled code from numba.

  • None : Defaults to 'cython' or globally setting compute.use_numba

New in version 1.4.0.

engine_kwargs : dict, default None (Not supported in Dask)
  • For 'cython' engine, there are no accepted engine_kwargs

  • For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False}

New in version 1.4.0.

numeric_only : bool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Changed in version 2.0.0: numeric_only now defaults to False.

Returns:
Series or DataFrame

Standard deviation of values within each group.

See also

Series.groupby

Apply a function groupby to a Series.

DataFrame.groupby

Apply a function groupby to each row or column of a DataFrame.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'a', 'b', 'b', 'b']  
>>> ser = pd.Series([7, 2, 8, 4, 3, 3], index=lst)  
>>> ser  
a     7
a     2
a     8
b     4
b     3
b     3
dtype: int64
>>> ser.groupby(level=0).std()  
a    3.21455
b    0.57735
dtype: float64

For DataFrameGroupBy:

>>> data = {'a': [1, 3, 5, 7, 7, 8, 3], 'b': [1, 4, 8, 4, 4, 2, 1]}  
>>> df = pd.DataFrame(data, index=['dog', 'dog', 'dog',  
...                   'mouse', 'mouse', 'mouse', 'mouse'])
>>> df  
         a  b
  dog    1  1
  dog    3  4
  dog    5  8
mouse    7  4
mouse    7  4
mouse    8  2
mouse    3  1
>>> df.groupby(level=0).std()  
              a         b
dog    2.000000  3.511885
mouse  2.217356  1.500000
sum(split_every=None, split_out=1)#

Compute sum of group values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.sum.

Some inconsistencies with the Dask version may exist.

Parameters:
numeric_only : bool, default False

Include only float, int, boolean columns.

Changed in version 2.0.0: numeric_only no longer accepts None.

min_count : int, default 0

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.

engine : str, default None (Not supported in Dask)
  • 'cython' : Runs rolling apply through C-extensions from cython.

  • 'numba' : Runs rolling apply through JIT compiled code from numba.

    Only available when raw is set to True.

  • None : Defaults to 'cython' or globally setting compute.use_numba

engine_kwargs : dict, default None (Not supported in Dask)
  • For 'cython' engine, there are no accepted engine_kwargs

  • For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to both the func and the apply groupby aggregation.

Returns:
Series or DataFrame

Computed sum of values within each group.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'b', 'b']  
>>> ser = pd.Series([1, 2, 3, 4], index=lst)  
>>> ser  
a    1
a    2
b    3
b    4
dtype: int64
>>> ser.groupby(level=0).sum()  
a    3
b    7
dtype: int64

For DataFrameGroupBy:

>>> data = [[1, 8, 2], [1, 2, 5], [2, 5, 8], [2, 6, 9]]  
>>> df = pd.DataFrame(data, columns=["a", "b", "c"],  
...                   index=["tiger", "leopard", "cheetah", "lion"])
>>> df  
          a  b  c
  tiger   1  8  2
leopard   1  2  5
cheetah   2  5  8
   lion   2  6  9
>>> df.groupby("a").sum()  
     b   c
a
1   10   7
2   11  17
transform(func, *args, **kwargs)#

Parallel version of pandas GroupBy.transform

This mimics the pandas version except for the following:

  1. If the grouper does not align with the index then this causes a full shuffle. The order of rows within each group may not be preserved.

  2. Dask’s GroupBy.transform is not appropriate for aggregations. For custom aggregations, use dask.dataframe.groupby.Aggregation.

Warning

Pandas’ groupby-transform can be used to apply arbitrary functions, including aggregations that result in one row per group. Dask’s groupby-transform will apply func once on each group, doing a shuffle if needed, such that each group is contained in one partition. When func is a reduction, you’ll end up with one row per group. To apply a custom aggregation with Dask, use dask.dataframe.groupby.Aggregation.

Parameters:
func: function

Function to apply

args, kwargs : Scalar, Delayed or object

Arguments and keywords to pass to the function.

meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional

An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.

Returns:
applied : Series or DataFrame depending on columns keyword
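
A minimal sketch of transform with an explicit meta (the frame ddf, its column names, and the centering function are illustrative assumptions):

>>> ddf.groupby("key")[["x"]].transform(
...     lambda g: g - g.mean(), meta={"x": "float64"}
... )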
var(split_every=None, split_out=1)#

Compute variance of groups, excluding missing values.

This docstring was copied from pandas.core.groupby.groupby.GroupBy.var.

Some inconsistencies with the Dask version may exist.

For multiple groupings, the result index will be a MultiIndex.

Parameters:
ddof : int, default 1

Degrees of freedom.

engine : str, default None (Not supported in Dask)
  • 'cython' : Runs the operation through C-extensions from cython.

  • 'numba' : Runs the operation through JIT compiled code from numba.

  • None : Defaults to 'cython' or globally setting compute.use_numba

New in version 1.4.0.

engine_kwargs : dict, default None (Not supported in Dask)
  • For 'cython' engine, there are no accepted engine_kwargs

  • For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False}

New in version 1.4.0.

numeric_only : bool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Changed in version 2.0.0: numeric_only now defaults to False.

Returns:
Series or DataFrame

Variance of values within each group.

See also

Series.groupby

Apply a function groupby to a Series.

DataFrame.groupby

Apply a function groupby to each row or column of a DataFrame.

Examples

For SeriesGroupBy:

>>> lst = ['a', 'a', 'a', 'b', 'b', 'b']  
>>> ser = pd.Series([7, 2, 8, 4, 3, 3], index=lst)  
>>> ser  
a     7
a     2
a     8
b     4
b     3
b     3
dtype: int64
>>> ser.groupby(level=0).var()  
a    10.333333
b     0.333333
dtype: float64

For DataFrameGroupBy:

>>> data = {'a': [1, 3, 5, 7, 7, 8, 3], 'b': [1, 4, 8, 4, 4, 2, 1]}  
>>> df = pd.DataFrame(data, index=['dog', 'dog', 'dog',  
...                   'mouse', 'mouse', 'mouse', 'mouse'])
>>> df  
         a  b
  dog    1  1
  dog    3  4
  dog    5  8
mouse    7  4
mouse    7  4
mouse    8  2
mouse    3  1
>>> df.groupby(level=0).var()  
              a          b
dog    4.000000  12.333333
mouse  4.916667   2.250000
dask_cudf.groupby_agg(*args, **kwargs)#

DataFrames and Series#

The core distributed objects provided by Dask-cuDF are the DataFrame and Series. These inherit respectively from dask.dataframe.DataFrame and dask.dataframe.Series, and so the API is essentially identical. The full API is provided below.
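
A minimal sketch of constructing and computing with one of these objects (the column names are illustrative):

>>> import cudf
>>> import dask_cudf
>>> ddf = dask_cudf.from_cudf(
...     cudf.DataFrame({"a": list(range(10)), "b": list(range(10, 20))}),
...     npartitions=2,
... )
>>> int(ddf.a.sum().compute())  # same API as dask.dataframe, executed on the GPU
45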

class dask_cudf.DataFrame(expr)#

Bases: DataFrame, CudfFrameBase

Attributes

divisions

Tuple of npartitions + 1 values, in ascending order, marking the lower/upper bounds of each partition's index.

dtypes

Return data types

iloc

Purely integer-location based indexing for selection by position.

index

Return dask Index instance

known_divisions

Whether the divisions are known.

loc

Purely label-location based indexer for selection by label.

ndim

Return dimensionality

npartitions

Return number of partitions

partitions

Slice dataframe by partitions

size

Size of the Series or DataFrame as a Delayed object.

values

Return a dask.array of the values of this dataframe

axes

columns

dask

empty

expr

nbytes

shape

Methods

abs()

Return a Series/DataFrame with absolute numeric value of each element.

add_prefix(prefix)

Prefix labels with string prefix.

add_suffix(suffix)

Suffix labels with string suffix.

align(other[, join, axis, fill_value])

Align two objects on their axes with the specified join method.

all([axis, skipna, split_every])

Return whether all elements are True, potentially over an axis.

analyze([filename, format])

Outputs statistics about every node in the expression.

any([axis, skipna, split_every])

Return whether any element is True, potentially over an axis.

apply(function, *args[, meta, axis])

Parallel version of pandas.DataFrame.apply

assign(**pairs)

Assign new columns to a DataFrame.

astype(dtypes)

Cast a pandas object to a specified dtype dtype.

bfill([axis, limit])

Fill NA/NaN values by using the next valid observation to fill the gap.

categorize([columns, index, split_every])

Convert columns of the DataFrame to category dtype.

clear_divisions()

Forget division information.

clip([lower, upper, axis])

Trim values at input threshold(s).

combine(other, func[, fill_value, overwrite])

Perform column-wise combine with another DataFrame.

combine_first(other)

Update null elements with value in the same location in other.

compute([fuse])

Compute this DataFrame.

compute_current_divisions([col, set_divisions])

Compute the current divisions of the DataFrame.

copy([deep])

Make a copy of the dataframe

corr([method, min_periods, numeric_only, ...])

Compute pairwise correlation of columns, excluding NA/null values.

count([axis, numeric_only, split_every])

Count non-NA cells for each column or row.

cov([min_periods, numeric_only, split_every])

Compute pairwise covariance of columns, excluding NA/null values.

cummax([axis, skipna])

Return cumulative maximum over a DataFrame or Series axis.

cummin([axis, skipna])

Return cumulative minimum over a DataFrame or Series axis.

cumprod([axis, skipna])

Return cumulative product over a DataFrame or Series axis.

cumsum([axis, skipna])

Return cumulative sum over a DataFrame or Series axis.

describe([split_every, percentiles, ...])

Generate descriptive statistics.

diff([periods, axis])

First discrete difference of element.

dot(other[, meta])

Compute the dot product between the Series and the columns of other.

drop([labels, axis, columns, errors])

Drop specified labels from rows or columns.

drop_duplicates([subset, split_every, ...])

Return DataFrame with duplicate rows removed.

dropna([how, subset, thresh])

Remove missing values.

enforce_runtime_divisions()

Enforce the current divisions at runtime.

eval(expr, **kwargs)

Evaluate a string describing operations on DataFrame columns.

explain([stage, format])

Create a graph representation of the Expression.

explode(column)

Transform each element of a list-like to a row, replicating index values.

ffill([axis, limit])

Fill NA/NaN values by propagating the last valid observation to next valid.

fillna([value, axis])

Fill NA/NaN values using the specified method.

from_dict(*args, **kwargs)

Construct a Dask DataFrame from a Python Dictionary

get_partition(n)

Get a dask DataFrame/Series representing the nth partition.

groupby(by[, group_keys, sort, observed, dropna])

Group DataFrame using a mapper or by a Series of columns.

head([n, npartitions, compute])

First n rows of the dataset

idxmax([axis, skipna, numeric_only, split_every])

Return index of first occurrence of maximum over requested axis.

idxmin([axis, skipna, numeric_only, split_every])

Return index of first occurrence of minimum over requested axis.

info([buf, verbose, memory_usage])

Concise summary of a Dask DataFrame

isin(values)

Whether each element in the DataFrame is contained in values.

isna()

Detect missing values.

isnull()

DataFrame.isnull is an alias for DataFrame.isna.

items()

Iterate over (column name, Series) pairs.

iterrows()

Iterate over DataFrame rows as (index, Series) pairs.

itertuples([index, name])

Iterate over DataFrame rows as namedtuples.

join(other[, on, how, lsuffix, rsuffix, ...])

Join columns of another DataFrame.

kurt([axis, fisher, bias, nan_policy, ...])

Return unbiased kurtosis over requested axis.

kurtosis([axis, fisher, bias, nan_policy, ...])

Return unbiased kurtosis over requested axis.

map_overlap(func, before, after, *args[, ...])

Apply a function to each partition, sharing rows with adjacent partitions.

map_partitions(func, *args[, meta, ...])

Apply a Python function to each partition

mask(cond[, other])

Replace values where the condition is True.

max([axis, skipna, numeric_only, split_every])

Return the maximum of the values over the requested axis.

mean([axis, skipna, numeric_only, split_every])

Return the mean of the values over the requested axis.

median([axis, numeric_only])

Return the median of the values over the requested axis.

median_approximate([axis, method, numeric_only])

Return the approximate median of the values over the requested axis.

melt([id_vars, value_vars, var_name, ...])

Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.

memory_usage([deep, index])

Return the memory usage of each column in bytes.

memory_usage_per_partition([index, deep])

Return the memory usage of each partition

merge(right[, how, on, left_on, right_on, ...])

Merge the DataFrame with another DataFrame

min([axis, skipna, numeric_only, split_every])

Return the minimum of the values over the requested axis.

mode([dropna, split_every, numeric_only])

Get the mode(s) of each element along the selected axis.

nlargest([n, columns, split_every])

Return the first n rows ordered by columns in descending order.

notnull()

DataFrame.notnull is an alias for DataFrame.notna.

nsmallest([n, columns, split_every])

Return the first n rows ordered by columns in ascending order.

nunique([axis, dropna, split_every])

Count number of distinct elements in specified axis.

nunique_approx([split_every])

Approximate number of unique rows.

optimize([fuse])

Optimizes the DataFrame.

persist([fuse])

Persist this dask collection into memory

pipe(func, *args, **kwargs)

Apply chainable functions that expect Series or DataFrames.

pivot_table(index, columns, values[, aggfunc])

Create a spreadsheet-style pivot table as a DataFrame.

pop(item)

Return item and drop from frame.

pprint()

Outputs a string representation of the DataFrame.

prod([axis, skipna, numeric_only, ...])

Return the product of the values over the requested axis.

product([axis, skipna, numeric_only, ...])

Return the product of the values over the requested axis.

quantile([q, axis, numeric_only, method])

Approximate row-wise and precise column-wise quantiles of DataFrame

query(expr, **kwargs)

Filter dataframe with complex expression

random_split(frac[, random_state, shuffle])

Pseudorandomly split dataframe into different pieces row-wise

reduction(chunk[, aggregate, combine, meta, ...])

Generic row-wise reductions.

rename([index, columns])

Rename columns or index labels.

rename_axis([mapper, index, columns, axis])

Set the name of the axis for the index or columns.

repartition([divisions, npartitions, ...])

Repartition a collection

replace([to_replace, value, regex])

Replace values given in to_replace with value.

resample(rule[, closed, label])

Resample time-series data.

reset_index([drop])

Reset the index to the default index.

rolling(window, **kwargs)

Provides rolling transformations.

round([decimals])

Round a DataFrame to a variable number of decimal places.

sample([n, frac, replace, random_state])

Random sample of items

select_dtypes([include, exclude])

Return a subset of the DataFrame's columns based on the column dtypes.

sem([axis, skipna, ddof, split_every, ...])

Return unbiased standard error of the mean over requested axis.

set_index(*args[, divisions])

Set the DataFrame index (row labels) using an existing column.

shift([periods, freq, axis])

Shift index by desired number of periods with an optional time freq.

shuffle([on, ignore_index, npartitions, ...])

Rearrange DataFrame into new partitions

skew([axis, bias, nan_policy, numeric_only])

Return unbiased skew over requested axis.

sort_values(by[, npartitions, ascending, ...])

Sort the dataset by a single column.

squeeze([axis])

Squeeze 1 dimensional axis objects into scalars.

std([axis, skipna, ddof, numeric_only, ...])

Return sample standard deviation over requested axis.

sum([axis, skipna, numeric_only, min_count, ...])

Return the sum of the values over the requested axis.

tail([n, compute])

Last n rows of the dataset

to_backend([backend])

Move to a new DataFrame backend

to_bag([index, format])

Create a Dask Bag from a Series

to_csv(filename, **kwargs)

See dd.to_csv docstring for more information

to_dask_array([lengths, meta, optimize])

Convert a dask DataFrame to a dask array.

to_dask_dataframe(**kwargs)

Create a dask.dataframe object from a dask_cudf object

to_delayed([optimize_graph])

Convert into a list of dask.delayed objects, one per partition.

to_hdf(path_or_buf, key[, mode, append])

See dd.to_hdf docstring for more information

to_html([max_rows])

Render a DataFrame as an HTML table.

to_json(filename, *args, **kwargs)

See dd.to_json docstring for more information

to_legacy_dataframe([optimize])

Convert to a legacy dask-dataframe collection

to_orc(*args, **kwargs)

See dd.to_orc docstring for more information

to_string([max_rows])

Render a DataFrame to a console-friendly tabular output.

to_timestamp([freq, how])

Cast to DatetimeIndex of timestamps, at beginning of period.

var([axis, skipna, ddof, numeric_only, ...])

Return unbiased variance over requested axis.

visualize([tasks])

Visualize the expression or task graph

where(cond[, other])

Replace values where the condition is False.

add

div

divide

eq

floordiv

ge

gt

le

lower_once

lt

map

mod

mul

ne

pow

radd

rdiv

read_text

rfloordiv

rmod

rmul

rpow

rsub

rtruediv

simplify

sub

to_parquet

to_records

to_sql

truediv

abs()#

Return a Series/DataFrame with absolute numeric value of each element.

This docstring was copied from pandas.core.frame.DataFrame.abs.

Some inconsistencies with the Dask version may exist.

This function only applies to elements that are all numeric.

Returns:
abs

Series/DataFrame containing the absolute value of each element.

See also

numpy.absolute

Calculate the absolute value element-wise.

Notes

For complex inputs, 1.2 + 1j, the absolute value is \(\sqrt{ a^2 + b^2 }\).

Examples

Absolute numeric values in a Series.

>>> s = pd.Series([-1.10, 2, -3.33, 4])  
>>> s.abs()  
0    1.10
1    2.00
2    3.33
3    4.00
dtype: float64

Absolute numeric values in a Series with complex numbers.

>>> s = pd.Series([1.2 + 1j])  
>>> s.abs()  
0    1.56205
dtype: float64

Absolute numeric values in a Series with a Timedelta element.

>>> s = pd.Series([pd.Timedelta('1 days')])  
>>> s.abs()  
0   1 days
dtype: timedelta64[ns]

Select rows with data closest to a certain value using argsort (from StackOverflow).

>>> df = pd.DataFrame({  
...     'a': [4, 5, 6, 7],
...     'b': [10, 20, 30, 40],
...     'c': [100, 50, -30, -50]
... })
>>> df  
     a    b    c
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50
>>> df.loc[(df.c - 43).abs().argsort()]  
     a    b    c
1    5   20   50
0    4   10  100
2    6   30  -30
3    7   40  -50
add_prefix(prefix)#

Prefix labels with string prefix.

This docstring was copied from pandas.core.frame.DataFrame.add_prefix.

Some inconsistencies with the Dask version may exist.

For Series, the row labels are prefixed. For DataFrame, the column labels are prefixed.

Parameters:
prefix : str

The string to add before each label.

axis : {0 or ‘index’, 1 or ‘columns’, None}, default None (Not supported in Dask)

Axis to add prefix on

New in version 2.0.0.

Returns:
Series or DataFrame

New Series or DataFrame with updated labels.

See also

Series.add_suffix

Suffix row labels with string suffix.

DataFrame.add_suffix

Suffix column labels with string suffix.

Examples

>>> s = pd.Series([1, 2, 3, 4])  
>>> s  
0    1
1    2
2    3
3    4
dtype: int64
>>> s.add_prefix('item_')  
item_0    1
item_1    2
item_2    3
item_3    4
dtype: int64
>>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]})  
>>> df  
   A  B
0  1  3
1  2  4
2  3  5
3  4  6
>>> df.add_prefix('col_')  
     col_A  col_B
0       1       3
1       2       4
2       3       5
3       4       6
add_suffix(suffix)#

Suffix labels with string suffix.

This docstring was copied from pandas.core.frame.DataFrame.add_suffix.

Some inconsistencies with the Dask version may exist.

For Series, the row labels are suffixed. For DataFrame, the column labels are suffixed.

Parameters:
suffix : str

The string to add after each label.

axis : {0 or ‘index’, 1 or ‘columns’, None}, default None (Not supported in Dask)

Axis to add suffix on

New in version 2.0.0.

Returns:
Series or DataFrame

New Series or DataFrame with updated labels.

See also

Series.add_prefix

Prefix row labels with string prefix.

DataFrame.add_prefix

Prefix column labels with string prefix.

Examples

>>> s = pd.Series([1, 2, 3, 4])  
>>> s  
0    1
1    2
2    3
3    4
dtype: int64
>>> s.add_suffix('_item')  
0_item    1
1_item    2
2_item    3
3_item    4
dtype: int64
>>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]})  
>>> df  
   A  B
0  1  3
1  2  4
2  3  5
3  4  6
>>> df.add_suffix('_col')  
     A_col  B_col
0       1       3
1       2       4
2       3       5
3       4       6
align(other, join='outer', axis=None, fill_value=None)#

Align two objects on their axes with the specified join method.

This docstring was copied from pandas.core.frame.DataFrame.align.

Some inconsistencies with the Dask version may exist.

Join method is specified for each axis Index.

Parameters:
other : DataFrame or Series
join : {‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’

Type of alignment to be performed.

  • left: use only keys from left frame, preserve key order.

  • right: use only keys from right frame, preserve key order.

  • outer: use union of keys from both frames, sort keys lexicographically.

  • inner: use intersection of keys from both frames, preserve the order of the left keys.

axis : allowed axis of the other object, default None

Align on index (0), columns (1), or both (None).

level : int or level name, default None (Not supported in Dask)

Broadcast across a level, matching Index values on the passed MultiIndex level.

copy : bool, default True (Not supported in Dask)

Always returns new objects. If copy=False and no reindexing is required then original objects are returned.

Note

The copy keyword will change behavior in pandas 3.0. Copy-on-Write will be enabled by default, which means that all methods with a copy keyword will use a lazy copy mechanism to defer the copy and ignore the copy keyword. The copy keyword will be removed in a future version of pandas.

You can already get the future behavior and improvements through enabling copy on write pd.options.mode.copy_on_write = True

fill_value : scalar, default np.nan

Value to use for missing values. Defaults to NaN, but can be any “compatible” value.

method : {‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None (Not supported in Dask)

Method to use for filling holes in reindexed Series:

  • pad / ffill: propagate last valid observation forward to next valid.

  • backfill / bfill: use NEXT valid observation to fill gap.

Deprecated since version 2.1.

limit : int, default None (Not supported in Dask)

If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.

Deprecated since version 2.1.

fill_axis : {0 or ‘index’} for Series, {0 or ‘index’, 1 or ‘columns’} for DataFrame, default 0 (Not supported in Dask)

Filling axis, method and limit.

Deprecated since version 2.1.

broadcast_axis : {0 or ‘index’} for Series, {0 or ‘index’, 1 or ‘columns’} for DataFrame, default None (Not supported in Dask)

Broadcast values along this axis, if aligning two objects of different dimensions.

Deprecated since version 2.1.

Returns:
tuple of (Series/DataFrame, type of other)

Aligned objects.

Examples

>>> df = pd.DataFrame(  
...     [[1, 2, 3, 4], [6, 7, 8, 9]], columns=["D", "B", "E", "A"], index=[1, 2]
... )
>>> other = pd.DataFrame(  
...     [[10, 20, 30, 40], [60, 70, 80, 90], [600, 700, 800, 900]],
...     columns=["A", "B", "C", "D"],
...     index=[2, 3, 4],
... )
>>> df  
   D  B  E  A
1  1  2  3  4
2  6  7  8  9
>>> other  
    A    B    C    D
2   10   20   30   40
3   60   70   80   90
4  600  700  800  900

Align on columns:

>>> left, right = df.align(other, join="outer", axis=1)  
>>> left  
   A  B   C  D  E
1  4  2 NaN  1  3
2  9  7 NaN  6  8
>>> right  
    A    B    C    D   E
2   10   20   30   40 NaN
3   60   70   80   90 NaN
4  600  700  800  900 NaN

We can also align on the index:

>>> left, right = df.align(other, join="outer", axis=0)  
>>> left  
    D    B    E    A
1  1.0  2.0  3.0  4.0
2  6.0  7.0  8.0  9.0
3  NaN  NaN  NaN  NaN
4  NaN  NaN  NaN  NaN
>>> right  
    A      B      C      D
1    NaN    NaN    NaN    NaN
2   10.0   20.0   30.0   40.0
3   60.0   70.0   80.0   90.0
4  600.0  700.0  800.0  900.0

Finally, the default axis=None will align on both index and columns:

>>> left, right = df.align(other, join="outer", axis=None)  
>>> left  
     A    B   C    D    E
1  4.0  2.0 NaN  1.0  3.0
2  9.0  7.0 NaN  6.0  8.0
3  NaN  NaN NaN  NaN  NaN
4  NaN  NaN NaN  NaN  NaN
>>> right  
       A      B      C      D   E
1    NaN    NaN    NaN    NaN NaN
2   10.0   20.0   30.0   40.0 NaN
3   60.0   70.0   80.0   90.0 NaN
4  600.0  700.0  800.0  900.0 NaN
all(axis=0, skipna=True, split_every=False, **kwargs)#

Return whether all elements are True, potentially over an axis.

This docstring was copied from pandas.core.frame.DataFrame.all.

Some inconsistencies with the Dask version may exist.

Returns True unless there is at least one element within a series or along a Dataframe axis that is False or equivalent (e.g. zero or empty).

Parameters:
axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0

Indicate which axis or axes should be reduced. For Series this parameter is unused and defaults to 0.

  • 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

  • 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

  • None : reduce all axes, return a scalar.

bool_only : bool, default False (Not supported in Dask)

Include only boolean columns. Not implemented for Series.

skipna : bool, default True

Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be True, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.

**kwargs : any, default None

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Returns:
Series or DataFrame

If level is specified, then a DataFrame is returned; otherwise, a Series is returned.

See also

Series.all

Return True if all elements are True.

DataFrame.any

Return True if one (or more) elements are True.

Examples

Series

>>> pd.Series([True, True]).all()  
True
>>> pd.Series([True, False]).all()  
False
>>> pd.Series([], dtype="float64").all()  
True
>>> pd.Series([np.nan]).all()  
True
>>> pd.Series([np.nan]).all(skipna=False)  
True

DataFrames

Create a dataframe from a dictionary.

>>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]})  
>>> df  
   col1   col2
0  True   True
1  True  False

Default behaviour checks if values in each column all return True.

>>> df.all()  
col1     True
col2    False
dtype: bool

Specify axis='columns' to check if values in each row all return True.

>>> df.all(axis='columns')  
0     True
1    False
dtype: bool

Or axis=None for whether every value is True.

>>> df.all(axis=None)  
False
analyze(filename: str | None = None, format: str | None = None) → None#

Outputs statistics about every node in the expression.

analyze optimizes the expression and triggers a computation. It records statistics like memory usage per partition to analyze how data flow through the graph.

Warning

analyze adds plugins to the scheduler and the workers that have a non-trivial cost. This method should not be used in production workflows.

Parameters:
filename: str, None

File to store the graph representation.

format: str, default is png

File format for the graph representation.

Returns:
None, but writes a graph representation of the expression enriched with
statistics to disk.
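
Examples

A minimal sketch, assuming a distributed scheduler is running (analyze installs plugins on the scheduler and workers):

>>> ddf.analyze(filename="expr-stats", format="png")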
any(axis=0, skipna=True, split_every=False, **kwargs)#

Return whether any element is True, potentially over an axis.

This docstring was copied from pandas.core.frame.DataFrame.any.

Some inconsistencies with the Dask version may exist.

Returns False unless there is at least one element within a series or along a Dataframe axis that is True or equivalent (e.g. non-zero or non-empty).

Parameters:
axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0

Indicate which axis or axes should be reduced. For Series this parameter is unused and defaults to 0.

  • 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.

  • 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.

  • None : reduce all axes, return a scalar.

bool_only : bool, default False (Not supported in Dask)

Include only boolean columns. Not implemented for Series.

skipna : bool, default True

Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be False, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.

**kwargs : any, default None

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Returns:
Series or DataFrame

If level is specified, then a DataFrame is returned; otherwise, a Series is returned.

See also

numpy.any

Numpy version of this method.

Series.any

Return whether any element is True.

Series.all

Return whether all elements are True.

DataFrame.any

Return whether any element is True over requested axis.

DataFrame.all

Return whether all elements are True over requested axis.

Examples

Series

For Series input, the output is a scalar indicating whether any element is True.

>>> pd.Series([False, False]).any()  
False
>>> pd.Series([True, False]).any()  
True
>>> pd.Series([], dtype="float64").any()  
False
>>> pd.Series([np.nan]).any()  
False
>>> pd.Series([np.nan]).any(skipna=False)  
True

DataFrame

Whether each column contains at least one True element (the default).

>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]})  
>>> df  
   A  B  C
0  1  0  0
1  2  2  0
>>> df.any()  
A     True
B     True
C    False
dtype: bool

Aggregating over the columns.

>>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]})  
>>> df  
       A  B
0   True  1
1  False  2
>>> df.any(axis='columns')  
0    True
1    True
dtype: bool
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]})  
>>> df  
       A  B
0   True  1
1  False  0
>>> df.any(axis='columns')  
0    True
1    False
dtype: bool

Aggregating over the entire DataFrame with axis=None.

>>> df.any(axis=None)  
True

any for an empty DataFrame is an empty Series.

>>> pd.DataFrame([]).any()  
Series([], dtype: bool)
apply(function, *args, meta=_NoDefault.no_default, axis=0, **kwargs)#

Parallel version of pandas.DataFrame.apply

This mimics the pandas version except for the following:

  1. Only axis=1 is supported (and must be specified explicitly).

  2. The user should provide output metadata via the meta keyword.

Parameters:
func : function

Function to apply to each column/row

axis : {0 or ‘index’, 1 or ‘columns’}, default 0
  • 0 or ‘index’: apply function to each column (NOT SUPPORTED)

  • 1 or ‘columns’: apply function to each row

meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional

An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.

args : tuple

Positional arguments to pass to function in addition to the array/series

Additional keyword arguments will be passed as keywords to the function
Returns:
applied : Series or DataFrame

Examples

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> df = pd.DataFrame({'x': [1, 2, 3, 4, 5],
...                    'y': [1., 2., 3., 4., 5.]})
>>> ddf = dd.from_pandas(df, npartitions=2)

Apply a function row-wise, passing in extra arguments in args and kwargs:

>>> def myadd(row, a, b=1):
...     return row.sum() + a + b
>>> res = ddf.apply(myadd, axis=1, args=(2,), b=1.5)  

By default, dask tries to infer the output metadata by running your provided function on some fake data. This works well in many cases, but can sometimes be expensive, or even fail. To avoid this, you can manually specify the output metadata with the meta keyword. This can be specified in many forms, for more information see dask.dataframe.utils.make_meta.

Here we specify the output is a Series with name 'x', and dtype float64:

>>> res = ddf.apply(myadd, axis=1, args=(2,), b=1.5, meta=('x', 'f8'))

In the case where the metadata doesn’t change, you can also pass in the object itself directly:

>>> res = ddf.apply(lambda row: row + 1, axis=1, meta=ddf)
assign(**pairs)#

Assign new columns to a DataFrame.

This docstring was copied from pandas.core.frame.DataFrame.assign.

Some inconsistencies with the Dask version may exist.

Returns a new object with all original columns in addition to new ones. Existing columns that are re-assigned will be overwritten.

Parameters:
**kwargs : dict of {str: callable or Series}

The column names are keywords. If the values are callable, they are computed on the DataFrame and assigned to the new columns. The callable must not change input DataFrame (though pandas doesn’t check it). If the values are not callable, (e.g. a Series, scalar, or array), they are simply assigned.

Returns:
DataFrame

A new DataFrame with the new columns in addition to all the existing columns.

Notes

Assigning multiple columns within the same assign is possible. Later items in ‘**kwargs’ may refer to newly created or modified columns in ‘df’; items are computed and assigned into ‘df’ in order.

Examples

>>> df = pd.DataFrame({'temp_c': [17.0, 25.0]},  
...                   index=['Portland', 'Berkeley'])
>>> df  
          temp_c
Portland    17.0
Berkeley    25.0

Where the value is a callable, evaluated on df:

>>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32)  
          temp_c  temp_f
Portland    17.0    62.6
Berkeley    25.0    77.0

Alternatively, the same behavior can be achieved by directly referencing an existing Series or sequence:

>>> df.assign(temp_f=df['temp_c'] * 9 / 5 + 32)  
          temp_c  temp_f
Portland    17.0    62.6
Berkeley    25.0    77.0

You can create multiple columns within the same assign where one of the columns depends on another one defined within the same assign:

>>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32,  
...           temp_k=lambda x: (x['temp_f'] + 459.67) * 5 / 9)
          temp_c  temp_f  temp_k
Portland    17.0    62.6  290.15
Berkeley    25.0    77.0  298.15
astype(dtypes)#

Cast a pandas object to a specified dtype dtype.

This docstring was copied from pandas.core.frame.DataFrame.astype.

Some inconsistencies with the Dask version may exist.

Parameters:
dtype : str, data type, Series or Mapping of column name -> data type

Use a str, numpy.dtype, pandas.ExtensionDtype or Python type to cast entire pandas object to the same type. Alternatively, use a mapping, e.g. {col: dtype, …}, where col is a column label and dtype is a numpy.dtype or Python type to cast one or more of the DataFrame’s columns to column-specific types.

copy : bool, default True (Not supported in Dask)

Return a copy when copy=True (be very careful setting copy=False as changes to values then may propagate to other pandas objects).

Note

The copy keyword will change behavior in pandas 3.0. Copy-on-Write will be enabled by default, which means that all methods with a copy keyword will use a lazy copy mechanism to defer the copy and ignore the copy keyword. The copy keyword will be removed in a future version of pandas.

You can already get the future behavior and improvements through enabling copy on write pd.options.mode.copy_on_write = True

errors : {‘raise’, ‘ignore’}, default ‘raise’ (Not supported in Dask)

Control raising of exceptions on invalid data for provided dtype.

  • raise : allow exceptions to be raised

  • ignore : suppress exceptions. On error return original object.

Returns:
same type as caller

See also

to_datetime

Convert argument to datetime.

to_timedelta

Convert argument to timedelta.

to_numeric

Convert argument to a numeric type.

numpy.ndarray.astype

Cast a numpy array to a specified type.

Notes

Changed in version 2.0.0: Using astype to convert from timezone-naive dtype to timezone-aware dtype will raise an exception. Use Series.dt.tz_localize() instead.

Examples

Create a DataFrame:

>>> d = {'col1': [1, 2], 'col2': [3, 4]}  
>>> df = pd.DataFrame(data=d)  
>>> df.dtypes  
col1    int64
col2    int64
dtype: object

Cast all columns to int32:

>>> df.astype('int32').dtypes  
col1    int32
col2    int32
dtype: object

Cast col1 to int32 using a dictionary:

>>> df.astype({'col1': 'int32'}).dtypes  
col1    int32
col2    int64
dtype: object

Create a series:

>>> ser = pd.Series([1, 2], dtype='int32')  
>>> ser  
0    1
1    2
dtype: int32
>>> ser.astype('int64')  
0    1
1    2
dtype: int64

Convert to categorical type:

>>> ser.astype('category')  
0    1
1    2
dtype: category
Categories (2, int32): [1, 2]

Convert to ordered categorical type with custom ordering:

>>> from pandas.api.types import CategoricalDtype  
>>> cat_dtype = CategoricalDtype(  
...     categories=[2, 1], ordered=True)
>>> ser.astype(cat_dtype)  
0    1
1    2
dtype: category
Categories (2, int64): [2 < 1]

Create a series of dates:

>>> ser_date = pd.Series(pd.date_range('20200101', periods=3))  
>>> ser_date  
0   2020-01-01
1   2020-01-02
2   2020-01-03
dtype: datetime64[ns]
bfill(axis=0, limit=None)#

Fill NA/NaN values by using the next valid observation to fill the gap.

This docstring was copied from pandas.core.frame.DataFrame.bfill.

Some inconsistencies with the Dask version may exist.

Parameters:
axis : {0 or ‘index’} for Series, {0 or ‘index’, 1 or ‘columns’} for DataFrame

Axis along which to fill missing values. For Series this parameter is unused and defaults to 0.

inplace : bool, default False (Not supported in Dask)

If True, fill in-place. Note: this will modify any other views on this object (e.g., a no-copy slice for a column in a DataFrame).

limit : int, default None

If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.

limit_area : {None, ‘inside’, ‘outside’}, default None (Not supported in Dask)

If limit is specified, consecutive NaNs will be filled with this restriction.

  • None: No fill restriction.

  • ‘inside’: Only fill NaNs surrounded by valid values (interpolate).

  • ‘outside’: Only fill NaNs outside valid values (extrapolate).

New in version 2.2.0.

downcast : dict, default is None (Not supported in Dask)

A dict of item->dtype of what to downcast if possible, or the string ‘infer’ which will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible).

Deprecated since version 2.2.0.

Returns:
Series/DataFrame or None

Object with missing values filled or None if inplace=True.

Examples

For Series:

>>> s = pd.Series([1, None, None, 2])  
>>> s.bfill()  
0    1.0
1    2.0
2    2.0
3    2.0
dtype: float64
>>> s.bfill(limit=1)  
0    1.0
1    NaN
2    2.0
3    2.0
dtype: float64

With DataFrame:

>>> df = pd.DataFrame({'A': [1, None, None, 4], 'B': [None, 5, None, 7]})  
>>> df  
      A     B
0   1.0   NaN
1   NaN   5.0
2   NaN   NaN
3   4.0   7.0
>>> df.bfill()  
      A     B
0   1.0   5.0
1   4.0   5.0
2   4.0   7.0
3   4.0   7.0
>>> df.bfill(limit=1)  
      A     B
0   1.0   5.0
1   NaN   5.0
2   4.0   7.0
3   4.0   7.0
categorize(columns=None, index=None, split_every=None, **kwargs)#

Convert columns of the DataFrame to category dtype.

Warning

This method eagerly computes the categories of the chosen columns.

Parameters:
columns : list, optional

A list of column names to convert to categoricals. By default any column with an object dtype is converted to a categorical, and any unknown categoricals are made known.

index : bool, optional

Whether to categorize the index. By default, object indices are converted to categorical, and unknown categorical indices are made known. Set True to always categorize the index, False to never.

split_every : int, optional

Group partitions into groups of this size while performing a tree-reduction. If set to False, no tree-reduction will be used.

kwargs

Keyword arguments are passed on to compute.
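
Examples

A minimal sketch, assuming a DataFrame with an object-dtype column "name" (the column name is illustrative):

>>> import cudf
>>> import dask_cudf
>>> ddf = dask_cudf.from_cudf(cudf.DataFrame({"name": ["a", "b", "a"]}), npartitions=2)
>>> ddf = ddf.categorize(columns=["name"])

After this call, the categories of "name" are known, which operations on categorical columns can take advantage of.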

clear_divisions()#

Forget division information.

This is useful if the divisions are no longer meaningful.
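
For example, a minimal sketch (assuming an existing collection ddf):

>>> ddf = ddf.clear_divisions()
>>> ddf.known_divisions
False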

clip(lower=None, upper=None, axis=None, **kwargs)#

Trim values at input threshold(s).

This docstring was copied from pandas.core.frame.DataFrame.clip.

Some inconsistencies with the Dask version may exist.

Assigns values outside boundary to boundary values. Thresholds can be singular values or array like, and in the latter case the clipping is performed element-wise in the specified axis.

Parameters:
lower : float or array-like, default None

Minimum threshold value. All values below this threshold will be set to it. A missing threshold (e.g NA) will not clip the value.

upper : float or array-like, default None

Maximum threshold value. All values above this threshold will be set to it. A missing threshold (e.g NA) will not clip the value.

axis : {0 or ‘index’, 1 or ‘columns’, None}, default None

Align object with lower and upper along the given axis. For Series this parameter is unused and defaults to None.

inplace : bool, default False (Not supported in Dask)

Whether to perform the operation in place on the data.

*args, **kwargs

Additional keywords have no effect but might be accepted for compatibility with numpy.

Returns:
Series or DataFrame or None

Same type as calling object with the values outside the clip boundaries replaced or None if inplace=True.

See also

Series.clip

Trim values at input threshold in series.

DataFrame.clip

Trim values at input threshold in dataframe.

numpy.clip

Clip (limit) the values in an array.

Examples

>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}  
>>> df = pd.DataFrame(data)  
>>> df  
   col_0  col_1
0      9     -2
1     -3     -7
2      0      6
3     -1      8
4      5     -5

Clips per column using lower and upper thresholds:

>>> df.clip(-4, 6)  
   col_0  col_1
0      6     -2
1     -3     -4
2      0      6
3     -1      6
4      5     -4

Clips using specific lower and upper thresholds per column:

>>> df.clip([-2, -1], [4, 5])  
    col_0  col_1
0      4     -1
1     -2     -1
2      0      5
3     -1      5
4      4     -1

Clips using specific lower and upper thresholds per column element:

>>> t = pd.Series([2, -4, -1, 6, 3])  
>>> t  
0    2
1   -4
2   -1
3    6
4    3
dtype: int64
>>> df.clip(t, t + 4, axis=0)  
   col_0  col_1
0      6      2
1     -3     -4
2      0      3
3      6      8
4      5      3

Clips using specific lower threshold per column element, with missing values:

>>> t = pd.Series([2, -4, np.nan, 6, 3])  
>>> t  
0    2.0
1   -4.0
2    NaN
3    6.0
4    3.0
dtype: float64
>>> df.clip(t, axis=0)  
   col_0  col_1
0      9      2
1     -3     -4
2      0      6
3      6      8
4      5      3
combine(other, func, fill_value=None, overwrite=True)#

Perform column-wise combine with another DataFrame.

This docstring was copied from pandas.core.frame.DataFrame.combine.

Some inconsistencies with the Dask version may exist.

Combines a DataFrame with other DataFrame using func to element-wise combine columns. The row and column indexes of the resulting DataFrame will be the union of the two.

Parameters:
otherDataFrame

The DataFrame to merge column-wise.

func : function

Function that takes two series as inputs and returns a Series or a scalar. Used to merge the two dataframes column by column.

fill_value : scalar value, default None

The value to fill NaNs with prior to passing any column to the merge func.

overwrite : bool, default True

If True, columns in self that do not exist in other will be overwritten with NaNs.

Returns:
DataFrame

Combination of the provided DataFrames.

See also

DataFrame.combine_first

Combine two DataFrame objects and default to non-null values in frame calling the method.

Examples

Combine using a simple function that chooses the smaller column.

>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})  
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})  
>>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2  
>>> df1.combine(df2, take_smaller)  
   A  B
0  0  3
1  0  3

Example using a true element-wise combine function.

>>> df1 = pd.DataFrame({'A': [5, 0], 'B': [2, 4]})  
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})  
>>> df1.combine(df2, np.minimum)  
   A  B
0  1  2
1  0  3

Using fill_value fills Nones prior to passing the column to the merge function.

>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})  
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})  
>>> df1.combine(df2, take_smaller, fill_value=-5)  
   A    B
0  0 -5.0
1  0  4.0

However, if the same element in both dataframes is None, that None is preserved

>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]})  
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [None, 3]})  
>>> df1.combine(df2, take_smaller, fill_value=-5)  
    A    B
0  0 -5.0
1  0  3.0

Example that demonstrates the use of overwrite and behavior when the axis differ between the dataframes.

>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]})  
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [-10, 1], }, index=[1, 2])  
>>> df1.combine(df2, take_smaller)  
     A    B     C
0  NaN  NaN   NaN
1  NaN  3.0 -10.0
2  NaN  3.0   1.0
>>> df1.combine(df2, take_smaller, overwrite=False)  
     A    B     C
0  0.0  NaN   NaN
1  0.0  3.0 -10.0
2  NaN  3.0   1.0

Demonstrating the preference of the passed in dataframe.

>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1], }, index=[1, 2])  
>>> df2.combine(df1, take_smaller)  
   A    B   C
0  0.0  NaN NaN
1  0.0  3.0 NaN
2  NaN  3.0 NaN
>>> df2.combine(df1, take_smaller, overwrite=False)  
     A    B   C
0  0.0  NaN NaN
1  0.0  3.0 1.0
2  NaN  3.0 1.0
combine_first(other)#

Update null elements with value in the same location in other.

This docstring was copied from pandas.core.frame.DataFrame.combine_first.

Some inconsistencies with the Dask version may exist.

Combine two DataFrame objects by filling null values in one DataFrame with non-null values from other DataFrame. The row and column indexes of the resulting DataFrame will be the union of the two. The resulting dataframe contains the ‘first’ dataframe values and overrides the second one values where both first.loc[index, col] and second.loc[index, col] are not missing values, upon calling first.combine_first(second).

Parameters:
other : DataFrame

Provided DataFrame to use to fill null values.

Returns:
DataFrame

The result of combining the provided DataFrame with the other object.

See also

DataFrame.combine

Perform series-wise operation on two DataFrames using a given function.

Examples

>>> df1 = pd.DataFrame({'A': [None, 0], 'B': [None, 4]})  
>>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]})  
>>> df1.combine_first(df2)  
     A    B
0  1.0  3.0
1  0.0  4.0

Null values still persist if the location of that null value does not exist in other

>>> df1 = pd.DataFrame({'A': [None, 0], 'B': [4, None]})  
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1]}, index=[1, 2])  
>>> df1.combine_first(df2)  
     A    B    C
0  NaN  4.0  NaN
1  0.0  3.0  1.0
2  NaN  3.0  1.0
compute(fuse=True, **kwargs)#

Compute this DataFrame.

This turns a lazy Dask DataFrame into an in-memory DataFrame (a cudf.DataFrame for a Dask-cuDF collection). The entire dataset must fit into memory before calling this operation.

The optimizer runs over the DataFrame before triggering the computation. The optimizer injects a repartition operation that reduces the partition count to 1 to enable better optimization strategies.

Parameters:
fuse : bool, default True

Whether to fuse the expression tree before computing. Fusing significantly reduces the number of tasks and improves performance. It shouldn’t be disabled unless absolutely necessary.

kwargs

Extra keywords to forward to the base compute function.

See also

dask.compute
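
Examples

A minimal sketch; for a Dask-cuDF collection the result is an in-memory cudf object:

>>> import cudf
>>> import dask_cudf
>>> ddf = dask_cudf.from_cudf(cudf.DataFrame({"x": [1, 2, 3]}), npartitions=2)
>>> ddf.compute()
   x
0  1
1  2
2  3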
compute_current_divisions(col=None, set_divisions: bool = False)#

Compute the current divisions of the DataFrame.

This method triggers immediate computation. If you find yourself running this command repeatedly for the same dataframe, we recommend storing the result so you don’t have to rerun it.

If the column or index values overlap between partitions, raises ValueError. To prevent this, make sure the data are sorted by the column or index.

Parameters:
col : string, optional

Calculate the divisions for a non-index column by passing in the name of the column. If col is not specified, the index will be used to calculate divisions. In this case, if the divisions are already known, they will be returned immediately without computing.

set_divisions : bool, default False

Whether to set the computed divisions into the DataFrame. If False, the divisions of the DataFrame are unchanged.

Examples

>>> import dask
>>> ddf = dask.datasets.timeseries(start="2021-01-01", end="2021-01-07", freq="1h").clear_divisions()
>>> divisions = ddf.compute_current_divisions()
>>> print(divisions)  
(Timestamp('2021-01-01 00:00:00'),
 Timestamp('2021-01-02 00:00:00'),
 Timestamp('2021-01-03 00:00:00'),
 Timestamp('2021-01-04 00:00:00'),
 Timestamp('2021-01-05 00:00:00'),
 Timestamp('2021-01-06 00:00:00'),
 Timestamp('2021-01-06 23:00:00'))
>>> ddf.divisions = divisions
>>> ddf.known_divisions
True
>>> ddf = ddf.reset_index().clear_divisions()
>>> divisions = ddf.compute_current_divisions("timestamp")
>>> print(divisions)  
(Timestamp('2021-01-01 00:00:00'),
 Timestamp('2021-01-02 00:00:00'),
 Timestamp('2021-01-03 00:00:00'),
 Timestamp('2021-01-04 00:00:00'),
 Timestamp('2021-01-05 00:00:00'),
 Timestamp('2021-01-06 00:00:00'),
 Timestamp('2021-01-06 23:00:00'))
>>> ddf = ddf.set_index("timestamp", divisions=divisions, sorted=True)
copy(deep: bool = False)#

Make a copy of the dataframe

This is strictly a shallow copy of the underlying computational graph. It does not affect the underlying data.

Parameters:
deep : boolean, default False

The deep value must be False; it is declared as a parameter only for compatibility with third-party libraries like cuDF and pandas.
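
For example, a minimal sketch; the copy is effectively free because only the task graph is duplicated, not the data:

>>> ddf2 = ddf.copy()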

corr(method='pearson', min_periods=None, numeric_only=False, split_every=False)#

Compute pairwise correlation of columns, excluding NA/null values.

This docstring was copied from pandas.core.frame.DataFrame.corr.

Some inconsistencies with the Dask version may exist.

Parameters:
method : {‘pearson’, ‘kendall’, ‘spearman’} or callable

Method of correlation:

  • pearson : standard correlation coefficient

  • kendall : Kendall Tau correlation coefficient

  • spearman : Spearman rank correlation

  • callable: callable with input two 1d ndarrays and returning a float. Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior.

min_periods : int, optional

Minimum number of observations required per pair of columns to have a valid result. Currently only available for Pearson and Spearman correlation.

numeric_only : bool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Changed in version 2.0.0: The default value of numeric_only is now False.

Returns:
DataFrame

Correlation matrix.

See also

DataFrame.corrwith

Compute pairwise correlation with another DataFrame or Series.

Series.corr

Compute the correlation between two Series.

Notes

Pearson, Kendall and Spearman correlation are currently computed using pairwise complete observations.

Examples

>>> def histogram_intersection(a, b):  
...     v = np.minimum(a, b).sum().round(decimals=1)
...     return v
>>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],  
...                   columns=['dogs', 'cats'])
>>> df.corr(method=histogram_intersection)  
      dogs  cats
dogs   1.0   0.3
cats   0.3   1.0
>>> df = pd.DataFrame([(1, 1), (2, np.nan), (np.nan, 3), (4, 4)],  
...                   columns=['dogs', 'cats'])
>>> df.corr(min_periods=3)  
      dogs  cats
dogs   1.0   NaN
cats   NaN   1.0
count(axis=0, numeric_only=False, split_every=False)#

Count non-NA cells for each column or row.

This docstring was copied from pandas.core.frame.DataFrame.count.

Some inconsistencies with the Dask version may exist.

The values None, NaN, NaT, pandas.NA are considered NA.

Parameters:
axis : {0 or ‘index’, 1 or ‘columns’}, default 0

If 0 or ‘index’ counts are generated for each column. If 1 or ‘columns’ counts are generated for each row.

numeric_only : bool, default False

Include only float, int or boolean data.

Returns:
Series

For each column/row the number of non-NA/null entries.

See also

Series.count

Number of non-NA elements in a Series.

DataFrame.value_counts

Count unique combinations of columns.

DataFrame.shape

Number of DataFrame rows and columns (including NA elements).

DataFrame.isna

Boolean same-sized DataFrame showing places of NA elements.

Examples

Constructing DataFrame from a dictionary:

>>> df = pd.DataFrame({"Person":  
...                    ["John", "Myla", "Lewis", "John", "Myla"],
...                    "Age": [24., np.nan, 21., 33, 26],
...                    "Single": [False, True, True, True, False]})
>>> df  
   Person   Age  Single
0    John  24.0   False
1    Myla   NaN    True
2   Lewis  21.0    True
3    John  33.0    True
4    Myla  26.0   False

Notice the uncounted NA values:

>>> df.count()  
Person    5
Age       4
Single    5
dtype: int64

Counts for each row:

>>> df.count(axis='columns')  
0    3
1    2
2    3
3    3
4    3
dtype: int64
cov(min_periods=None, numeric_only=False, split_every=False)#

Compute pairwise covariance of columns, excluding NA/null values.

This docstring was copied from pandas.core.frame.DataFrame.cov.

Some inconsistencies with the Dask version may exist.

Compute the pairwise covariance among the series of a DataFrame. The returned data frame is the covariance matrix of the columns of the DataFrame.

Both NA and null values are automatically excluded from the calculation. (See the note below about bias from missing values.) A threshold can be set for the minimum number of observations for each value created. Comparisons with observations below this threshold will be returned as NaN.

This method is generally used for the analysis of time series data to understand the relationship between different measures across time.

Parameters:
min_periods : int, optional

Minimum number of observations required per pair of columns to have a valid result.

ddof : int, default 1 (Not supported in Dask)

Delta degrees of freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. This argument is applicable only when no nan is in the dataframe.

numeric_only : bool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Changed in version 2.0.0: The default value of numeric_only is now False.

Returns:
DataFrame

The covariance matrix of the series of the DataFrame.

See also

Series.cov

Compute covariance with another Series.

core.window.ewm.ExponentialMovingWindow.cov

Exponential weighted sample covariance.

core.window.expanding.Expanding.cov

Expanding sample covariance.

core.window.rolling.Rolling.cov

Rolling sample covariance.

Notes

Returns the covariance matrix of the DataFrame’s time series. The covariance is normalized by N-ddof.

For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.

However, for many applications this estimate may not be acceptable because the estimate covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimate correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more details.

Examples

>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)],  
...                   columns=['dogs', 'cats'])
>>> df.cov()  
          dogs      cats
dogs  0.666667 -1.000000
cats -1.000000  1.666667
>>> np.random.seed(42)  
>>> df = pd.DataFrame(np.random.randn(1000, 5),  
...                   columns=['a', 'b', 'c', 'd', 'e'])
>>> df.cov()  
          a         b         c         d         e
a  0.998438 -0.020161  0.059277 -0.008943  0.014144
b -0.020161  1.059352 -0.008543 -0.024738  0.009826
c  0.059277 -0.008543  1.010670 -0.001486 -0.000271
d -0.008943 -0.024738 -0.001486  0.921297 -0.013692
e  0.014144  0.009826 -0.000271 -0.013692  0.977795

Minimum number of periods

This method also supports an optional min_periods keyword that specifies the required minimum number of non-NA observations for each column pair in order to have a valid result:

>>> np.random.seed(42)  
>>> df = pd.DataFrame(np.random.randn(20, 3),  
...                   columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan  
>>> df.loc[df.index[5:10], 'b'] = np.nan  
>>> df.cov(min_periods=12)  
          a         b         c
a  0.316741       NaN -0.150812
b       NaN  1.248003  0.191417
c -0.150812  0.191417  0.895202
cummax(axis=0, skipna=True)#

Return cumulative maximum over a DataFrame or Series axis.

This docstring was copied from pandas.core.frame.DataFrame.cummax.

Some inconsistencies with the Dask version may exist.

Returns a DataFrame or Series of the same size containing the cumulative maximum.

Parameters:
axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The index or the name of the axis. 0 is equivalent to None or ‘index’. For Series this parameter is unused and defaults to 0.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

*args, **kwargs

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Returns:
Series or DataFrame

Return cumulative maximum of Series or DataFrame.

See also

core.window.expanding.Expanding.max

Similar functionality but ignores NaN values.

DataFrame.max

Return the maximum over DataFrame axis.

DataFrame.cummax

Return cumulative maximum over DataFrame axis.

DataFrame.cummin

Return cumulative minimum over DataFrame axis.

DataFrame.cumsum

Return cumulative sum over DataFrame axis.

DataFrame.cumprod

Return cumulative product over DataFrame axis.

Examples

Series

>>> s = pd.Series([2, np.nan, 5, -1, 0])  
>>> s  
0    2.0
1    NaN
2    5.0
3   -1.0
4    0.0
dtype: float64

By default, NA values are ignored.

>>> s.cummax()  
0    2.0
1    NaN
2    5.0
3    5.0
4    5.0
dtype: float64

To include NA values in the operation, use skipna=False

>>> s.cummax(skipna=False)  
0    2.0
1    NaN
2    NaN
3    NaN
4    NaN
dtype: float64

DataFrame

>>> df = pd.DataFrame([[2.0, 1.0],  
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                   columns=list('AB'))
>>> df  
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0

By default, iterates over rows and finds the maximum in each column. This is equivalent to axis=None or axis='index'.

>>> df.cummax()  
     A    B
0  2.0  1.0
1  3.0  NaN
2  3.0  1.0

To iterate over columns and find the maximum in each row, use axis=1

>>> df.cummax(axis=1)  
     A    B
0  2.0  2.0
1  3.0  NaN
2  1.0  1.0
cummin(axis=0, skipna=True)#

Return cumulative minimum over a DataFrame or Series axis.

This docstring was copied from pandas.core.frame.DataFrame.cummin.

Some inconsistencies with the Dask version may exist.

Returns a DataFrame or Series of the same size containing the cumulative minimum.

Parameters:
axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The index or the name of the axis. 0 is equivalent to None or ‘index’. For Series this parameter is unused and defaults to 0.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

*args, **kwargs

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Returns:
Series or DataFrame

Return cumulative minimum of Series or DataFrame.

See also

core.window.expanding.Expanding.min

Similar functionality but ignores NaN values.

DataFrame.min

Return the minimum over DataFrame axis.

DataFrame.cummax

Return cumulative maximum over DataFrame axis.

DataFrame.cummin

Return cumulative minimum over DataFrame axis.

DataFrame.cumsum

Return cumulative sum over DataFrame axis.

DataFrame.cumprod

Return cumulative product over DataFrame axis.

Examples

Series

>>> s = pd.Series([2, np.nan, 5, -1, 0])  
>>> s  
0    2.0
1    NaN
2    5.0
3   -1.0
4    0.0
dtype: float64

By default, NA values are ignored.

>>> s.cummin()  
0    2.0
1    NaN
2    2.0
3   -1.0
4   -1.0
dtype: float64

To include NA values in the operation, use skipna=False

>>> s.cummin(skipna=False)  
0    2.0
1    NaN
2    NaN
3    NaN
4    NaN
dtype: float64

DataFrame

>>> df = pd.DataFrame([[2.0, 1.0],  
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                   columns=list('AB'))
>>> df  
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0

By default, iterates over rows and finds the minimum in each column. This is equivalent to axis=None or axis='index'.

>>> df.cummin()  
     A    B
0  2.0  1.0
1  2.0  NaN
2  1.0  0.0

To iterate over columns and find the minimum in each row, use axis=1

>>> df.cummin(axis=1)  
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0
cumprod(axis=0, skipna=True, **kwargs)#

Return cumulative product over a DataFrame or Series axis.

This docstring was copied from pandas.core.frame.DataFrame.cumprod.

Some inconsistencies with the Dask version may exist.

Returns a DataFrame or Series of the same size containing the cumulative product.

Parameters:
axis : {0 or ‘index’, 1 or ‘columns’}, default 0

The index or the name of the axis. 0 is equivalent to None or ‘index’. For Series this parameter is unused and defaults to 0.

skipna : bool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

*args, **kwargs

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Returns:
Series or DataFrame

Return cumulative product of Series or DataFrame.

See also

core.window.expanding.Expanding.prod

Similar functionality but ignores NaN values.

DataFrame.prod

Return the product over DataFrame axis.

DataFrame.cummax

Return cumulative maximum over DataFrame axis.

DataFrame.cummin

Return cumulative minimum over DataFrame axis.

DataFrame.cumsum

Return cumulative sum over DataFrame axis.

DataFrame.cumprod

Return cumulative product over DataFrame axis.

Examples

Series

>>> s = pd.Series([2, np.nan, 5, -1, 0])  
>>> s  
0    2.0
1    NaN
2    5.0
3   -1.0
4    0.0
dtype: float64

By default, NA values are ignored.

>>> s.cumprod()  
0     2.0
1     NaN
2    10.0
3   -10.0
4    -0.0
dtype: float64

To include NA values in the operation, use skipna=False

>>> s.cumprod(skipna=False)  
0    2.0
1    NaN
2    NaN
3    NaN
4    NaN
dtype: float64

DataFrame

>>> df = pd.DataFrame([[2.0, 1.0],  
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                   columns=list('AB'))
>>> df  
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0

By default, iterates over rows and finds the product in each column. This is equivalent to axis=None or axis='index'.

>>> df.cumprod()  
     A    B
0  2.0  1.0
1  6.0  NaN
2  6.0  0.0

To iterate over columns and find the product in each row, use axis=1

>>> df.cumprod(axis=1)  
     A    B
0  2.0  2.0
1  3.0  NaN
2  1.0  0.0
cumsum(axis=0, skipna=True, **kwargs)#

Return cumulative sum over a DataFrame or Series axis.

This docstring was copied from pandas.core.frame.DataFrame.cumsum.

Some inconsistencies with the Dask version may exist.

Returns a DataFrame or Series of the same size containing the cumulative sum.

Parameters:
axis{0 or ‘index’, 1 or ‘columns’}, default 0

The index or the name of the axis. 0 is equivalent to None or ‘index’. For Series this parameter is unused and defaults to 0.

skipnabool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

*args, **kwargs

Additional keywords have no effect but might be accepted for compatibility with NumPy.

Returns:
Series or DataFrame

Return cumulative sum of Series or DataFrame.

See also

core.window.expanding.Expanding.sum

Similar functionality but ignores NaN values.

DataFrame.sum

Return the sum over DataFrame axis.

DataFrame.cummax

Return cumulative maximum over DataFrame axis.

DataFrame.cummin

Return cumulative minimum over DataFrame axis.

DataFrame.cumsum

Return cumulative sum over DataFrame axis.

DataFrame.cumprod

Return cumulative product over DataFrame axis.

Examples

Series

>>> s = pd.Series([2, np.nan, 5, -1, 0])  
>>> s  
0    2.0
1    NaN
2    5.0
3   -1.0
4    0.0
dtype: float64

By default, NA values are ignored.

>>> s.cumsum()  
0    2.0
1    NaN
2    7.0
3    6.0
4    6.0
dtype: float64

To include NA values in the operation, use skipna=False

>>> s.cumsum(skipna=False)  
0    2.0
1    NaN
2    NaN
3    NaN
4    NaN
dtype: float64

DataFrame

>>> df = pd.DataFrame([[2.0, 1.0],  
...                    [3.0, np.nan],
...                    [1.0, 0.0]],
...                   columns=list('AB'))
>>> df  
     A    B
0  2.0  1.0
1  3.0  NaN
2  1.0  0.0

By default, iterates over rows and finds the sum in each column. This is equivalent to axis=None or axis='index'.

>>> df.cumsum()  
     A    B
0  2.0  1.0
1  5.0  NaN
2  6.0  1.0

To iterate over columns and find the sum in each row, use axis=1

>>> df.cumsum(axis=1)  
     A    B
0  2.0  3.0
1  3.0  NaN
2  1.0  1.0
describe(split_every=False, percentiles=None, percentiles_method='default', include=None, exclude=None)#

Generate descriptive statistics.

This docstring was copied from pandas.core.frame.DataFrame.describe.

Some inconsistencies with the Dask version may exist.

Descriptive statistics include those that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values.

Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output will vary depending on what is provided. Refer to the notes below for more detail.

Parameters:
percentileslist-like of numbers, optional

The percentiles to include in the output. All should fall between 0 and 1. The default is [.25, .5, .75], which returns the 25th, 50th, and 75th percentiles.

include‘all’, list-like of dtypes or None (default), optional

A white list of data types to include in the result. Ignored for Series. Here are the options:

  • ‘all’ : All columns of the input will be included in the output.

  • A list-like of dtypes : Limits the results to the provided data types. To limit the result to numeric types submit numpy.number. To limit it instead to object columns submit the numpy.object data type. Strings can also be used in the style of select_dtypes (e.g. df.describe(include=['O'])). To select pandas categorical columns, use 'category'

  • None (default) : The result will include all numeric columns.

excludelist-like of dtypes or None (default), optional

A black list of data types to omit from the result. Ignored for Series. Here are the options:

  • A list-like of dtypes : Excludes the provided data types from the result. To exclude numeric types submit numpy.number. To exclude object columns submit the data type numpy.object. Strings can also be used in the style of select_dtypes (e.g. df.describe(exclude=['O'])). To exclude pandas categorical columns, use 'category'

  • None (default) : The result will exclude nothing.

Returns:
Series or DataFrame

Summary statistics of the Series or Dataframe provided.

See also

DataFrame.count

Count number of non-NA/null observations.

DataFrame.max

Maximum of the values in the object.

DataFrame.min

Minimum of the values in the object.

DataFrame.mean

Mean of the values.

DataFrame.std

Standard deviation of the observations.

DataFrame.select_dtypes

Subset of a DataFrame including/excluding columns based on their dtype.

Notes

For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile is the same as the median.

For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and freq. The top is the most common value. The freq is the most common value’s frequency. Timestamps also include the first and last items.

If multiple object values have the highest count, then the count and top results will be arbitrarily chosen from among those with the highest count.

For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric columns. If the dataframe consists only of object and categorical data without any numeric columns, the default is to return an analysis of both the object and categorical columns. If include='all' is provided as an option, the result will include a union of attributes of each type.

The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed for the output. The parameters are ignored when analyzing a Series.

Examples

Describing a numeric Series.

>>> s = pd.Series([1, 2, 3])  
>>> s.describe()  
count    3.0
mean     2.0
std      1.0
min      1.0
25%      1.5
50%      2.0
75%      2.5
max      3.0
dtype: float64

Describing a categorical Series.

>>> s = pd.Series(['a', 'a', 'b', 'c'])  
>>> s.describe()  
count     4
unique    3
top       a
freq      2
dtype: object

Describing a timestamp Series.

>>> s = pd.Series([  
...     np.datetime64("2000-01-01"),
...     np.datetime64("2010-01-01"),
...     np.datetime64("2010-01-01")
... ])
>>> s.describe()  
count                      3
mean     2006-09-01 08:00:00
min      2000-01-01 00:00:00
25%      2004-12-31 12:00:00
50%      2010-01-01 00:00:00
75%      2010-01-01 00:00:00
max      2010-01-01 00:00:00
dtype: object

Describing a DataFrame. By default only numeric fields are returned.

>>> df = pd.DataFrame({'categorical': pd.Categorical(['d', 'e', 'f']),  
...                    'numeric': [1, 2, 3],
...                    'object': ['a', 'b', 'c']
...                    })
>>> df.describe()  
       numeric
count      3.0
mean       2.0
std        1.0
min        1.0
25%        1.5
50%        2.0
75%        2.5
max        3.0

Describing all columns of a DataFrame regardless of data type.

>>> df.describe(include='all')  
       categorical  numeric object
count            3      3.0      3
unique           3      NaN      3
top              f      NaN      a
freq             1      NaN      1
mean           NaN      2.0    NaN
std            NaN      1.0    NaN
min            NaN      1.0    NaN
25%            NaN      1.5    NaN
50%            NaN      2.0    NaN
75%            NaN      2.5    NaN
max            NaN      3.0    NaN

Describing a column from a DataFrame by accessing it as an attribute.

>>> df.numeric.describe()  
count    3.0
mean     2.0
std      1.0
min      1.0
25%      1.5
50%      2.0
75%      2.5
max      3.0
Name: numeric, dtype: float64

Including only numeric columns in a DataFrame description.

>>> df.describe(include=[np.number])  
       numeric
count      3.0
mean       2.0
std        1.0
min        1.0
25%        1.5
50%        2.0
75%        2.5
max        3.0

Including only string columns in a DataFrame description.

>>> df.describe(include=[object])  
       object
count       3
unique      3
top         a
freq        1

Including only categorical columns from a DataFrame description.

>>> df.describe(include=['category'])  
       categorical
count            3
unique           3
top              d
freq             1

Excluding numeric columns from a DataFrame description.

>>> df.describe(exclude=[np.number])  
       categorical object
count            3      3
unique           3      3
top              f      a
freq             1      1

Excluding object columns from a DataFrame description.

>>> df.describe(exclude=[object])  
       categorical  numeric
count            3      3.0
unique           3      NaN
top              f      NaN
freq             1      NaN
mean           NaN      2.0
std            NaN      1.0
min            NaN      1.0
25%            NaN      1.5
50%            NaN      2.0
75%            NaN      2.5
max            NaN      3.0
diff(periods=1, axis=0)#

First discrete difference of element.

This docstring was copied from pandas.core.frame.DataFrame.diff.

Some inconsistencies with the Dask version may exist.

Note

Pandas currently uses an object-dtype column to represent boolean data with missing values. This can cause issues for boolean-specific operations, like |. To enable boolean-specific operations, at the cost of metadata that doesn’t match pandas, use .astype(bool) after the shift.

Calculates the difference of a DataFrame element compared with another element in the DataFrame (default is element in previous row).

Parameters:
periodsint, default 1

Periods to shift for calculating difference, accepts negative values.

axis{0 or ‘index’, 1 or ‘columns’}, default 0

Take difference over rows (0) or columns (1).

Returns:
DataFrame

First differences of the Series.

See also

DataFrame.pct_change

Percent change over given number of periods.

DataFrame.shift

Shift index by desired number of periods with an optional time freq.

Series.diff

First discrete difference of object.

Notes

For boolean dtypes, this uses operator.xor() rather than operator.sub(). The result is calculated according to the current dtype in the DataFrame; however, the dtype of the result is always float64.

Examples

Difference with previous row

>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],  
...                    'b': [1, 1, 2, 3, 5, 8],
...                    'c': [1, 4, 9, 16, 25, 36]})
>>> df  
   a  b   c
0  1  1   1
1  2  1   4
2  3  2   9
3  4  3  16
4  5  5  25
5  6  8  36
>>> df.diff()  
     a    b     c
0  NaN  NaN   NaN
1  1.0  0.0   3.0
2  1.0  1.0   5.0
3  1.0  1.0   7.0
4  1.0  2.0   9.0
5  1.0  3.0  11.0

Difference with previous column

>>> df.diff(axis=1)  
    a  b   c
0 NaN  0   0
1 NaN -1   3
2 NaN -1   7
3 NaN -1  13
4 NaN  0  20
5 NaN  2  28

Difference with 3rd previous row

>>> df.diff(periods=3)  
     a    b     c
0  NaN  NaN   NaN
1  NaN  NaN   NaN
2  NaN  NaN   NaN
3  3.0  2.0  15.0
4  3.0  4.0  21.0
5  3.0  6.0  27.0

Difference with following row

>>> df.diff(periods=-1)  
     a    b     c
0 -1.0  0.0  -3.0
1 -1.0 -1.0  -5.0
2 -1.0 -1.0  -7.0
3 -1.0 -2.0  -9.0
4 -1.0 -3.0 -11.0
5  NaN  NaN   NaN

Overflow in input dtype

>>> df = pd.DataFrame({'a': [1, 0]}, dtype=np.uint8)  
>>> df.diff()  
       a
0    NaN
1  255.0
property divisions#

Tuple of npartitions + 1 values, in ascending order, marking the lower/upper bounds of each partition’s index. Divisions allow Dask to know which partition will contain a given value, significantly speeding up operations like loc, merge, and groupby by not having to search the full dataset.

Example: for divisions = (0, 10, 50, 100), there are three partitions, where the index in each partition contains values [0, 10), [10, 50), and [50, 100], respectively. Dask therefore knows df.loc[45] will be in the second partition.

When every item in divisions is None, the divisions are unknown. Most operations can still be performed, but some will be much slower, and a few may fail.

Setting divisions directly is not supported. Instead, use set_index, which sorts and splits the data as needed. See https://docs.dask.org/en/latest/dataframe-design.html#partitions.
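
Examples

A minimal sketch (assuming a monotonically increasing integer index; with Dask's default even splitting, eight rows in two partitions yield the divisions shown):

>>> import cudf
>>> import dask_cudf
>>> ddf = dask_cudf.from_cudf(cudf.DataFrame({"x": range(8)}), npartitions=2)
>>> ddf.divisions
(0, 4, 7)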

dot(other, meta=_NoDefault.no_default)#

Compute the dot product between the Series and the columns of other.

This docstring was copied from pandas.core.series.Series.dot.

Some inconsistencies with the Dask version may exist.

This method computes the dot product between the Series and another Series, between the Series and each column of a DataFrame, or between the Series and each column of an array.

It can also be called using self @ other.

Parameters:
otherSeries, DataFrame or array-like

The other object to compute the dot product with its columns.

Returns:
scalar, Series or numpy.ndarray

The dot product of the Series and other if other is a Series; a Series of dot products between the Series and each column of other if other is a DataFrame; or a numpy.ndarray of dot products between the Series and each column if other is a numpy array.

See also

DataFrame.dot

Compute the matrix product with the DataFrame.

Series.mul

Multiplication of series and other, element-wise.

Notes

The Series and other have to share the same index if other is a Series or a DataFrame.

Examples

>>> s = pd.Series([0, 1, 2, 3])  
>>> other = pd.Series([-1, 2, -3, 4])  
>>> s.dot(other)  
8
>>> s @ other  
8
>>> df = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]])  
>>> s.dot(df)  
0    24
1    14
dtype: int64
>>> arr = np.array([[0, 1], [-2, 3], [4, -5], [6, 7]])  
>>> s.dot(arr)  
array([24, 14])
drop(labels=None, axis=0, columns=None, errors='raise')#

Drop specified labels from rows or columns.

This docstring was copied from pandas.core.frame.DataFrame.drop.

Some inconsistencies with the Dask version may exist.

Remove rows or columns by specifying label names and corresponding axis, or by directly specifying index or column names. When using a multi-index, labels on different levels can be removed by specifying the level. See the user guide for more information about the now unused levels.

Parameters:
labelssingle label or list-like

Index or column labels to drop. A tuple will be used as a single label and not treated as a list-like.

axis{0 or ‘index’, 1 or ‘columns’}, default 0

Whether to drop labels from the index (0 or ‘index’) or columns (1 or ‘columns’).

indexsingle label or list-like (Not supported in Dask)

Alternative to specifying axis (labels, axis=0 is equivalent to index=labels).

columnssingle label or list-like

Alternative to specifying axis (labels, axis=1 is equivalent to columns=labels).

levelint or level name, optional (Not supported in Dask)

For MultiIndex, level from which the labels will be removed.

inplacebool, default False (Not supported in Dask)

If False, return a copy. Otherwise, do operation in place and return None.

errors{‘ignore’, ‘raise’}, default ‘raise’

If ‘ignore’, suppress error and only existing labels are dropped.

Returns:
DataFrame or None

DataFrame with the specified index or column labels removed, or None if inplace=True.

Raises:
KeyError

If any of the labels is not found in the selected axis.

See also

DataFrame.loc

Label-location based indexer for selection by label.

DataFrame.dropna

Return DataFrame with labels on given axis omitted where (all or any) data are missing.

DataFrame.drop_duplicates

Return DataFrame with duplicate rows removed, optionally only considering certain columns.

Series.drop

Return Series with specified index labels removed.

Examples

>>> df = pd.DataFrame(np.arange(12).reshape(3, 4),  
...                   columns=['A', 'B', 'C', 'D'])
>>> df  
   A  B   C   D
0  0  1   2   3
1  4  5   6   7
2  8  9  10  11

Drop columns

>>> df.drop(['B', 'C'], axis=1)  
   A   D
0  0   3
1  4   7
2  8  11
>>> df.drop(columns=['B', 'C'])  
   A   D
0  0   3
1  4   7
2  8  11

Drop a row by index

>>> df.drop([0, 1])  
   A  B   C   D
2  8  9  10  11

Drop columns and/or rows of MultiIndex DataFrame

>>> midx = pd.MultiIndex(levels=[['llama', 'cow', 'falcon'],  
...                              ['speed', 'weight', 'length']],
...                      codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
...                             [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> df = pd.DataFrame(index=midx, columns=['big', 'small'],  
...                   data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
...                         [250, 150], [1.5, 0.8], [320, 250],
...                         [1, 0.8], [0.3, 0.2]])
>>> df  
                big     small
llama   speed   45.0    30.0
        weight  200.0   100.0
        length  1.5     1.0
cow     speed   30.0    20.0
        weight  250.0   150.0
        length  1.5     0.8
falcon  speed   320.0   250.0
        weight  1.0     0.8
        length  0.3     0.2

Drop a specific index combination from the MultiIndex DataFrame, i.e., drop the combination 'falcon' and 'weight', which deletes only the corresponding row

>>> df.drop(index=('falcon', 'weight'))  
                big     small
llama   speed   45.0    30.0
        weight  200.0   100.0
        length  1.5     1.0
cow     speed   30.0    20.0
        weight  250.0   150.0
        length  1.5     0.8
falcon  speed   320.0   250.0
        length  0.3     0.2
>>> df.drop(index='cow', columns='small')  
                big
llama   speed   45.0
        weight  200.0
        length  1.5
falcon  speed   320.0
        weight  1.0
        length  0.3
>>> df.drop(index='length', level=1)  
                big     small
llama   speed   45.0    30.0
        weight  200.0   100.0
cow     speed   30.0    20.0
        weight  250.0   150.0
falcon  speed   320.0   250.0
        weight  1.0     0.8
drop_duplicates(subset=None, split_every=None, split_out=True, shuffle_method=None, ignore_index=False, keep='first')#

Return DataFrame with duplicate rows removed.

This docstring was copied from pandas.core.frame.DataFrame.drop_duplicates.

Some inconsistencies with the Dask version may exist.

Known inconsistencies:

keep=False will raise a NotImplementedError

Considering certain columns is optional. Indexes, including time indexes, are ignored.

Parameters:
subsetcolumn label or sequence of labels, optional

Only consider certain columns for identifying duplicates, by default use all of the columns.

keep{‘first’, ‘last’, False}, default ‘first’

Determines which duplicates (if any) to keep.

  • ‘first’ : Drop duplicates except for the first occurrence.

  • ‘last’ : Drop duplicates except for the last occurrence.

  • False : Drop all duplicates.

inplacebool, default False (Not supported in Dask)

Whether to modify the DataFrame rather than creating a new one.

ignore_indexbool, default False

If True, the resulting axis will be labeled 0, 1, …, n - 1.

Returns:
DataFrame or None

DataFrame with duplicates removed or None if inplace=True.

See also

DataFrame.value_counts

Count unique combinations of columns.

Examples

Consider dataset containing ramen rating.

>>> df = pd.DataFrame({  
...     'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
...     'style': ['cup', 'cup', 'cup', 'pack', 'pack'],
...     'rating': [4, 4, 3.5, 15, 5]
... })
>>> df  
    brand style  rating
0  Yum Yum   cup     4.0
1  Yum Yum   cup     4.0
2  Indomie   cup     3.5
3  Indomie  pack    15.0
4  Indomie  pack     5.0

By default, it removes duplicate rows based on all columns.

>>> df.drop_duplicates()  
    brand style  rating
0  Yum Yum   cup     4.0
2  Indomie   cup     3.5
3  Indomie  pack    15.0
4  Indomie  pack     5.0

To remove duplicates on specific column(s), use subset.

>>> df.drop_duplicates(subset=['brand'])  
    brand style  rating
0  Yum Yum   cup     4.0
2  Indomie   cup     3.5

To remove duplicates and keep last occurrences, use keep.

>>> df.drop_duplicates(subset=['brand', 'style'], keep='last')  
    brand style  rating
1  Yum Yum   cup     4.0
2  Indomie   cup     3.5
4  Indomie  pack     5.0
dropna(how=_NoDefault.no_default, subset=None, thresh=_NoDefault.no_default)#

Remove missing values.

This docstring was copied from pandas.core.frame.DataFrame.dropna.

Some inconsistencies with the Dask version may exist.

See the User Guide for more on which values are considered missing, and how to work with missing data.

Parameters:
axis{0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)

Determine if rows or columns which contain missing values are removed.

  • 0, or ‘index’ : Drop rows which contain missing values.

  • 1, or ‘columns’ : Drop columns which contain missing value.

Only a single axis is allowed.

how{‘any’, ‘all’}, default ‘any’

Determine if row or column is removed from DataFrame, when we have at least one NA or all NA.

  • ‘any’ : If any NA values are present, drop that row or column.

  • ‘all’ : If all values are NA, drop that row or column.

threshint, optional

Require that many non-NA values. Cannot be combined with how.

subsetcolumn label or sequence of labels, optional

Labels along other axis to consider, e.g. if you are dropping rows these would be a list of columns to include.

inplacebool, default False (Not supported in Dask)

Whether to modify the DataFrame rather than creating a new one.

ignore_indexbool, default False (Not supported in Dask)

If True, the resulting axis will be labeled 0, 1, …, n - 1.

New in version 2.0.0.

Returns:
DataFrame or None

DataFrame with NA entries dropped from it or None if inplace=True.

See also

DataFrame.isna

Indicate missing values.

DataFrame.notna

Indicate existing (non-missing) values.

DataFrame.fillna

Replace missing values.

Series.dropna

Drop missing values.

Index.dropna

Drop missing indices.

Examples

>>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'],  
...                    "toy": [np.nan, 'Batmobile', 'Bullwhip'],
...                    "born": [pd.NaT, pd.Timestamp("1940-04-25"),
...                             pd.NaT]})
>>> df  
       name        toy       born
0    Alfred        NaN        NaT
1    Batman  Batmobile 1940-04-25
2  Catwoman   Bullwhip        NaT

Drop the rows where at least one element is missing.

>>> df.dropna()  
     name        toy       born
1  Batman  Batmobile 1940-04-25

Drop the columns where at least one element is missing.

>>> df.dropna(axis='columns')  
       name
0    Alfred
1    Batman
2  Catwoman

Drop the rows where all elements are missing.

>>> df.dropna(how='all')  
       name        toy       born
0    Alfred        NaN        NaT
1    Batman  Batmobile 1940-04-25
2  Catwoman   Bullwhip        NaT

Keep only the rows with at least 2 non-NA values.

>>> df.dropna(thresh=2)  
       name        toy       born
1    Batman  Batmobile 1940-04-25
2  Catwoman   Bullwhip        NaT

Define in which columns to look for missing values.

>>> df.dropna(subset=['name', 'toy'])  
       name        toy       born
1    Batman  Batmobile 1940-04-25
2  Catwoman   Bullwhip        NaT
property dtypes#

Return data types
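
Examples

A minimal sketch; the dtypes mirror the underlying cuDF columns:

>>> import cudf
>>> import dask_cudf
>>> ddf = dask_cudf.from_cudf(cudf.DataFrame({"x": [1, 2, 3]}), npartitions=1)
>>> ddf.dtypes
x    int64
dtype: object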

enforce_runtime_divisions()#

Enforce the current divisions at runtime.

Injects a layer into the Task Graph that checks that the current divisions match the expected divisions at runtime.
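
Examples

A minimal usage sketch (assuming ddf is a Dask-cuDF DataFrame with known divisions); the check only runs when the result is computed:

>>> checked = ddf.enforce_runtime_divisions()
>>> checked.compute()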

eval(expr, **kwargs)#

Evaluate a string describing operations on DataFrame columns.

This docstring was copied from pandas.core.frame.DataFrame.eval.

Some inconsistencies with the Dask version may exist.

Operates on columns only, not specific rows or elements. This allows eval to run arbitrary code, which can make you vulnerable to code injection if you pass user input to this function.

Parameters:
exprstr

The expression string to evaluate.

inplacebool, default False (Not supported in Dask)

If the expression contains an assignment, whether to perform the operation inplace and mutate the existing DataFrame. Otherwise, a new DataFrame is returned.

**kwargs

See the documentation for eval() for complete details on the keyword arguments accepted by query().

Returns:
ndarray, scalar, pandas object, or None

The result of the evaluation or None if inplace=True.

See also

DataFrame.query

Evaluates a boolean expression to query the columns of a frame.

DataFrame.assign

Can evaluate an expression or function to create new values for a column.

eval

Evaluate a Python expression as a string using various backends.

Notes

For more details see the API documentation for eval(). For detailed examples see enhancing performance with eval.

Examples

>>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)})  
>>> df  
   A   B
0  1  10
1  2   8
2  3   6
3  4   4
4  5   2
>>> df.eval('A + B')  
0    11
1    10
2     9
3     8
4     7
dtype: int64

Assignment is allowed though by default the original DataFrame is not modified.

>>> df.eval('C = A + B')  
   A   B   C
0  1  10  11
1  2   8  10
2  3   6   9
3  4   4   8
4  5   2   7
>>> df  
   A   B
0  1  10
1  2   8
2  3   6
3  4   4
4  5   2

Multiple columns can be assigned to using multi-line expressions:

>>> df.eval(  
...     '''
... C = A + B
... D = A - B
... '''
... )
   A   B   C  D
0  1  10  11 -9
1  2   8  10 -6
2  3   6   9 -3
3  4   4   8  0
4  5   2   7  3
explain(stage: Literal['logical', 'simplified-logical', 'tuned-logical', 'physical', 'simplified-physical', 'fused'] = 'fused', format: str | None = None)#

Create a graph representation of the Expression.

explain runs the optimizer and creates a graph of the optimized expression with graphviz. No computation is triggered.

Parameters:
stage: {“logical”, “simplified-logical”, “tuned-logical”, “physical”, “simplified-physical”, “fused”}

The optimizer stage that is returned. Default is “fused”.

  • logical: outputs the expression as is

  • simplified-logical: simplifies the expression which includes predicate pushdown and column projection.

  • tuned-logical: applies additional optimizations like partition squashing

  • physical: outputs the physical expression; this expression can actually be computed

  • simplified-physical: runs another simplification after the physical plan is generated

  • fused: fuses the physical expression to reduce the number of nodes in the graph.

Warning

The optimizer stages are subject to change.

format: str, default None

The format of the output. Default is “png”.

Returns:
None, but opens a new window with the graph visualization and outputs
a file with the graph representation.
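
Examples

A minimal usage sketch (assuming the optional graphviz dependency is installed; no computation is triggered):

>>> ddf.sum().explain(stage="logical")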
explode(column)#

Transform each element of a list-like to a row, replicating index values.

This docstring was copied from pandas.core.frame.DataFrame.explode.

Some inconsistencies with the Dask version may exist.

Parameters:
columnIndexLabel

Column(s) to explode. For multiple columns, specify a non-empty list in which each element is a str or tuple; the list-like data in all specified columns must have matching lengths within each row of the frame.

New in version 1.3.0: Multi-column explode

ignore_indexbool, default False (Not supported in Dask)

If True, the resulting index will be labeled 0, 1, …, n - 1.

Returns:
DataFrame

Exploded lists to rows of the subset columns; index will be duplicated for these rows.

Raises:
ValueError
  • If columns of the frame are not unique.

  • If the specified list of columns to explode is empty.

  • If the specified columns to explode do not have matching counts of elements per row in the frame.

See also

DataFrame.unstack

Pivot a level of the (necessarily hierarchical) index labels.

DataFrame.melt

Unpivot a DataFrame from wide format to long format.

Series.explode

Explode a DataFrame from list-like columns to long format.

Notes

This routine will explode list-likes including lists, tuples, sets, Series, and np.ndarray. The result dtype of the subset rows will be object. Scalars will be returned unchanged, and empty list-likes will result in a np.nan for that row. In addition, the ordering of rows in the output will be non-deterministic when exploding sets.

Reference the user guide for more examples.

Examples

>>> df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],  
...                    'B': 1,
...                    'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
>>> df  
           A  B          C
0  [0, 1, 2]  1  [a, b, c]
1        foo  1        NaN
2         []  1         []
3     [3, 4]  1     [d, e]

Single-column explode.

>>> df.explode('A')  
     A  B          C
0    0  1  [a, b, c]
0    1  1  [a, b, c]
0    2  1  [a, b, c]
1  foo  1        NaN
2  NaN  1         []
3    3  1     [d, e]
3    4  1     [d, e]

Multi-column explode.

>>> df.explode(list('AC'))  
     A  B    C
0    0  1    a
0    1  1    b
0    2  1    c
1  foo  1  NaN
2  NaN  1  NaN
3    3  1    d
3    4  1    e
ffill(axis=0, limit=None)#

Fill NA/NaN values by propagating the last valid observation to next valid.

This docstring was copied from pandas.core.frame.DataFrame.ffill.

Some inconsistencies with the Dask version may exist.

Parameters:
axis{0 or ‘index’} for Series, {0 or ‘index’, 1 or ‘columns’} for DataFrame

Axis along which to fill missing values. For Series this parameter is unused and defaults to 0.

inplacebool, default False (Not supported in Dask)

If True, fill in-place. Note: this will modify any other views on this object (e.g., a no-copy slice for a column in a DataFrame).

limitint, default None

If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.

limit_area{None, ‘inside’, ‘outside’}, default None (Not supported in Dask)

If limit is specified, consecutive NaNs will be filled with this restriction.

  • None: No fill restriction.

  • ‘inside’: Only fill NaNs surrounded by valid values (interpolate).

  • ‘outside’: Only fill NaNs outside valid values (extrapolate).

New in version 2.2.0.

downcastdict, default is None (Not supported in Dask)

A dict of item->dtype of what to downcast if possible, or the string ‘infer’ which will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible).

Deprecated since version 2.2.0.

Returns:
Series/DataFrame or None

Object with missing values filled or None if inplace=True.

Examples

>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],  
...                    [3, 4, np.nan, 1],
...                    [np.nan, np.nan, np.nan, np.nan],
...                    [np.nan, 3, np.nan, 4]],
...                   columns=list("ABCD"))
>>> df  
     A    B   C    D
0  NaN  2.0 NaN  0.0
1  3.0  4.0 NaN  1.0
2  NaN  NaN NaN  NaN
3  NaN  3.0 NaN  4.0
>>> df.ffill()  
     A    B   C    D
0  NaN  2.0 NaN  0.0
1  3.0  4.0 NaN  1.0
2  3.0  4.0 NaN  1.0
3  3.0  3.0 NaN  4.0
>>> ser = pd.Series([1, np.nan, 2, 3])  
>>> ser.ffill()  
0   1.0
1   1.0
2   2.0
3   3.0
dtype: float64
fillna(value=None, axis=None)#

Fill NA/NaN values using the specified method.

This docstring was copied from pandas.core.frame.DataFrame.fillna.

Some inconsistencies with the Dask version may exist.

Parameters:
valuescalar, dict, Series, or DataFrame

Value to use to fill holes (e.g. 0), alternately a dict/Series/DataFrame of values specifying which value to use for each index (for a Series) or column (for a DataFrame). Values not in the dict/Series/DataFrame will not be filled. This value cannot be a list.

method{‘backfill’, ‘bfill’, ‘ffill’, None}, default None (Not supported in Dask)

Method to use for filling holes in reindexed Series:

  • ffill: propagate last valid observation forward to next valid.

  • backfill / bfill: use next valid observation to fill gap.

Deprecated since version 2.1.0: Use ffill or bfill instead.

axis{0 or ‘index’} for Series, {0 or ‘index’, 1 or ‘columns’} for DataFrame

Axis along which to fill missing values. For Series this parameter is unused and defaults to 0.

inplacebool, default False (Not supported in Dask)

If True, fill in-place. Note: this will modify any other views on this object (e.g., a no-copy slice for a column in a DataFrame).

limitint, default None (Not supported in Dask)

If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.

downcastdict, default is None (Not supported in Dask)

A dict of item->dtype of what to downcast if possible, or the string ‘infer’ which will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible).

Deprecated since version 2.2.0.

Returns:
Series/DataFrame or None

Object with missing values filled or None if inplace=True.

See also

ffill

Fill values by propagating the last valid observation to next valid.

bfill

Fill values by using the next valid observation to fill the gap.

interpolate

Fill NaN values using interpolation.

reindex

Conform object to new index.

asfreq

Convert TimeSeries to specified frequency.

Examples

>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],  
...                    [3, 4, np.nan, 1],
...                    [np.nan, np.nan, np.nan, np.nan],
...                    [np.nan, 3, np.nan, 4]],
...                   columns=list("ABCD"))
>>> df  
     A    B   C    D
0  NaN  2.0 NaN  0.0
1  3.0  4.0 NaN  1.0
2  NaN  NaN NaN  NaN
3  NaN  3.0 NaN  4.0

Replace all NaN elements with 0s.

>>> df.fillna(0)  
     A    B    C    D
0  0.0  2.0  0.0  0.0
1  3.0  4.0  0.0  1.0
2  0.0  0.0  0.0  0.0
3  0.0  3.0  0.0  4.0

Replace all NaN elements in column ‘A’, ‘B’, ‘C’, and ‘D’, with 0, 1, 2, and 3 respectively.

>>> values = {"A": 0, "B": 1, "C": 2, "D": 3}  
>>> df.fillna(value=values)  
     A    B    C    D
0  0.0  2.0  2.0  0.0
1  3.0  4.0  2.0  1.0
2  0.0  1.0  2.0  3.0
3  0.0  3.0  2.0  4.0

Only replace the first NaN element.

>>> df.fillna(value=values, limit=1)  
     A    B    C    D
0  0.0  2.0  2.0  0.0
1  3.0  4.0  NaN  1.0
2  NaN  1.0  NaN  3.0
3  NaN  3.0  NaN  4.0

When filling using a DataFrame, replacement happens along the same column names and same indices

>>> df2 = pd.DataFrame(np.zeros((4, 4)), columns=list("ABCE"))  
>>> df.fillna(df2)  
     A    B    C    D
0  0.0  2.0  0.0  0.0
1  3.0  4.0  0.0  1.0
2  0.0  0.0  0.0  NaN
3  0.0  3.0  0.0  4.0

Note that column D is not affected since it is not present in df2.

classmethod from_dict(*args, **kwargs)#

Construct a Dask DataFrame from a Python Dictionary

See also

dask.dataframe.from_dict
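
Examples

A minimal sketch; arguments such as npartitions are forwarded to dask.dataframe.from_dict:

>>> import dask_cudf
>>> ddf = dask_cudf.DataFrame.from_dict({"a": [1, 2, 3, 4]}, npartitions=2)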
get_partition(n)#

Get a dask DataFrame/Series representing the nth partition.

Parameters:
nint

The 0-indexed partition number to select.

Returns:
Dask DataFrame or Series

The same type as the original object.
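
Examples

A minimal sketch (assuming ddf is any Dask-cuDF DataFrame); the result is a lazy, single-partition collection:

>>> part = ddf.get_partition(0)
>>> part.npartitions
1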

groupby(by, group_keys=True, sort=None, observed=None, dropna=None, **kwargs)#

Group DataFrame using a mapper or by a Series of columns.

This docstring was copied from pandas.core.frame.DataFrame.groupby.

Some inconsistencies with the Dask version may exist.

A groupby operation involves some combination of splitting the object, applying a function, and combining the results. This can be used to group large amounts of data and compute operations on these groups.

Parameters:
bymapping, function, label, pd.Grouper or list of such

Used to determine the groups for the groupby. If by is a function, it’s called on each value of the object’s index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series’ values are first aligned; see .align() method). If a list or ndarray of length equal to the selected axis is passed (see the groupby user guide), the values are used as-is to determine the groups. A label or list of labels may be passed to group by the columns in self. Notice that a tuple is interpreted as a (single) key.

axis{0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)

Split along rows (0) or columns (1). For Series this parameter is unused and defaults to 0.

Deprecated since version 2.1.0: Will be removed and behave like axis=0 in a future version. For axis=1, do frame.T.groupby(...) instead.

levelint, level name, or sequence of such, default None (Not supported in Dask)

If the axis is a MultiIndex (hierarchical), group by a particular level or levels. Do not specify both by and level.

as_indexbool, default True (Not supported in Dask)

Return object with group labels as the index. Only relevant for DataFrame input. as_index=False is effectively “SQL-style” grouped output. This argument has no effect on filtrations (see the filtrations in the user guide), such as head(), tail(), nth() and in transformations (see the transformations in the user guide).

sortbool, default True

Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group. If False, the groups will appear in the same order as they did in the original DataFrame. This argument has no effect on filtrations (see the filtrations in the user guide), such as head(), tail(), nth() and in transformations (see the transformations in the user guide).

Changed in version 2.0.0: Specifying sort=False with an ordered categorical grouper will no longer sort the values.

group_keysbool, default True

When calling apply and the by argument produces a like-indexed (i.e. a transform) result, add group keys to index to identify pieces. By default group keys are not included when the result’s index (and column) labels match the inputs, and are included otherwise.

Changed in version 1.5.0: Warns that group_keys will no longer be ignored when the result from apply is a like-indexed Series or DataFrame. Specify group_keys explicitly to include the group keys or not.

Changed in version 2.0.0: group_keys now defaults to True.

observedbool, default False

This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.

Deprecated since version 2.1.0: The default value will change to True in a future version of pandas.

dropnabool, default True

If True, and if group keys contain NA values, NA values together with row/column will be dropped. If False, NA values will also be treated as the key in groups.

Returns:
pandas.api.typing.DataFrameGroupBy

Returns a groupby object that contains information about the groups.

See also

resample

Convenience method for frequency conversion and resampling of time series.

Notes

See the user guide for more detailed usage and examples, including splitting an object into groups, iterating through groups, selecting a group, aggregation, and more.

Examples

>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',  
...                               'Parrot', 'Parrot'],
...                    'Max Speed': [380., 370., 24., 26.]})
>>> df  
   Animal  Max Speed
0  Falcon      380.0
1  Falcon      370.0
2  Parrot       24.0
3  Parrot       26.0
>>> df.groupby(['Animal']).mean()  
        Max Speed
Animal
Falcon      375.0
Parrot       25.0

Hierarchical Indexes

We can groupby different levels of a hierarchical index using the level parameter:

>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'],  
...           ['Captive', 'Wild', 'Captive', 'Wild']]
>>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type'))  
>>> df = pd.DataFrame({'Max Speed': [390., 350., 30., 20.]},  
...                   index=index)
>>> df  
                Max Speed
Animal Type
Falcon Captive      390.0
       Wild         350.0
Parrot Captive       30.0
       Wild          20.0
>>> df.groupby(level=0).mean()  
        Max Speed
Animal
Falcon      370.0
Parrot       25.0
>>> df.groupby(level="Type").mean()  
         Max Speed
Type
Captive      210.0
Wild         185.0

We can also choose to include NA in group keys or not by setting the dropna parameter; the default setting is True.

>>> l = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]  
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])  
>>> df.groupby(by=["b"]).sum()  
    a   c
b
1.0 2   3
2.0 2   5
>>> df.groupby(by=["b"], dropna=False).sum()  
    a   c
b
1.0 2   3
2.0 2   5
NaN 1   4
>>> l = [["a", 12, 12], [None, 12.3, 33.], ["b", 12.3, 123], ["a", 1, 1]]  
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])  
>>> df.groupby(by="a").sum()  
    b     c
a
a   13.0   13.0
b   12.3  123.0
>>> df.groupby(by="a", dropna=False).sum()  
    b     c
a
a   13.0   13.0
b   12.3  123.0
NaN 12.3   33.0

When using .apply(), use group_keys to include or exclude the group keys. The group_keys argument defaults to True (include).

>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',  
...                               'Parrot', 'Parrot'],
...                    'Max Speed': [380., 370., 24., 26.]})
>>> df.groupby("Animal", group_keys=True)[['Max Speed']].apply(lambda x: x)  
          Max Speed
Animal
Falcon 0      380.0
       1      370.0
Parrot 2       24.0
       3       26.0
>>> df.groupby("Animal", group_keys=False)[['Max Speed']].apply(lambda x: x)  
   Max Speed
0      380.0
1      370.0
2       24.0
3       26.0
head(n: int = 5, npartitions=1, compute: bool = True)#

First n rows of the dataset

Parameters:
nint, optional

The number of rows to return. Default is 5.

npartitionsint, optional

Elements are only taken from the first npartitions, with a default of 1. If there are fewer than n rows in the first npartitions, a warning will be raised and any found rows returned. Pass -1 to use all partitions.

computebool, optional

Whether to compute the result, default is True.
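
Examples

A minimal sketch; by default only the first partition is searched, so pass npartitions=-1 to look across all partitions:

>>> ddf.head(3)
>>> ddf.head(3, npartitions=-1)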

idxmax(axis=0, skipna=True, numeric_only=False, split_every=False)#

Return index of first occurrence of maximum over requested axis.

This docstring was copied from pandas.core.frame.DataFrame.idxmax.

Some inconsistencies with the Dask version may exist.

NA/null values are excluded.

Parameters:
axis{0 or ‘index’, 1 or ‘columns’}, default 0

The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

skipnabool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

numeric_onlybool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Returns:
Series

Indexes of maxima along the specified axis.

Raises:
ValueError
  • If the row/column is empty

See also

Series.idxmax

Return index of the maximum element.

Notes

This method is the DataFrame version of ndarray.argmax.

Examples

Consider a dataset containing food consumption in Argentina.

>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],  
...                     'co2_emissions': [37.2, 19.66, 1712]},
...                   index=['Pork', 'Wheat Products', 'Beef'])
>>> df  
                consumption  co2_emissions
Pork                  10.51         37.20
Wheat Products       103.11         19.66
Beef                  55.48       1712.00

By default, it returns the index for the maximum value in each column.

>>> df.idxmax()  
consumption     Wheat Products
co2_emissions             Beef
dtype: object

To return the index for the maximum value in each row, use axis="columns".

>>> df.idxmax(axis="columns")  
Pork              co2_emissions
Wheat Products     consumption
Beef              co2_emissions
dtype: object
idxmin(axis=0, skipna=True, numeric_only=False, split_every=False)#

Return index of first occurrence of minimum over requested axis.

This docstring was copied from pandas.core.frame.DataFrame.idxmin.

Some inconsistencies with the Dask version may exist.

NA/null values are excluded.

Parameters:
axis{0 or ‘index’, 1 or ‘columns’}, default 0

The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

skipnabool, default True

Exclude NA/null values. If an entire row/column is NA, the result will be NA.

numeric_onlybool, default False

Include only float, int or boolean data.

New in version 1.5.0.

Returns:
Series

Indexes of minima along the specified axis.

Raises:
ValueError
  • If the row/column is empty

See also

Series.idxmin

Return index of the minimum element.

Notes

This method is the DataFrame version of ndarray.argmin.

Examples

Consider a dataset containing food consumption in Argentina.

>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],  
...                     'co2_emissions': [37.2, 19.66, 1712]},
...                   index=['Pork', 'Wheat Products', 'Beef'])
>>> df  
                consumption  co2_emissions
Pork                  10.51         37.20
Wheat Products       103.11         19.66
Beef                  55.48       1712.00

By default, it returns the index for the minimum value in each column.

>>> df.idxmin()  
consumption                Pork
co2_emissions    Wheat Products
dtype: object

To return the index for the minimum value in each row, use axis="columns".

>>> df.idxmin(axis="columns")  
Pork                consumption
Wheat Products    co2_emissions
Beef                consumption
dtype: object
property iloc#

Purely integer-location based indexing for selection by position.

Only indexing the column positions is supported. Trying to select row positions will raise a ValueError.

See Indexing into Dask DataFrames for more.

Examples

>>> df.iloc[:, [2, 0, 1]]  
property index#

Return dask Index instance

info(buf=None, verbose=False, memory_usage=False)#

Concise summary of a Dask DataFrame
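
Examples

A minimal usage sketch; the summary is printed (pass buf to capture it), and memory_usage=True adds per-column memory:

>>> ddf.info(verbose=True, memory_usage=True)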

isin(values)#

Whether each element in the DataFrame is contained in values.

This docstring was copied from pandas.core.frame.DataFrame.isin.

Some inconsistencies with the Dask version may exist.

Parameters:
valuesiterable, Series, DataFrame or dict

The result will only be true at a location if all the labels match. If values is a Series, that’s the index. If values is a dict, the keys must be the column names, which must match. If values is a DataFrame, then both the index and column labels must match.

Returns:
DataFrame

DataFrame of booleans showing whether each element in the DataFrame is contained in values.

See also

DataFrame.eq

Equality test for DataFrame.

Series.isin

Equivalent method on Series.

Series.str.contains

Test if pattern or regex is contained within a string of a Series or Index.

Examples

>>> df = pd.DataFrame({'num_legs': [2, 4], 'num_wings': [2, 0]},  
...                   index=['falcon', 'dog'])
>>> df  
        num_legs  num_wings
falcon         2          2
dog            4          0

When values is a list, check whether every value in the DataFrame is present in the list (which animals have 0 or 2 legs or wings)

>>> df.isin([0, 2])  
        num_legs  num_wings
falcon      True       True
dog        False       True

To check if values is not in the DataFrame, use the ~ operator:

>>> ~df.isin([0, 2])  
        num_legs  num_wings
falcon     False      False
dog         True      False

When values is a dict, we can pass values to check for each column separately:

>>> df.isin({'num_wings': [0, 3]})  
        num_legs  num_wings
falcon     False      False
dog        False       True

When values is a Series or DataFrame the index and column must match. Note that ‘falcon’ does not match based on the number of legs in other.

>>> other = pd.DataFrame({'num_legs': [8, 3], 'num_wings': [0, 2]},  
...                      index=['spider', 'falcon'])
>>> df.isin(other)  
        num_legs  num_wings
falcon     False       True
dog        False      False
isna()#

Detect missing values.

This docstring was copied from pandas.core.frame.DataFrame.isna.

Some inconsistencies with the Dask version may exist.

Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True).

Returns:
DataFrame

Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.

See also

DataFrame.isnull

Alias of isna.

DataFrame.notna

Boolean inverse of isna.

DataFrame.dropna

Omit axes labels with missing values.

isna

Top-level isna.

Examples

Show which entries in a DataFrame are NA.

>>> df = pd.DataFrame(dict(age=[5, 6, np.nan],  
...                        born=[pd.NaT, pd.Timestamp('1939-05-27'),
...                              pd.Timestamp('1940-04-25')],
...                        name=['Alfred', 'Batman', ''],
...                        toy=[None, 'Batmobile', 'Joker']))
>>> df  
   age       born    name        toy
0  5.0        NaT  Alfred       None
1  6.0 1939-05-27  Batman  Batmobile
2  NaN 1940-04-25              Joker
>>> df.isna()  
     age   born   name    toy
0  False   True  False   True
1  False  False  False  False
2   True  False  False  False

Show which entries in a Series are NA.

>>> ser = pd.Series([5, 6, np.nan])  
>>> ser  
0    5.0
1    6.0
2    NaN
dtype: float64
>>> ser.isna()  
0    False
1    False
2     True
dtype: bool
isnull()#

DataFrame.isnull is an alias for DataFrame.isna.

This docstring was copied from pandas.core.frame.DataFrame.isnull.

Some inconsistencies with the Dask version may exist.

Detect missing values.

Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True).

Returns:
DataFrame

Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.

See also

DataFrame.isnull

Alias of isna.

DataFrame.notna

Boolean inverse of isna.

DataFrame.dropna

Omit axes labels with missing values.

isna

Top-level isna.

Examples

Show which entries in a DataFrame are NA.

>>> df = pd.DataFrame(dict(age=[5, 6, np.nan],  
...                        born=[pd.NaT, pd.Timestamp('1939-05-27'),
...                              pd.Timestamp('1940-04-25')],
...                        name=['Alfred', 'Batman', ''],
...                        toy=[None, 'Batmobile', 'Joker']))
>>> df  
   age       born    name        toy
0  5.0        NaT  Alfred       None
1  6.0 1939-05-27  Batman  Batmobile
2  NaN 1940-04-25              Joker
>>> df.isna()  
     age   born   name    toy
0  False   True  False   True
1  False  False  False  False
2   True  False  False  False

Show which entries in a Series are NA.

>>> ser = pd.Series([5, 6, np.nan])  
>>> ser  
0    5.0
1    6.0
2    NaN
dtype: float64
>>> ser.isna()  
0    False
1    False
2     True
dtype: bool
items()#

Iterate over (column name, Series) pairs.

This docstring was copied from pandas.core.frame.DataFrame.items.

Some inconsistencies with the Dask version may exist.

Iterates over the DataFrame columns, returning a tuple with the column name and the content as a Series.

Yields:
labelobject

The column names for the DataFrame being iterated over.

contentSeries

The column entries belonging to each label, as a Series.

See also

DataFrame.iterrows

Iterate over DataFrame rows as (index, Series) pairs.

DataFrame.itertuples

Iterate over DataFrame rows as namedtuples of the values.

Examples

>>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],  
...                   'population': [1864, 22000, 80000]},
...                   index=['panda', 'polar', 'koala'])
>>> df  
        species   population
panda   bear      1864
polar   bear      22000
koala   marsupial 80000
>>> for label, content in df.items():  
...     print(f'label: {label}')
...     print(f'content: {content}', sep='\n')
...
label: species
content:
panda         bear
polar         bear
koala    marsupial
Name: species, dtype: object
label: population
content:
panda     1864
polar    22000
koala    80000
Name: population, dtype: int64
iterrows()#

Iterate over DataFrame rows as (index, Series) pairs.

This docstring was copied from pandas.core.frame.DataFrame.iterrows.

Some inconsistencies with the Dask version may exist.

Yields:
indexlabel or tuple of label

The index of the row. A tuple for a MultiIndex.

dataSeries

The data of the row as a Series.

See also

DataFrame.itertuples

Iterate over DataFrame rows as namedtuples of the values.

DataFrame.items

Iterate over (column name, Series) pairs.

Notes

  1. Because iterrows returns a Series for each row, it does not preserve dtypes across the rows (dtypes are preserved across columns for DataFrames).

    To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the values and which is generally faster than iterrows.

  2. You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect.

Examples

>>> df = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])  
>>> row = next(df.iterrows())[1]  
>>> row  
int      1.0
float    1.5
Name: 0, dtype: float64
>>> print(row['int'].dtype)  
float64
>>> print(df['int'].dtype)  
int64
itertuples(index=True, name='Pandas')#

Iterate over DataFrame rows as namedtuples.

This docstring was copied from pandas.core.frame.DataFrame.itertuples.

Some inconsistencies with the Dask version may exist.

Parameters:
indexbool, default True

If True, return the index as the first element of the tuple.

namestr or None, default “Pandas”

The name of the returned namedtuples or None to return regular tuples.

Returns:
iterator

An object to iterate over namedtuples for each row in the DataFrame with the first field possibly being the index and following fields being the column values.

See also

DataFrame.iterrows

Iterate over DataFrame rows as (index, Series) pairs.

DataFrame.items

Iterate over (column name, Series) pairs.

Notes

The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start with an underscore.

Examples

>>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},  
...                   index=['dog', 'hawk'])
>>> df  
      num_legs  num_wings
dog          4          0
hawk         2          2
>>> for row in df.itertuples():  
...     print(row)
...
Pandas(Index='dog', num_legs=4, num_wings=0)
Pandas(Index='hawk', num_legs=2, num_wings=2)

By setting the index parameter to False we can remove the index as the first element of the tuple:

>>> for row in df.itertuples(index=False):  
...     print(row)
...
Pandas(num_legs=4, num_wings=0)
Pandas(num_legs=2, num_wings=2)

With the name parameter set we set a custom name for the yielded namedtuples:

>>> for row in df.itertuples(name='Animal'):  
...     print(row)
...
Animal(Index='dog', num_legs=4, num_wings=0)
Animal(Index='hawk', num_legs=2, num_wings=2)
join(other, on=None, how='left', lsuffix='', rsuffix='', shuffle_method=None, npartitions=None)#

Join columns of another DataFrame.

This docstring was copied from pandas.core.frame.DataFrame.join.

Some inconsistencies with the Dask version may exist.

Join columns with other DataFrame either on index or on a key column. Efficiently join multiple DataFrame objects by index at once by passing a list.

Parameters:
otherDataFrame, Series, or a list containing any combination of them

Index should be similar to one of the columns in this one. If a Series is passed, its name attribute must be set, and that will be used as the column name in the resulting joined DataFrame.

onstr, list of str, or array-like, optional

Column or index level name(s) in the caller to join on the index in other, otherwise joins index-on-index. If multiple values given, the other DataFrame must have a MultiIndex. Can pass an array as the join key if it is not already contained in the calling DataFrame. Like an Excel VLOOKUP operation.

how{‘left’, ‘right’, ‘outer’, ‘inner’, ‘cross’}, default ‘left’

How to handle the operation of the two objects.

  • left: use calling frame’s index (or column if on is specified)

  • right: use other’s index.

  • outer: form union of calling frame’s index (or column if on is specified) with other’s index, and sort it lexicographically.

  • inner: form intersection of calling frame’s index (or column if on is specified) with other’s index, preserving the order of the calling frame’s index.

  • cross: creates the cartesian product from both frames, preserves the order of the left keys.

lsuffixstr, default ‘’

Suffix to use from left frame’s overlapping columns.

rsuffixstr, default ‘’

Suffix to use from right frame’s overlapping columns.

sortbool, default False (Not supported in Dask)

Order result DataFrame lexicographically by the join key. If False, the order of the join key depends on the join type (how keyword).

validatestr, optional (Not supported in Dask)

If specified, checks if join is of specified type.

  • “one_to_one” or “1:1”: check if join keys are unique in both left and right datasets.

  • “one_to_many” or “1:m”: check if join keys are unique in left dataset.

  • “many_to_one” or “m:1”: check if join keys are unique in right dataset.

  • “many_to_many” or “m:m”: allowed, but does not result in checks.

New in version 1.5.0.

Returns:
DataFrame

A dataframe containing columns from both the caller and other.

See also

DataFrame.merge

For column(s)-on-column(s) operations.

Notes

Parameters on, lsuffix, and rsuffix are not supported when passing a list of DataFrame objects.

Examples

>>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],  
...                    'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
>>> df  
  key   A
0  K0  A0
1  K1  A1
2  K2  A2
3  K3  A3
4  K4  A4
5  K5  A5
>>> other = pd.DataFrame({'key': ['K0', 'K1', 'K2'],  
...                       'B': ['B0', 'B1', 'B2']})
>>> other  
  key   B
0  K0  B0
1  K1  B1
2  K2  B2

Join DataFrames using their indexes.

>>> df.join(other, lsuffix='_caller', rsuffix='_other')  
  key_caller   A key_other    B
0         K0  A0        K0   B0
1         K1  A1        K1   B1
2         K2  A2        K2   B2
3         K3  A3       NaN  NaN
4         K4  A4       NaN  NaN
5         K5  A5       NaN  NaN

If we want to join using the key columns, we need to set key to be the index in both df and other. The joined DataFrame will have key as its index.

>>> df.set_index('key').join(other.set_index('key'))  
      A    B
key
K0   A0   B0
K1   A1   B1
K2   A2   B2
K3   A3  NaN
K4   A4  NaN
K5   A5  NaN

Another option to join using the key columns is to use the on parameter. DataFrame.join always uses other’s index but we can use any column in df. This method preserves the original DataFrame’s index in the result.

>>> df.join(other.set_index('key'), on='key')  
  key   A    B
0  K0  A0   B0
1  K1  A1   B1
2  K2  A2   B2
3  K3  A3  NaN
4  K4  A4  NaN
5  K5  A5  NaN

Using non-unique key values shows how they are matched.

>>> df = pd.DataFrame({'key': ['K0', 'K1', 'K1', 'K3', 'K0', 'K1'],  
...                    'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
>>> df  
  key   A
0  K0  A0
1  K1  A1
2  K1  A2
3  K3  A3
4  K0  A4
5  K1  A5
>>> df.join(other.set_index('key'), on='key', validate='m:1')  
  key   A    B
0  K0  A0   B0
1  K1  A1   B1
2  K1  A2   B1
3  K3  A3  NaN
4  K0  A4   B0
5  K1  A5   B1
property known_divisions#

Whether the divisions are known.

This check can be expensive if the division calculation is expensive. DataFrame.set_index is a good example where the calculation needs an inspection of the data.
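
For illustration, a minimal sketch with a small synthetic frame (whether divisions are known depends on how the collection was built; sorting on construction yields known divisions):

>>> import cudf
>>> import dask_cudf
>>> ddf = dask_cudf.from_cudf(cudf.DataFrame({"x": range(6)}), npartitions=2)  # sorted index
>>> ddf.known_divisions  
True
>>> ddf.clear_divisions().known_divisions  # divisions dropped, now unknown
False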

kurt(axis=0, fisher=True, bias=True, nan_policy='propagate', numeric_only=False)#

Return unbiased kurtosis over requested axis.

This docstring was copied from pandas.core.frame.DataFrame.kurtosis.

Some inconsistencies with the Dask version may exist.

Note

This implementation follows the dask.array.stats implementation of kurtosis and calculates kurtosis without taking into account a bias term for finite sample size, which corresponds to the default settings of the scipy.stats kurtosis calculation. This differs from pandas.

Further, this method currently does not support filtering out NaN values, which again differs from pandas.

Kurtosis obtained using Fisher’s definition of kurtosis (kurtosis of normal == 0.0). Normalized by N-1.

Parameters:
axis{index (0), columns (1)}

Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.

For DataFrames, specifying axis=None will apply the aggregation across both axes.

New in version 2.0.0.

skipnabool, default True (Not supported in Dask)

Exclude NA/null values when computing the result.

numeric_onlybool, default False

Include only float, int, boolean columns. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns:
Series or scalar

Examples

>>> s = pd.Series([1, 2, 2, 3], index=['cat', 'dog', 'dog', 'mouse'])  
>>> s  
cat    1
dog    2
dog    2
mouse  3
dtype: int64
>>> s.kurt()  
1.5

With a DataFrame

>>> df = pd.DataFrame({'a': [1, 2, 2, 3], 'b': [3, 4, 4, 4]},  
...                   index=['cat', 'dog', 'dog', 'mouse'])
>>> df  
       a   b
  cat  1   3
  dog  2   4
  dog  2   4
mouse  3   4
>>> df.kurt()  
a   1.5
b   4.0
dtype: float64

With axis=None

>>> df.kurt(axis=None).round(6)  
-0.988693

Using axis=1

>>> df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [3, 4], 'd': [1, 2]},  
...                   index=['cat', 'dog'])
>>> df.kurt(axis=1)  
cat   -6.0
dog   -6.0
dtype: float64
kurtosis(axis=0, fisher=True, bias=True, nan_policy='propagate', numeric_only=False)#

Return unbiased kurtosis over requested axis.

This docstring was copied from pandas.core.frame.DataFrame.kurtosis.

Some inconsistencies with the Dask version may exist.

Note

This implementation follows the dask.array.stats implementation of kurtosis and calculates kurtosis without taking into account a bias term for finite sample size, which corresponds to the default settings of the scipy.stats kurtosis calculation. This differs from pandas.

Further, this method currently does not support filtering out NaN values, which again differs from pandas.

Kurtosis obtained using Fisher’s definition of kurtosis (kurtosis of normal == 0.0). Normalized by N-1.

Parameters:
axis{index (0), columns (1)}

Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.

For DataFrames, specifying axis=None will apply the aggregation across both axes.

New in version 2.0.0.

skipnabool, default True (Not supported in Dask)

Exclude NA/null values when computing the result.

numeric_onlybool, default False

Include only float, int, boolean columns. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns:
Series or scalar

Examples

>>> s = pd.Series([1, 2, 2, 3], index=['cat', 'dog', 'dog', 'mouse'])  
>>> s  
cat    1
dog    2
dog    2
mouse  3
dtype: int64
>>> s.kurt()  
1.5

With a DataFrame

>>> df = pd.DataFrame({'a': [1, 2, 2, 3], 'b': [3, 4, 4, 4]},  
...                   index=['cat', 'dog', 'dog', 'mouse'])
>>> df  
       a   b
  cat  1   3
  dog  2   4
  dog  2   4
mouse  3   4
>>> df.kurt()  
a   1.5
b   4.0
dtype: float64

With axis=None

>>> df.kurt(axis=None).round(6)  
-0.988693

Using axis=1

>>> df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [3, 4], 'd': [1, 2]},  
...                   index=['cat', 'dog'])
>>> df.kurt(axis=1)  
cat   -6.0
dog   -6.0
dtype: float64
property loc#

Purely label-location based indexer for selection by label.

>>> df.loc["b"]  
>>> df.loc["b":"d"]  
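
A minimal sketch of label-based selection (the frame and output here are illustrative; endpoints of a .loc slice are inclusive):

>>> import cudf
>>> import dask_cudf
>>> gdf = cudf.DataFrame({"x": [1, 2, 3, 4]}, index=list("abcd"))
>>> ddf = dask_cudf.from_cudf(gdf, npartitions=2)
>>> ddf.loc["b":"d"].compute()  
   x
b  2
c  3
d  4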
map_overlap(func, before, after, *args, meta=_NoDefault.no_default, enforce_metadata=True, transform_divisions=True, clear_divisions=False, align_dataframes=False, **kwargs)#

Apply a function to each partition, sharing rows with adjacent partitions.

This can be useful for implementing windowing functions such as df.rolling(...).mean() or df.diff().

Parameters:
funcfunction

Function applied to each partition.

beforeint, timedelta or string timedelta

The rows to prepend to partition i from the end of partition i - 1.

afterint, timedelta or string timedelta

The rows to append to partition i from the beginning of partition i + 1.

args, kwargs

Positional and keyword arguments to pass to the function. Positional arguments are computed on a per-partition basis, while keyword arguments are shared across all partitions. The partition itself will be the first positional argument, with all other arguments passed after. Arguments can be Scalar, Delayed, or regular Python objects. DataFrame-like args (both dask and pandas) will be repartitioned to align (if necessary) before applying the function; see align_dataframes to control this behavior.

enforce_metadatabool, default True

Whether to enforce at runtime that the structure of the DataFrame produced by func actually matches the structure of meta. This will rename and reorder columns for each partition, and will raise an error if this doesn’t work, but it won’t raise if dtypes don’t match.

transform_divisionsbool, default True

Whether to apply the function onto the divisions and apply those transformed divisions to the output.

align_dataframesbool, default False

Whether to repartition DataFrame- or Series-like args (both dask and pandas) so their divisions align before applying the function. This requires all inputs to have known divisions. Single-partition inputs will be split into multiple partitions.

If False, all inputs must have either the same number of partitions or a single partition. Single-partition inputs will be broadcast to every partition of multi-partition inputs.

metapd.DataFrame, pd.Series, dict, iterable, tuple, optional

An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.

Notes

Given positive integers before and after, and a function func, map_overlap does the following:

  1. Prepend before rows to each partition i from the end of partition i - 1. The first partition has no rows prepended.

  2. Append after rows to each partition i from the beginning of partition i + 1. The last partition has no rows appended.

  3. Apply func to each partition, passing in any extra args and kwargs if provided.

  4. Trim before rows from the beginning of all but the first partition.

  5. Trim after rows from the end of all but the last partition.

Examples

Given a DataFrame, Series, or Index, such as:

>>> import pandas as pd
>>> import dask_expr as dd
>>> df = pd.DataFrame({'x': [1, 2, 4, 7, 11],
...                    'y': [1., 2., 3., 4., 5.]})
>>> ddf = dd.from_pandas(df, npartitions=2)

A rolling sum with a trailing moving window of size 2 can be computed by overlapping 2 rows before each partition, and then mapping calls to df.rolling(2).sum():

>>> ddf.compute()
    x    y
0   1  1.0
1   2  2.0
2   4  3.0
3   7  4.0
4  11  5.0
>>> ddf.map_overlap(lambda df: df.rolling(2).sum(), 2, 0).compute()
      x    y
0   NaN  NaN
1   3.0  3.0
2   6.0  5.0
3  11.0  7.0
4  18.0  9.0

The pandas diff method computes a discrete difference shifted by a number of periods (can be positive or negative). This can be implemented by mapping calls to df.diff to each partition after prepending/appending that many rows, depending on sign:

>>> def diff(df, periods=1):
...     before, after = (periods, 0) if periods > 0 else (0, -periods)
...     return df.map_overlap(lambda df, periods=1: df.diff(periods),
...                           before, after, periods=periods)
>>> diff(ddf, 1).compute()
     x    y
0  NaN  NaN
1  1.0  1.0
2  2.0  1.0
3  3.0  1.0
4  4.0  1.0

If you have a DatetimeIndex, you can use a pd.Timedelta for time-based windows or any pd.Timedelta convertible string:

>>> ts = pd.Series(range(10), index=pd.date_range('2017', periods=10))
>>> dts = dd.from_pandas(ts, npartitions=2)
>>> dts.map_overlap(lambda df: df.rolling('2D').sum(),
...                 pd.Timedelta('2D'), 0).compute()
2017-01-01     0.0
2017-01-02     1.0
2017-01-03     3.0
2017-01-04     5.0
2017-01-05     7.0
2017-01-06     9.0
2017-01-07    11.0
2017-01-08    13.0
2017-01-09    15.0
2017-01-10    17.0
Freq: D, dtype: float64
map_partitions(func, *args, meta=_NoDefault.no_default, enforce_metadata=True, transform_divisions=True, clear_divisions=False, align_dataframes=False, parent_meta=None, **kwargs)#

Apply a Python function to each partition

Parameters:
funcfunction

Function applied to each partition.

args, kwargs

Arguments and keywords to pass to the function. Arguments and keywords may contain FrameBase or regular Python objects. DataFrame-like args (both dask and pandas) must have the same number of partitions as self, or comprise a single partition. Keyword arguments, single-partition arguments, and general Python-object arguments will be broadcast to all partitions.

enforce_metadatabool, default True

Whether to enforce at runtime that the structure of the DataFrame produced by func actually matches the structure of meta. This will rename and reorder columns for each partition, and will raise an error if this doesn’t work, but it won’t raise if dtypes don’t match.

transform_divisionsbool, default True

Whether to apply the function onto the divisions and apply those transformed divisions to the output.

clear_divisionsbool, default False

Whether divisions should be cleared. If True, transform_divisions will be ignored.

metapd.DataFrame, pd.Series, dict, iterable, tuple, optional

An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.

Examples

Given a DataFrame, Series, or Index, such as:

>>> import pandas as pd
>>> import dask_expr as dd
>>> df = pd.DataFrame({'x': [1, 2, 3, 4, 5],
...                    'y': [1., 2., 3., 4., 5.]})
>>> ddf = dd.from_pandas(df, npartitions=2)

One can use map_partitions to apply a function on each partition. Extra arguments and keywords can optionally be provided, and will be passed to the function after the partition.

Here we apply a function with arguments and keywords to a DataFrame, resulting in a Series:

>>> def myadd(df, a, b=1):
...     return df.x + df.y + a + b
>>> res = ddf.map_partitions(myadd, 1, b=2)
>>> res.dtype
dtype('float64')

Here we apply a function to a Series resulting in a Series:

>>> res = ddf.x.map_partitions(lambda x: len(x)) # ddf.x is a Dask Series Structure
>>> res.dtype
dtype('int64')

By default, dask tries to infer the output metadata by running your provided function on some fake data. This works well in many cases, but can sometimes be expensive, or even fail. To avoid this, you can manually specify the output metadata with the meta keyword. This can be specified in many forms, for more information see dask.dataframe.utils.make_meta.

Here we specify the output is a Series with no name, and dtype float64:

>>> res = ddf.map_partitions(myadd, 1, b=2, meta=(None, 'f8'))

Here we map a function that takes in a DataFrame, and returns a DataFrame with a new column:

>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y))
>>> res.dtypes
x      int64
y    float64
z    float64
dtype: object

As before, the output metadata can also be specified manually. This time we pass in a dict, as the output is a DataFrame:

>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y),
...                          meta={'x': 'i8', 'y': 'f8', 'z': 'f8'})

In the case where the metadata doesn’t change, you can also pass in the object itself directly:

>>> res = ddf.map_partitions(lambda df: df.head(), meta=ddf)

Also note that the index and divisions are assumed to remain unchanged. If the function you’re mapping changes the index/divisions, you’ll need to pass clear_divisions=True.

>>> ddf.map_partitions(func, clear_divisions=True)  

Your map function gets information about where it is in the dataframe by accepting a special partition_info keyword argument.

>>> def func(partition, partition_info=None):
...     pass

This will receive the following information:

>>> partition_info  
{'number': 1, 'division': 3}

For each argument or keyword argument that is a dask dataframe, you will receive the number (n), which represents the nth partition of the dataframe, and the division (the first index value in the partition). If divisions are not known (for instance, if the index is not sorted) then you will get None as the division.

mask(cond, other=nan)#

Replace values where the condition is True.

This docstring was copied from pandas.core.frame.DataFrame.mask.

Some inconsistencies with the Dask version may exist.

Parameters:
condbool Series/DataFrame, array-like, or callable

Where cond is False, keep the original value. Where True, replace with corresponding value from other. If cond is callable, it is computed on the Series/DataFrame and should return boolean Series/DataFrame or array. The callable must not change input Series/DataFrame (though pandas doesn’t check it).

otherscalar, Series/DataFrame, or callable

Entries where cond is True are replaced with corresponding value from other. If other is callable, it is computed on the Series/DataFrame and should return scalar or Series/DataFrame. The callable must not change input Series/DataFrame (though pandas doesn’t check it). If not specified, entries will be filled with the corresponding NULL value (np.nan for numpy dtypes, pd.NA for extension dtypes).

inplacebool, default False (Not supported in Dask)

Whether to perform the operation in place on the data.

axisint, default None (Not supported in Dask)

Alignment axis if needed. For Series this parameter is unused and defaults to 0.

levelint, default None (Not supported in Dask)

Alignment level if needed.

Returns:
Same type as caller or None if inplace=True.

See also

DataFrame.where()

Return an object of same shape as self.

Notes

The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if cond is False the element is used; otherwise the corresponding element from the DataFrame other is used. If the axis of other does not align with axis of cond Series/DataFrame, the misaligned index positions will be filled with True.

The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2).

For further details and examples see the mask documentation in indexing.

The dtype of the object takes precedence. The fill value is cast to the object’s dtype, if this can be done losslessly.

Examples

>>> s = pd.Series(range(5))  
>>> s.where(s > 0)  
0    NaN
1    1.0
2    2.0
3    3.0
4    4.0
dtype: float64
>>> s.mask(s > 0)  
0    0.0
1    NaN
2    NaN
3    NaN
4    NaN
dtype: float64
>>> s = pd.Series(range(5))  
>>> t = pd.Series([True, False])  
>>> s.where(t, 99)  
0     0
1    99
2    99
3    99
4    99
dtype: int64
>>> s.mask(t, 99)  
0    99
1     1
2    99
3    99
4    99
dtype: int64
>>> s.where(s > 1, 10)  
0    10
1    10
2    2
3    3
4    4
dtype: int64
>>> s.mask(s > 1, 10)  
0     0
1     1
2    10
3    10
4    10
dtype: int64
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])  
>>> df  
   A  B
0  0  1
1  2  3
2  4  5
3  6  7
4  8  9
>>> m = df % 3 == 0  
>>> df.where(m, -df)  
   A  B
0  0 -1
1 -2  3
2 -4 -5
3  6 -7
4 -8  9
>>> df.where(m, -df) == np.where(m, df, -df)  
      A     B
0  True  True
1  True  True
2  True  True
3  True  True
4  True  True
>>> df.where(m, -df) == df.mask(~m, -df)  
      A     B
0  True  True
1  True  True
2  True  True
3  True  True
4  True  True
max(axis=0, skipna=True, numeric_only=False, split_every=False, **kwargs)#

Return the maximum of the values over the requested axis.

This docstring was copied from pandas.core.frame.DataFrame.max.

Some inconsistencies with the Dask version may exist.

If you want the index of the maximum, use idxmax. This is the equivalent of the numpy.ndarray method argmax.

Parameters:
axis{index (0), columns (1)}

Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.

For DataFrames, specifying axis=None will apply the aggregation across both axes.

New in version 2.0.0.

skipnabool, default True

Exclude NA/null values when computing the result.

numeric_onlybool, default False

Include only float, int, boolean columns. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns:
Series or scalar

See also

Series.sum

Return the sum.

Series.min

Return the minimum.

Series.max

Return the maximum.

Series.idxmin

Return the index of the minimum.

Series.idxmax

Return the index of the maximum.

DataFrame.sum

Return the sum over the requested axis.

DataFrame.min

Return the minimum over the requested axis.

DataFrame.max

Return the maximum over the requested axis.

DataFrame.idxmin

Return the index of the minimum over the requested axis.

DataFrame.idxmax

Return the index of the maximum over the requested axis.

Examples

>>> idx = pd.MultiIndex.from_arrays([  
...     ['warm', 'warm', 'cold', 'cold'],
...     ['dog', 'falcon', 'fish', 'spider']],
...     names=['blooded', 'animal'])
>>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)  
>>> s  
blooded  animal
warm     dog       4
         falcon    2
cold     fish      0
         spider    8
Name: legs, dtype: int64
>>> s.max()  
8
mean(axis=0, skipna=True, numeric_only=False, split_every=False, **kwargs)#

Return the mean of the values over the requested axis.

This docstring was copied from pandas.core.frame.DataFrame.mean.

Some inconsistencies with the Dask version may exist.

Parameters:
axis{index (0), columns (1)}

Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.

For DataFrames, specifying axis=None will apply the aggregation across both axes.

New in version 2.0.0.

skipnabool, default True

Exclude NA/null values when computing the result.

numeric_onlybool, default False

Include only float, int, boolean columns. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns:
Series or scalar

Examples

>>> s = pd.Series([1, 2, 3])  
>>> s.mean()  
2.0

With a DataFrame

>>> df = pd.DataFrame({'a': [1, 2], 'b': [2, 3]}, index=['tiger', 'zebra'])  
>>> df  
       a   b
tiger  1   2
zebra  2   3
>>> df.mean()  
a   1.5
b   2.5
dtype: float64

Using axis=1

>>> df.mean(axis=1)  
tiger   1.5
zebra   2.5
dtype: float64

In this case, numeric_only should be set to True to avoid getting an error.

>>> df = pd.DataFrame({'a': [1, 2], 'b': ['T', 'Z']},  
...                   index=['tiger', 'zebra'])
>>> df.mean(numeric_only=True)  
a   1.5
dtype: float64
median(axis=0, numeric_only=False)#

Return the median of the values over the requested axis.

This docstring was copied from pandas.core.frame.DataFrame.median.

Some inconsistencies with the Dask version may exist.

Parameters:
axis{index (0), columns (1)}

Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.

For DataFrames, specifying axis=None will apply the aggregation across both axes.

New in version 2.0.0.

skipnabool, default True (Not supported in Dask)

Exclude NA/null values when computing the result.

numeric_onlybool, default False

Include only float, int, boolean columns. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns:
Series or scalar

Examples

>>> s = pd.Series([1, 2, 3])  
>>> s.median()  
2.0

With a DataFrame

>>> df = pd.DataFrame({'a': [1, 2], 'b': [2, 3]}, index=['tiger', 'zebra'])  
>>> df  
       a   b
tiger  1   2
zebra  2   3
>>> df.median()  
a   1.5
b   2.5
dtype: float64

Using axis=1

>>> df.median(axis=1)  
tiger   1.5
zebra   2.5
dtype: float64

In this case, numeric_only should be set to True to avoid getting an error.

>>> df = pd.DataFrame({'a': [1, 2], 'b': ['T', 'Z']},  
...                   index=['tiger', 'zebra'])
>>> df.median(numeric_only=True)  
a   1.5
dtype: float64
median_approximate(axis=0, method='default', numeric_only=False)#

Return the approximate median of the values over the requested axis.

Parameters:
axis{0, 1, “index”, “columns”} (default 0)

0 or "index" for row-wise, 1 or "columns" for column-wise

method{‘default’, ‘tdigest’, ‘dask’}, optional

Which method to use. By default, Dask’s internal custom algorithm ("dask") will be used. If set to "tdigest", t-digest will be used for floats and ints, falling back to "dask" otherwise.
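
For example, a minimal sketch on a synthetic column (the result is approximate, so the exact value shown is illustrative):

>>> import cudf
>>> import dask_cudf
>>> ddf = dask_cudf.from_cudf(cudf.DataFrame({"a": range(100)}), npartitions=4)
>>> ddf.median_approximate().compute()  
a    49.5
dtype: float64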

melt(id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None)#

Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.

This docstring was copied from pandas.core.frame.DataFrame.melt.

Some inconsistencies with the Dask version may exist.

This function is useful to massage a DataFrame into a format where one or more columns are identifier variables (id_vars), while all other columns, considered measured variables (value_vars), are “unpivoted” to the row axis, leaving just two non-identifier columns, ‘variable’ and ‘value’.

Parameters:
id_varsscalar, tuple, list, or ndarray, optional

Column(s) to use as identifier variables.

value_varsscalar, tuple, list, or ndarray, optional

Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.

var_namescalar, default None

Name to use for the ‘variable’ column. If None it uses frame.columns.name or ‘variable’.

value_namescalar, default ‘value’

Name to use for the ‘value’ column, can’t be an existing column label.

col_levelscalar, optional

If columns are a MultiIndex then use this level to melt.

ignore_indexbool, default True (Not supported in Dask)

If True, original index is ignored. If False, the original index is retained. Index labels will be repeated as necessary.

Returns:
DataFrame

Unpivoted DataFrame.

See also

melt

Identical method.

pivot_table

Create a spreadsheet-style pivot table as a DataFrame.

DataFrame.pivot

Return reshaped DataFrame organized by given index / column values.

DataFrame.explode

Explode a DataFrame from list-like columns to long format.

Notes

Reference the user guide for more examples.

Examples

>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},  
...                    'B': {0: 1, 1: 3, 2: 5},
...                    'C': {0: 2, 1: 4, 2: 6}})
>>> df  
   A  B  C
0  a  1  2
1  b  3  4
2  c  5  6
>>> df.melt(id_vars=['A'], value_vars=['B'])  
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
>>> df.melt(id_vars=['A'], value_vars=['B', 'C'])  
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
3  a        C      2
4  b        C      4
5  c        C      6

The names of ‘variable’ and ‘value’ columns can be customized:

>>> df.melt(id_vars=['A'], value_vars=['B'],  
...         var_name='myVarname', value_name='myValname')
   A myVarname  myValname
0  a         B          1
1  b         B          3
2  c         B          5

Original index values can be kept around:

>>> df.melt(id_vars=['A'], value_vars=['B', 'C'], ignore_index=False)  
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
0  a        C      2
1  b        C      4
2  c        C      6

If you have multi-index columns:

>>> df.columns = [list('ABC'), list('DEF')]  
>>> df  
   A  B  C
   D  E  F
0  a  1  2
1  b  3  4
2  c  5  6
>>> df.melt(col_level=0, id_vars=['A'], value_vars=['B'])  
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
>>> df.melt(id_vars=[('A', 'D')], value_vars=[('B', 'E')])  
  (A, D) variable_0 variable_1  value
0      a          B          E      1
1      b          B          E      3
2      c          B          E      5
memory_usage(deep=False, index=True)#

Return the memory usage of each column in bytes.

This docstring was copied from pandas.core.frame.DataFrame.memory_usage.

Some inconsistencies with the Dask version may exist.

The memory usage can optionally include the contribution of the index and elements of object dtype.

This value is displayed in DataFrame.info by default. This can be suppressed by setting pandas.options.display.memory_usage to False.

Parameters:
indexbool, default True

Specifies whether to include the memory usage of the DataFrame’s index in returned Series. If index=True, the memory usage of the index is the first item in the output.

deepbool, default False

If True, introspect the data deeply by interrogating object dtypes for system-level memory consumption, and include it in the returned values.

Returns:
Series

A Series whose index is the original column names and whose values are the memory usage of each column in bytes.

See also

numpy.ndarray.nbytes

Total bytes consumed by the elements of an ndarray.

Series.memory_usage

Bytes consumed by a Series.

Categorical

Memory-efficient array for string values with many repeated values.

DataFrame.info

Concise summary of a DataFrame.

Notes

See the Frequently Asked Questions for more details.

Examples

>>> dtypes = ['int64', 'float64', 'complex128', 'object', 'bool']  
>>> data = dict([(t, np.ones(shape=5000, dtype=int).astype(t))  
...              for t in dtypes])
>>> df = pd.DataFrame(data)  
>>> df.head()  
   int64  float64            complex128  object  bool
0      1      1.0              1.0+0.0j       1  True
1      1      1.0              1.0+0.0j       1  True
2      1      1.0              1.0+0.0j       1  True
3      1      1.0              1.0+0.0j       1  True
4      1      1.0              1.0+0.0j       1  True
>>> df.memory_usage()  
Index           128
int64         40000
float64       40000
complex128    80000
object        40000
bool           5000
dtype: int64
>>> df.memory_usage(index=False)  
int64         40000
float64       40000
complex128    80000
object        40000
bool           5000
dtype: int64

The memory footprint of object dtype columns is ignored by default:

>>> df.memory_usage(deep=True)  
Index            128
int64          40000
float64        40000
complex128     80000
object        180000
bool            5000
dtype: int64

Use a Categorical for efficient storage of an object-dtype column with many repeated values.

>>> df['object'].astype('category').memory_usage(deep=True)  
5244
memory_usage_per_partition(index: bool = True, deep: bool = False)#

Return the memory usage of each partition

Parameters:
indexbool, default True

Specifies whether to include the memory usage of the index in returned Series.

deepbool, default False

If True, introspect the data deeply by interrogating object dtypes for system-level memory consumption, and include it in the returned values.

Returns:
Series

A Series whose index is the partition number and whose values are the memory usage of each partition in bytes.
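
A minimal sketch (byte counts depend on the backend and platform, so no output is shown):

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({"x": range(10)}), npartitions=2)
>>> ddf.memory_usage_per_partition(deep=True).compute()  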

merge(right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, suffixes=('_x', '_y'), indicator=False, shuffle_method=None, npartitions=None, broadcast=None)#

Merge the DataFrame with another DataFrame

This will merge the two datasets, either on the indices, on a certain column in each dataset, or on the index in one dataset and a column in the other.

Parameters:
right: dask.dataframe.DataFrame
how{‘left’, ‘right’, ‘outer’, ‘inner’, ‘leftsemi’}, default: ‘inner’

How to handle the operation of the two objects:

  • left: use calling frame’s index (or column if on is specified)

  • right: use other frame’s index

  • outer: form union of calling frame’s index (or column if on is specified) with other frame’s index, and sort it lexicographically

  • inner: form intersection of calling frame’s index (or column if on is specified) with other frame’s index, preserving the order of the calling frame’s index

  • leftsemi: Choose all rows in left where the join keys can be found in right. Won’t duplicate rows if the keys are duplicated in right. Drops all columns from right.

onlabel or list

Column or index level names to join on. These must be found in both DataFrames. If on is None and not merging on indexes then this defaults to the intersection of the columns in both DataFrames.

left_onlabel or list, or array-like

Column to join on in the left DataFrame. Unlike pandas, arrays and lists are only supported if their length is 1.

right_onlabel or list, or array-like

Column to join on in the right DataFrame. Unlike pandas, arrays and lists are only supported if their length is 1.

left_indexboolean, default False

Use the index from the left DataFrame as the join key.

right_indexboolean, default False

Use the index from the right DataFrame as the join key.

suffixes2-length sequence (tuple, list, …)

Suffix to apply to overlapping column names in the left and right side, respectively

indicatorboolean or string, default False

If True, adds a column to output DataFrame called “_merge” with information on the source of each row. If string, column with information on source of each row will be added to output DataFrame, and column will be named value of string. Information column is Categorical-type and takes on a value of “left_only” for observations whose merge key only appears in left DataFrame, “right_only” for observations whose merge key only appears in right DataFrame, and “both” if the observation’s merge key is found in both.

npartitions: int or None, optional

The ideal number of output partitions. This is only utilised when performing a hash_join (merging on columns only). If None then npartitions = max(lhs.npartitions, rhs.npartitions). Default is None.

shuffle_method: {‘disk’, ‘tasks’, ‘p2p’}, optional

Either 'disk' for single-node operation, or 'tasks' or 'p2p' for distributed operation. Will be inferred by your current scheduler.

broadcast: boolean or float, optional

Whether to use a broadcast-based join in lieu of a shuffle-based join for supported cases. By default, a simple heuristic will be used to select the underlying algorithm. If a floating-point value is specified, that number will be used as the broadcast_bias within the simple heuristic (a large number makes Dask more likely to choose the broadcast_join code path). See broadcast_join for more information.

Notes

There are three ways to join dataframes:

  1. Joining on indices. In this case the divisions are aligned using the function dask.dataframe.multi.align_partitions. Afterwards, each partition is merged with the pandas merge function.

  2. Joining one on index and one on column. In this case the divisions of the dataframe merged by index (d_i) are used to divide the column-merged dataframe (d_c) using dask.dataframe.multi.rearrange_by_divisions. In this case the merged dataframe (d_m) has exactly the same divisions as (d_i). This can lead to issues if you merge multiple rows from (d_c) to one row in (d_i).

  3. Joining both on columns. In this case a hash join is performed using dask.dataframe.multi.hash_join.

In some cases, you may see a MemoryError if the merge operation requires an internal shuffle, because shuffling places all rows that have the same index in the same partition. To avoid this error, make sure all rows with the same on-column value can fit on a single partition.
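
A minimal column-on-column sketch (a hash join; row order in the result is not guaranteed after the shuffle):

>>> import cudf
>>> import dask_cudf
>>> left = dask_cudf.from_cudf(
...     cudf.DataFrame({"k": [0, 1, 2], "a": [1.0, 2.0, 3.0]}), npartitions=2)
>>> right = dask_cudf.from_cudf(
...     cudf.DataFrame({"k": [1, 2, 3], "b": [10, 20, 30]}), npartitions=2)
>>> left.merge(right, on="k", how="inner").compute()  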

min(axis=0, skipna=True, numeric_only=False, split_every=False, **kwargs)#

Return the minimum of the values over the requested axis.

This docstring was copied from pandas.core.frame.DataFrame.min.

Some inconsistencies with the Dask version may exist.

If you want the index of the minimum, use idxmin. This is the equivalent of the numpy.ndarray method argmin.

Parameters:
axis{index (0), columns (1)}

Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.

For DataFrames, specifying axis=None will apply the aggregation across both axes.

New in version 2.0.0.

skipnabool, default True

Exclude NA/null values when computing the result.

numeric_onlybool, default False

Include only float, int, boolean columns. Not implemented for Series.

**kwargs

Additional keyword arguments to be passed to the function.

Returns:
Series or scalar

See also

Series.sum

Return the sum.

Series.min

Return the minimum.

Series.max

Return the maximum.

Series.idxmin

Return the index of the minimum.

Series.idxmax

Return the index of the maximum.

DataFrame.sum

Return the sum over the requested axis.

DataFrame.min

Return the minimum over the requested axis.

DataFrame.max

Return the maximum over the requested axis.

DataFrame.idxmin

Return the index of the minimum over the requested axis.

DataFrame.idxmax

Return the index of the maximum over the requested axis.

Examples

>>> idx = pd.MultiIndex.from_arrays([  
...     ['warm', 'warm', 'cold', 'cold'],
...     ['dog', 'falcon', 'fish', 'spider']],
...     names=['blooded', 'animal'])
>>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx)  
>>> s  
blooded  animal
warm     dog       4
         falcon    2
cold     fish      0
         spider    8
Name: legs, dtype: int64
>>> s.min()  
0
mode(dropna=True, split_every=False, numeric_only=False)#

Get the mode(s) of each element along the selected axis.

This docstring was copied from pandas.core.frame.DataFrame.mode.

Some inconsistencies with the Dask version may exist.

The mode of a set of values is the value that appears most often. It can be multiple values.

Parameters:
axis{0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)

The axis to iterate over while searching for the mode:

  • 0 or ‘index’ : get mode of each column

  • 1 or ‘columns’ : get mode of each row.

numeric_onlybool, default False

If True, only apply to numeric columns.

dropnabool, default True

Don’t consider counts of NaN/NaT.

Returns:
DataFrame

The modes of each column or row.

See also

Series.mode

Return the highest frequency value in a Series.

Series.value_counts

Return the counts of values in a Series.

Examples

>>> df = pd.DataFrame([('bird', 2, 2),  
...                    ('mammal', 4, np.nan),
...                    ('arthropod', 8, 0),
...                    ('bird', 2, np.nan)],
...                   index=('falcon', 'horse', 'spider', 'ostrich'),
...                   columns=('species', 'legs', 'wings'))
>>> df  
           species  legs  wings
falcon        bird     2    2.0
horse       mammal     4    NaN
spider   arthropod     8    0.0
ostrich       bird     2    NaN

By default, missing values are not considered, and the modes of wings are both 0 and 2. Because the resulting DataFrame has two rows, the second row of species and legs contains NaN.

>>> df.mode()  
  species  legs  wings
0    bird   2.0    0.0
1     NaN   NaN    2.0

Setting dropna=False, NaN values are considered and they can be the mode (as for wings).

>>> df.mode(dropna=False)  
  species  legs  wings
0    bird     2    NaN

Setting numeric_only=True, only the mode of numeric columns is computed, and columns of other types are ignored.

>>> df.mode(numeric_only=True)  
   legs  wings
0   2.0    0.0
1   NaN    2.0

To compute the mode over columns and not rows, use the axis parameter:

>>> df.mode(axis='columns', numeric_only=True)  
           0    1
falcon   2.0  NaN
horse    4.0  NaN
spider   0.0  8.0
ostrich  2.0  NaN
property ndim#

Return dimensionality

nlargest(n=5, columns=None, split_every=None)#

Return the first n rows ordered by columns in descending order.

This docstring was copied from pandas.core.frame.DataFrame.nlargest.

Some inconsistencies with the Dask version may exist.

Return the first n rows with the largest values in columns, in descending order. The columns that are not specified are returned as well, but not used for ordering.

This method is equivalent to df.sort_values(columns, ascending=False).head(n), but more performant.

Parameters:
nint

Number of rows to return.

columnslabel or list of labels

Column label(s) to order by.

keep{‘first’, ‘last’, ‘all’}, default ‘first’ (Not supported in Dask)

Where there are duplicate values:

  • first : prioritize the first occurrence(s)

  • last : prioritize the last occurrence(s)

  • all : keep all the ties of the smallest item even if it means selecting more than n items.

Returns:
DataFrame

The first n rows ordered by the given columns in descending order.

See also

DataFrame.nsmallest

Return the first n rows ordered by columns in ascending order.

DataFrame.sort_values

Sort DataFrame by the values.

DataFrame.head

Return the first n rows without re-ordering.

Notes

This function cannot be used with all column types. For example, when specifying columns with object or category dtypes, TypeError is raised.

Examples

>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,  
...                                   434000, 434000, 337000, 11300,
...                                   11300, 11300],
...                    'GDP': [1937894, 2583560 , 12011, 4520, 12128,
...                            17036, 182, 38, 311],
...                    'alpha-2': ["IT", "FR", "MT", "MV", "BN",
...                                "IS", "NR", "TV", "AI"]},
...                   index=["Italy", "France", "Malta",
...                          "Maldives", "Brunei", "Iceland",
...                          "Nauru", "Tuvalu", "Anguilla"])
>>> df  
          population      GDP alpha-2
Italy       59000000  1937894      IT
France      65000000  2583560      FR
Malta         434000    12011      MT
Maldives      434000     4520      MV
Brunei        434000    12128      BN
Iceland       337000    17036      IS
Nauru          11300      182      NR
Tuvalu         11300       38      TV
Anguilla       11300      311      AI

In the following example, we will use nlargest to select the three rows having the largest values in column “population”.

>>> df.nlargest(3, 'population')  
        population      GDP alpha-2
France    65000000  2583560      FR
Italy     59000000  1937894      IT
Malta       434000    12011      MT

When using keep='last', ties are resolved in reverse order:

>>> df.nlargest(3, 'population', keep='last')  
        population      GDP alpha-2
France    65000000  2583560      FR
Italy     59000000  1937894      IT
Brunei      434000    12128      BN

When using keep='all', the number of elements kept can exceed n if there are duplicate values for the smallest element; all the ties are kept:

>>> df.nlargest(3, 'population', keep='all')  
          population      GDP alpha-2
France      65000000  2583560      FR
Italy       59000000  1937894      IT
Malta         434000    12011      MT
Maldives      434000     4520      MV
Brunei        434000    12128      BN

However, nlargest does not keep n distinct largest elements:

>>> df.nlargest(5, 'population', keep='all')  
          population      GDP alpha-2
France      65000000  2583560      FR
Italy       59000000  1937894      IT
Malta         434000    12011      MT
Maldives      434000     4520      MV
Brunei        434000    12128      BN

To order by the largest values in column “population” and then “GDP”, we can specify multiple columns like in the next example.

>>> df.nlargest(3, ['population', 'GDP'])  
        population      GDP alpha-2
France    65000000  2583560      FR
Italy     59000000  1937894      IT
Brunei      434000    12128      BN
notnull()#

DataFrame.notnull is an alias for DataFrame.notna.

This docstring was copied from pandas.core.frame.DataFrame.notnull.

Some inconsistencies with the Dask version may exist.

Detect existing (non-missing) values.

Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN, get mapped to False values.

Returns:
DataFrame

Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.

See also

DataFrame.notnull

Alias of notna.

DataFrame.isna

Boolean inverse of notna.

DataFrame.dropna

Omit axes labels with missing values.

notna

Top-level notna.

Examples

Show which entries in a DataFrame are not NA.

>>> df = pd.DataFrame(dict(age=[5, 6, np.nan],  
...                        born=[pd.NaT, pd.Timestamp('1939-05-27'),
...                              pd.Timestamp('1940-04-25')],
...                        name=['Alfred', 'Batman', ''],
...                        toy=[None, 'Batmobile', 'Joker']))
>>> df  
   age       born    name        toy
0  5.0        NaT  Alfred       None
1  6.0 1939-05-27  Batman  Batmobile
2  NaN 1940-04-25              Joker
>>> df.notna()  
     age   born  name    toy
0   True  False  True  False
1   True   True  True   True
2  False   True  True   True

Show which entries in a Series are not NA.

>>> ser = pd.Series([5, 6, np.nan])  
>>> ser  
0    5.0
1    6.0
2    NaN
dtype: float64
>>> ser.notna()  
0     True
1     True
2    False
dtype: bool
property npartitions#

Return number of partitions
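
For example (a minimal sketch with a synthetic frame):

>>> import cudf
>>> import dask_cudf
>>> ddf = dask_cudf.from_cudf(cudf.DataFrame({"x": range(8)}), npartitions=4)
>>> ddf.npartitions  
4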

nsmallest(n=5, columns=None, split_every=None)#

Return the first n rows ordered by columns in ascending order.

This docstring was copied from pandas.core.frame.DataFrame.nsmallest.

Some inconsistencies with the Dask version may exist.

Return the first n rows with the smallest values in columns, in ascending order. The columns that are not specified are returned as well, but not used for ordering.

This method is equivalent to df.sort_values(columns, ascending=True).head(n), but more performant.

Parameters:
nint

Number of items to retrieve.

columnslist or str

Column name or names to order by.

keep{‘first’, ‘last’, ‘all’}, default ‘first’ (Not supported in Dask)

Where there are duplicate values:

  • first : take the first occurrence.

  • last : take the last occurrence.

  • all : keep all the ties of the largest item even if it means selecting more than n items.

Returns:
DataFrame

See also

DataFrame.nlargest

Return the first n rows ordered by columns in descending order.

DataFrame.sort_values

Sort DataFrame by the values.

DataFrame.head

Return the first n rows without re-ordering.

Examples

>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,  
...                                   434000, 434000, 337000, 337000,
...                                   11300, 11300],
...                    'GDP': [1937894, 2583560 , 12011, 4520, 12128,
...                            17036, 182, 38, 311],
...                    'alpha-2': ["IT", "FR", "MT", "MV", "BN",
...                                "IS", "NR", "TV", "AI"]},
...                   index=["Italy", "France", "Malta",
...                          "Maldives", "Brunei", "Iceland",
...                          "Nauru", "Tuvalu", "Anguilla"])
>>> df  
          population      GDP alpha-2
Italy       59000000  1937894      IT
France      65000000  2583560      FR
Malta         434000    12011      MT
Maldives      434000     4520      MV
Brunei        434000    12128      BN
Iceland       337000    17036      IS
Nauru         337000      182      NR
Tuvalu         11300       38      TV
Anguilla       11300      311      AI

In the following example, we will use nsmallest to select the three rows having the smallest values in column “population”.

>>> df.nsmallest(3, 'population')  
          population    GDP alpha-2
Tuvalu         11300     38      TV
Anguilla       11300    311      AI
Iceland       337000  17036      IS

When using keep='last', ties are resolved in reverse order:

>>> df.nsmallest(3, 'population', keep='last')  
          population  GDP alpha-2
Anguilla       11300  311      AI
Tuvalu         11300   38      TV
Nauru         337000  182      NR

When using keep='all', the number of elements kept can exceed n if there are duplicate values for the largest element; all the ties are kept.

>>> df.nsmallest(3, 'population', keep='all')  
          population    GDP alpha-2
Tuvalu         11300     38      TV
Anguilla       11300    311      AI
Iceland       337000  17036      IS
Nauru         337000    182      NR

However, nsmallest does not keep n distinct smallest elements:

>>> df.nsmallest(4, 'population', keep='all')  
          population    GDP alpha-2
Tuvalu         11300     38      TV
Anguilla       11300    311      AI
Iceland       337000  17036      IS
Nauru         337000    182      NR

To order by the smallest values in column “population” and then “GDP”, we can specify multiple columns like in the next example.

>>> df.nsmallest(3, ['population', 'GDP'])  
          population  GDP alpha-2
Tuvalu         11300   38      TV
Anguilla       11300  311      AI
Nauru         337000  182      NR
nunique(axis=0, dropna=True, split_every=False)#

Count number of distinct elements in specified axis.

This docstring was copied from pandas.core.frame.DataFrame.nunique.

Some inconsistencies with the Dask version may exist.

Return Series with number of distinct elements. Can ignore NaN values.

Parameters:
axis{0 or ‘index’, 1 or ‘columns’}, default 0

The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.

dropnabool, default True

Don’t include NaN in the counts.

Returns:
Series

See also

Series.nunique

Method nunique for Series.

DataFrame.count

Count non-NA cells for each column or row.

Examples

>>> df = pd.DataFrame({'A': [4, 5, 6], 'B': [4, 1, 1]})  
>>> df.nunique()  
A    3
B    2
dtype: int64
>>> df.nunique(axis=1)  
0    1
1    2
2    2
dtype: int64
nunique_approx(split_every=None)#

Approximate number of unique rows.

This method uses the HyperLogLog algorithm for cardinality estimation to compute the approximate number of unique rows. The approximate error is 0.406%.

Parameters:
split_everyint, optional

Group partitions into groups of this size while performing a tree-reduction. If set to False, no tree-reduction will be used. Default is 8.

Returns:
a float representing the approximate number of elements
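
For example, a minimal sketch (the estimate is probabilistic, so small deviations from the exact count are possible):

>>> import cudf
>>> import dask_cudf
>>> ddf = dask_cudf.from_cudf(cudf.DataFrame({"x": [1, 1, 2, 3]}), npartitions=2)
>>> ddf.nunique_approx().compute()  
3.0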
optimize(fuse: bool = True)#

Optimizes the DataFrame.

Runs the optimizer with all steps over the DataFrame and wraps the result in a new DataFrame collection. Only use this method if you want to analyze the optimized expression.

Parameters:
fuse: bool, default True

Whether to fuse the expression tree after running the optimizer. It is often easier to look at the non-fused expression when analyzing the result.

Returns:
The optimized Dask Dataframe
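
A minimal sketch of inspecting an optimized query (the printed expression tree is illustrative and depends on the query):

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({"x": range(10)}), npartitions=2)
>>> optimized = (ddf[["x"]] + 1).optimize(fuse=False)
>>> optimized.pprint()  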
property partitions#

Slice dataframe by partitions

This allows partitionwise slicing of a Dask Dataframe. You can perform normal Numpy-style slicing, but now rather than slice elements of the array you slice along partitions so, for example, df.partitions[:5] produces a new Dask Dataframe of the first five partitions. Valid indexers are integers, sequences of integers, slices, or boolean masks.

Returns:
A Dask DataFrame

Examples

>>> df.partitions[0]  
>>> df.partitions[:3]  
>>> df.partitions[::10]  
persist(fuse=True, **kwargs)#

Persist this dask collection into memory

This turns a lazy Dask collection into a Dask collection with the same metadata, but now with the results fully computed or actively computing in the background.

The action of this function differs significantly depending on the active task scheduler. If the task scheduler supports asynchronous computing, as is the case with the dask.distributed scheduler, then persist will return immediately and the return value’s task graph will contain Dask Future objects. However, if the task scheduler only supports blocking computation, then the call to persist will block and the return value’s task graph will contain concrete Python results.

This function is particularly useful when using distributed systems, because the results will be kept in distributed memory, rather than returned to the local process as with compute.

Parameters:
schedulerstring, optional

Which scheduler to use like “threads”, “synchronous” or “processes”. If not provided, the default is to check the global settings first, and then fall back to the collection defaults.

optimize_graphbool, optional

If True [default], the graph is optimized before computation. Otherwise the graph is run as is. This can be useful for debugging.

**kwargs

Extra keywords to forward to the scheduler function.

Returns:
New dask collections backed by in-memory data

See also

dask.persist
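
A minimal sketch with the distributed scheduler (assumes a dask.distributed Client can be started locally):

>>> from dask.distributed import Client
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> client = Client()  
>>> ddf = dd.from_pandas(pd.DataFrame({"x": range(10)}), npartitions=2)
>>> ddf = ddf[ddf.x > 3].persist()  # results now held in distributed memory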
pipe(func, *args, **kwargs)#

Apply chainable functions that expect Series or DataFrames.

This docstring was copied from pandas.core.frame.DataFrame.pipe.

Some inconsistencies with the Dask version may exist.

Parameters:
funcfunction

Function to apply to the Series/DataFrame. args, and kwargs are passed into func. Alternatively a (callable, data_keyword) tuple where data_keyword is a string indicating the keyword of callable that expects the Series/DataFrame.

*argsiterable, optional

Positional arguments passed into func.

**kwargsmapping, optional

A dictionary of keyword arguments passed into func.

Returns:
the return type of func.

See also

DataFrame.apply

Apply a function along input axis of DataFrame.

DataFrame.map

Apply a function elementwise on a whole DataFrame.

Series.map

Apply a mapping correspondence on a Series.

Notes

Use .pipe when chaining together functions that expect Series, DataFrames or GroupBy objects.

Examples

Constructing an income DataFrame from a dictionary.

>>> data = [[8000, 1000], [9500, np.nan], [5000, 2000]]  
>>> df = pd.DataFrame(data, columns=['Salary', 'Others'])  
>>> df  
   Salary  Others
0    8000  1000.0
1    9500     NaN
2    5000  2000.0

Functions that perform tax reductions on an income DataFrame.

>>> def subtract_federal_tax(df):  
...     return df * 0.9
>>> def subtract_state_tax(df, rate):  
...     return df * (1 - rate)
>>> def subtract_national_insurance(df, rate, rate_increase):  
...     new_rate = rate + rate_increase
...     return df * (1 - new_rate)

Instead of writing

>>> subtract_national_insurance(  
...     subtract_state_tax(subtract_federal_tax(df), rate=0.12),
...     rate=0.05,
...     rate_increase=0.02)  

You can write

>>> (  
...     df.pipe(subtract_federal_tax)
...     .pipe(subtract_state_tax, rate=0.12)
...     .pipe(subtract_national_insurance, rate=0.05, rate_increase=0.02)
... )
    Salary   Others
0  5892.48   736.56
1  6997.32      NaN
2  3682.80  1473.12

If you have a function that takes the data as (say) the second argument, pass a tuple indicating which keyword expects the data. For example, suppose national_insurance takes its data as df in the second argument:

>>> def subtract_national_insurance(rate, df, rate_increase):  
...     new_rate = rate + rate_increase
...     return df * (1 - new_rate)
>>> (  
...     df.pipe(subtract_federal_tax)
...     .pipe(subtract_state_tax, rate=0.12)
...     .pipe(
...         (subtract_national_insurance, 'df'),
...         rate=0.05,
...         rate_increase=0.02
...     )
... )
    Salary   Others
0  5892.48   736.56
1  6997.32      NaN
2  3682.80  1473.12
pivot_table(index, columns, values, aggfunc='mean')#

Create a spreadsheet-style pivot table as a DataFrame. Target columns must have category dtype to infer the result’s columns. index, columns, values and aggfunc must all be scalar.

Parameters:
valuesscalar

column to aggregate

indexscalar

column to be index

columnsscalar

column to be columns

aggfunc{‘mean’, ‘sum’, ‘count’}, default ‘mean’

Returns:
tableDataFrame
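
A minimal sketch (note the columns column must have category dtype, per the requirement above; output layout is illustrative):

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> df = pd.DataFrame({"k": pd.Categorical(["a", "b", "a", "b"]),
...                    "i": [0, 0, 1, 1],
...                    "v": [1.0, 2.0, 3.0, 4.0]})
>>> ddf = dd.from_pandas(df, npartitions=2)
>>> ddf.pivot_table(index="i", columns="k", values="v", aggfunc="sum").compute()  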
pop(item)#

Return item and drop from frame. Raise KeyError if not found.

This docstring was copied from pandas.core.frame.DataFrame.pop.

Some inconsistencies with the Dask version may exist.

Parameters:
itemlabel

Label of column to be popped.

Returns:
Series

Examples

>>> df = pd.DataFrame([('falcon', 'bird', 389.0),  
...                    ('parrot', 'bird', 24.0),
...                    ('lion', 'mammal', 80.5),
...                    ('monkey', 'mammal', np.nan)],
...                   columns=('name', 'class', 'max_speed'))
>>> df  
     name   class  max_speed
0  falcon    bird      389.0
1  parrot    bird       24.0
2    lion  mammal       80.5
3  monkey  mammal        NaN
>>> df.pop('class')  
0      bird
1      bird
2    mammal
3    mammal
Name: class, dtype: object
>>> df  
     name  max_speed
0  falcon      389.0
1  parrot       24.0
2    lion       80.5
3  monkey        NaN
pprint()#

Outputs a string representation of the DataFrame.

The expression is rendered as is, without optimization; run optimize manually first if necessary.

Returns:
None; the representation is printed to stdout.
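
Examples

A minimal sketch (this assumes a query-planning build where pprint is available on the collection):

import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({"x": range(10)}), npartitions=2)

# Prints the unoptimized expression tree to stdout and returns None
ddf.pprint()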
prod(axis=0, skipna=True, numeric_only=False, min_count=0, split_every=False, **kwargs)#

Return the product of the values over the requested axis.

This docstring was copied from pandas.core.frame.DataFrame.prod.

Some inconsistencies with the Dask version may exist.

Parameters:
axis{index (0), columns (1)}

Axis for the function to be applied on. For Series this parameter is unused and defaults to 0.

Warning

The behavior of DataFrame.prod with axis=None is deprecated; in a future version this will reduce over both axes and return a scalar. To retain the old behavior, pass axis=0 (or do not pass axis).

New in version 2.0.0.

skipnabool, default True

Exclude NA/null values when computing the result.

numeric_onlybool, default False

Include only float, int, boolean columns. Not implemented for Series.

min_countint, default 0

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.

**kwargs

Additional keyword arguments to be passed to the function.

Returns:
Series or scalar

See also

Series.sum

Return the sum.

Series.min

Return the minimum.

Series.max

Return the maximum.

Series.idxmin

Return the index of the minimum.

Series.idxmax

Return the index of the maximum.

DataFrame.sum

Return the sum over the requested axis.

DataFrame.min

Return the minimum over the requested axis.

DataFrame.max

Return the maximum over the requested axis.

DataFrame.idxmin

Return the index of the minimum over the requested axis.

DataFrame.idxmax

Return the index of the maximum over the requested axis.

Examples

By default, the product of an empty or all-NA Series is 1

>>> pd.Series([], dtype="float64").prod()  
1.0

This can be controlled with the min_count parameter

>>> pd.Series([], dtype="float64").prod(min_count=1)  
nan

Thanks to the skipna parameter, min_count handles all-NA and empty series identically.

>>> pd.Series([np.nan]).prod()  
1.0
>>> pd.Series([np.nan]).prod(min_count=1)  
nan
product(axis=0, skipna=True, numeric_only=False, min_count=0, split_every=False, **kwargs)#

Return the product of the values over the requested axis.

Alias for prod(). The parameters, return value, See also entries, and examples are identical to prod() above.
quantile(q=0.5, axis=0, numeric_only=False, method='default')#

Approximate row-wise and precise column-wise quantiles of DataFrame

Parameters:
qlist/array of floats, default 0.5 (50%)

Iterable of numbers ranging from 0 to 1 for the desired quantiles

axis{0, 1, ‘index’, ‘columns’} (default 0)

0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise

method{‘default’, ‘tdigest’, ‘dask’}, optional

Which method to use. By default, dask’s internal custom algorithm ('dask') is used. If set to 'tdigest', tdigest will be used for floats and ints, falling back to 'dask' otherwise.
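
Examples

A minimal sketch (hypothetical frame; pandas backend shown for illustration):

import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({"x": range(100), "y": range(100)}), npartitions=4)

# A single q yields a Series with one entry per column
print(ddf.quantile(0.5).compute())

# A list of quantiles yields a DataFrame indexed by q
print(ddf.quantile([0.25, 0.75]).compute())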

query(expr, **kwargs)#

Filter dataframe with complex expression

Blocked version of pd.DataFrame.query

Parameters:
expr: str

The query string to evaluate. You can refer to column names that are not valid Python variable names by surrounding them in backticks. Dask does not fully support referring to variables using the ‘@’ character; use f-strings or the local_dict keyword argument instead.

Notes

This is like the sequential version, except that the computation may also happen in many threads. This may conflict with numexpr, which itself uses multiple threads. We recommend that you set numexpr to use a single thread:

import numexpr
numexpr.set_num_threads(1)

Examples

>>> import pandas as pd
>>> import dask_expr as dd
>>> df = pd.DataFrame({'x': [1, 2, 1, 2],
...                    'y': [1, 2, 3, 4],
...                    'z z': [4, 3, 2, 1]})
>>> ddf = dd.from_pandas(df, npartitions=2)

Refer to column names directly:

>>> ddf.query('y > x').compute()
   x  y  z z
2  1  3    2
3  2  4    1

Refer to column name using backticks:

>>> ddf.query('`z z` > x').compute()
   x  y  z z
0  1  1    4
1  2  2    3
2  1  3    2

Refer to variable name using f-strings:

>>> value = 1
>>> ddf.query(f'x == {value}').compute()
   x  y  z z
0  1  1    4
2  1  3    2

Refer to variable name using local_dict:

>>> ddf.query('x == @value', local_dict={"value": value}).compute()
   x  y  z z
0  1  1    4
2  1  3    2
random_split(frac, random_state=None, shuffle=False)#

Pseudorandomly split dataframe into different pieces row-wise

Parameters:
fraclist

List of floats that should sum to one.

random_stateint or np.random.RandomState

If int or None, create a new RandomState with this as the seed. Otherwise draw from the passed RandomState.

shufflebool, default False

If set to True, the dataframe is shuffled (within partition) before the split.

See also

dask.DataFrame.sample

Examples

50/50 split

>>> a, b = df.random_split([0.5, 0.5])  

80/10/10 split, consistent random_state

>>> a, b, c = df.random_split([0.8, 0.1, 0.1], random_state=123)  
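
A runnable variant of the above (hypothetical frame; because the split is applied per partition, the resulting sizes are approximate):

import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({"x": range(100)}), npartitions=4)
train, test = ddf.random_split([0.8, 0.2], random_state=123)

# Sizes will be close to 80 and 20, but are not guaranteed to be exact
print(len(train.compute()), len(test.compute()))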
reduction(chunk, aggregate=None, combine=None, meta=_NoDefault.no_default, token=None, split_every=None, chunk_kwargs=None, aggregate_kwargs=None, combine_kwargs=None, **kwargs)#

Generic row-wise reductions.

Parameters:
chunkcallable

Function to operate on each partition. Should return a pandas.DataFrame, pandas.Series, or a scalar.

aggregatecallable, optional

Function to operate on the concatenated result of chunk. If not specified, defaults to chunk. Used to do the final aggregation in a tree reduction.

The input to aggregate depends on the output of chunk. If the output of chunk is a:

  • scalar: Input is a Series, with one row per partition.

  • Series: Input is a DataFrame, with one row per partition. Columns are the rows in the output series.

  • DataFrame: Input is a DataFrame, with one row per partition. Columns are the columns in the output dataframes.

Should return a pandas.DataFrame, pandas.Series, or a scalar.

combinecallable, optional

Function to operate on intermediate concatenated results of chunk in a tree-reduction. If not provided, defaults to aggregate. The input/output requirements should match that of aggregate described above.

metapd.DataFrame, pd.Series, dict, iterable, tuple, optional

An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended.
tokenstr, optional

The name to use for the output keys.

split_everyint, optional

Group partitions into groups of this size while performing a tree-reduction. If set to False, no tree-reduction will be used, and all intermediates will be concatenated and passed to aggregate. Default is 8.

chunk_kwargsdict, optional

Keyword arguments to pass on to chunk only.

aggregate_kwargsdict, optional

Keyword arguments to pass on to aggregate only.

combine_kwargsdict, optional

Keyword arguments to pass on to combine only.

kwargs

All remaining keywords will be passed to chunk, combine, and aggregate.

Examples

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> df = pd.DataFrame({'x': range(50), 'y': range(50, 100)})
>>> ddf = dd.from_pandas(df, npartitions=4)

Count the number of rows in a DataFrame. To do this, count the number of rows in each partition, then sum the results:

>>> res = ddf.reduction(lambda x: x.count(),
...                     aggregate=lambda x: x.sum())
>>> res.compute()
x    50
y    50
dtype: int64

Count the number of rows in a Series with elements greater than or equal to a value (provided via a keyword).

>>> def count_greater(x, value=0):
...     return (x >= value).sum()
>>> res = ddf.x.reduction(count_greater, aggregate=lambda x: x.sum(),
...                       chunk_kwargs={'value': 25})
>>> res.compute()
25

Aggregate both the sum and count of a Series at the same time:

>>> def sum_and_count(x):
...     return pd.Series({'count': x.count(), 'sum': x.sum()},
...                      index=['count', 'sum'])
>>> res = ddf.x.reduction(sum_and_count, aggregate=lambda x: x.sum())
>>> res.compute()
count      50
sum      1225
dtype: int64

Doing the same, but for a DataFrame. Here chunk returns a DataFrame, meaning the input to aggregate is a DataFrame with an index with non-unique entries for both ‘x’ and ‘y’. We group by the index, and sum each group to get the final result.

>>> def sum_and_count(x):
...     return pd.DataFrame({'count': x.count(), 'sum': x.sum()},
...                         columns=['count', 'sum'])
>>> res = ddf.reduction(sum_and_count,
...                     aggregate=lambda x: x.groupby(level=0).sum())
>>> res.compute()
   count   sum
x     50  1225
y     50  3725
rename(index=None, columns=None)#

Rename columns or index labels.

This docstring was copied from pandas.core.frame.DataFrame.rename.

Some inconsistencies with the Dask version may exist.

Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is. Extra labels listed don’t throw an error.

See the user guide for more.

Parameters:
mapperdict-like or function (Not supported in Dask)

Dict-like or function transformations to apply to that axis’ values. Use either mapper and axis to specify the axis to target with mapper, or index and columns.

indexdict-like or function (Not supported in Dask)

Alternative to specifying axis (mapper, axis=0 is equivalent to index=mapper).

columnsdict-like or function

Alternative to specifying axis (mapper, axis=1 is equivalent to columns=mapper).

axis{0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)

Axis to target with mapper. Can be either the axis name (‘index’, ‘columns’) or number (0, 1). The default is ‘index’.

copybool, default True (Not supported in Dask)

Also copy underlying data.

Note

The copy keyword will change behavior in pandas 3.0. Copy-on-Write will be enabled by default, which means that all methods with a copy keyword will use a lazy copy mechanism to defer the copy and ignore the copy keyword. The copy keyword will be removed in a future version of pandas.

You can already get the future behavior and improvements through enabling copy on write pd.options.mode.copy_on_write = True

inplacebool, default False (Not supported in Dask)

Whether to modify the DataFrame rather than creating a new one. If True then value of copy is ignored.

levelint or level name, default None (Not supported in Dask)

In case of a MultiIndex, only rename labels in the specified level.

errors{‘ignore’, ‘raise’}, default ‘ignore’ (Not supported in Dask)

If ‘raise’, raise a KeyError when a dict-like mapper, index, or columns contains labels that are not present in the Index being transformed. If ‘ignore’, existing keys will be renamed and extra keys will be ignored.

Returns:
DataFrame or None

DataFrame with the renamed axis labels or None if inplace=True.

Raises:
KeyError

If any of the labels is not found in the selected axis and “errors=’raise’”.

See also

DataFrame.rename_axis

Set the name of the axis.

Examples

DataFrame.rename supports two calling conventions

  • (index=index_mapper, columns=columns_mapper, ...)

  • (mapper, axis={'index', 'columns'}, ...)

We highly recommend using keyword arguments to clarify your intent.

Rename columns using a mapping:

>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})  
>>> df.rename(columns={"A": "a", "B": "c"})  
   a  c
0  1  4
1  2  5
2  3  6

Rename index using a mapping:

>>> df.rename(index={0: "x", 1: "y", 2: "z"})  
   A  B
x  1  4
y  2  5
z  3  6

Cast index labels to a different type:

>>> df.index  
RangeIndex(start=0, stop=3, step=1)
>>> df.rename(index=str).index  
Index(['0', '1', '2'], dtype='object')
>>> df.rename(columns={"A": "a", "B": "b", "C": "c"}, errors="raise")  
Traceback (most recent call last):
KeyError: ['C'] not found in axis

Using axis-style parameters:

>>> df.rename(str.lower, axis='columns')  
   a  b
0  1  4
1  2  5
2  3  6
>>> df.rename({1: 2, 2: 4}, axis='index')  
   A  B
0  1  4
2  2  5
4  3  6
rename_axis(mapper=_NoDefault.no_default, index=_NoDefault.no_default, columns=_NoDefault.no_default, axis=0)#

Set the name of the axis for the index or columns.

This docstring was copied from pandas.core.frame.DataFrame.rename_axis.

Some inconsistencies with the Dask version may exist.

Parameters:
mapperscalar, list-like, optional

Value to set the axis name attribute.

index, columnsscalar, list-like, dict-like or function, optional

A scalar, list-like, dict-like or function transformations to apply to that axis’ values. Note that the columns parameter is not allowed if the object is a Series. This parameter only applies to DataFrame objects.

Use either mapper and axis to specify the axis to target with mapper, or index and/or columns.

axis{0 or ‘index’, 1 or ‘columns’}, default 0

The axis to rename. For Series this parameter is unused and defaults to 0.

copybool, default None (Not supported in Dask)

Also copy underlying data.

Note

The copy keyword will change behavior in pandas 3.0. Copy-on-Write will be enabled by default, which means that all methods with a copy keyword will use a lazy copy mechanism to defer the copy and ignore the copy keyword. The copy keyword will be removed in a future version of pandas.

You can already get the future behavior and improvements through enabling copy on write pd.options.mode.copy_on_write = True

inplacebool, default False (Not supported in Dask)

Modifies the object directly, instead of creating a new Series or DataFrame.

Returns:
Series, DataFrame, or None

The same type as the caller or None if inplace=True.

See also

Series.rename

Alter Series index labels or name.

DataFrame.rename

Alter DataFrame index labels or name.

Index.rename

Set new names on index.

Notes

DataFrame.rename_axis supports two calling conventions

  • (index=index_mapper, columns=columns_mapper, ...)

  • (mapper, axis={'index', 'columns'}, ...)

The first calling convention will only modify the names of the index and/or the names of the Index object that is the columns. In this case, the parameter copy is ignored.

The second calling convention will modify the names of the corresponding index if mapper is a list or a scalar. However, if mapper is dict-like or a function, it will use the deprecated behavior of modifying the axis labels.

We highly recommend using keyword arguments to clarify your intent.

Examples

Series

>>> s = pd.Series(["dog", "cat", "monkey"])  
>>> s  
0       dog
1       cat
2    monkey
dtype: object
>>> s.rename_axis("animal")  
animal
0    dog
1    cat
2    monkey
dtype: object

DataFrame

>>> df = pd.DataFrame({"num_legs": [4, 4, 2],  
...                    "num_arms": [0, 0, 2]},
...                   ["dog", "cat", "monkey"])
>>> df  
        num_legs  num_arms
dog            4         0
cat            4         0
monkey         2         2
>>> df = df.rename_axis("animal")  
>>> df  
        num_legs  num_arms
animal
dog            4         0
cat            4         0
monkey         2         2
>>> df = df.rename_axis("limbs", axis="columns")  
>>> df  
limbs   num_legs  num_arms
animal
dog            4         0
cat            4         0
monkey         2         2

MultiIndex

>>> df.index = pd.MultiIndex.from_product([['mammal'],  
...                                        ['dog', 'cat', 'monkey']],
...                                       names=['type', 'name'])
>>> df  
limbs          num_legs  num_arms
type   name
mammal dog            4         0
       cat            4         0
       monkey         2         2
>>> df.rename_axis(index={'type': 'class'})  
limbs          num_legs  num_arms
class  name
mammal dog            4         0
       cat            4         0
       monkey         2         2
>>> df.rename_axis(columns=str.upper)  
LIMBS          num_legs  num_arms
type   name
mammal dog            4         0
       cat            4         0
       monkey         2         2
repartition(divisions: tuple | None = None, npartitions: int | None = None, partition_size: str = None, freq=None, force: bool = False)#

Repartition a collection

Exactly one of divisions, npartitions, partition_size, or freq should be specified. A ValueError will be raised when that is not the case.

Parameters:
divisionslist, optional

The “dividing lines” used to split the dataframe into partitions. For divisions=[0, 10, 50, 100], there would be three output partitions, where the new index contained [0, 10), [10, 50), and [50, 100], respectively. See https://docs.dask.org/en/latest/dataframe-design.html#partitions.

npartitionsint, Callable, optional

Approximate number of partitions of output. The number of partitions used may be slightly lower than npartitions depending on data distribution, but will never be higher. The Callable gets the number of partitions of the input as an argument and should return an int.

partition_sizestr, optional

Max number of bytes of memory for each partition. Use numbers or strings like 5MB. If specified npartitions and divisions will be ignored. Note that the size reflects the number of bytes used as computed by pandas.DataFrame.memory_usage, which will not necessarily match the size when storing to disk.

Warning

This keyword argument triggers computation to determine the memory size of each partition, which may be expensive.

forcebool, default False

Allows the expansion of the existing divisions. If False then the new divisions’ lower and upper bounds must be the same as the old divisions’.

freqstr, pd.Timedelta

A period on which to partition timeseries data like '7D' or '12h' or pd.Timedelta(hours=12). Assumes a datetime index.

Notes

Exactly one of divisions, npartitions, partition_size, or freq should be specified. A ValueError will be raised when that is not the case.

Also note that len(divisions) is equal to npartitions + 1. This is because divisions represents the upper and lower bounds of each partition. The first item is the lower bound of the first partition, the second item is the lower bound of the second partition and the upper bound of the first partition, and so on. The second-to-last item is the lower bound of the last partition, and the last (extra) item is the upper bound of the last partition.

Examples

>>> df = df.repartition(npartitions=10)  
>>> df = df.repartition(divisions=[0, 5, 10, 20])  
>>> df = df.repartition(freq='7d')  
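
A runnable sketch of the npartitions form (hypothetical frame; pandas backend shown for illustration):

import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({"x": range(100)}), npartitions=10)

# Collapse ten small partitions into (at most) four larger ones
ddf2 = ddf.repartition(npartitions=4)
print(ddf2.npartitions)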
replace(to_replace=None, value=_NoDefault.no_default, regex=False)#

Replace values given in to_replace with value.

This docstring was copied from pandas.core.frame.DataFrame.replace.

Some inconsistencies with the Dask version may exist.

Values of the Series/DataFrame are replaced with other values dynamically. This differs from updating with .loc or .iloc, which require you to specify a location to update with some value.

Parameters:
to_replacestr, regex, list, dict, Series, int, float, or None

How to find the values that will be replaced.

  • numeric, str or regex:

    • numeric: numeric values equal to to_replace will be replaced with value

    • str: string exactly matching to_replace will be replaced with value

    • regex: regexes matching to_replace will be replaced with value

  • list of str, regex, or numeric:

    • First, if to_replace and value are both lists, they must be the same length.

    • Second, if regex=True then all of the strings in both lists will be interpreted as regexes, otherwise they will match directly. This doesn’t matter much for value since there are only a few possible substitution regexes you can use.

    • str, regex and numeric rules apply as above.

  • dict:

    • Dicts can be used to specify different replacement values for different existing values. For example, {'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and ‘y’ with ‘z’. To use a dict in this way, the optional value parameter should not be given.

    • For a DataFrame a dict can specify that different values should be replaced in different columns. For example, {'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’ and the value ‘z’ in column ‘b’ and replaces these values with whatever is specified in value. The value parameter should not be None in this case. You can treat this as a special case of passing two lists except that you are specifying the column to search in.

    • For a DataFrame nested dictionaries, e.g., {'a': {'b': np.nan}}, are read as follows: look in column ‘a’ for the value ‘b’ and replace it with NaN. The optional value parameter should not be specified to use a nested dict in this way. You can nest regular expressions as well. Note that column names (the top-level dictionary keys in a nested dictionary) cannot be regular expressions.

  • None:

    • This means that the regex argument must be a string, compiled regular expression, or list, dict, ndarray or Series of such elements. If value is also None then this must be a nested dictionary or Series.

See the examples section for examples of each of these.

valuescalar, dict, list, str, regex, default None

Value to replace any values matching to_replace with. For a DataFrame a dict of values can be used to specify which value to use for each column (columns not in the dict will not be filled). Regular expressions, strings and lists or dicts of such objects are also allowed.

inplacebool, default False (Not supported in Dask)

If True, performs operation inplace and returns None.

limitint, default None (Not supported in Dask)

Maximum size gap to forward or backward fill.

Deprecated since version 2.1.0.

regexbool or same types as to_replace, default False

Whether to interpret to_replace and/or value as regular expressions. Alternatively, this could be a regular expression or a list, dict, or array of regular expressions in which case to_replace must be None.

method{‘pad’, ‘ffill’, ‘bfill’} (Not supported in Dask)

The method to use for replacement, when to_replace is a scalar, list or tuple and value is None.

Deprecated since version 2.1.0.

Returns:
Series/DataFrame

Object after replacement.

Raises:
AssertionError
  • If regex is not a bool and to_replace is not None.

TypeError
  • If to_replace is not a scalar, array-like, dict, or None

  • If to_replace is a dict and value is not a list, dict, ndarray, or Series

  • If to_replace is None and regex is not compilable into a regular expression or is a list, dict, ndarray, or Series.

  • When replacing multiple bool or datetime64 objects and the arguments to to_replace does not match the type of the value being replaced

ValueError
  • If a list or an ndarray is passed to to_replace and value but they are not the same length.

See also

Series.fillna

Fill NA values.

DataFrame.fillna

Fill NA values.

Series.where

Replace values based on boolean condition.

DataFrame.where

Replace values based on boolean condition.

DataFrame.map

Apply a function to a Dataframe elementwise.

Series.map

Map values of Series according to an input mapping or function.

Series.str.replace

Simple string replacement.

Notes

  • Regex substitution is performed under the hood with re.sub. The rules for substitution for re.sub are the same.

  • Regular expressions will only substitute on strings, meaning you cannot provide, for example, a regular expression matching floating point numbers and expect the columns in your frame that have a numeric dtype to be matched. However, if those floating point numbers are strings, then you can do this.

  • This method has a lot of options. You are encouraged to experiment and play with this method to gain intuition about how it works.

  • When dict is used as the to_replace value, it is like key(s) in the dict are the to_replace part and value(s) in the dict are the value parameter.

Examples

Scalar `to_replace` and `value`

>>> s = pd.Series([1, 2, 3, 4, 5])  
>>> s.replace(1, 5)  
0    5
1    2
2    3
3    4
4    5
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],  
...                    'B': [5, 6, 7, 8, 9],
...                    'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)  
    A  B  C
0  5  5  a
1  1  6  b
2  2  7  c
3  3  8  d
4  4  9  e

List-like `to_replace`

>>> df.replace([0, 1, 2, 3], 4)  
    A  B  C
0  4  5  a
1  4  6  b
2  4  7  c
3  4  8  d
4  4  9  e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])  
    A  B  C
0  4  5  a
1  3  6  b
2  2  7  c
3  1  8  d
4  4  9  e
>>> s.replace([1, 2], method='bfill')  
0    3
1    3
2    3
3    4
4    5
dtype: int64

dict-like `to_replace`

>>> df.replace({0: 10, 1: 100})  
        A  B  C
0   10  5  a
1  100  6  b
2    2  7  c
3    3  8  d
4    4  9  e
>>> df.replace({'A': 0, 'B': 5}, 100)  
        A    B  C
0  100  100  a
1    1    6  b
2    2    7  c
3    3    8  d
4    4    9  e
>>> df.replace({'A': {0: 100, 4: 400}})  
        A  B  C
0  100  5  a
1    1  6  b
2    2  7  c
3    3  8  d
4  400  9  e

Regular expression `to_replace`

>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],  
...                    'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)  
        A    B
0   new  abc
1   foo  new
2  bait  xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)  
        A    B
0   new  abc
1   foo  bar
2  bait  xyz
>>> df.replace(regex=r'^ba.$', value='new')  
        A    B
0   new  abc
1   foo  new
2  bait  xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})  
        A    B
0   new  abc
1   xyz  new
2  bait  xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')  
        A    B
0   new  abc
1   new  new
2  bait  xyz

Compare the behavior of s.replace({'a': None}) and s.replace('a', None) to understand the peculiarities of the to_replace parameter:

>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])  

When one uses a dict as the to_replace value, it is like the value(s) in the dict are equal to the value parameter. s.replace({'a': None}) is equivalent to s.replace(to_replace={'a': None}, value=None, method=None):

>>> s.replace({'a': None})  
0      10
1    None
2    None
3       b
4    None
dtype: object

When value is not explicitly passed and to_replace is a scalar, list or tuple, replace uses the method parameter (default ‘pad’) to do the replacement. So this is why the ‘a’ values are being replaced by 10 in rows 1 and 2 and ‘b’ in row 4 in this case.

>>> s.replace('a')  
0    10
1    10
2    10
3     b
4     b
dtype: object

Deprecated since version 2.1.0: The ‘method’ parameter and padding behavior are deprecated.

On the other hand, if None is explicitly passed for value, it will be respected:

>>> s.replace('a', None)  
0      10
1    None
2    None
3       b
4    None
dtype: object

Changed in version 1.4.0: Previously the explicit None was silently ignored.

When regex=True, value is not None and to_replace is a string, the replacement will be applied in all columns of the DataFrame.

>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],  
...                    'B': ['a', 'b', 'c', 'd', 'e'],
...                    'C': ['f', 'g', 'h', 'i', 'j']})
>>> df.replace(to_replace='^[a-g]', value='e', regex=True)  
    A  B  C
0  0  e  e
1  1  e  e
2  2  e  h
3  3  e  i
4  4  e  j

If value is not None and to_replace is a dictionary, the dictionary keys will be the DataFrame columns to which the replacement will be applied.

>>> df.replace(to_replace={'B': '^[a-c]', 'C': '^[h-j]'}, value='e', regex=True)  
    A  B  C
0  0  e  f
1  1  e  g
2  2  e  e
3  3  d  e
4  4  e  e
resample(rule, closed=None, label=None)#

Resample time-series data.

This docstring was copied from pandas.core.frame.DataFrame.resample.

Some inconsistencies with the Dask version may exist.

Convenience method for frequency conversion and resampling of time series. The object must have a datetime-like index (DatetimeIndex, PeriodIndex, or TimedeltaIndex), or the caller must pass the label of a datetime-like series/index to the on/level keyword parameter.

Parameters:
ruleDateOffset, Timedelta or str

The offset string or object representing target conversion.

axis{0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)

Which axis to use for up- or down-sampling. For Series this parameter is unused and defaults to 0. Must be DatetimeIndex, TimedeltaIndex or PeriodIndex.

Deprecated since version 2.0.0: Use frame.T.resample(…) instead.

closed{‘right’, ‘left’}, default None

Which side of bin interval is closed. The default is ‘left’ for all frequency offsets except for ‘ME’, ‘YE’, ‘QE’, ‘BME’, ‘BA’, ‘BQE’, and ‘W’ which all have a default of ‘right’.

label{‘right’, ‘left’}, default None

Which bin edge label to label bucket with. The default is ‘left’ for all frequency offsets except for ‘ME’, ‘YE’, ‘QE’, ‘BME’, ‘BA’, ‘BQE’, and ‘W’ which all have a default of ‘right’.

convention{‘start’, ‘end’, ‘s’, ‘e’}, default ‘start’ (Not supported in Dask)

For PeriodIndex only, controls whether to use the start or end of rule.

Deprecated since version 2.2.0: Convert PeriodIndex to DatetimeIndex before resampling instead.

kind{‘timestamp’, ‘period’}, optional, default None (Not supported in Dask)

Pass ‘timestamp’ to convert the resulting index to a DateTimeIndex or ‘period’ to convert it to a PeriodIndex. By default the input representation is retained.

Deprecated since version 2.2.0: Convert index to desired type explicitly instead.

onstr, optional (Not supported in Dask)

For a DataFrame, column to use instead of index for resampling. Column must be datetime-like.

levelstr or int, optional (Not supported in Dask)

For a MultiIndex, level (name or number) to use for resampling. level must be datetime-like.

originTimestamp or str, default ‘start_day’ (Not supported in Dask)

The timestamp on which to adjust the grouping. The timezone of origin must match the timezone of the index. If string, must be one of the following:

  • ‘epoch’: origin is 1970-01-01

  • ‘start’: origin is the first value of the timeseries

  • ‘start_day’: origin is the first day at midnight of the timeseries

  • ‘end’: origin is the last value of the timeseries

  • ‘end_day’: origin is the ceiling midnight of the last day

New in version 1.3.0.

Note

Only takes effect for Tick-frequencies (i.e. fixed frequencies like days, hours, and minutes, rather than months or quarters).

offsetTimedelta or str, default is None (Not supported in Dask)

An offset timedelta added to the origin.

group_keysbool, default False (Not supported in Dask)

Whether to include the group keys in the result index when using .apply() on the resampled object.

New in version 1.5.0: Not specifying group_keys will retain values-dependent behavior from pandas 1.4 and earlier (see pandas 1.5.0 Release notes for examples).

Changed in version 2.0.0: group_keys now defaults to False.

Returns:
pandas.api.typing.Resampler

Resampler object.

See also

Series.resample

Resample a Series.

DataFrame.resample

Resample a DataFrame.

groupby

Group Series/DataFrame by mapping, function, label, or list of labels.

asfreq

Reindex a Series/DataFrame with the given frequency without grouping.

Notes

See the user guide for more.

To learn more about the offset strings, please see this link.

Examples

Start by creating a series with 9 one minute timestamps.

>>> index = pd.date_range('1/1/2000', periods=9, freq='min')  
>>> series = pd.Series(range(9), index=index)  
>>> series  
2000-01-01 00:00:00    0
2000-01-01 00:01:00    1
2000-01-01 00:02:00    2
2000-01-01 00:03:00    3
2000-01-01 00:04:00    4
2000-01-01 00:05:00    5
2000-01-01 00:06:00    6
2000-01-01 00:07:00    7
2000-01-01 00:08:00    8
Freq: min, dtype: int64

Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.

>>> series.resample('3min').sum()  
2000-01-01 00:00:00     3
2000-01-01 00:03:00    12
2000-01-01 00:06:00    21
Freq: 3min, dtype: int64

Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels. For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if it did, the summed value would be 6, not 3).

>>> series.resample('3min', label='right').sum()  
2000-01-01 00:03:00     3
2000-01-01 00:06:00    12
2000-01-01 00:09:00    21
Freq: 3min, dtype: int64

To include this value, close the right side of the bin interval, as shown below.

>>> series.resample('3min', label='right', closed='right').sum()  
2000-01-01 00:00:00     0
2000-01-01 00:03:00     6
2000-01-01 00:06:00    15
2000-01-01 00:09:00    15
Freq: 3min, dtype: int64

Upsample the series into 30 second bins.

>>> series.resample('30s').asfreq()[0:5]   # Select first 5 rows  
2000-01-01 00:00:00   0.0
2000-01-01 00:00:30   NaN
2000-01-01 00:01:00   1.0
2000-01-01 00:01:30   NaN
2000-01-01 00:02:00   2.0
Freq: 30s, dtype: float64

Upsample the series into 30 second bins and fill the NaN values using the ffill method.

>>> series.resample('30s').ffill()[0:5]  
2000-01-01 00:00:00    0
2000-01-01 00:00:30    0
2000-01-01 00:01:00    1
2000-01-01 00:01:30    1
2000-01-01 00:02:00    2
Freq: 30s, dtype: int64

Upsample the series into 30 second bins and fill the NaN values using the bfill method.

>>> series.resample('30s').bfill()[0:5]  
2000-01-01 00:00:00    0
2000-01-01 00:00:30    1
2000-01-01 00:01:00    1
2000-01-01 00:01:30    2
2000-01-01 00:02:00    2
Freq: 30s, dtype: int64

Pass a custom function via apply

>>> def custom_resampler(arraylike):  
...     return np.sum(arraylike) + 5
...
>>> series.resample('3min').apply(custom_resampler)  
2000-01-01 00:00:00     8
2000-01-01 00:03:00    17
2000-01-01 00:06:00    26
Freq: 3min, dtype: int64

For DataFrame objects, the keyword on can be used to specify the column instead of the index for resampling.

>>> d = {'price': [10, 11, 9, 13, 14, 18, 17, 19],  
...      'volume': [50, 60, 40, 100, 50, 100, 40, 50]}
>>> df = pd.DataFrame(d)  
>>> df['week_starting'] = pd.date_range('01/01/2018',  
...                                     periods=8,
...                                     freq='W')
>>> df  
   price  volume week_starting
0     10      50    2018-01-07
1     11      60    2018-01-14
2      9      40    2018-01-21
3     13     100    2018-01-28
4     14      50    2018-02-04
5     18     100    2018-02-11
6     17      40    2018-02-18
7     19      50    2018-02-25
>>> df.resample('ME', on='week_starting').mean()  
               price  volume
week_starting
2018-01-31     10.75    62.5
2018-02-28     17.00    60.0

For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the resampling needs to take place.

>>> days = pd.date_range('1/1/2000', periods=4, freq='D')  
>>> d2 = {'price': [10, 11, 9, 13, 14, 18, 17, 19],  
...       'volume': [50, 60, 40, 100, 50, 100, 40, 50]}
>>> df2 = pd.DataFrame(  
...     d2,
...     index=pd.MultiIndex.from_product(
...         [days, ['morning', 'afternoon']]
...     )
... )
>>> df2  
                      price  volume
2000-01-01 morning       10      50
           afternoon     11      60
2000-01-02 morning        9      40
           afternoon     13     100
2000-01-03 morning       14      50
           afternoon     18     100
2000-01-04 morning       17      40
           afternoon     19      50
>>> df2.resample('D', level=0).sum()  
            price  volume
2000-01-01     21     110
2000-01-02     22     140
2000-01-03     32     150
2000-01-04     36      90

If you want to adjust the start of the bins based on a fixed timestamp:

>>> start, end = '2000-10-01 23:30:00', '2000-10-02 00:30:00'  
>>> rng = pd.date_range(start, end, freq='7min')  
>>> ts = pd.Series(np.arange(len(rng)) * 3, index=rng)  
>>> ts  
2000-10-01 23:30:00     0
2000-10-01 23:37:00     3
2000-10-01 23:44:00     6
2000-10-01 23:51:00     9
2000-10-01 23:58:00    12
2000-10-02 00:05:00    15
2000-10-02 00:12:00    18
2000-10-02 00:19:00    21
2000-10-02 00:26:00    24
Freq: 7min, dtype: int64
>>> ts.resample('17min').sum()  
2000-10-01 23:14:00     0
2000-10-01 23:31:00     9
2000-10-01 23:48:00    21
2000-10-02 00:05:00    54
2000-10-02 00:22:00    24
Freq: 17min, dtype: int64
>>> ts.resample('17min', origin='epoch').sum()  
2000-10-01 23:18:00     0
2000-10-01 23:35:00    18
2000-10-01 23:52:00    27
2000-10-02 00:09:00    39
2000-10-02 00:26:00    24
Freq: 17min, dtype: int64
>>> ts.resample('17min', origin='2000-01-01').sum()  
2000-10-01 23:24:00     3
2000-10-01 23:41:00    15
2000-10-01 23:58:00    45
2000-10-02 00:15:00    45
Freq: 17min, dtype: int64

If you want to adjust the start of the bins with an offset Timedelta, the two following lines are equivalent:

>>> ts.resample('17min', origin='start').sum()  
2000-10-01 23:30:00     9
2000-10-01 23:47:00    21
2000-10-02 00:04:00    54
2000-10-02 00:21:00    24
Freq: 17min, dtype: int64
>>> ts.resample('17min', offset='23h30min').sum()  
2000-10-01 23:30:00     9
2000-10-01 23:47:00    21
2000-10-02 00:04:00    54
2000-10-02 00:21:00    24
Freq: 17min, dtype: int64

If you want to take the largest Timestamp as the end of the bins:

>>> ts.resample('17min', origin='end').sum()  
2000-10-01 23:35:00     0
2000-10-01 23:52:00    18
2000-10-02 00:09:00    27
2000-10-02 00:26:00    63
Freq: 17min, dtype: int64

In contrast with the start_day, you can use end_day to take the ceiling midnight of the largest Timestamp as the end of the bins and drop the bins not containing data:

>>> ts.resample('17min', origin='end_day').sum()  
2000-10-01 23:38:00     3
2000-10-01 23:55:00    15
2000-10-02 00:12:00    45
2000-10-02 00:29:00    45
Freq: 17min, dtype: int64
reset_index(drop: bool = False)#

Reset the index to the default index.

Note that unlike in pandas, the reset index for a Dask DataFrame will not be monotonically increasing from 0. Instead, it will restart at 0 for each partition (e.g. index1 = [0, ..., 10], index2 = [0, ...]). This is due to the inability to statically know the full length of the index.

For DataFrame with multi-level index, returns a new DataFrame with labeling information in the columns under the index names, defaulting to ‘level_0’, ‘level_1’, etc. if any are None. For a standard index, the index name will be used (if set), otherwise a default ‘index’ or ‘level_0’ (if ‘index’ is already taken) will be used.

Parameters:
dropboolean, default False

Do not try to insert index into dataframe columns.
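
Examples

A minimal sketch of the per-partition restart described above (hypothetical frame):

import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({"x": range(6)}, index=list("abcdef"))
ddf = dd.from_pandas(df, npartitions=2, sort=False)

# The new index restarts at 0 in each partition: 0, 1, 2, 0, 1, 2
print(ddf.reset_index(drop=True).compute())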

rolling(window, **kwargs)#

Provides rolling transformations.

Parameters:
windowint, str, offset

Size of the moving window. This is the number of observations used for calculating the statistic. When not using a DatetimeIndex, the window size must not be so large as to span more than one adjacent partition. If using an offset or offset alias like ‘5D’, the data must have a DatetimeIndex.

min_periodsint, default None

Minimum number of observations in window required to have a value (otherwise result is NA).

centerboolean, default False

Set the labels at the center of the window.

win_typestring, default None

Provide a window type. The recognized window types are identical to pandas.

axisint, str, None, default 0

This parameter is deprecated with pandas>=2.1.

Returns:
a Rolling object on which to call a method to compute a statistic
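
Examples

A minimal sketch (hypothetical frame; pandas backend shown for illustration):

import pandas as pd
import dask.dataframe as dd

index = pd.date_range("2024-01-01", periods=8, freq="D")
ddf = dd.from_pandas(pd.DataFrame({"x": range(8)}, index=index), npartitions=2)

# Fixed window of 3 observations
print(ddf.rolling(3).mean().compute())

# Offset window over the DatetimeIndex
print(ddf.rolling("2D").sum().compute())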
round(decimals=0)#

Round a DataFrame to a variable number of decimal places.

This docstring was copied from pandas.core.frame.DataFrame.round.

Some inconsistencies with the Dask version may exist.

Parameters:
decimalsint, dict, Series

Number of decimal places to round each column to. If an int is given, round each column to the same number of places. Otherwise dict and Series round to variable numbers of places. Column names should be in the keys if decimals is a dict-like, or in the index if decimals is a Series. Any columns not included in decimals will be left as is. Elements of decimals which are not columns of the input will be ignored.

*args

Additional keywords have no effect but might be accepted for compatibility with numpy.

**kwargs

Additional keywords have no effect but might be accepted for compatibility with numpy.

Returns:
DataFrame

A DataFrame with the affected columns rounded to the specified number of decimal places.

See also

numpy.around

Round a numpy array to the given number of decimals.

Series.round

Round a Series to the given number of decimals.

Examples

>>> df = pd.DataFrame([(.21, .32), (.01, .67), (.66, .03), (.21, .18)],  
...                   columns=['dogs', 'cats'])
>>> df  
    dogs  cats
0  0.21  0.32
1  0.01  0.67
2  0.66  0.03
3  0.21  0.18

By providing an integer each column is rounded to the same number of decimal places

>>> df.round(1)  
    dogs  cats
0   0.2   0.3
1   0.0   0.7
2   0.7   0.0
3   0.2   0.2

With a dict, the number of places for specific columns can be specified with the column names as key and the number of decimal places as value

>>> df.round({'dogs': 1, 'cats': 0})  
    dogs  cats
0   0.2   0.0
1   0.0   1.0
2   0.7   0.0
3   0.2   0.0

Using a Series, the number of places for specific columns can be specified with the column names as index and the number of decimal places as value

>>> decimals = pd.Series([0, 1], index=['cats', 'dogs'])  
>>> df.round(decimals)  
    dogs  cats
0   0.2   0.0
1   0.0   1.0
2   0.7   0.0
3   0.2   0.0
sample(n=None, frac=None, replace=False, random_state=None)#

Random sample of items

Parameters:
nint, optional

Returning an exact number of items is not supported by Dask. Use frac instead.

fracfloat, optional

Approximate fraction of items to return. This sampling fraction is applied to all partitions equally. Note that this is an approximate fraction. You should not expect exactly len(df) * frac items to be returned, as the exact number of elements selected will depend on how your data is partitioned (but should be pretty close in practice).

replaceboolean, optional

Sample with or without replacement. Default = False.

random_stateint or np.random.RandomState

If an int, we create a new RandomState with this as the seed; otherwise we draw from the passed RandomState.
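
Examples

A minimal sketch (hypothetical frame; the returned count is only approximately len(df) * frac):

import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({"x": range(100)}), npartitions=4)

# Roughly 10% of the rows; the exact count depends on how the data is partitioned
sampled = ddf.sample(frac=0.1, random_state=42)
print(len(sampled.compute()))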

select_dtypes(include=None, exclude=None)#

Return a subset of the DataFrame’s columns based on the column dtypes.

This docstring was copied from pandas.core.frame.DataFrame.select_dtypes.

Some inconsistencies with the Dask version may exist.

Parameters:
include, excludescalar or list-like

A selection of dtypes or strings to be included/excluded. At least one of these parameters must be supplied.

Returns:
DataFrame

The subset of the frame including the dtypes in include and excluding the dtypes in exclude.

Raises:
ValueError
  • If both of include and exclude are empty

  • If include and exclude have overlapping elements

  • If any kind of string dtype is passed in.

See also

DataFrame.dtypes

Return Series with the data type of each column.

Notes

  • To select all numeric types, use np.number or 'number'

  • To select strings you must use the object dtype, but note that this will return all object dtype columns

  • See the numpy dtype hierarchy

  • To select datetimes, use np.datetime64, 'datetime' or 'datetime64'

  • To select timedeltas, use np.timedelta64, 'timedelta' or 'timedelta64'

  • To select Pandas categorical dtypes, use 'category'

  • To select Pandas datetimetz dtypes, use 'datetimetz' or 'datetime64[ns, tz]'

Examples

>>> df = pd.DataFrame({'a': [1, 2] * 3,  
...                    'b': [True, False] * 3,
...                    'c': [1.0, 2.0] * 3})
>>> df  
        a      b  c
0       1   True  1.0
1       2  False  2.0
2       1   True  1.0
3       2  False  2.0
4       1   True  1.0
5       2  False  2.0
>>> df.select_dtypes(include='bool')  
   b
0  True
1  False
2  True
3  False
4  True
5  False
>>> df.select_dtypes(include=['float64'])  
   c
0  1.0