cudf.read_parquet

cudf.read_parquet(filepath_or_buffer, engine='cudf', columns=None, filters=None, row_groups=None, skiprows=None, num_rows=None, strings_to_categorical=False, use_pandas_metadata=True, use_python_file_object=True, categorical_partitions=True, open_file_options=None, *args, **kwargs)

Load a Parquet dataset into a DataFrame

Parameters
filepath_or_buffer : str, path object, bytes, file-like object, or a list of such objects

Contains one or more of the following: a path to a file (a str, pathlib.Path, or py._path.local.LocalPath), a URL (including http, ftp, and S3 locations), Python bytes of raw binary data, or any object with a read() method (such as a file object returned by the builtin open() function, or a BytesIO object).

engine : {‘cudf’, ‘pyarrow’}, default ‘cudf’

Parser engine to use.

columns : list, default None

If not None, only these columns will be read.
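
For example, reading just two of the columns from the example at the end of this page (filename is a placeholder path, as in the Examples section):

>>> df = cudf.read_parquet(filename, columns=["num1", "text"])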

filters : list of tuple, or list of lists of tuples, default None

If not None, specifies a filter predicate used to filter out row groups using statistics stored for each row group as Parquet metadata. Row groups that do not match the given filter predicate are not read. The predicate is expressed in disjunctive normal form (DNF), e.g. [[(‘x’, ‘=’, 0), …], …], which allows arbitrary boolean logical combinations of single-column predicates. Each innermost tuple describes a single column predicate. The list of inner predicates is interpreted as a conjunction (AND), forming a more selective, multiple-column predicate. Finally, the outermost list combines these filters as a disjunction (OR). Predicates may also be passed as a single list of tuples, which is interpreted as a single conjunction. To express OR in predicates, one must use the (preferred) notation of list of lists of tuples.
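
For example, a minimal sketch of both forms (the column names ‘x’ and ‘y’ are illustrative only):

>>> # DNF form: (x == 0 AND y > 5) OR (x == 1)
>>> df = cudf.read_parquet(
...     filename,
...     filters=[[("x", "=", 0), ("y", ">", 5)], [("x", "=", 1)]],
... )
>>> # Single-conjunction form: x == 0 AND y > 5
>>> df = cudf.read_parquet(filename, filters=[("x", "=", 0), ("y", ">", 5)])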

row_groups : int, list, or list of lists, default None

If not None, specifies, for each input file, which row groups to read. If reading multiple inputs, a list of lists should be passed, one list for each input.
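
For example (the file paths are placeholders):

>>> # Read only the first and third row groups of a single file
>>> df = cudf.read_parquet(filename, row_groups=[0, 2])
>>> # When reading multiple inputs, pass one list per file
>>> df = cudf.read_parquet(["a.parquet", "b.parquet"], row_groups=[[0], [1, 2]])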

skiprows : int, default None

If not None, the number of rows to skip from the start of the file.

num_rows : int, default None

If not None, the total number of rows to read.
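
Together, skiprows and num_rows select a contiguous slice of rows. For example, skipping the first row and reading the next two:

>>> df = cudf.read_parquet(filename, skiprows=1, num_rows=2)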

strings_to_categorical : boolean, default False

If True, return string columns as GDF_CATEGORY dtype; if False, return them as GDF_STRING dtype.

categorical_partitions : boolean, default True

Whether directory-partitioned columns should be interpreted as categorical or raw dtypes.

use_pandas_metadata : boolean, default True

If True and the dataset has custom PANDAS schema metadata, ensure that index columns are also loaded.

use_python_file_object : boolean, default True

If True, Arrow-backed PythonFile objects will be used in place of fsspec AbstractBufferedFile objects at IO time. Setting this argument to False will require the entire file to be copied to host memory, and is highly discouraged.

open_file_options : dict, optional

Dictionary of key-value pairs to pass to the function used to open remote files. By default, this will be fsspec.parquet.open_parquet_file. To deactivate optimized precaching, set the “method” to None under the “precache_options” key. Note that the “open_file_func” key can also be used to specify a custom file-open function.
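
For example, a sketch that disables optimized precaching when opening a remote file (the S3 path is a placeholder):

>>> df = cudf.read_parquet(
...     "s3://bucket/data.parquet",
...     open_file_options={"precache_options": {"method": None}},
... )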

Returns
DataFrame

Notes

  • cuDF supports local and remote data stores. See the cuDF I/O documentation for configuration details on available sources.
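
For example, reading directly from an S3 URL (the bucket and key are placeholders; remote access requires the appropriate fsspec backend, such as s3fs, and valid credentials):

>>> df = cudf.read_parquet("s3://my-bucket/data.parquet")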

Examples

>>> import cudf
>>> df = cudf.read_parquet(filename)
>>> df
  num1                datetime text
0  123 2018-11-13T12:00:00.000 5451
1  456 2018-11-14T12:35:01.000 5784
2  789 2018-11-15T18:02:59.000 6117