Input/output#

CSV#

read_csv(filepath_or_buffer[, sep, ...])

Load a comma-separated-values (CSV) dataset into a DataFrame.

DataFrame.to_csv([path_or_buf, sep, na_rep, ...])

Write a DataFrame to CSV file format.

Text#

read_text(filepath_or_buffer[, delimiter, ...])

Load a text file into a Series, splitting the input on a delimiter.

JSON#

read_json(path_or_buf[, engine, orient, ...])

Load a JSON dataset into a DataFrame.

DataFrame.to_json([path_or_buf])

Convert the cuDF object to a JSON string.

Parquet#

read_parquet(filepath_or_buffer[, engine, ...])

Load a Parquet dataset into a DataFrame.

DataFrame.to_parquet(path[, engine, ...])

Write a DataFrame to the Parquet format.

cudf.io.parquet.read_parquet_metadata(...)

Read a Parquet file's metadata and schema.

cudf.io.parquet.ParquetDatasetWriter(path, ...)

Write a Parquet file or dataset incrementally.

cudf.io.parquet.ParquetDatasetWriter.close([...])

Close all open files and optionally return footer metadata as a binary blob.

cudf.io.parquet.ParquetDatasetWriter.write_table(df)

Write a DataFrame to the file/dataset.

ORC#

read_orc(filepath_or_buffer[, engine, ...])

Load an ORC dataset into a DataFrame.

DataFrame.to_orc(fname[, compression, ...])

Write a DataFrame to the ORC format.

HDFStore: PyTables (HDF5)#

read_hdf(path_or_buf, *args, **kwargs)

Read from an HDF store, closing it afterwards if it was opened here.

DataFrame.to_hdf(path_or_buf, key, *args, ...)

Write the contained data to an HDF5 file using HDFStore.

Warning

The HDF reader and writer are not GPU accelerated; they currently run on the CPU via pandas. GPU acceleration may be added in the future.

Feather#

read_feather(path, *args, **kwargs)

Load a Feather object from the file path, returning a DataFrame.

DataFrame.to_feather(path, *args, **kwargs)

Write a DataFrame to the Feather format.

Warning

The Feather reader and writer are not GPU accelerated; they currently run on the CPU via pandas. GPU acceleration may be added in the future.

Avro#

read_avro(filepath_or_buffer[, columns, ...])

Load an Avro dataset into a DataFrame.