awswrangler.s3.read_csv(path: str | list[str], path_suffix: str | list[str] | None = None, path_ignore_suffix: str | list[str] | None = None, version_id: str | dict[str, str] | None = None, ignore_empty: bool = True, use_threads: bool | int = True, last_modified_begin: datetime | None = None, last_modified_end: datetime | None = None, boto3_session: Session | None = None, s3_additional_kwargs: dict[str, Any] | None = None, dtype_backend: Literal['numpy_nullable', 'pyarrow'] = 'numpy_nullable', chunksize: int | None = None, dataset: bool = False, partition_filter: Callable[[dict[str, str]], bool] | None = None, ray_args: RaySettings | None = None, **pandas_kwargs: Any) → DataFrame | Iterator[DataFrame]

Read CSV file(s) from an S3 prefix or a list of S3 object paths.

This function accepts Unix shell-style wildcards in the path argument: * (matches everything), ? (matches any single character), [seq] (matches any character in seq), and [!seq] (matches any character not in seq). If you want to use a path that includes literal wildcard characters (*, ?, []), apply glob.escape(path) before passing the path to this function.
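
For example, the sketch below (bucket and key names are hypothetical) reads only objects matching a date wildcard, then escapes a key that contains a literal wildcard character:

>>> import glob
>>> import awswrangler as wr
>>> df = wr.s3.read_csv(path='s3://bucket/prefix/sales_2023-*.csv')
>>> df = wr.s3.read_csv(path=glob.escape('s3://bucket/prefix/report?.csv'))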


For partial and gradual reading, use the chunksize argument instead of iterator.


If use_threads=True, the number of threads to spawn is obtained from os.cpu_count().
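
For example (bucket name hypothetical), passing an integer caps the thread pool instead:

>>> import awswrangler as wr
>>> df = wr.s3.read_csv(path='s3://bucket/prefix/', use_threads=4)  # at most 4 threads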


The filter by last_modified_begin and last_modified_end is applied only after listing all S3 files.
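
A minimal sketch of this filter (bucket name hypothetical; timezone-aware datetimes are the safe choice, since the values are compared against the S3 LastModified timestamp):

>>> from datetime import datetime, timezone
>>> import awswrangler as wr
>>> df = wr.s3.read_csv(
...     path='s3://bucket/prefix/',
...     last_modified_begin=datetime(2023, 1, 1, tzinfo=timezone.utc),
...     last_modified_end=datetime(2023, 2, 1, tzinfo=timezone.utc),
... )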


The following arguments are not supported in distributed mode with the EngineEnum.RAY engine:

  • boto3_session

Parameters:

  • path (Union[str, List[str]]) – S3 prefix (accepts Unix shell-style wildcards) (e.g. s3://bucket/prefix) or list of S3 object paths (e.g. [s3://bucket/key0, s3://bucket/key1]).

  • path_suffix (Union[str, List[str], None]) – Suffix or list of suffixes to be read (e.g. [".csv"]). If None (default), all files are read.

  • path_ignore_suffix (Union[str, List[str], None]) – Suffix or list of suffixes for S3 keys to be ignored (e.g. ["_SUCCESS"]). If None (default), no suffixes are ignored.

  • version_id (Optional[Union[str, Dict[str, str]]]) – Version id of the object, or a mapping of object path to version id (e.g. {'s3://bucket/key0': '121212', 's3://bucket/key1': '343434'}); see the sketch after this list.

  • ignore_empty (bool) – Ignore files with 0 bytes.

  • use_threads (Union[bool, int]) – True to enable concurrent requests, False to disable multiple threads. If enabled, os.cpu_count() is used as the maximum number of threads. If an integer is provided, that number is used instead.

  • last_modified_begin (datetime, optional) – Filter S3 files by the object's LastModified date. The filter is applied only after listing all S3 files.

  • last_modified_end (datetime, optional) – Filter S3 files by the object's LastModified date. The filter is applied only after listing all S3 files.

  • boto3_session (boto3.Session(), optional) – Boto3 session. The default boto3 session is used if boto3_session is None.

  • s3_additional_kwargs (dict[str, Any], optional) – Forwarded to botocore requests; only the "SSECustomerAlgorithm" and "SSECustomerKey" arguments are considered.

  • dtype_backend (str, optional) –

    Which dtype_backend to use: with "numpy_nullable", nullable dtypes are used for every dtype that has a nullable implementation; with "pyarrow", pyarrow-backed dtypes are used for all dtypes. See the sketch after this list.

    The dtype_backends are still experimental. The "pyarrow" backend is only supported with Pandas 2.0 or above.

  • chunksize (int, optional) – If specified, return a generator where chunksize is the number of rows to include in each chunk.

  • dataset (bool) – If True, read a CSV dataset instead of simple file(s), loading all related partitions as columns.

  • partition_filter (Optional[Callable[[Dict[str, str]], bool]]) – Callback function used to filter PARTITION columns (push-down filter). The function MUST receive a single argument (Dict[str, str]) where keys are partition names and values are partition values. Partition values are always strings extracted from S3. The function MUST return a bool: True to read the partition, False to ignore it. Ignored if dataset=False. E.g. lambda x: True if x["year"] == "2020" and x["month"] == "1" else False

  • ray_args (RaySettings, optional) – Ray/Modin settings. Only used when distributed computing with Ray and Modin is enabled.

  • pandas_kwargs – KEYWORD arguments forwarded to pandas.read_csv(). You can NOT pass pandas_kwargs as a dict explicitly; just add valid pandas arguments to the function call and awswrangler will accept them. e.g. wr.s3.read_csv('s3://bucket/prefix/', sep='|', na_values=['null', 'none'], skip_blank_lines=True)
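
As referenced above, a sketch combining version_id and dtype_backend (bucket, keys, and version ids are illustrative; the 'pyarrow' backend requires Pandas 2.0+):

>>> import awswrangler as wr
>>> df = wr.s3.read_csv(
...     path=['s3://bucket/key0', 's3://bucket/key1'],
...     version_id={'s3://bucket/key0': '121212', 's3://bucket/key1': '343434'},
...     dtype_backend='pyarrow',
... )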


Returns:

Pandas DataFrame or a Generator in case of chunksize != None.

Return type:

Union[pandas.DataFrame, Generator[pandas.DataFrame, None, None]]


Examples

Reading all CSV files under a prefix

>>> import awswrangler as wr
>>> df = wr.s3.read_csv(path='s3://bucket/prefix/')

Reading all CSV files under a prefix and using pandas_kwargs

>>> import awswrangler as wr
>>> df = wr.s3.read_csv('s3://bucket/prefix/', sep='|', na_values=['null', 'none'], skip_blank_lines=True)

Reading all CSV files from a list

>>> import awswrangler as wr
>>> df = wr.s3.read_csv(path=['s3://bucket/filename0.csv', 's3://bucket/filename1.csv'])

Reading in chunks of 100 lines

>>> import awswrangler as wr
>>> dfs = wr.s3.read_csv(path=['s3://bucket/filename0.csv', 's3://bucket/filename1.csv'], chunksize=100)
>>> for df in dfs:
...     print(df)  # 100 lines Pandas DataFrame

Reading CSV Dataset with PUSH-DOWN filter over partitions

>>> import awswrangler as wr
>>> my_filter = lambda x: x["city"].startswith("new")
>>> df = wr.s3.read_csv(path='s3://bucket/prefix/', dataset=True, partition_filter=my_filter)