awswrangler.s3.to_csv

awswrangler.s3.to_csv(df: DataFrame, path: str | None = None, sep: str = ',', index: bool = True, columns: list[str] | None = None, use_threads: bool | int = True, boto3_session: Session | None = None, s3_additional_kwargs: dict[str, Any] | None = None, sanitize_columns: bool = False, dataset: bool = False, filename_prefix: str | None = None, partition_cols: list[str] | None = None, bucketing_info: Tuple[List[str], int] | None = None, concurrent_partitioning: bool = False, mode: Literal['append', 'overwrite', 'overwrite_partitions'] | None = None, catalog_versioning: bool = False, schema_evolution: bool = False, dtype: dict[str, str] | None = None, database: str | None = None, table: str | None = None, glue_table_settings: GlueTableSettings | None = None, athena_partition_projection_settings: AthenaPartitionProjectionSettings | None = None, catalog_id: str | None = None, **pandas_kwargs: Any) → _S3WriteDataReturnValue

Write CSV file or dataset on Amazon S3.

The concept of Dataset goes beyond the simple idea of ordinary files and enables more complex features like partitioning and catalog integration (Amazon Athena/AWS Glue Catalog).

Note

If database and table arguments are passed, the table name and all column names will be automatically sanitized using wr.catalog.sanitize_table_name and wr.catalog.sanitize_column_name. Pass sanitize_columns=True to always enforce this behaviour.
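
For illustration, a minimal sketch of these sanitizers called directly (the sanitized names in the comments are indicative, not guaranteed):

>>> import awswrangler as wr
>>> wr.catalog.sanitize_table_name('My Table')  # returns a Glue-safe name, e.g. 'my_table'
>>> wr.catalog.sanitize_column_name('My Col')   # returns a Glue-safe name, e.g. 'my_col'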

Note

If table and database arguments are passed, pandas_kwargs will be ignored due to the restrictive quoting, date_format, escapechar and encoding required by Athena/Glue Catalog.

Note

In case of use_threads=True, the number of threads that will be spawned will be obtained from os.cpu_count().

Note

The following arguments are not supported in distributed mode with engine EngineEnum.RAY:

  • boto3_session

Note

This function has arguments which can be configured globally through wr.config or environment variables:

  • catalog_id

  • concurrent_partitioning

  • database

Check out the Global Configurations Tutorial for details.
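
For example, a minimal sketch of configuring these globally (the environment-variable names assume the WR_ prefix convention described in the tutorial):

>>> import awswrangler as wr
>>> wr.config.database = 'default'            # used whenever the database argument is None
>>> wr.config.concurrent_partitioning = True  # applies to every dataset write
>>> # Equivalent environment variables: WR_DATABASE, WR_CONCURRENT_PARTITIONING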

Parameters:
  • df (DataFrame) – Pandas DataFrame https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html

  • path (str | None) – Amazon S3 path (e.g. s3://bucket/prefix/filename.csv) (for dataset e.g. s3://bucket/prefix). Required if dataset=False or when creating a new dataset.

  • sep (str) – String of length 1. Field delimiter for the output file.

  • index (bool) – Write row names (index).

  • columns (list[str] | None) – Columns to write.

  • use_threads (bool | int) – True to enable concurrent requests, False to disable multiple threads. If enabled os.cpu_count() will be used as the max number of threads. If integer is provided, specified number is used.

  • boto3_session (Session | None) – Boto3 Session. The default boto3 session will be used if boto3_session receives None.

  • s3_additional_kwargs (dict[str, Any] | None) – Forwarded to botocore requests. e.g. s3_additional_kwargs={'ServerSideEncryption': 'aws:kms', 'SSEKMSKeyId': 'YOUR_KMS_KEY_ARN'}

  • sanitize_columns (bool) – True to sanitize column names, False to keep them as-is. True is forced if dataset=True.

  • dataset (bool) – If True, store as a dataset instead of ordinary file(s), enabling the following arguments: partition_cols, mode, database, table, description, parameters, columns_comments, concurrent_partitioning, catalog_versioning, projection_params, catalog_id, schema_evolution.

  • filename_prefix (str | None) – If dataset=True, add a filename prefix to the output files.

  • partition_cols (list[str] | None) – List of column names that will be used to create partitions. Only takes effect if dataset=True.

  • bucketing_info (Tuple[List[str], int] | None) – Tuple consisting of the column names used for bucketing as the first element and the number of buckets as the second element. Only str, int and bool are supported as column data types for bucketing.

  • concurrent_partitioning (bool) – If True, increases the parallelism level while writing partitions, decreasing the writing time at the cost of higher memory usage. https://aws-sdk-pandas.readthedocs.io/en/3.10.0/tutorials/022%20-%20Writing%20Partitions%20Concurrently.html

  • mode (Literal['append', 'overwrite', 'overwrite_partitions'] | None) – append (Default), overwrite, overwrite_partitions. Only takes effect if dataset=True. For details check the related tutorial: https://aws-sdk-pandas.readthedocs.io/en/3.10.0/stubs/awswrangler.s3.to_parquet.html#awswrangler.s3.to_parquet

  • catalog_versioning (bool) – If True and mode="overwrite", creates an archived version of the table catalog before updating it.

  • schema_evolution (bool) – If True, allows schema evolution (new or missing columns), otherwise an exception will be raised. Only considered if dataset=True and mode in ("append", "overwrite_partitions"). False by default. Related tutorial: https://aws-sdk-pandas.readthedocs.io/en/3.10.0/tutorials/014%20-%20Schema%20Evolution.html

  • database (str | None) – Glue/Athena catalog: Database name.

  • table (str | None) – Glue/Athena catalog: Table name.

  • glue_table_settings (GlueTableSettings | None) – Settings for writing to the Glue table.

  • dtype (dict[str, str] | None) – Dictionary of column names and Athena/Glue types to be cast. Useful when you have columns with undetermined or mixed data types. (e.g. {'col name': 'bigint', 'col2 name': 'int'})

  • athena_partition_projection_settings (AthenaPartitionProjectionSettings | None) –

    Parameters of the Athena Partition Projection (https://docs.aws.amazon.com/athena/latest/ug/partition-projection.html). AthenaPartitionProjectionSettings is a TypedDict, meaning the passed parameter can be instantiated either as an instance of AthenaPartitionProjectionSettings or as a regular Python dict.

    The following projection parameters are supported (see https://docs.aws.amazon.com/athena/latest/ug/partition-projection-supported-types.html for details on the supported types):

    • projection_types (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection types. Valid types: "enum", "integer", "date", "injected". (e.g. {'col_name': 'enum', 'col2_name': 'integer'})

    • projection_ranges (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection ranges. (e.g. {'col_name': '0,10', 'col2_name': '-1,8675309'})

    • projection_values (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection values. (e.g. {'col_name': 'A,B,Unknown', 'col2_name': 'foo,boo,bar'})

    • projection_intervals (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection intervals. (e.g. {'col_name': '1', 'col2_name': '5'})

    • projection_digits (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection digits. (e.g. {'col_name': '1', 'col2_name': '2'})

    • projection_formats (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection formats. (e.g. {'col_date': 'yyyy-MM-dd', 'col2_timestamp': 'yyyy-MM-dd HH:mm:ss'})

    • projection_storage_location_template (Optional[str]) – Value that allows Athena to properly map partition values if the S3 file locations do not follow a typical …/column=value/… pattern. https://docs.aws.amazon.com/athena/latest/ug/partition-projection-setting-up.html (e.g. s3://bucket/table_root/a=${a}/${b}/some_static_subdirectory/${c}/)

  • catalog_id (str | None) – The ID of the Data Catalog from which to retrieve Databases. If none is provided, the AWS account ID is used by default.

  • pandas_kwargs (Any) – KEYWORD arguments forwarded to pandas.DataFrame.to_csv(). You can NOT pass pandas_kwargs explicitly; just add valid Pandas arguments in the function call and awswrangler will accept them. e.g. wr.s3.to_csv(df, path, sep='|', na_rep='NULL', decimal=',') https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html

Return type:

_S3WriteDataReturnValue

Returns:

Dictionary with:
  • 'paths': List of all stored file paths on S3.

  • 'partitions_values': Dictionary of partitions added, with S3 path locations as keys and lists of partition values (as str) as values.
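
Since the return value is a plain dictionary, the stored paths can be consumed directly; a minimal sketch (bucket and prefix are placeholders):

>>> import awswrangler as wr
>>> import pandas as pd
>>> res = wr.s3.to_csv(
...     df=pd.DataFrame({'col': [1, 2, 3]}),
...     path='s3://bucket/prefix/my_file.csv',
... )
>>> res['paths']
['s3://bucket/prefix/my_file.csv']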

Examples

Writing single file

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.s3.to_csv(
...     df=pd.DataFrame({'col': [1, 2, 3]}),
...     path='s3://bucket/prefix/my_file.csv',
... )
{
    'paths': ['s3://bucket/prefix/my_file.csv'],
    'partitions_values': {}
}

Writing single file with pandas_kwargs

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.s3.to_csv(
...     df=pd.DataFrame({'col': [1, 2, 3]}),
...     path='s3://bucket/prefix/my_file.csv',
...     sep='|',
...     na_rep='NULL',
...     decimal=','
... )
{
    'paths': ['s3://bucket/prefix/my_file.csv'],
    'partitions_values': {}
}

Writing single file encrypted with a KMS key

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.s3.to_csv(
...     df=pd.DataFrame({'col': [1, 2, 3]}),
...     path='s3://bucket/prefix/my_file.csv',
...     s3_additional_kwargs={
...         'ServerSideEncryption': 'aws:kms',
...         'SSEKMSKeyId': 'YOUR_KMS_KEY_ARN'
...     }
... )
{
    'paths': ['s3://bucket/prefix/my_file.csv'],
    'partitions_values': {}
}

Writing partitioned dataset

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.s3.to_csv(
...     df=pd.DataFrame({
...         'col': [1, 2, 3],
...         'col2': ['A', 'A', 'B']
...     }),
...     path='s3://bucket/prefix',
...     dataset=True,
...     partition_cols=['col2']
... )
{
    'paths': ['s3://.../col2=A/x.csv', 's3://.../col2=B/y.csv'],
    'partitions_values': {
        's3://.../col2=A/': ['A'],
        's3://.../col2=B/': ['B']
    }
}

Writing partitioned dataset with partition projection

>>> import awswrangler as wr
>>> import pandas as pd
>>> from datetime import datetime
>>> dt = lambda x: datetime.strptime(x, "%Y-%m-%d").date()
>>> wr.s3.to_csv(
...     df=pd.DataFrame({
...         "id": [1, 2, 3],
...         "value": [1000, 1001, 1002],
...         "category": ['A', 'B', 'C'],
...     }),
...     path='s3://bucket/prefix',
...     dataset=True,
...     partition_cols=['value', 'category'],
...     athena_partition_projection_settings={
...        "projection_types": {
...             "value": "integer",
...             "category": "enum",
...         },
...         "projection_ranges": {
...             "value": "1000,2000",
...             "category": "A,B,C",
...         },
...     },
... )
{
    'paths': [
        's3://.../value=1000/category=A/x.csv', ...
    ],
    'partitions_values': {
        's3://.../value=1000/category=A/': [
            '1000',
            'A',
        ], ...
    }
}

Writing bucketed dataset

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.s3.to_csv(
...     df=pd.DataFrame({
...         'col': [1, 2, 3],
...         'col2': ['A', 'A', 'B']
...     }),
...     path='s3://bucket/prefix',
...     dataset=True,
...     bucketing_info=(["col2"], 2)
... )
{
    'paths': ['s3://.../x_bucket-00000.csv', 's3://.../x_bucket-00001.csv'],
    'partitions_values': {}
}

Writing dataset to S3 with metadata on Athena/Glue Catalog.

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.s3.to_csv(
...     df=pd.DataFrame({
...         'col': [1, 2, 3],
...         'col2': ['A', 'A', 'B']
...     }),
...     path='s3://bucket/prefix',
...     dataset=True,
...     partition_cols=['col2'],
...     database='default',  # Athena/Glue database
...     table='my_table'  # Athena/Glue table
... )
{
    'paths': ['s3://.../col2=A/x.csv', 's3://.../col2=B/y.csv'],
    'partitions_values': {
        's3://.../col2=A/': ['A'],
        's3://.../col2=B/': ['B']
    }
}
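
Overwriting existing partitions of a dataset (a minimal sketch built from the mode argument described above; bucket, database and table names are placeholders)

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.s3.to_csv(
...     df=pd.DataFrame({
...         'col': [4, 5],
...         'col2': ['B', 'B']
...     }),
...     path='s3://bucket/prefix',
...     dataset=True,
...     mode='overwrite_partitions',  # only replaces the partitions present in df
...     partition_cols=['col2'],
...     database='default',  # Athena/Glue database
...     table='my_table'  # Athena/Glue table
... )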

Writing dataset casting empty column data type

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.s3.to_csv(
...     df=pd.DataFrame({
...         'col': [1, 2, 3],
...         'col2': ['A', 'A', 'B'],
...         'col3': [None, None, None]
...     }),
...     path='s3://bucket/prefix',
...     dataset=True,
...     database='default',  # Athena/Glue database
...     table='my_table',  # Athena/Glue table
...     dtype={'col3': 'date'}
... )
{
    'paths': ['s3://.../x.csv'],
    'partitions_values': {}
}
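
Writing dataset with a custom filename prefix and a fixed thread count (a minimal sketch combining the filename_prefix and use_threads arguments; the bucket name is a placeholder)

>>> import awswrangler as wr
>>> import pandas as pd
>>> wr.s3.to_csv(
...     df=pd.DataFrame({'col': [1, 2, 3]}),
...     path='s3://bucket/prefix',
...     dataset=True,
...     filename_prefix='my_prefix_',  # prepended to the generated file names
...     use_threads=4  # cap concurrent requests at 4 threads
... )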