awswrangler.catalog.create_csv_table(database: str, table: str, path: str, columns_types: dict[str, str], table_type: str | None = None, partitions_types: dict[str, str] | None = None, bucketing_info: Tuple[List[str], int] | None = None, compression: str | None = None, description: str | None = None, parameters: dict[str, str] | None = None, columns_comments: dict[str, str] | None = None, mode: Literal['overwrite', 'append'] = 'overwrite', catalog_versioning: bool = False, schema_evolution: bool = False, sep: str = ',', skip_header_line_count: int | None = None, serde_library: str | None = None, serde_parameters: dict[str, str] | None = None, boto3_session: Session | None = None, athena_partition_projection_settings: AthenaPartitionProjectionSettings | None = None, catalog_id: str | None = None) → None

Create a CSV Table (Metadata Only) in the AWS Glue Catalog.


Athena requires the columns in the underlying CSV files in S3 to be in the same order as the columns in the Glue data catalog.
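Because column order matters, one way to keep the Glue schema aligned with the file is to derive columns_types from the CSV header itself. The sketch below uses a hypothetical helper (columns_types_from_header is not part of awswrangler) and maps every column to a single default type for simplicity:

```python
import csv
import io

def columns_types_from_header(csv_text, default_type="string"):
    """Build an ordered columns_types dict from a CSV header row.

    Python dicts preserve insertion order, so the resulting mapping
    matches the column order of the underlying file -- which Athena
    requires to line up with the Glue catalog.
    """
    header = next(csv.reader(io.StringIO(csv_text)))
    return {name: default_type for name in header}

print(columns_types_from_header("col0,col1\n1,2.5\n"))
# {'col0': 'string', 'col1': 'string'}
```

The resulting dict can be passed directly as the columns_types argument, with individual types adjusted afterwards as needed.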


This function has arguments which can be configured globally through wr.config or environment variables:

  • catalog_id

  • database

Check out the Global Configurations Tutorial for details.

Parameters:

  • database (str) – Database name.

  • table (str) – Table name.

  • path (str) – Amazon S3 path (e.g. s3://bucket/prefix/).

  • columns_types (Dict[str, str]) – Dictionary with keys as column names and values as data types (e.g. {'col0': 'bigint', 'col1': 'double'}).

  • table_type (str, optional) – The type of the Glue Table. Set to EXTERNAL_TABLE if None.

  • partitions_types (Dict[str, str], optional) – Dictionary with keys as partition names and values as data types (e.g. {'col2': 'date'}).

  • bucketing_info (Tuple[List[str], int], optional) – Tuple consisting of the column names used for bucketing as the first element and the number of buckets as the second element. Only str, int and bool are supported as column data types for bucketing.

  • compression (str, optional) – Compression style (None, gzip, etc).

  • description (str, optional) – Table description.

  • parameters (Dict[str, str], optional) – Key/value pairs to tag the table.

  • columns_comments (Dict[str, str], optional) – Column names and the related comments (e.g. {'col0': 'Column 0.', 'col1': 'Column 1.', 'col2': 'Partition.'}).

  • mode (str) – 'overwrite' to recreate any possible existing table or 'append' to keep any possible existing table.

  • catalog_versioning (bool) – If True and mode="overwrite", creates an archived version of the table catalog before updating it.

  • schema_evolution (bool) – If True, allows schema evolution (new or missing columns), otherwise an exception will be raised. (Only considered if dataset=True and mode in ("append", "overwrite_partitions").)

  • sep (str) – String of length 1. Field delimiter for the output file.

  • skip_header_line_count (Optional[int]) – Number of lines to skip at the beginning of the file (e.g. 1 to skip a header row).

  • serde_library (Optional[str]) – Specifies the SerDe serialization library to be used. You need to provide the class name as a string. If no library is provided, the default is org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.

  • serde_parameters (Optional[Dict[str, str]]) – Dictionary of initialization parameters for the SerDe. The default is {"field.delim": sep, "escape.delim": "\\"}.

  • athena_partition_projection_settings (AthenaPartitionProjectionSettings, optional) –

    Parameters of the Athena Partition Projection. AthenaPartitionProjectionSettings is a TypedDict, meaning the passed parameter can be instantiated either as an instance of AthenaPartitionProjectionSettings or as a regular Python dict.

    The following projection parameters are supported:

    projection_types (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection types. Valid types: "enum", "integer", "date", "injected" (e.g. {'col_name': 'enum', 'col2_name': 'integer'}).

    projection_ranges (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection ranges (e.g. {'col_name': '0,10', 'col2_name': '-1,8675309'}).

    projection_values (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection values (e.g. {'col_name': 'A,B,Unknown', 'col2_name': 'foo,boo,bar'}).

    projection_intervals (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection intervals (e.g. {'col_name': '1', 'col2_name': '5'}).

    projection_digits (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection digits (e.g. {'col_name': '1', 'col2_name': '2'}).

    projection_formats (Optional[Dict[str, str]]) – Dictionary of partition names and Athena projection formats (e.g. {'col_date': 'yyyy-MM-dd', 'col2_timestamp': 'yyyy-MM-dd HH:mm:ss'}).

    projection_storage_location_template (Optional[str]) – Value which allows Athena to properly map partition values if the S3 file locations do not follow a typical …/column=value/… pattern (e.g. s3://bucket/table_root/a=${a}/${b}/some_static_subdirectory/${c}/).

  • boto3_session (boto3.Session(), optional) – Boto3 Session. The default boto3 session will be used if boto3_session is None.

  • catalog_id (str, optional) – The ID of the Data Catalog from which to retrieve Databases. If none is provided, the AWS account ID is used by default.
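Since AthenaPartitionProjectionSettings is a TypedDict, the projection settings can be written as a plain dict. The sketch below builds one (key names follow the projection parameters listed above; the create_csv_table call is shown commented out because it requires AWS credentials):

```python
# Sketch: Athena partition projection settings expressed as a plain dict.
# Keys mirror the projection parameters documented above; the partition
# names and value ranges are illustrative examples only.
projection_settings = {
    "projection_types": {"year": "integer", "region": "enum"},
    "projection_ranges": {"year": "2015,2025"},
    "projection_values": {"region": "us-east-1,eu-west-1"},
}

# It would be passed to the function like this (not run here):
# wr.catalog.create_csv_table(
#     database="default",
#     table="my_table",
#     path="s3://bucket/prefix/",
#     columns_types={"col0": "bigint"},
#     partitions_types={"year": "string", "region": "string"},
#     athena_partition_projection_settings=projection_settings,
# )

print(sorted(projection_settings))
# ['projection_ranges', 'projection_types', 'projection_values']
```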



Return type:

  None

Examples

>>> import awswrangler as wr
>>> wr.catalog.create_csv_table(
...     database='default',
...     table='my_table',
...     path='s3://bucket/prefix/',
...     columns_types={'col0': 'bigint', 'col1': 'double'},
...     partitions_types={'col2': 'date'},
...     compression='gzip',
...     description='My own table!',
...     parameters={'source': 'postgresql'},
...     columns_comments={'col0': 'Column 0.', 'col1': 'Column 1.', 'col2': 'Partition.'}
... )
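For quoted CSV data, a custom SerDe can be supplied via serde_library and serde_parameters. The sketch below uses Hive's OpenCSVSerde; the property names (separatorChar, quoteChar, escapeChar) are that SerDe's own options, not part of awswrangler, and the create_csv_table call is commented out because it requires AWS credentials:

```python
# Sketch: SerDe settings for quoted CSV files using Hive's OpenCSVSerde.
serde_library = "org.apache.hadoop.hive.serde2.OpenCSVSerde"
serde_parameters = {
    "separatorChar": ",",   # field delimiter
    "quoteChar": '"',       # character wrapping quoted fields
    "escapeChar": "\\",     # escape character inside quoted fields
}

# These would be passed alongside the other arguments (not run here):
# wr.catalog.create_csv_table(
#     database="default",
#     table="my_quoted_table",
#     path="s3://bucket/quoted/",
#     columns_types={"col0": "string", "col1": "string"},
#     serde_library=serde_library,
#     serde_parameters=serde_parameters,
# )

print(serde_parameters["separatorChar"])
# ,
```

Note that OpenCSVSerde reads every column as a string, so columns_types should generally use string types when it is selected.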