awswrangler.timestream.unload_to_files¶
- awswrangler.timestream.unload_to_files(sql: str, path: str, unload_format: Literal['CSV', 'PARQUET'] | None = None, compression: Literal['GZIP', 'NONE'] | None = None, partition_cols: list[str] | None = None, encryption: Literal['SSE_KMS', 'SSE_S3'] | None = None, kms_key_id: str | None = None, field_delimiter: str | None = ',', escaped_by: str | None = '\\', boto3_session: Session | None = None) None ¶
Unload query results to Amazon S3.
https://docs.aws.amazon.com/timestream/latest/developerguide/export-unload.html
Note
This function has arguments which can be configured globally through wr.config or environment variables:
Check out the Global Configurations Tutorial for details.
Note
The following arguments are not supported in distributed mode with engine EngineEnum.RAY:
boto3_session
- Parameters:
sql (str) – SQL query
path (str) – S3 path to write stage files (e.g. s3://bucket_name/any_name/)
unload_format (str, optional) – Format of the unloaded S3 objects from the query. Valid values: “CSV”, “PARQUET”. Case sensitive. Defaults to “PARQUET”
compression (str, optional) – Compression of the unloaded S3 objects from the query. Valid values: “GZIP”, “NONE”. Defaults to “GZIP”
partition_cols (List[str], optional) – Specifies the partition keys for the unload operation
encryption (str, optional) – Encryption of the unloaded S3 objects from the query. Valid values: “SSE_KMS”, “SSE_S3”. Defaults to “SSE_S3”
kms_key_id (str, optional) – Specifies the key ID for an AWS Key Management Service (AWS KMS) key to be used to encrypt data files on Amazon S3
field_delimiter (str, optional) – A single ASCII character used to separate fields in the output file, such as a pipe (|), a comma (,), or a tab (\t). Only used with CSV format
escaped_by (str, optional) – The character to be treated as an escape character in the data files written to the S3 bucket. Only used with CSV format
boto3_session (boto3.Session(), optional) – Boto3 Session. The default boto3 session is used if None
- Return type:
None
Examples
Unload as Parquet (the default format).
>>> import awswrangler as wr
>>> wr.timestream.unload_to_files(
...     sql="SELECT time, measure, dimension FROM database.mytable",
...     path="s3://bucket/extracted_parquet_files/",
... )
Unload as partitioned Parquet. Note: partition columns must come last in the query's column list.
>>> import awswrangler as wr
>>> wr.timestream.unload_to_files(
...     sql="SELECT time, measure, dim1, dim2 FROM database.mytable",
...     path="s3://bucket/extracted_parquet_files/",
...     partition_cols=["dim2"],
... )
Unload as CSV.
>>> import awswrangler as wr
>>> wr.timestream.unload_to_files(
...     sql="SELECT time, measure, dimension FROM database.mytable",
...     path="s3://bucket/extracted_parquet_files/",
...     unload_format="CSV",
... )
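The CSV-specific options (field_delimiter, escaped_by) and the encryption settings compose as ordinary keyword arguments. A minimal sketch of assembling and sanity-checking them against the documented value sets; the bucket, key ID, and table names are illustrative placeholders, and the final call (commented out) requires live Timestream and S3 resources:

```python
# Sketch: build arguments for a KMS-encrypted, pipe-delimited CSV unload and
# check them against the documented literals (case sensitive).
VALID_FORMATS = {"CSV", "PARQUET"}
VALID_COMPRESSION = {"GZIP", "NONE"}
VALID_ENCRYPTION = {"SSE_KMS", "SSE_S3"}

unload_kwargs = {
    "sql": "SELECT time, measure, dimension FROM database.mytable",
    "path": "s3://bucket/extracted_csv_files/",          # placeholder bucket
    "unload_format": "CSV",
    "compression": "GZIP",
    "encryption": "SSE_KMS",
    "kms_key_id": "arn:aws:kms:us-east-1:111122223333:key/example",  # placeholder
    "field_delimiter": "|",   # single ASCII character, CSV only
    "escaped_by": "\\",       # CSV only
}

# Validate the choices before issuing the (slow, billable) unload.
assert unload_kwargs["unload_format"] in VALID_FORMATS
assert unload_kwargs["compression"] in VALID_COMPRESSION
assert unload_kwargs["encryption"] in VALID_ENCRYPTION
assert len(unload_kwargs["field_delimiter"]) == 1

# import awswrangler as wr
# wr.timestream.unload_to_files(**unload_kwargs)
```

When encryption is "SSE_KMS", supply kms_key_id; with the default "SSE_S3" it can be omitted.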