awswrangler.timestream.unload_to_files
- awswrangler.timestream.unload_to_files(sql: str, path: str, unload_format: Literal['CSV', 'PARQUET'] | None = None, compression: Literal['GZIP', 'NONE'] | None = None, partition_cols: list[str] | None = None, encryption: Literal['SSE_KMS', 'SSE_S3'] | None = None, kms_key_id: str | None = None, field_delimiter: str | None = ',', escaped_by: str | None = '\\', boto3_session: Session | None = None) → None
Unload query results to Amazon S3.
https://docs.aws.amazon.com/timestream/latest/developerguide/export-unload.html
Note
This function has arguments that can be configured globally through wr.config or environment variables.
Check out the Global Configurations Tutorial for details.
Note
The following arguments are not supported in distributed mode with engine EngineEnum.RAY:
boto3_session
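Outside distributed mode, a custom session can be passed explicitly. A minimal sketch (region, bucket, and table names are placeholders):
>>> import boto3
>>> import awswrangler as wr
>>> session = boto3.Session(region_name="us-east-1")  # placeholder region
>>> wr.timestream.unload_to_files(
...     sql="SELECT time, measure FROM database.mytable",
...     path="s3://bucket/unload/",
...     boto3_session=session,
... )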
- Parameters:
  - sql (str) – SQL query.
  - path (str) – S3 path to write stage files (e.g. s3://bucket_name/any_name/).
  - unload_format (Literal['CSV', 'PARQUET'] | None) – Format of the unloaded S3 objects from the query. Valid values: "CSV", "PARQUET". Case sensitive. Defaults to "PARQUET".
  - compression (Literal['GZIP', 'NONE'] | None) – Compression of the unloaded S3 objects from the query. Valid values: "GZIP", "NONE". Defaults to "GZIP".
  - partition_cols (list[str] | None) – Specifies the partition keys for the unload operation.
  - encryption (Literal['SSE_KMS', 'SSE_S3'] | None) – Encryption of the unloaded S3 objects from the query. Valid values: "SSE_KMS", "SSE_S3". Defaults to "SSE_S3".
  - kms_key_id (str | None) – Specifies the key ID for an AWS Key Management Service (AWS KMS) key to be used to encrypt data files on Amazon S3.
  - field_delimiter (str | None) – A single ASCII character used to separate fields in the output file, such as a pipe (|), a comma (,), or a tab (\t). Only used with CSV format.
  - escaped_by (str | None) – The character that should be treated as an escape character in the data file written to the S3 bucket. Only used with CSV format.
  - boto3_session (Session | None) – The default boto3 session is used if boto3_session is None.
- Return type:
None
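Since the function returns None, the unloaded files can be read back from S3 afterwards, e.g. with wr.s3.read_parquet. A sketch (the path is a placeholder):
>>> import awswrangler as wr
>>> wr.timestream.unload_to_files(
...     sql="SELECT time, measure FROM database.mytable",
...     path="s3://bucket/extracted_parquet_files/",
... )
>>> df = wr.s3.read_parquet(path="s3://bucket/extracted_parquet_files/")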
Examples
Unload as Parquet files (default).
>>> import awswrangler as wr
>>> wr.timestream.unload_to_files(
...     sql="SELECT time, measure, dimension FROM database.mytable",
...     path="s3://bucket/extracted_parquet_files/",
... )
Unload partitioned Parquet files. Note: partition columns must be at the end of the table.
>>> import awswrangler as wr
>>> wr.timestream.unload_to_files(
...     sql="SELECT time, measure, dim1, dim2 FROM database.mytable",
...     path="s3://bucket/extracted_parquet_files/",
...     partition_cols=["dim2"],
... )
Unload as CSV files.
>>> import awswrangler as wr
>>> wr.timestream.unload_to_files(
...     sql="SELECT time, measure, dimension FROM database.mytable",
...     path="s3://bucket/extracted_parquet_files/",
...     unload_format="CSV",
... )
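The CSV-specific options can be combined with compression and server-side encryption, all taken from the signature above. A sketch (the bucket and KMS key ID are placeholders):
>>> import awswrangler as wr
>>> wr.timestream.unload_to_files(
...     sql="SELECT time, measure, dimension FROM database.mytable",
...     path="s3://bucket/extracted_csv_files/",
...     unload_format="CSV",
...     compression="GZIP",
...     field_delimiter="|",
...     escaped_by="\\",
...     encryption="SSE_KMS",
...     kms_key_id="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder key ID
... )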