awswrangler.dynamodb.put_csv(path: str | Path, table_name: str, boto3_session: Session | None = None, use_threads: bool | int = True, **pandas_kwargs: Any) None

Write all items from a CSV file to a DynamoDB table.


This function has arguments that can be configured globally through wr.config or environment variables:

Check out the Global Configurations Tutorial for details.

Parameters:

  • path (Union[str, Path]) – Path, as str or Path object, to the CSV file containing the items.

  • table_name (str) – Name of the Amazon DynamoDB table.

  • boto3_session (boto3.Session(), optional) – Boto3 Session. The default boto3 Session is used if boto3_session receives None.

  • use_threads (Union[bool, int]) – Used for parallel write requests. True (the default) enables concurrency; False disables multiple threads. If enabled, os.cpu_count() is used as the maximum number of threads. If an integer is provided, that number of threads is used.

  • pandas_kwargs – Keyword arguments forwarded to pandas.read_csv(). You cannot pass pandas_kwargs as an explicit dict; instead, add valid pandas arguments directly in the function call and awswrangler will forward them, e.g. wr.dynamodb.put_csv('items.csv', 'my_table', sep='|', na_values=['null', 'none'], skip_blank_lines=True)
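The use_threads rules above can be sketched in plain Python. Note that resolve_thread_count is a hypothetical helper for illustration, not part of the awswrangler API:

```python
import os


def resolve_thread_count(use_threads):
    """Hypothetical helper mirroring the use_threads rules described above."""
    if use_threads is True:
        # True: use os.cpu_count() as the maximum number of threads.
        return os.cpu_count() or 1
    if use_threads is False:
        # False: disable concurrency, i.e. a single thread.
        return 1
    # An integer selects that exact number of threads.
    return int(use_threads)
```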



Return type:

None

Writing the contents of a CSV file

>>> import awswrangler as wr
>>> wr.dynamodb.put_csv(
...     path='items.csv',
...     table_name='table'
... )

Writing the contents of a CSV file using pandas_kwargs

>>> import awswrangler as wr
>>> wr.dynamodb.put_csv(
...     path='items.csv',
...     table_name='table',
...     sep='|',
...     na_values=['null', 'none']
... )
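Because pandas_kwargs are forwarded verbatim to pandas.read_csv(), the example above parses the file the same way this standalone sketch does (the inline CSV data is made up for illustration):

```python
import io

import pandas as pd

# Simulated contents of 'items.csv', '|'-delimited as in the example above.
csv_data = io.StringIO("id|name\n1|alice\n2|null\n")

# put_csv would call pandas.read_csv with the same keyword arguments,
# so 'null' in the name column is parsed as a missing value (NaN)
# rather than the literal string "null".
df = pd.read_csv(csv_data, sep="|", na_values=["null", "none"])
print(df.shape)  # two rows, two columns
```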