awswrangler.data_api.rds.to_sql
- awswrangler.data_api.rds.to_sql(df: DataFrame, con: RdsDataApi, table: str, database: str, mode: Literal['append', 'overwrite'] = 'append', index: bool = False, dtype: dict[str, str] | None = None, varchar_lengths: dict[str, int] | None = None, use_column_names: bool = False, chunksize: int = 200, sql_mode: str = 'mysql') → None
Insert data using an SQL query on a Data API connection.
- Parameters:
  - df (DataFrame) – Pandas DataFrame.
  - con (RdsDataApi) – A RdsDataApi connection instance.
  - database (str) – Database to run the query on; defaults to the database specified by con.
  - table (str) – Table name.
  - mode (Literal['append', 'overwrite']) – append (inserts new records into the table) or overwrite (drops the table and recreates it).
  - index (bool) – True to store the DataFrame index as a column in the table, otherwise False to ignore it.
  - dtype (dict[str, str] | None) – Dictionary of column names and MySQL types to cast to. Useful when you have columns with undetermined or mixed data types (e.g. `{'col name': 'TEXT', 'col2 name': 'FLOAT'}`).
  - varchar_lengths (dict[str, int] | None) – Dict of VARCHAR length by column (e.g. `{"col1": 10, "col5": 200}`).
  - use_column_names (bool) – If set to True, uses the column names of the DataFrame when generating the INSERT SQL query. E.g. if the DataFrame has two columns col1 and col3 and use_column_names is True, data will only be inserted into the database columns col1 and col3.
  - chunksize (int) – Number of rows inserted with each SQL query. Defaults to inserting 200 rows per query.
  - sql_mode (str) – "mysql" for default MySQL identifiers (backticks) or "ansi" for ANSI-compatible identifiers (double quotes).
- Return type:
None
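
Example (a minimal sketch: the cluster ARN, secret ARN, database, and table names below are illustrative placeholders, not values from this page):

```python
import awswrangler as wr
import pandas as pd

# Open a Data API connection to an Aurora cluster (ARNs are placeholders).
con = wr.data_api.rds.connect(
    resource_arn="arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",
    database="mydb",
    secret_arn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret",
)

df = pd.DataFrame({"id": [1, 2], "name": ["foo", "boo"]})

# Drop and recreate the table, inserting rows in batches of 200 (the default chunksize).
wr.data_api.rds.to_sql(
    df=df,
    con=con,
    table="my_table",
    database="mydb",
    mode="overwrite",
    use_column_names=True,
)
```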