awswrangler.mysql.read_sql_query

awswrangler.mysql.read_sql_query(sql: str, con: pymysql.connections.Connection[Any], index_col: str | List[str] | None = None, params: List[Any] | Tuple[Any, ...] | Dict[Any, Any] | None = None, chunksize: int | None = None, dtype: Dict[str, DataType] | None = None, safe: bool = True, timestamp_as_object: bool = False) → DataFrame | Iterator[DataFrame]

Return a DataFrame corresponding to the result set of the query string.

Parameters:
  • sql (str) – SQL query.

  • con (pymysql.connections.Connection) – Use pymysql.connect() to pass credentials directly, or wr.mysql.connect() to fetch them from the Glue Catalog.

  • index_col (Union[str, List[str]], optional) – Column(s) to set as the index; passing a list produces a MultiIndex. See the example below.

  • params (Union[List, Tuple, Dict], optional) – Parameters to pass to the execute method. The placeholder syntax is database-driver dependent; check your driver's documentation for which of the five syntax styles described in PEP 249's paramstyle it supports (pymysql uses format/pyformat). See the parameterized-query example below.

  • chunksize (int, optional) – If specified, return an iterator where chunksize is the number of rows to include in each chunk. See the chunked-reading example below.

  • dtype (Dict[str, pyarrow.DataType], optional) – Data types to force on specific columns. The keys should be the column names and the values should be the PyArrow types. See the example below.

  • safe (bool) – Check for overflows or other unsafe data type conversions.

  • timestamp_as_object (bool) – Cast non-nanosecond timestamps (np.datetime64) to objects.

Returns:

Result as Pandas DataFrame(s).

Return type:

Union[pandas.DataFrame, Iterator[pandas.DataFrame]]

Examples

Reading from MySQL using a Glue Catalog Connection

>>> import awswrangler as wr
>>> con = wr.mysql.connect("MY_GLUE_CONNECTION")
>>> df = wr.mysql.read_sql_query(
...     sql="SELECT * FROM test.my_table",
...     con=con
... )
>>> con.close()
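
Passing query parameters (a minimal sketch: pymysql's paramstyle is format/pyformat, so placeholders are written as %s or %(name)s; the id column is assumed for illustration)

>>> import awswrangler as wr
>>> con = wr.mysql.connect("MY_GLUE_CONNECTION")
>>> df = wr.mysql.read_sql_query(
...     sql="SELECT * FROM test.my_table WHERE id = %s",
...     con=con,
...     params=[42]
... )
>>> con.close()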
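
Reading in chunks (with chunksize set, the call returns an iterator of DataFrames rather than a single DataFrame, which keeps memory bounded for large result sets)

>>> import awswrangler as wr
>>> con = wr.mysql.connect("MY_GLUE_CONNECTION")
>>> dfs = wr.mysql.read_sql_query(
...     sql="SELECT * FROM test.my_table",
...     con=con,
...     chunksize=10_000
... )
>>> for df in dfs:
...     print(len(df))
>>> con.close()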
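
Setting an index and forcing column types (a minimal sketch: the id and score columns are assumptions for illustration)

>>> import awswrangler as wr
>>> import pyarrow as pa
>>> con = wr.mysql.connect("MY_GLUE_CONNECTION")
>>> df = wr.mysql.read_sql_query(
...     sql="SELECT * FROM test.my_table",
...     con=con,
...     index_col="id",
...     dtype={"score": pa.float64()}
... )
>>> con.close()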