I have a data frame that I want to write to a Postgres database. This functionality needs to be part of a Flask app.
For now, I'm running this insertion step as a separate script: I create an SQLAlchemy engine and pass it to df.to_sql()
to write the data frame to a database table.
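Here is a minimal sketch of what my standalone script does. The table and column names are just placeholders, and I've substituted an in-memory SQLite URL so the snippet runs anywhere; in the real script the URL is a Postgres one like `postgresql+psycopg2://user:password@host:5432/dbname`.

```python
import pandas as pd
from sqlalchemy import create_engine

# Real script uses a Postgres URL (hypothetical credentials), e.g.:
#   create_engine("postgresql+psycopg2://user:password@host:5432/dbname")
# In-memory SQLite is used here only so the sketch is self-contained.
engine = create_engine("sqlite://")

# Placeholder data frame standing in for the real data.
df = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

# Write the frame to a table, replacing it if it already exists.
df.to_sql("my_table", engine, if_exists="replace", index=False)

# Read back to confirm the rows landed.
result = pd.read_sql("SELECT * FROM my_table", engine)
print(len(result))
```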
But when I integrate this functionality into the Flask app, I will already have existing connections to the Postgres database, created using a psycopg2 connection pool.
When I looked at the df.to_sql()
documentation, it says that it uses an SQLAlchemy engine; I don't see any other connection mechanism mentioned. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html#pandas-dataframe-to-sql
My question is: why do I need to create this SQLAlchemy engine when I already have existing connections? Why can't I use them?