I'm not very experienced with SQL, and I'm wondering whether there is a more efficient way to accomplish what I'm doing.
I'm using Python with SQLAlchemy to import very large data sets from CSV files into a SQL table. I only want to import certain columns from the CSV, so I wrote a function that grabs the column names from the CSV, and another that filters out the ones I don't need (conveniently, the columns I don't need sit at the end of the column list). Then a for loop builds and executes an INSERT INTO statement for each row in the reader, filling in the kept column names (they are the same in the CSV and in the SQL table) and that row's values, i.e.:
row1: INSERT INTO table (c1, c2, c3) VALUES (v1, v2, v3)
row2: INSERT INTO table (c1, c2, c3) VALUES (v4, v5, v6)
row3: INSERT INTO table (c1, c2, c3) VALUES (v7, v8, v9)
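Roughly, my current loop looks like the sketch below (the table name, column names, and CSV content are made-up stand-ins; the real files are much bigger):

```python
import csv
import io
from sqlalchemy import create_engine, text

# Hypothetical setup: an in-memory SQLite DB standing in for my real table.
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE mytable (c1 TEXT, c2 TEXT, c3 TEXT)"))

# Stand-in for my real CSV file; the trailing columns are the ones I drop.
csv_data = "c1,c2,c3,junk1,junk2\nv1,v2,v3,x,x\nv4,v5,v6,x,x\nv7,v8,v9,x,x\n"
reader = csv.reader(io.StringIO(csv_data))

header = next(reader)[:3]  # keep only the columns I need (they come first)
stmt = text(
    "INSERT INTO mytable ({}) VALUES ({})".format(
        ", ".join(header),
        ", ".join(":" + c for c in header),
    )
)

with engine.begin() as conn:
    for row in reader:  # one INSERT statement executed per CSV row
        conn.execute(stmt, dict(zip(header, row[:3])))
```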
My CSV files have around 15k lines each, so executing a separate INSERT for every row takes quite a while.
Is there a way to consolidate these queries so that inserting all the values takes less time?
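One thing I've considered, but I'm not sure it's the idiomatic approach: building all the parameter dicts first and passing the whole list to a single `execute()` call, so the driver can use an executemany-style batch instead of 15k round trips. A sketch with the same made-up names as above:

```python
import csv
import io
from sqlalchemy import create_engine, text

# Hypothetical setup again: in-memory SQLite, stand-in table and CSV.
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE mytable (c1 TEXT, c2 TEXT, c3 TEXT)"))

csv_data = "c1,c2,c3,junk\nv1,v2,v3,x\nv4,v5,v6,x\nv7,v8,v9,x\n"
reader = csv.reader(io.StringIO(csv_data))
header = next(reader)[:3]
stmt = text(
    "INSERT INTO mytable ({}) VALUES ({})".format(
        ", ".join(header),
        ", ".join(":" + c for c in header),
    )
)

# Build every row's parameters up front...
params = [dict(zip(header, row[:3])) for row in reader]

# ...then pass the list in one call: SQLAlchemy runs this as executemany.
with engine.begin() as conn:
    conn.execute(stmt, params)
```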