
My requirement is to pre-process a text file into comma-separated (,) form and then bulk-load it into a PostgreSQL table.

I think that if the file is delimited by some separator, it is easy to COPY it into a PostgreSQL table. That is why I decided to pre-process the file before loading.

My table has 573 columns. Is there an easy way to prepare the file with a separator so it can be loaded into the table?

Or can you suggest a better way to proceed?
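
For illustration, here is a rough sketch of one possible pre-process-then-COPY flow (this assumes the incoming file is fixed-width; the column widths, table name, and connection string below are placeholders, and the real file would need all 573 widths):

```python
import csv
import io
import psycopg2

# Placeholder column widths for a fixed-width input record;
# the real file would need 573 entries in this list.
WIDTHS = [10, 5, 8]

def to_csv(in_path):
    """Slice each fixed-width record into fields and build an in-memory CSV."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    with open(in_path) as f:
        for line in f:
            fields, pos = [], 0
            for w in WIDTHS:
                fields.append(line[pos:pos + w].strip())
                pos += w
            writer.writerow(fields)
    buf.seek(0)
    return buf

# Placeholder connection details and table name.
conn = psycopg2.connect("dbname=mydb user=me")
with conn, conn.cursor() as cur:
    cur.copy_expert("COPY my_table FROM STDIN WITH (FORMAT csv)",
                    to_csv("input.txt"))
```

This keeps the slicing logic in one place (the `WIDTHS` list) instead of handling 573 columns by hand, and streams the prepared CSV straight into `COPY` without writing an intermediate file.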

Thanks a lot in advance.

Sekhar
    Postgres has its own `COPY` command for loading CSV directly, [see here](https://stackoverflow.com/questions/2987433/how-to-import-csv-file-data-into-a-postgresql-table). I'm not sure you even need to involve Python if you don't want to. – Tim Biegeleisen Dec 08 '17 at 11:11
  • But in my case, I will receive the text file by CFT during the night, and that received file then has to be loaded into the database. That is why I am handling this in Python. – Sekhar Dec 08 '17 at 11:14
  • Well it must be possible to call `COPY` from Python. Then again, if you only need to load certain columns, or have other special requirements, then maybe you do need a Python script. – Tim Biegeleisen Dec 08 '17 at 11:17
  • Yes, I tried loading a test table and it was fine, but when it comes to the actual table I am having difficulty preparing the file to load, because it must be in a separator-delimited format. Dealing with 573 columns is really painful. Is there an optimal way to slice the input records and insert a separator? – Sekhar Dec 08 '17 at 11:28
  • 573 columns will always be painful. You'll have to invent 573 names for them. And 573 data types. And 573 meanings. – joop Dec 08 '17 at 12:34
  • This library may be helpful: [pgcopy](https://pypi.python.org/pypi/pgcopy/1.1.0) – pault Dec 08 '17 at 17:50

0 Answers