61

I need to transfer some data from another database. The old database is called paw1.moviesDB and the new database is paw1. The schemas of the two tables are the following.

Awards (table in the new DB):

    Id       [PK] serial
    Award

Nominations (table in the old DB):

    Id           [PK] serial
    nominations

How do I copy the data from old database to the new database?

Peter Mortensen
Nicopuri

9 Answers

90

I just had to do this exact thing so I figured I'd post the recipe here. This assumes that both databases are on the same server.

First, copy the table from the old db to the new db. At the command line:

pg_dump -U postgres -t <old_table> <old_database> | psql -U postgres -d <new_database>

Next, change the ownership of the copied table to the user of the new database. Log into psql:

psql -U postgres -d <new_database>

ALTER TABLE <old_table> OWNER TO <new_user>;

\q

At this point your copied table in your new database still has the name <old_table> from your old database. Assuming you want to move the data somewhere else, say to <new_table>, you can just use regular SQL queries:

INSERT INTO <new_table> (field1, field2, field3)
SELECT field1, field2, field3 FROM <old_table>;

Done!

Stephen
Nate
  • How to do it for a remote server? – Dev R Sep 24 '13 at 07:54
  • @DevR, just add -h – rdo Nov 17 '16 at 08:54
  • Note in the first command, if you need a password for both `pg_dump` and `psql`, they will both serve you the prompt at once. What worked for me is to enter the password, hit enter, then enter the password a second time, and hit enter again. The prompt itself looks messed up while you're doing this, but it works. – Stephen Mar 16 '21 at 14:36
39

Databases are isolated in PostgreSQL; when you connect to a PostgreSQL server you connect to just one database, so you can't copy data from one database to another with a plain SQL query.

If you come from MySQL: what MySQL calls (loosely) "databases" are "schemas" in PostgreSQL - sort of namespaces. A PostgreSQL database can have many schemas, each one with its tables and views, and you can copy from one schema to another with the schema.table syntax.

If you really have two distinct PostgreSQL databases, the common way of transferring data from one to another would be to export your tables (with pg_dump -t) to a file, and import them into the other database (with psql).
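
For example, a sketch with placeholder names (run both commands from the shell):

pg_dump -t mytable old_database > mytable.sql
psql -d new_database -f mytable.sql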

If you really need to get data from a distinct PostgreSQL database, another option - mentioned in Grant Johnson's answer - is dblink, which is an additional module (in contrib/).

Update:

Postgres introduced foreign data wrappers in 9.1 (which was released after the question was asked). Foreign data wrappers allow the creation of foreign tables (for example through the postgres_fdw extension), which makes it possible to access a remote table (on a different server and database) as if it were a local table.
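
A minimal sketch of that approach with postgres_fdw, using the table and column names from the question; the server name, connection options, and column types below are placeholders to adapt:

-- Run this in the new database:
CREATE EXTENSION postgres_fdw;

CREATE SERVER old_server
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'localhost', dbname 'old_database', port '5432');

CREATE USER MAPPING FOR current_user
  SERVER old_server
  OPTIONS (user 'postgres', password 'secret');

-- Expose the remote table locally, then copy its rows:
CREATE FOREIGN TABLE old_nominations (id integer, nominations text)
  SERVER old_server
  OPTIONS (schema_name 'public', table_name 'nominations');

INSERT INTO awards (id, award)
SELECT id, nominations FROM old_nominations;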

a_horse_with_no_name
leonbloy
  • Setting the search_path variable using "SET search_path TO blah" is a good way to work with different schemas without hurting your pinkies. You can make your changes permanent with "ALTER USER user SET search_path TO blah" - love it! ;-) – Lester Cheung Apr 05 '12 at 05:03
  • Doing the move using dblink, see http://stackoverflow.com/questions/14797327/copy-data-between-two-tables-in-postgresql-using-dblink-sql – Simon B. Sep 17 '14 at 11:25
  • For future readers: the canonical way of doing it now is to use the `postgres_fdw` extension and foreign tables. – dgan Dec 23 '19 at 22:22
25

This worked for me to copy a table remotely from my localhost to Heroku's PostgreSQL:

pg_dump -C -t source_table -h localhost source_db | psql -h destination_host -U destination_user -p destination_port destination_db

This creates the table for you.

For the other direction (from Heroku to local):

pg_dump -C -t source_table -h source_host -U source_user -p source_port source_db | psql -h localhost destination_db

crizCraig
  • In case you have a different port and username on localhost, the command is: pg_dump -C -t source_table -h localhost -p local_port -U local_user source_db | psql -h destination_host -U destination_user -p destination_port destination_db – Fil Feb 10 '15 at 17:48
13

From: http://dbaspot.com/postgresql/348627-pg_dump-t-give-where-condition.html (NOTE: the link is now broken)

# create temp table with the data
psql mydb
CREATE TABLE temp1 (LIKE mytable);
INSERT INTO temp1 SELECT * FROM mytable WHERE myconditions;
\q

# export the data to a sql file
pg_dump --data-only --column-inserts -t temp1 mydb > out.sql
psql mydb
DROP TABLE temp1;
\q

# import the temp1 rows into another database
psql other_db
CREATE TABLE temp1 (LIKE mytable);  -- temp1 must exist before loading the data-only dump
\q
cat out.sql | psql -d other_db
psql other_db
INSERT INTO mytable SELECT * FROM temp1;
DROP TABLE temp1;

Another method, useful with remote servers:

  # export a table csv and import in another database
  psql-remote> COPY elements TO '/tmp/elements.csv' DELIMITER ',' CSV HEADER;
  $ scp host.com:/tmp/elements.csv /tmp/elements.csv
  psql-local> COPY elements FROM '/tmp/elements.csv' DELIMITER ',' CSV;
Thiago Macedo
  • Thanks. This is the simplest solution that doesn't involve any extensions. Note that `--column-inserts` significantly slows it down, so you can remove that if the target database's table is known not to have any conflicts. – sudo Dec 31 '16 at 18:19
  • In case this is unclear to anyone, the second half is this: Create the table on the other DB: `psql -d [other_db] -c "CREATE TABLE temp1 (LIKE mytable);"`, then insert into your other DB: `cat out.sql | psql -d [other_db]`, then insert into the main table: `psql -d [other_db] -c "INSERT INTO mytable (SELECT * FROM temp1);"`. – sudo Dec 31 '16 at 18:27
  • I want to point out how easy this approach is if you have simple data, without binary columns, blobs, etc. – Vokail Feb 21 '19 at 13:04
9

There are three options for copying it if this is a one-off:

  1. Use dblink (I think it is still in contrib)
  2. Have the application do the work.
  3. Export/import

If this is an ongoing need, the answers are:

  1. Change to schemas in the same DB
  2. dblink
Grant Johnson
3
  1. If your source and target databases reside on the same machine, you can use:

Note: sourcedb must already exist on your server, and no other sessions can be connected to it while the copy runs.

CREATE DATABASE targetdb WITH TEMPLATE sourcedb;

This statement copies the sourcedb to the targetdb.

  2. If your source and target databases reside on different servers, you can use the following steps:

Step 1: Dump the source database to a file.

pg_dump -U postgres -O sourcedb > sourcedb.sql

Note: here postgres is the username, so change the name accordingly.

Step 2: Copy the dump file to the remote server.
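
For example, assuming SSH access (the host and path are placeholders; any file-transfer method works):

scp sourcedb.sql postgres@remote_host:/tmp/sourcedb.sql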

Step 3: Create a new database on the remote server:

CREATE DATABASE targetdb;

Step 4: Restore the dump file on the remote server:

psql -U postgres -d targetdb -f sourcedb.sql

(pg_dump is a standalone application, i.e. something you run from a shell/command line, and not a Postgres/SQL command.)

This should do it.

Tron
1

You cannot perform a cross-database query as you can in SQL Server; PostgreSQL does not support it.

PostgreSQL's dblink extension is used to connect from one database to another. You have to install and configure dblink to execute a cross-database query.
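
For illustration, a minimal dblink sketch using the table names from the question (the connection string and column types are assumptions to adapt):

-- Run this in the new database:
CREATE EXTENSION dblink;

INSERT INTO awards (id, award)
SELECT id, nominations
FROM dblink('dbname=old_database host=localhost user=postgres password=secret',
            'SELECT id, nominations FROM nominations')
     AS t(id integer, nominations text);

Because dblink() returns SETOF record, the AS t(...) column definition list is required.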

I have already created a step-by-step script and example for executing cross-database queries in PostgreSQL. Please see this post: PostgreSQL [Video]: Cross Database Queries using the DbLink Extension

Peter Mortensen
Anvesh
1

Actually, there is a way to send table data from one PostgreSQL database to another. I use the procedural language plperlu (the untrusted Perl procedural language) for it.

Description (all was done on a Linux server):

  1. Create the plperlu language in your database A.
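
     For example, on PostgreSQL 9.1 or later, where plperlu is packaged as an extension (on older versions, use CREATE LANGUAGE plperlu; instead):

     CREATE EXTENSION plperlu;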

  2. Then PostgreSQL can preload the needed Perl modules via the following line at the end of postgresql.conf on the server hosting database A:

    plperl.on_init = 'use DBI; use DBD::Pg;'
    
  3. You build a function in A like this:

    CREATE OR REPLACE FUNCTION send_data( VARCHAR )
    RETURNS character varying AS
    $BODY$
    my $command = $_[0] || die 'No SQL command!';
    my $connection_string = "dbi:Pg:dbname=your_dbase;host=192.168.1.2;port=5432;";
    my $dbh = DBI->connect($connection_string, 'user', 'pass',
        {AutoCommit => 0, RaiseError => 1, PrintError => 1, pg_enable_utf8 => 1});
    my $sql = $dbh->prepare($command);
    eval { $sql->execute() };
    my $error = $@;   # RaiseError makes a failed execute die inside the eval
    $sql->finish;
    if ($error) { $dbh->rollback() } else { $dbh->commit() }
    $dbh->disconnect();
    return $error ? "ERROR: $error" : 'OK';
    $BODY$
    LANGUAGE plperlu VOLATILE;
    

And then you can call the function inside database A:

SELECT send_data( 'INSERT INTO jm (jm) VALUES (''zzzzzz'')' );

And the value "zzzzzz" will be inserted into table "jm" in database B (the database pointed to by the connection string).

Peter Mortensen
francisco
0

Just like leonbloy suggested, using two schemas in one database is the way to go. Assuming a source schema (old DB) and a target schema (new DB), you can try something like this (you should consider column names, types, etc.):

INSERT INTO target.Awards SELECT * FROM source.Nominations;
Federico Cristina
  • If you have a decently recent version of Postgres (>=8.1) you can do `ALTER TABLE Nominations SET SCHEMA target` – b0fh Oct 27 '11 at 11:19
  • Schemas are nothing but namespaces, and they don't really provide isolation. A different database could be running on a different computer, or maybe on the same one but with different performance/memory settings, and you might have two databases that have the same schemas (which we do have). A different database could even be a different version of Postgres, assuming they're compatible. – sudo Dec 31 '16 at 18:30