350

I'm trying to create a cronjob to back up my database every night before something catastrophic happens. It looks like this command should meet my needs:

0 3 * * * pg_dump dbname | gzip > ~/backup/db/$(date +\%Y-\%m-\%d).psql.gz

Except after running that, it expects me to type in a password. I can't do that if I run it from cron. How can I pass one in automatically?

mpen
  • Possibly helpful post I wrote on automating pg_restore: https://medium.com/@trinity/automating-postgres-import-from-heroku-db-to-other-db-449d5946e1fb – kittyminky Jul 27 '15 at 02:19
  • Answer using a connection string here: https://stackoverflow.com/a/29101292/1579667 – Benj Feb 14 '20 at 16:02

17 Answers

358

Create a .pgpass file in the home directory of the account that pg_dump will run as.

The format is:

hostname:port:database:username:password

Then, set the file's mode to 0600. Otherwise, it will be ignored.

chmod 600 ~/.pgpass

See the PostgreSQL documentation on libpq-pgpass for more details.
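A runnable sketch of the above (my addition: it uses mktemp and libpq's PGPASSFILE variable so it is safe to try anywhere; in a real cron setup the file is simply ~/.pgpass in the home directory of that account, and the credentials below are placeholders):

```shell
# Write a password file in the format libpq expects, then lock it down.
# PGPASSFILE is libpq's documented override for the default ~/.pgpass location.
export PGPASSFILE="$(mktemp)"
printf 'localhost:5432:dbname:dbuser:secret\n' > "$PGPASSFILE"
chmod 600 "$PGPASSFILE"   # libpq silently ignores the file unless it is 0600 or stricter
```

Once the file is in place, pg_dump picks the password up automatically whenever the hostname, port, database, and username of a connection match a line.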

araqnid
250

Or you can set up the crontab to run a script, and inside that script set an environment variable like this: export PGPASSWORD="$put_here_the_password"

This way, if you have multiple commands that require the password, you can put them all in the script. If the password changes, you only have to change it in one place (the script).

And I agree with Joshua: using pg_dump -Fc generates the most flexible export format, and it is already compressed. For more info, see the pg_dump documentation.

E.g.

# dump the database in custom-format archive
pg_dump -Fc mydb > db.dump

# restore the database
pg_restore -d newdb db.dump
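Put together, a minimal wrapper script for cron might look like this (a sketch with hypothetical names and paths; the pg_dump call is guarded so it is harmless on a machine without PostgreSQL installed):

```shell
#!/bin/sh
# nightly-backup.sh (hypothetical name): set the password once, run any number
# of pg_* commands, then drop it from the environment.
export PGPASSWORD="s3cret"            # placeholder; visible to this process tree
DUMP_FILE="${TMPDIR:-/tmp}/mydb.dump"

# Guarded so the sketch can run where pg_dump is absent or no server is up.
if command -v pg_dump >/dev/null 2>&1; then
    pg_dump -Fc mydb > "$DUMP_FILE" || echo "pg_dump failed (no server?)" >&2
fi

unset PGPASSWORD                      # don't leave the password in the environment
```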
Max
  • This is not ideal. Throwing it in `.pgpass` keeps everything in one place too, without adding an extra layer of indirection. Plus, if all I wanted to do was export a variable, I'd do that in my `.bashrc` file, or whatever it is. – mpen Nov 15 '11 at 01:57
  • I can see why the `.pgpass` file would be a better solution. I was just giving an alternative; not sure it deserves a downvote though :) – Max Nov 15 '11 at 13:28
  • I didn't downvote. That was someone else; I didn't think it warranted a downvote either. Have a +1 to make up for it. – mpen Nov 15 '11 at 16:37
  • Your suggestion "And I agree with Joshua, just use pg_dump -Fc, a lot easier." does not really add anything to your answer/alternative. Specifically, it does not explain what it would make easier; thus it only adds confusion. – Kissaki Oct 03 '13 at 15:28
  • Thanks for the feedback, Kissaki. I don't remember the exact context, nor do I see anything from Joshua, but I probably had a good reason at the time. I've expanded a bit on why `pg_dump -Fc` is a better way to dump a database: it is easy to reload, and the format is already compressed. And now there's a reference to the `pg_dump` documentation where anyone can dig into the details. – Max Oct 04 '13 at 16:16
  • This solution poses an extreme security risk. The export keyword makes this variable part of the system environment, which means any user can inspect its value. Using the env command to set it only for the execution of pg_dump/pg_restore is a much better approach. – Jacek Prucia Dec 10 '14 at 12:17
  • So many haters. I appreciate this answer and am adopting it for my own application. – James T Snell Apr 15 '15 at 16:50
  • **Setting the PGPASSWORD environment variable is not a recommended practice by the documentation** (http://www.postgresql.org/docs/current/static/libpq-envars.html): _Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using the ~/.pgpass file_ – Levure Nov 23 '15 at 08:35
  • In certain environments, such as an Ubuntu EC2 instance on AWS, the default `ubuntu` user can sudo without a password, so using an environment variable is not really less secure. In fact, it is probably more secure to use the environment variable and immediately remove it afterward than to leave the .pgpass file on disk for that `ubuntu` user to access. – StartupGuy Apr 02 '18 at 17:23
  • _Use this with docker run._ – mikezter Jul 10 '18 at 22:12
  • This would actually be the preferred way for Docker containers. – Dev Aggarwal Sep 17 '18 at 09:38
204

If you want to do it in one command:

PGPASSWORD="mypass" pg_dump mydb > mydb.dump
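A note on scope (my addition): written as a prefix assignment like this, PGPASSWORD is set only in the environment of that single pg_dump process and never enters your shell's exported environment. You can convince yourself with pure shell, no PostgreSQL needed (FOO stands in for PGPASSWORD):

```shell
# A prefix assignment exports the variable to that one command only.
CHILD_SEES=$(FOO="secret" sh -c 'printf %s "$FOO"')   # the child process sees it
PARENT_SEES="${FOO:-unset}"                           # the current shell never had it
echo "$CHILD_SEES"    # → secret
echo "$PARENT_SEES"   # → unset
```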
gitaarik
  • **Setting the PGPASSWORD environment variable is not a recommended practice** by the documentation (http://www.postgresql.org/docs/current/static/libpq-envars.html): _Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using the ~/.pgpass file_ – Levure Nov 23 '15 at 08:37
  • It's still a useful comment. There are lots of deployment cases where this is still helpful. – Iain Duncan Mar 30 '17 at 19:18
  • I always got the error 'Peer authentication failed for user "username"'. The solution was: PGPASSWORD="mypass" pg_dump -U username -h localhost > mydb.dump – Martin Pabst Sep 11 '17 at 16:18
  • My opinion is that it is far better to set an environment variable (where you control *where* and *how* the password is stored) than to use a known, unencrypted location. This part of the PostgreSQL docs is faulty, and this answer is a good one. – peterh Sep 13 '17 at 10:15
  • My password had an '@' in it. This worked. I couldn't figure out how to make it work with the `postgres://` syntax. Didn't try `.pgpass` because my postgres user has no home directory. – jmathew Sep 29 '17 at 14:33
163

For a one-liner, like migrating a database, you can use --dbname followed by a connection string (including the password), as stated in the pg_dump manual.

In essence:

pg_dump --dbname=postgresql://username:password@127.0.0.1:5432/mydatabase

Note: Make sure that you use the option --dbname instead of the shorter -d and use a valid URI prefix, postgresql:// or postgres://.

The general URI form is:

postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]

As best practice for your case (a repetitive task in cron), this shouldn't be done, because of the security issues. If it weren't for the .pgpass file, I would save the connection string as an environment variable.

export MYDB=postgresql://username:password@127.0.0.1:5432/mydatabase

then have in your crontab (note that cron does not read your shell's startup files, so the variable is best defined in the crontab itself, and % must be escaped in crontab entries):

MYDB=postgresql://username:password@127.0.0.1:5432/mydatabase
0 3 * * * pg_dump --dbname=$MYDB | gzip > ~/backup/db/$(date +\%Y-\%m-\%d).psql.gz
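One caveat worth adding (my addition, not from the answer): if the password contains characters that are special in URIs, they must be percent-encoded, since libpq parses the URI before using the password. A sketch with an assumed password of p@ss (the pg_dump call is guarded so the snippet runs even without PostgreSQL installed):

```shell
# "@" in a password would be parsed as the credentials/host separator,
# so an assumed password of "p@ss" is written as "p%40ss" inside the URI.
MYDB='postgresql://username:p%40ss@127.0.0.1:5432/mydatabase'

# Guarded: only attempt the dump where pg_dump actually exists.
if command -v pg_dump >/dev/null 2>&1; then
    pg_dump --dbname="$MYDB" > mydb.sql 2>/dev/null || true
fi
```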

Josue Alexander Ibarra
  • Version 9.1 of Postgres reports an unknown option for dbname – akohout Oct 23 '15 at 11:27
  • This was tested with versions 9.4 and 9.3 on Arch and RHEL, respectively. Can you post your connection string? Anonymized, of course. – Josue Alexander Ibarra Oct 24 '15 at 00:09
  • Thanks, @JosueIbarra. Tested successfully on PostgreSQL 9.3, Ubuntu 14.04. – Cao Minh Tu Dec 03 '15 at 07:08
  • I just tested again on 9.5.1 on Arch Linux; it works. Make sure to use a valid connection string with the option containing an equals sign. See **--dbname** at http://www.postgresql.org/docs/9.5/static/app-pgdump.html. Awesome username, by the way. – Josue Alexander Ibarra Feb 23 '16 at 21:09
  • I tried this with 9.6 on Windows: pg_dump --dbname=postgresql://username:password@127.0.0.1:5432/mydatabase, and while it prints out everything, it does not seem to save it anywhere. Am I missing something? – EntryLevelR Feb 05 '17 at 19:00
  • @EntryLevelR you need to pipe the output to a file in order to save it; see this relevant question: http://askubuntu.com/questions/420981/how-do-i-save-terminal-output-to-a-file – Josue Alexander Ibarra Feb 18 '17 at 07:14
  • Thank you, that worked great! Would you be willing to show how to restore via the command line using the psql file that is created? I am getting the following output when attempting to restore: "pg_restore: [archiver] input file appears to be a text format dump. Please use psql." – mcroteau Jun 11 '17 at 02:59
  • Worked great. For some reason, as postgres or a regular user, .pgpass would not authenticate; this one-liner worked. – edencorbin Aug 06 '17 at 12:59
  • This should be the accepted answer. One-liner, clear. – swdev Dec 06 '17 at 22:05
  • But the password will get logged to terminal history this way. – benzkji Feb 07 '20 at 14:08
53

This one-liner helps me when creating a dump of a single database:

PGPASSWORD="yourpassword" pg_dump -U postgres -h localhost mydb > mydb.pgsql
Aarvy
51
$ PGPASSWORD="mypass" pg_dump -i -h localhost -p 5432 -U username -F c -b -v -f dumpfilename.dump databasename
Francisco Luz
19

@Josue Alexander Ibarra's answer works on CentOS 7 with version 9.5 even if --dbname is not passed:

pg_dump postgresql://username:password@127.0.0.1:5432/mydatabase 
Jauyzed
  • You're right, that's how it's supposed to look. I think what was wrong a few years back was my shell configuration; that's why it was essential for me to use `--dbname` – Josue Alexander Ibarra Apr 21 '17 at 20:40
11

You can pass a password into pg_dump directly by using the following:

pg_dump "host=localhost port=5432 dbname=mydb user=myuser password=mypass" > mydb_export.sql
Larry Spence
    Welcome to Stack Overflow! While your answer may work, it has serious security implications. **Arguments of a command are visible in ps(1)**, so if a process monitors ps(1) then the password is compromised. – Jonathan Rosa Jun 16 '20 at 22:06
9

Note that, on Windows, the pgpass.conf file must be in the following folder:

%APPDATA%\postgresql\pgpass.conf

If there's no postgresql folder inside the %APPDATA% folder, create it.

The pgpass.conf file content is something like:

localhost:5432:dbname:dbusername:dbpassword

Cheers

mpen
7

As detailed in this blog post, there are two ways to non-interactively provide a password to PostgreSQL utilities such as the pg_dump command: using the .pgpass file, or using the PGPASSWORD environment variable.

manfall19
4

Correct me if I'm wrong, but if the system user is the same as the database user, PostgreSQL won't ask for the password: it relies on the system for authentication. This might be a matter of configuration.

Thus, when I wanted the database owner postgres to back up his databases every night, I could create a crontab for it: crontab -e -u postgres. Of course, postgres would need to be allowed to execute cron jobs; thus it must be listed in /etc/cron.allow, or /etc/cron.deny must be empty.
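For example, a sketch assuming a Debian-style install where the postgres system account maps to the postgres database role via peer authentication (the backup path is an example):

```shell
# sudo crontab -e -u postgres, then add a line like the following.
# Note: "%" is special in crontab entries and must be escaped as "\%".
0 3 * * * pg_dump dbname | gzip > /var/lib/postgresql/backups/$(date +\%Y-\%m-\%d).psql.gz
```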

Tobias
  • You're sort of right here. The default Postgres configuration uses trust authentication for local system accounts, but most production setups get rid of that block right after installing the RDBMS. – Jacek Prucia Dec 10 '14 at 12:14
4

Back up over SSH with a password, using temporary .pgpass credentials, and push to S3:

#!/usr/bin/env bash
cd "$(dirname "$0")"

DB_HOST="*******.*********.us-west-2.rds.amazonaws.com"
DB_USER="*******"
SSH_HOST="my_user@host.my_domain.com"
BUCKET_PATH="bucket_name/backup"

if [ $# -ne 2 ]; then
    echo "Error: 2 arguments required"
    echo "Usage:"
    echo "  my-backup-script.sh <DB-name> <password>"
    echo "  <DB-name> = The name of the DB to backup"
    echo "  <password> = The DB password, which is also used for GPG encryption of the backup file"
    echo "Example:"
    echo "  my-backup-script.sh my_db my_password"
    exit 1
fi

DATABASE=$1
PASSWORD=$2

echo "set remote PG password .."
echo "$DB_HOST:5432:$DATABASE:$DB_USER:$PASSWORD" | ssh "$SSH_HOST" "cat > ~/.pgpass; chmod 0600 ~/.pgpass"
echo "backup over SSH and gzip the backup .."
ssh "$SSH_HOST" "pg_dump -U $DB_USER -h $DB_HOST -C --column-inserts $DATABASE" | gzip > ./tmp.gz
echo "unset remote PG password .."
echo "*********" | ssh "$SSH_HOST" "cat > ~/.pgpass"
echo "encrypt the backup .."
gpg --batch --passphrase "$PASSWORD" --cipher-algo AES256 --compression-algo BZIP2 -co "$DATABASE.sql.gz.gpg" ./tmp.gz

# Backing up to AWS obviously requires having your credentials to be set locally
# EC2 instances can use instance permissions to push files to S3
DATETIME=$(date "+%Y%m%d-%H%M%S")
aws s3 cp ./"$DATABASE.sql.gz.gpg" s3://"$BUCKET_PATH"/"$DATABASE"/db/"$DATETIME".sql.gz.gpg
# s3 is cheap, so don't worry about a little temporary duplication here
# "latest" is always good to have because it makes it easier for dev-ops to use
aws s3 cp ./"$DATABASE.sql.gz.gpg" s3://"$BUCKET_PATH"/"$DATABASE"/db/latest.sql.gz.gpg

echo "local clean-up .."
rm ./tmp.gz
rm "$DATABASE.sql.gz.gpg"

echo "-----------------------"
echo "To decrypt and extract:"
echo "-----------------------"
echo "gpg -d ./$DATABASE.sql.gz.gpg | gunzip > tmp.sql"
echo

Just substitute the first couple of config lines with whatever you need, obviously. For those not interested in the S3 backup part, take it out, obviously.

This script deletes the credentials in .pgpass afterward because, in some environments, the default SSH user can sudo without a password (for example, an EC2 instance with the ubuntu user), so using .pgpass with a different host account in order to secure those credentials might be pointless.

StartupGuy
  • Password will get logged to terminal `history` this way, no? – mpen Apr 02 '18 at 18:47
  • @mpen Locally, yes; remotely, no. In my case it's OK to have it in my local history because it's a secure VM that does not allow remote access. If in your case that is not OK, just do `history -c`. When using this with Jenkins, use the `Inject passwords to the build as environment variables` option so that the password is masked. – StartupGuy Apr 02 '18 at 19:11
1

A secure way of passing the password is to store it in a .pgpass file.

The content of the .pgpass file is in the format:

db_host:db_port:db_name:db_user:db_pass

#Eg
localhost:5432:db1:admin:tiger
localhost:5432:db2:admin:tiger

Now, store this file in the home directory of the user, with permissions u=rw (0600) or less.

To find the home directory of the user, use echo $HOME.

Restrict the permissions of the file: chmod 0600 /home/ubuntu/.pgpass

saintlyzero
0

You can just open pg_hba.conf and set trust for all methods. That works for me, but then there is no security at all.

stefanoz
0

For Windows, the pgpass.conf file should exist at this path:

%APPDATA%\postgresql\pgpass.conf

On my Windows 10 the absolute path is:

C:\Users\Ognjen\AppData\Roaming\postgresql\pgpass.conf

Note: If there is no postgresql folder in %APPDATA%, create one with a pgpass.conf file inside it.

The content of pgpass.conf could be:

*:5432:*:*:myDbPassword

Or, more specifically:

localhost:5432:dbName:username:password

Note: The content of pgpass.conf must NOT end with white space (after the password), or an error will occur.

ognjenkl
-1

Another (probably not secure) way to pass the password is to use input redirection, i.e. calling

pg_dump [params] < [path to file containing password]

szymond
  • Concerning security: this file would need to be readable only by the intended user(s); however, anyone with root rights would be able to change the security settings, and thus read the unencrypted password. So yes, this is insecure... – Tobias Oct 25 '13 at 12:40
  • @Tobias is there any alternative? It would seem that anyone with root rights could always see the password, no matter the technique, other than entering the password interactively (and the question is about cron). http://www.postgresql.org/docs/9.3/static/auth-methods.html#GSSAPI-AUTH mentions GSSAPI supporting single sign-on, but there is no mention of whether that works non-interactively. – Ross Bradbury Mar 18 '15 at 16:15
  • Anyone with root rights can also read the .pgpass file, which is the recommended way. Therefore, I would not consider root access a security risk. – max Feb 12 '16 at 22:02
-4

The easiest way, in my opinion: edit your main Postgres config file, pg_hba.conf, and add the following line:

host <your_db_name> <your_db_owner> 127.0.0.1/32 trust

After this, your cron job can run the following:

pg_dump -h 127.0.0.1 -U <your_db_user> <your_db_name> | gzip > /backup/db/$(date +%Y-%m-%d).psql.gz

and it works without a password.

Dofri