74

I'd like to get dumps of each MySQL table into separate files. The manual indicates that the syntax for this is

mysqldump [options] db_name [tbl_name ...]

This indicates that you have to know the table names beforehand. I could set up a script that knows each table name now, but say I add a new table down the road and forget to update the dump script; then I'm missing dumps for one or more tables.

Is there a way to automagically dump each existing table into a separate file? Or am I going to have to do some script-fu: query the database, get all the table names, and dump them by name?

If I go the script-fu route, what scripting languages can access a MySQL database?
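
To illustrate the script-fu route, here is a rough shell sketch of what I imagine (names are placeholders, [options] as in the manual):

mysql -N -B -e 'show tables' mydb | while read -r t; do
    mysqldump [options] mydb "$t" > "$t.sql"
done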

user151841

10 Answers

78

Here's a script that dumps table data as SQL commands into separate, compressed files. It does not require being on the MySQL server host, doesn't hard-code the password in the script, and targets a specific database rather than all databases on the server:

#!/bin/bash

# dump-tables-mysql.sh
# Descr: Dump MySQL table data into separate SQL files for a specified database.
# Usage: Run without args for usage info.
# Author: @Trutane
# Ref: http://stackoverflow.com/q/3669121/138325
# Notes:
#  * Script will prompt for password for db access.
#  * Output files are compressed and saved in the current working dir, unless DIR is
#    specified on command-line.

[ $# -lt 3 ] && echo "Usage: $(basename "$0") <DB_HOST> <DB_USER> <DB_NAME> [<DIR>]" && exit 1

DB_host=$1
DB_user=$2
DB=$3
DIR=$4

[ -n "$DIR" ] || DIR=.
test -d "$DIR" || mkdir -p "$DIR"

echo -n "DB password: "
read -s DB_pass
echo
echo "Dumping tables into separate SQL command files for database '$DB' into dir=$DIR"

tbl_count=0

for t in $(mysql -NBA -h "$DB_host" -u "$DB_user" -p"$DB_pass" -D "$DB" -e 'show tables')
do
    echo "DUMPING TABLE: $DB.$t"
    mysqldump -h "$DB_host" -u "$DB_user" -p"$DB_pass" "$DB" "$t" | gzip > "$DIR/$DB.$t.sql.gz"
    tbl_count=$(( tbl_count + 1 ))
done

echo "$tbl_count tables dumped from database '$DB' into dir=$DIR"
rubo77
Trutane
  • 1
    the best solution imo... I'm using a modified version of this script which allows passing the password as env variable. If someone needs this functionality: http://sprunge.us/fBDL?bash – Florian Fida Jul 31 '14 at 14:14
  • 1
    imho the best solution too: `mysqldump --extended-insert=FALSE -u "$mysql_user" -p"$mysql_user_pw" "$db" $t | gzip > "$tables_dump_dir/$t.sql.gz"`. A bit slower, but works on any data and any row lengths ... – Yordan Georgiev May 16 '15 at 19:46
  • thx! I used your idea to generate a backup script that backups all databases: https://github.com/rubo77/mysql-backup.sh/blob/master/mysql-backup.sh – rubo77 Nov 06 '15 at 05:43
  • 8
    Unfortunately, this can't be done as a single transaction, so you don't necessarily get a series of files that can be re-loaded without any FK errors. – Christopher Schultz Nov 21 '15 at 15:14
  • probably you want to add some more arguments like `mysqldump --extended-insert=FALSE --order-by-primary --complete-insert` – Mladen Adamovic Aug 03 '16 at 16:02
  • super useful. Thanks – Dylan B Sep 15 '16 at 22:12
  • Thanks for tips about additional mysqldump options of interest. Note that since the `--opt` group of options is on by default, you can skip them all using `--skip-opt` or selectively skip the ones you don't want, e.g. `--skip-extended-insert`. See the docs for [option groups](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#mysqldump-option-groups). – Trutane Dec 05 '18 at 21:11
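
On the env-variable idea from the first comment: both mysql and mysqldump read the MYSQL_PWD environment variable, so a minimal sketch of that variant would be (the MySQL manual warns this method is insecure):

export MYSQL_PWD='secret'
mysqldump -h "$DB_host" -u "$DB_user" "$DB" "$t" | gzip > "$DIR/$DB.$t.sql.gz"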
64

The mysqldump command-line program does this for you, although the docs are very unclear about it.

One thing to note is that ~/output/dir has to be writable by the user that mysqld runs as, since the server itself writes the data files. On Mac OS X:

sudo chown -R _mysqld:_mysqld ~/output/dir
mysqldump --user=dbuser --password --tab=~/output/dir dbname

After running the above, you will have one tablename.sql file containing each table's schema (CREATE TABLE statement) and one tablename.txt file containing its data.

If you want a dump with schema only, add the --no-data flag:

mysqldump --user=dbuser --password --no-data --tab=~/output/dir dbname
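
To reload such a dump later (see also the mysqlimport comment below), the usual pattern from the MySQL manual is to replay the .sql file through mysql and load the .txt file with mysqlimport; names here are placeholders:

mysql --user=dbuser --password dbname < tablename.sql
mysqlimport --user=dbuser --password --local dbname /path/to/tablename.txt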
Salient Digital
    Well, I really would like the table data dumps to be SQL inserts also :Þ – user151841 Jun 10 '11 at 13:26
  • 2
    `mysqlimport` can be used to import the resulting data txt files. Also any table names containing a period `(.)` eg: `my.table` will only result in a data file of the same name (without .txt extension). http://dev.mysql.com/doc/refman/5.0/en/reloading-delimited-text-dumps.html Additionally this seems to be the only way to perform a `--single-transaction` table per file dump. – Will B. Dec 07 '14 at 04:47
  • 4
    Note that this solution requires the [`FILE`](http://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_file) privilege on the server, *and* will write the files into the disk on the server. – Christopher Schultz Nov 21 '15 at 15:12
  • --tab doesn't generate inserts, which would be required if you want to manually merge databases on two machines, for example – Mladen Adamovic Aug 03 '16 at 11:31
21

You can accomplish this by:

  1. Getting the list of databases in MySQL
  2. Dumping each database with mysqldump
# Optional variables for a backup script
MYSQL_USER="root"
MYSQL_PASS="something"
BACKUP_DIR=/srv/backup/$(date +%Y-%m-%dT%H_%M_%S);
test -d "$BACKUP_DIR" || mkdir -p "$BACKUP_DIR"
# Get the database list, exclude information_schema
for db in $(mysql -B -s -u "$MYSQL_USER" --password="$MYSQL_PASS" -e 'show databases' | grep -v information_schema)
do
  # dump each database in a separate file
  mysqldump -u "$MYSQL_USER" --password="$MYSQL_PASS" "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"
done
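
A hypothetical cron entry to run this nightly at 03:00 (script path and log location are assumptions):

0 3 * * * /usr/local/bin/mysql-backup-per-db.sh >> /var/log/mysql-backup.log 2>&1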
Xavi Montero
Elias Torres Arroyo
7

Here is the corresponding import.

#!/bin/bash

# import-files-mysql.sh
# Descr: Import separate SQL files for a specified database.
# Usage: Run without args for usage info.
# Author: Will Rubel
# Notes:
#  * Script will prompt for password for db access.

[ $# -lt 3 ] && echo "Usage: $(basename "$0") <DB_HOST> <DB_USER> <DB_NAME> [<DIR>]" && exit 1

DB_host=$1
DB_user=$2
DB=$3
DIR=$4

# default to the current dir; otherwise an empty $DIR would make the glob below expand to /*
[ -n "$DIR" ] || DIR=.
DIR=$DIR/*


echo -n "DB password: "
read -s DB_pass
echo
echo "Importing separate SQL command files for database '$DB' into '$DB'"

file_count=0


for f in $DIR

do 
    echo "IMPORTING FILE: $f"

    gunzip -c "$f" | mysql -h "$DB_host" -u "$DB_user" -p"$DB_pass" "$DB"

    (( file_count++ ))
done

echo "$file_count files importing to database '$DB'"
WillRubel
    Don't just code-dump; please provide some explanation as to what the code does. – rgettman Aug 06 '13 at 19:59
  • Here is how to use it: ./import-files-mysql.sh host_name dbname db ./ , I assume all your gz files are in current folder, please replace your db login info accordingly, the script will prompt you to enter password after the command. – Dylan B Sep 15 '16 at 22:15
3
#!/bin/bash

# Dump each table into its own .sql file, then bundle them into one archive
for i in $(mysql -uUSER -pPASSWORD DATABASE -e "show tables;" | grep -v Tables_in_); do
    mysqldump -uUSER -pPASSWORD DATABASE "$i" > "/backup/dir/$i.sql"
done

tar -cjf "backup_mysql_$(date +'%Y%m%d').tar.bz2" /backup/dir/*.sql
djadk
2

It looks like everybody here forgot about SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;, which is supposed to speed up the import process ...

#!/bin/bash
MYSQL_USER="USER"
MYSQL_PASS="PASS"

if [ -z "$1" ]
  then
    echo "Dumping all DB ... in separate files"
    for I in $(mysql -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' -s --skip-column-names); 
    do 
      echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > "$I.sql"
      mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$I" >> "$I.sql"; 
      echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;commit;" >> "$I.sql"
      gzip "$I.sql"
    done
    echo "END."
else
      echo "Dumping $1 ..."
      echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > "$1.sql"
      mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$1" >> "$1.sql"; 
      echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;commit;" >> "$1.sql"
      gzip "$1.sql"
fi
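
To restore one of the resulting files later, a minimal sketch (database name assumed; the prepended SET statements are replayed along with the dump):

gunzip -c MYDB.sql.gz | mysql -u "$MYSQL_USER" --password="$MYSQL_PASS" MYDB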
WonderLand
2

If you want to dump all tables from all databases, just combine Elias Torres Arroyo's and Trutane's answers. And if you don't want to give your password on the terminal, store it in an extra config file (chmod 0600); see Mysqldump launched by cron and password security.

#!/bin/bash

# this file
# a) gets all databases from mysql
# b) gets all tables from all databases in a)
# c) creates subfolders for every database in a)
# d) dumps every table from b) in a single file

# this is a mixture of scripts from Trutane (http://stackoverflow.com/q/3669121/138325)
# and Elias Torres Arroyo (https://stackoverflow.com/a/14711298/8398149)

# usage:
# sk-db.bash options
# where options are:

# -d "dbs to skip"
# -t "tables to skip"
# -u "user who connects to the database"
# -h "db host"
# -f "/backup/folder"



user='root'
host='localhost'
backup_folder=''
leave_dbs=(information_schema mysql)
leave_tables=()
while getopts ":d:t:u:h:f:" opt; do
  case $opt in
    d) leave_dbs=( $OPTARG )
    ;;
    t) leave_tables=( $OPTARG )
    ;;
    u) user=$OPTARG
    ;;
    h) host=$OPTARG
    ;;
    f) backup_folder=$OPTARG
    ;;

    \?) echo "Invalid option -$OPTARG" >&2
    ;;
  esac
done
echo '****************************************'
echo "Database Backup with these options"
echo "Host $host"
echo "User $user"
echo "Backup in $backup_folder"
echo '----------------------------------------'
echo "Databases to emit:"
printf "%s\n" "${leave_dbs[@]}"
echo '----------------------------------------'
echo "Tables to emit:"
printf "%s\n" "${leave_tables[@]}"
echo '----------------------------------------'


BACKUP_DIR=$backup_folder/$(date +%Y-%m-%dT%H_%M_%S);
CONFIG_FILE=/root/db-config.cnf

function contains() {
    local n=$#
    local value=${!n}
    for ((i=1;i < $#;i++)) {
        if [ "${!i}" == "${value}" ]; then
            echo "y"
            return 0
        fi
    }
    echo "n"
    return 1
}


test -d "$BACKUP_DIR" || mkdir -p "$BACKUP_DIR"
# Get the database list, exclude information_schema
database_count=0
tbl_count=0

for db in $(mysql --defaults-extra-file=$CONFIG_FILE -B -s -u $user -e 'show databases' )
do
    if [ $(contains "${leave_dbs[@]}" "$db") == "y" ]; then
        echo "leave database $db as requested"
    else

       # dump each database in a separate file
       (( database_count++ ))
       DIR=$BACKUP_DIR/$db
       [ -n "$DIR" ] || DIR=.

       test -d $DIR || mkdir -p $DIR

       echo
       echo "Dumping tables into separate SQL command files for database '$db' into dir=$DIR"

       for t in $(mysql --defaults-extra-file=$CONFIG_FILE -NBA -h $host -u $user -D $db -e 'show tables')
       do
           if [ $(contains "${leave_tables[@]}" "$db.$t") == "y" ]; then
               echo "leave table $db.$t as requested"
           else
               echo "DUMPING TABLE: $db.$t"
               mysqldump --defaults-extra-file=$CONFIG_FILE -h "$host" -u "$user" "$db" "$t" > "$DIR/$db.$t.sql"
               tbl_count=$(( tbl_count + 1 ))
           fi
       done

       echo "Database $db is finished"
       echo '----------------------------------------'

    fi
done
echo '----------------------------------------'
echo "Backup completed"
echo '**********************************************'
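
A hypothetical invocation matching the getopts spec above (backup folder and skip lists are assumptions):

./sk-db.bash -u root -h localhost -f /backup -d "information_schema mysql performance_schema" -t "mydb.cache_table"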

And also, this helped:

Check if bash array contains value

arrays in bash

named arguments in script

sneaky
0

I'm no bash master, but I'd just do it with a bash script. Without hitting MySQL, with knowledge of the data directory and database name, you could just scan for all .frm files (one for every table in that db/directory) to get a list of tables. (Note that this only applies through MySQL 5.7: MySQL 8.0 removed .frm files in favor of the data dictionary.)

I'm sure there are ways to make it slicker and accept arguments or whatnot, but this worked well for me.

tables_in_a_db_to_sql.sh

#!/bin/bash

database="this_is_my_database"
datadir="/var/lib/mysql/"
datadir_escaped="\/var\/lib\/mysql\/"

all_tables=($(ls $datadir$database/*.frm | sed s/"$datadir_escaped$database\/"/""/g | sed s/.frm//g))

for t in "${all_tables[@]}"; do
        outfile=$database.$t.sql
        echo "-- backing up $t to $outfile"
        echo "mysqldump [options] $database $t > $outfile"
        # mysqldump [options] $database $t > $outfile
done

Fill in the [options] and desired outfile convention as you need, and uncomment the last mysqldump line.

Riedsio
  • when I run this code it shows this error msg: Syntax error: "(" unexpected (maybe at line 9). Any suggestion? – Syed Ahmed Dec 24 '12 at 11:23
0

For Windows Servers, you can use a batch file like so:

set year=%DATE:~10,4%
set day=%DATE:~7,2%
set mnt=%DATE:~4,2%
set hr=%TIME:~0,2%
set min=%TIME:~3,2%

IF %day% LSS 10 SET day=0%day:~1,1%
IF %mnt% LSS 10 SET mnt=0%mnt:~1,1%
IF %hr% LSS 10 SET hr=0%hr:~1,1%
IF %min% LSS 10 SET min=0%min:~1,1%

set backuptime=%year%-%mnt%-%day%-%hr%-%min%
set backupfldr=C:\inetpub\wwwroot\backupfiles\
set datafldr="C:\Program Files\MySQL\MySQL Server 5.5\data"
set zipper="C:\inetpub\wwwroot\backupfiles\zip\7za.exe"
set retaindays=21

:: Switch to the data directory to enumerate the folders
pushd %datafldr%

:: Get all table names and save them in a temp file
mysql --skip-column-names --user=root --password=mypassword mydatabasename -e "show tables" > tables.txt

:: Loop through all tables in temp file so that we can save one backup file per table
for /f "skip=3 delims=|" %%i in (tables.txt) do (
  set tablename=%%i
  mysqldump --user=root --password=mypassword mydatabasename %%i > "%backupfldr%mydatabasename.%backuptime%.%%i.sql"
)
del tables.txt

:: Zip all files ending in .sql in the folder
%zipper% a -tzip "%backupfldr%backup.mydatabasename.%backuptime%.zip" "%backupfldr%*.sql"

echo "Deleting all the files ending in .sql only"
del "%backupfldr%*.sql"

echo "Deleting zip files older than 21 days now"
Forfiles /p %backupfldr% /m *.zip /d -%retaindays% /c "cmd /c del /q @path"

Then schedule it using Windows Task Scheduler.

Also, if you want to exclude certain tables in your backup, note that you can use a where clause on the "show tables" statement, but the column name depends on your database name.

So for example, if your database name is "blah" then the column name in the "show tables" result set will be "tables_in_blah", which means you could add a WHERE clause similar to this:

show tables where tables_in_blah <> 'badtable'

or

show tables where tables_in_blah like '%goodtable%'
Vincent
-1

See the following article by Pauli Marcus:

Howto split a SQL database dump into table-wise files

Splitting a SQL file containing a whole database into per-table files is quite easy: grep the .sql for any occurrence of DROP TABLE, generate the file name from the table name that is included in the DROP TABLE statement, and echo the output to a file. Here is a little script that expects a .sql file as input:

#!/bin/bash

file=$1 # the input file
directory="$file-splitted" # the output directory
output="$directory/header" # the first file containing the header
GREP="DROP TABLE" # what we are looking for

mkdir "$directory" # create the output directory

while IFS= read -r line
do
   # if the current line contains the wanted statement
   if [ $(echo "$line" | grep -c "$GREP") == "1" ]
   then
      # extract the file name (5th field of: DROP TABLE IF EXISTS `name`;)
      myfile=$(echo $line | awk '{print $5}' | sed -e 's/`//g' -e 's/;//g')
      # set the new file name
      output="$directory/$myfile"
   fi
   echo "$line" >> "$output" # write to file
done < "$file"
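
A hypothetical run, assuming the script is saved as split-dump.sh and the dump is named dump.sql:

chmod +x split-dump.sh
./split-dump.sh dump.sql
# per-table files appear in dump.sql-splitted/, plus a 'header' file
# holding everything before the first DROP TABLE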
kenorb