
I am importing millions of rows from a tab-delimited file with SQLite .import (in .mode tabs), and it is very slow. I have three indexes, so the slowness probably comes from the indexing. But first I would like to check whether .import groups lots (or all) of the rows into a single commit. I was unable to find documentation on how .import works; does anyone know? If the indexes are the problem (I had that issue before with MySQL), how can I disable them and reindex at the end of the .import?

[Update 1]

Following @sixfeetsix's comment.

My schema is:

CREATE TABLE ensembl_vf_b36 (
        variation_name  varchar(20),
        chr     varchar(4),
        start   integer,
        end     integer,
        strand  varchar(5),
        allele_string    varchar(3),
        map_weight      varchar(2),
        flags           varchar(50),
        validation_status       varchar(100),
        consequence_type        varchar(50)
);
CREATE INDEX pos_vf_b36_idx on ensembl_vf_b36 (chr, start, end);

data:

rs35701516      NT_113875       352     352     1       G/A     2       NULL    NULL    INTERGENIC
rs12090193      NT_113875       566     566     1       G/A     2       NULL    NULL    INTERGENIC
rs35448845      NT_113875       758     758     1       A/C     2       NULL    NULL    INTERGENIC
rs17274850      NT_113875       1758    1758    1       G/A     2       genotyped       cluster,freq    INTERGENIC

There are 15,608,032 entries in this table.

And these are the stats

 $  time sqlite3 -separator '   ' test_import.db '.import variations_build_36_ens-54.tab ensembl_vf_b36'

real    29m27.643s
user    4m14.176s
sys     0m15.204s

[Update 2]

@sixfeetsix has a good answer, and if you are reading this you may also be interested in:

Faster bulk inserts in sqlite3?

Sqlite3: Disabling primary key index while inserting?

[Update 3] Solution: from 30 min -> 4 min

Even with all the optimisations (see the accepted answer) the import still takes almost 30 minutes, but if the indexes are skipped during the import and added at the end, the total time is 4 minutes:

-- importing without indexes:

    real    2m22.274s
    user    1m38.836s
    sys     0m4.850s

-- adding indexes:

    $ time sqlite3 ensembl-test-b36.db < add_indexes-b36.sql

    real    2m18.344s
    user    1m26.264s
    sys     0m6.422s
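The contents of add_indexes-b36.sql are not shown here; a minimal sketch, assuming it simply recreates the index from the schema above (plus any other indexes that were dropped for the import):

-- add_indexes-b36.sql (sketch; assumes only the index shown in the schema)
CREATE INDEX pos_vf_b36_idx on ensembl_vf_b36 (chr, start, end);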
Can you post: 1- a sample from your flat file of rows being imported; 2- the structure of the table you are importing into; and 3- some numbers about the speed at which .import is running on your machine? –  Jul 09 '11 at 18:08

1 Answer


I believe the slowness indeed comes from building the index as more and more records are added. Depending on the RAM you have, you can tell SQLite to use enough memory that all this index-building activity is done in memory (i.e. without all the I/O that would otherwise happen with less memory).

For 15M records, I'd say you should set your cache size to 500000 (cache_size is measured in pages; at the default 1 KB page size of the time, that is roughly 500 MB).

You can also tell sqlite to keep its transaction journal in memory.

Finally, you can set synchronous to OFF so sqlite never waits for writes to be committed to disk.

Using this, I was able to cut the time it takes to import 15M records by a factor of 5 (14 minutes down to 2.5), with records made of random GUIDs split into 5 columns, using the three middle columns as an index:

b40c1c2f    912c    46c7    b7a0    3a7d8da724c1
9c1cdf2e    e2bc    4c60    b29d    e0a390abfd26
b9691a9b    b0db    4f33    a066    43cb4f7cf873
01a360aa    9e2e    4643    ba1f    2aae3fd013a6
f1391f8b    f32c    45f0    b137    b99e6c299528

So to try this I suggest you put all the instructions in some file, say import_test:

pragma journal_mode=memory;
pragma synchronous=0;
pragma cache_size=500000;
.mode tabs
.import variations_build_36_ens-54.tab ensembl_vf_b36

Then try it:

time sqlite3 test_import.db < import_test

EDIT

This is a reply to Pablo's (the OP's) comments following this answer (it is too long to fit as a comment). My (educated) guesses are that:

  1. Because .import is not SQL per se, it doesn't carry much transaction overhead; I'm even inclined to think it is written to go faster than if you had all of this done in one "normal" transaction; and,
  2. If you have enough memory to allocate, and you set up your environment as I suggest, the real (time) hog here is reading the flat file and then writing the final content of the database, because what happens in between happens extremely fast; i.e. fast enough that there isn't much time to gain by optimizing it when you compare such potential gains with the (probably) incompressible time spent on disk I/O.

If I'm wrong though I'd be glad to hear why for my own benefit.
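For comparison, here is a minimal sketch of the "one big transaction" approach done by hand, loading the same tab file through Python's sqlite3 module instead of .import (the file and table names are taken from the question; this illustrates explicit transaction grouping, not what .import does internally):

import csv
import sqlite3

conn = sqlite3.connect('test_import.db')
# Same pragmas as in the import_test script above.
conn.execute('pragma journal_mode=memory')
conn.execute('pragma synchronous=0')
conn.execute('pragma cache_size=500000')

with open('variations_build_36_ens-54.tab') as f:
    rows = csv.reader(f, dialect='excel-tab')
    # executemany runs all inserts inside a single implicit
    # transaction; everything is committed once, below.
    conn.executemany(
        'insert into ensembl_vf_b36 values (?,?,?,?,?,?,?,?,?,?)',
        rows)
conn.commit()
conn.close()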

EDIT 2

I did a comparison test between having the index in place during .import and adding it immediately after .import finishes. I used the same technique of generating 15M records made of split random UUIDs:

import csv, uuid
# Write 15M rows of random UUIDs split into 5 tab-separated columns
# (Python 3; the 2011 original used Python 2's xrange and 'wb' mode).
w = csv.writer(open('bla.tab', 'w', newline=''), dialect='excel-tab')
for i in range(15000000):
    w.writerow(str(uuid.uuid4()).split('-'))

Then I tested importing with the index created before vs. created after the import (the script below is the "before" variant; the CREATE INDEX precedes the .import):

pragma journal_mode=memory;
pragma synchronous=0;
pragma cache_size=500000;
create table test (f1 text, f2 text, f3 text, f4 text, f5 text);
CREATE INDEX test_idx on test (f2, f3, f4);
.mode tabs
.import bla.tab test
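And the "after" variant, presumably the same script with the CREATE INDEX statement moved below the .import line:

pragma journal_mode=memory;
pragma synchronous=0;
pragma cache_size=500000;
create table test (f1 text, f2 text, f3 text, f4 text, f5 text);
.mode tabs
.import bla.tab test
CREATE INDEX test_idx on test (f2, f3, f4);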

So here is the time when adding the index before:

[someone@somewhere ~]$ time sqlite3 test_speed.sqlite < import_test 
memory

real   2m58.839s
user   2m21.411s
sys    0m6.086s

And when the index is added after:

[someone@somewhere ~]$ time sqlite3 test_speed.sqlite < import_test 
memory

real   2m19.261s
user   2m12.531s
sys    0m4.403s

You see how the difference in "user" time (~9s) doesn't account for the full difference in wall-clock time (~40s)? To me this means that some extra I/O happens when the index is created beforehand, so I was wrong to think everything was being done in memory with no extra I/O.

Conclusion: create the index after the import and you'll get even better import times (just as Donal mentioned).

  • +1: You could also try not having an index on any column during the insert of all the rows, adding the index afterwards (so it builds it all at once instead of piecemeal). That's in addition to what you suggest, which all sounds sensible. – Donal Fellows Jul 10 '11 at 12:23
  • Thanks @sixfeetsix and @Donal. For the record: I have just seen a similar question here: http://stackoverflow.com/questions/788568/sqlite3-disabling-primary-key-index-while-inserting – Pablo Marin-Garcia Jul 10 '11 at 21:47
  • @Donal and @sixfeetsix: OK, my problem is with the indexes, and this is why I was asking whether .import does each insert standalone or inside big transactions. The indexes are only updated per transaction, so doing the inserts in big transactions should speed things up, shouldn't it? Should I then assume that .import is not grouping the inserts into transactions? Or is it trying to do the whole import in one transaction, running out of cache memory, and going to disk, which slows the process? – Pablo Marin-Garcia Jul 10 '11 at 21:54
  • Comment on the @sixfeetsix update: I set up the file with the environment as you said and ran the import again. Unfortunately it keeps writing to disk at about 1 MB per second (an ls -lh every 10 seconds shows the file growing by 10 MB) up to the 1.5 GB of the final db. $ time sqlite3 test_import.db < sql_import_variation_36_test.sql (real 22m50.148s, user 3m57.333s, sys 0m13.738s). Probably I will need to keep playing with the cache memory. – Pablo Marin-Garcia Jul 11 '11 at 00:28
  • @Pablo: 1- I updated my answer again (edit 2), so you might want to see what happens if you create the index after; 2- can you give me some details about your environment? (i.e. how much memory, what CPU(s), what type of HDD, is it a VM?) –  Jul 11 '11 at 09:09
  • @Pablo: also, can you give me the times when you import without an index on the table? –  Jul 11 '11 at 09:24
  • @sixfeetsix: Added as Update 3 in the question. It goes from 30 min to 4 minutes (2 min inserting data, 2 min creating the indexes). – Pablo Marin-Garcia Jul 17 '11 at 16:53