
Unfortunately, I'm working with an extremely large corpus which is spread across hundreds of .gz files -- 24 gigabytes (packed) worth, in fact. Python is really my native language (hah), but I'm wondering whether I've run up against a problem that will necessitate learning a "faster" language?

Each .gz file contains a single document in plain text, is about 56MB gzipped, and about 210MB unzipped.

Each line contains an n-gram (bigram, trigram, quadrigram, etc.) and, to the right, its frequency count. I basically need to create a file that stores the substring frequencies for each quadrigram alongside its whole-string frequency count (i.e., 4 unigram frequencies, 3 bigram frequencies, and 2 trigram frequencies, which together with the quadrigram's own count make 10 data points). Each type of n-gram has its own directory (e.g., all bigrams appear in their own set of 33 .gz files).
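
To make that concrete, here is roughly how I picture the sub-grams of a single quadrigram (a rough sketch, assuming space-separated tokens):

```python
def subgrams(quadrigram):
    """Return the 9 substrings of a space-separated quadrigram:
    4 unigrams, 3 bigrams and 2 trigrams."""
    tokens = quadrigram.split()
    return [" ".join(tokens[i:i + n])
            for n in (1, 2, 3)                    # unigrams, bigrams, trigrams
            for i in range(len(tokens) - n + 1)]  # every window of length n

# subgrams("the cat sat down") ->
# ['the', 'cat', 'sat', 'down',
#  'the cat', 'cat sat', 'sat down',
#  'the cat sat', 'cat sat down']
```

These 9 frequencies plus the quadrigram's own count are the 10 data points per quadrigram.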

I know an easy, brute-force solution, and I know which module to import to work with gzipped files in Python, but I'm wondering whether there's an approach that won't take me weeks of CPU time. Any advice on speeding this process up, however slightly, would be much appreciated!
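
For reference, the brute-force reading step I have in mind looks something like this (just a sketch using the standard gzip module; I'm assuming the count is the last whitespace-separated field on each line):

```python
import gzip

def read_ngram_counts(path):
    """Read one .gz n-gram file into a dict of {ngram: count}.
    Assumes each line is '<n-gram> <count>' with the count last."""
    counts = {}
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            parts = line.rsplit(None, 1)  # split off the trailing count
            if len(parts) != 2:
                continue                  # skip blank/malformed lines
            ngram, freq = parts
            counts[ngram] = int(freq)
    return counts
```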

Georgina
  • And your question is...? – nobody May 27 '11 at 03:21
  • Woops. You posted just as I edited it. :) – Georgina May 27 '11 at 03:23
  • As Andrew says, you need to actually specify a question here, but generally I'd wager that the processing language isn't your problem; rather, disk access speed is going to be a significant limiter. – Nick Bastin May 27 '11 at 03:23
  • Have you estimated the time based on how long it takes to work on a single file, or are you guessing? – Kathy Van Stone May 27 '11 at 03:24
  • Hi Kathy -- at this point, I don't recall how long it took to do this with a single file, but even for a single file containing 4-grams, it means searching through every file that includes 3-grams, bigrams, and unigrams. I'm not even sure which data structure is the fastest to work with. – Georgina May 27 '11 at 03:26
  • Georgina, a quick bit of advice to look at SQLite3 for your storage requirements. While you might be able to code a better storage backend for your task, SQLite3 is good general purpose code and might be adequately fast for your needs. – sarnold May 27 '11 at 03:38

1 Answer


It would help to have an example of a few lines and expected output. But from what I understand, here are some ideas.

You certainly don't want to process all the files every time you process a single file or, worse, a single 4-gram. Ideally you'd go through each file only once. So my first suggestion is to maintain an intermediate set of frequencies (those sets of 10 data points), which at first only takes one file into account. When you process the second file, you update all the frequencies for items that you encounter (and presumably add new items). You keep going like this, increasing frequencies as you find more matching n-grams, and at the end you write everything out.

More specifically, at each iteration I would read a new input file into memory as a map of string to number, where the string is, say, a space-separated n-gram and the number is its frequency. I would then process the intermediate file from the last iteration, which would contain your expected output (with incomplete values), e.g. "a b c d : 10 20 30 40 5 4 3 2 1 1" (I'm guessing at the output format you are looking for here). For each line, I'd look up all of its sub-grams in the map, update the counts, and write the updated line to a new output file. That file is then used in the next iteration, and so on until all input files have been processed.
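
Here is a rough sketch of one such iteration, assuming the intermediate file uses the "a b c d : c1 ... c10" layout guessed at above (9 sub-gram counts followed by the quadrigram's own count) and that each input line is "<n-gram> <count>". You would seed the intermediate file from the quadrigram files first, since this only updates lines that already exist:

```python
import gzip

def subgrams(quadrigram):
    """The 9 substrings (4 unigrams, 3 bigrams, 2 trigrams) of a quadrigram."""
    tokens = quadrigram.split()
    return [" ".join(tokens[i:i + n])
            for n in (1, 2, 3)
            for i in range(len(tokens) - n + 1)]

def merge_file(input_gz, old_intermediate, new_intermediate):
    """One iteration: fold the counts from one .gz n-gram file into the results."""
    # Read the new input file into memory as {ngram: count}.
    counts = {}
    with gzip.open(input_gz, "rt", encoding="utf-8") as f:
        for line in f:
            parts = line.rsplit(None, 1)
            if len(parts) == 2:
                counts[parts[0]] = int(parts[1])

    # Stream the old intermediate file, add the new counts, write the new one.
    with open(old_intermediate, "rt", encoding="utf-8") as old, \
         open(new_intermediate, "wt", encoding="utf-8") as new:
        for line in old:
            quad, values = line.split(" : ")
            freqs = [int(v) for v in values.split()]  # the 10 counts so far
            grams = subgrams(quad) + [quad]           # 9 sub-grams, then the whole string
            for i, gram in enumerate(grams):
                freqs[i] += counts.get(gram, 0)       # 0 if this file has no such n-gram
            new.write(quad + " : " + " ".join(map(str, freqs)) + "\n")
```

Since each directory holds only one n-gram order, most lookups in a given pass simply return 0; only the positions matching that file's order actually change.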

DS.