Unfortunately, I'm working with an extremely large corpus that is spread across hundreds of .gz files -- 24 gigabytes (packed) worth, in fact. Python is really my native language (hah), but I'm wondering whether I've run up against a problem that will necessitate learning a "faster" language?
Each .gz file contains a single document in plain text and is about 56 MB gzipped, roughly 210 MB unzipped.
On each line is an n-gram (bigram, trigram, quadrigram, etc.) and, to the right, a frequency count. I basically need to create a file that stores the substring frequencies for each quadrigram alongside its whole-string frequency count (i.e., 4 unigram frequencies, 3 bigram frequencies, and 2 trigram frequencies, plus the quadrigram's own count, for a total of 10 data points). Each type of n-gram has its own directory (e.g., all bigrams appear in their own set of 33 .gz files).
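For reference, reading one of these files into a dictionary is straightforward with the gzip module. Here's a minimal sketch; I'm assuming the fields are whitespace-separated with the count as the last field on each line:

```python
import gzip

def load_counts(path):
    """Read one gzipped n-gram file into a dict mapping n-gram -> count.

    Assumes each line is whitespace-separated, with the frequency count
    as the last field and the n-gram tokens before it.
    """
    counts = {}
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:  # skip blank or malformed lines
                continue
            *tokens, freq = parts
            counts[" ".join(tokens)] = int(freq)
    return counts
```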
I know an easy, brute-force solution, and I know which module to import to work with gzipped files in Python, but is there an approach that won't take me weeks of CPU time? Any advice on speeding this process up, however slightly, would be much appreciated!
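To make it concrete, the brute-force plan I have in mind is roughly the following, reusing the `load_counts` helper sketched above. The directory names, output path, and glob patterns are just placeholders for my actual layout, and I suspect holding the full bigram and trigram tables in memory is exactly where this falls over:

```python
import glob

def load_all(pattern):
    """Merge every gzipped n-gram file matching `pattern` into one dict."""
    merged = {}
    for path in glob.glob(pattern):
        merged.update(load_counts(path))
    return merged

# Load the lower-order tables up front (placeholder directory names).
unigrams = load_all("1grams/*.gz")
bigrams = load_all("2grams/*.gz")
trigrams = load_all("3grams/*.gz")

with open("quadrigram_features.tsv", "w", encoding="utf-8") as out:
    # Stream through the quadrigram files and emit 10 data points per quadrigram.
    for path in glob.glob("4grams/*.gz"):
        for quad, freq in load_counts(path).items():
            w = quad.split()  # the 4 tokens of the quadrigram
            row = [quad, freq]
            row += [unigrams.get(t, 0) for t in w]                         # 4 unigram counts
            row += [bigrams.get(" ".join(w[i:i + 2]), 0) for i in range(3)]  # 3 bigram counts
            row += [trigrams.get(" ".join(w[i:i + 3]), 0) for i in range(2)]  # 2 trigram counts
            out.write("\t".join(map(str, row)) + "\n")
```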