
Often a web service needs to zip up several large files for download by the client. The most obvious way to do this is to create a temporary zip file, then either echo it to the user or save it to disk and redirect (deleting it some time in the future).

However, doing things that way has drawbacks:

  • an initial phase of intensive CPU and disk thrashing, resulting in...
  • a considerable initial delay to the user while the archive is prepared
  • very high memory footprint per request
  • use of substantial temporary disk space
  • if the user cancels the download half way through, all resources used in the initial phase (CPU, memory, disk) will have been wasted

Solutions like ZipStream-PHP improve on this by shovelling the data into Apache file by file. However, the result is still high memory usage (files are loaded entirely into memory), and large, thrashy spikes in disk and CPU usage.

In contrast, consider the following bash snippet:

ls -1 | zip -@ - | cat > file.zip
  # Note -@ is not supported on MacOS

Here, zip operates in streaming mode, resulting in a low memory footprint. A pipe has an integral buffer – when the buffer is full, the OS suspends the writing program (the program on the left of the pipe). This ensures that zip works only as fast as its output can be consumed by cat.

The optimal approach, then, would be to do the same: replace cat with a web server process, streaming the zip file to the user as it is created on the fly. This would add little overhead compared to just streaming the files, and would have an unproblematic, non-spiky resource profile.

How can you achieve this on a LAMP stack?

Benji XVI
  • Please note: I am partly writing this because of the [various](http://stackoverflow.com/questions/3078266/zip-stream-in-php) [similar](http://stackoverflow.com/questions/2286639/open-file-write-to-file-save-file-as-a-zip-and-stream-to-user-for-download) [questions](http://stackoverflow.com/questions/1436239/creating-and-serving-zipped-files-with-php) – it seems like a relatively common problem that has not been put (or answered) very well yet. I.e. I have tried to write the streaming/PHP problem up thoroughly – serious answers only please! (Suggestions to improve the question much appreciated too.) – Benji XVI Dec 05 '10 at 02:47
  • You could probably use Node.js. I know it's been used to parse the headers of uploaded files (while they upload). Since you're given more control over the I/O buffers than in PHP, my guess is it shouldn't be hard to write a zip file in real time. – Kendall Hopkins Dec 05 '10 at 04:41

7 Answers


You can use popen() (docs) or proc_open() (docs) to execute a unix command (eg. zip or gzip), and get back stdout as a php stream. flush() (docs) will do its very best to push the contents of php's output buffer to the browser.

Combining all of this will give you what you want (provided that nothing else gets in the way -- see esp. the caveats on the docs page for flush()).

(Note: don't use flush(). See the update below for details.)

Something like the following can do the trick:

<?php
// make sure to send all headers first
// Content-Type is the most important one (probably)
//
header('Content-Type: application/x-gzip');

// use popen to execute a unix command pipeline
// and grab the stdout as a php stream
// (you can use proc_open instead if you need to 
// control the input of the pipeline too)
//
$fp = popen('tar cf - file1 file2 file3 | gzip -c', 'r');

// pick a bufsize that makes you happy (64k may be a bit too big).
$bufsize = 65535;
$buff = '';
while( !feof($fp) ) {
   $buff = fread($fp, $bufsize);
   echo $buff;
}
pclose($fp);
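
If you need to drive the pipeline's input as well -- for instance, feeding file names to zip -@ - on stdin, as in the bash snippet from the question -- proc_open() gives you both ends of the pipe. Here's a minimal sketch of that approach (headers and error handling omitted; the file names are placeholders):

<?php
// minimal proc_open() sketch: feed file names to `zip -@ -` on stdin
// and stream the resulting archive from its stdout
//
$spec = array(
    0 => array('pipe', 'r'),              // zip's stdin: we write file names here
    1 => array('pipe', 'w'),              // zip's stdout: the archive comes out here
    2 => array('file', '/dev/null', 'a'), // discard zip's stderr
);
$proc = proc_open('zip -@ -', $spec, $pipes);

fwrite($pipes[0], "file1\nfile2\nfile3\n");
fclose($pipes[0]); // EOF on stdin tells zip to finalize the archive

while( !feof($pipes[1]) ) {
   echo fread($pipes[1], 8192);
}
fclose($pipes[1]);
proc_close($proc);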

You asked about "other technologies": to which I'll say, "anything that supports non-blocking i/o for the entire lifecycle of the request". You could build such a component as a stand-alone server in Java or C/C++ (or any of many other available languages), if you were willing to get into the "down and dirty" of non-blocking file access and whatnot.

If you want a non-blocking implementation, but you would rather avoid the "down and dirty", the easiest path (IMHO) would be to use Node.js. There is plenty of support for all the features you need in the existing release of Node.js: use the http module (of course) for the HTTP server, and the child_process module to spawn the tar/zip/whatever pipeline.

Finally, if (and only if) you're running a multi-processor (or multi-core) server and you want the most from Node.js, you can use Spark2 to run multiple instances on the same port. Don't run more than one Node.js instance per processor core.


Update (from Benji's excellent feedback in the comments section on this answer)

1. The docs for fread() indicate that the function will read only up to 8192 bytes of data at a time from anything that is not a regular file. Therefore, 8192 may be a good choice of buffer size.

[editorial note] 8192 is almost certainly a platform-dependent value -- on most platforms, fread() will read data until the operating system's internal buffer is empty, at which point it will return, allowing the OS to fill the buffer again asynchronously. 8192 is the size of the default buffer on many popular operating systems.

There are other circumstances that can cause fread to return even less than 8192 bytes -- for example, if the "remote" client (or process) is slow to fill the buffer, fread() will in most cases return the contents of the input buffer as-is, without waiting for it to fill. This could mean anywhere from 0 to os_buffer_size bytes get returned.

The moral is: the value you pass to fread() as bufsize should be considered a "maximum" size -- never assume that you've received the number of bytes you asked for (or any other number for that matter).
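
Here's a minimal sketch of a read loop that follows that rule (the pipeline is just an example); it checks what fread() actually returned instead of assuming a full buffer:

<?php
$fp = popen('zip -r - file1 file2 file3', 'r');
while( !feof($fp) ) {
   $chunk = fread($fp, 8192);
   if ($chunk === false) {
      break; // read error: stop rather than looping forever
   }
   echo $chunk; // $chunk may hold anywhere from 0 to 8192 bytes
}
pclose($fp);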

2. According to comments on fread docs, a few caveats: magic quotes may interfere and must be turned off.

3. Setting mb_http_output('pass') (docs) may be a good idea. Though 'pass' is already the default setting, you may need to specify it explicitly if your code or config has previously changed it to something else.
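
Here's a sketch combining those two safeguards. Note that set_magic_quotes_runtime() only exists on older PHP versions (it was removed in 5.4), hence the function_exists() guards:

<?php
// turn off magic quotes at runtime (old PHP versions only)
if (function_exists('set_magic_quotes_runtime')) {
    @set_magic_quotes_runtime(false);
}
// 'pass' is the default, but set it explicitly in case earlier
// code or configuration changed it
if (function_exists('mb_http_output')) {
    mb_http_output('pass');
}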

4. If you're creating a zip (as opposed to gzip), you'd want to use the content type header:

Content-type: application/zip

or... 'application/octet-stream' can be used instead (it's a generic content type used for binary downloads of all kinds):

Content-type: application/octet-stream

And if you want the user to be prompted to download and save the file to disk (rather than potentially having the browser try to display the file as text), then you'll need the Content-Disposition header, where filename indicates the name that should be suggested in the save dialog:

Content-disposition: attachment; filename="file.zip"

One should also send the Content-Length header, but this is hard with this technique as you don’t know the zip’s exact size in advance. Is there a header that can be set to indicate that the content is "streaming" or is of unknown length? Does anybody know? (In practice, an HTTP/1.1 server can simply fall back to Transfer-Encoding: chunked when no Content-Length is given, which is typically what happens with this technique.)


Finally, here's a revised example that uses all of @Benji's suggestions (and that creates a ZIP file instead of a tar.gz file):

<?php
// make sure to send all headers first
// Content-Type is the most important one (probably)
//
header('Content-Type: application/octet-stream');
header('Content-disposition: attachment; filename="file.zip"');

// use popen to execute a unix command pipeline
// and grab the stdout as a php stream
// (you can use proc_open instead if you need to 
// control the input of the pipeline too)
//
$fp = popen('zip -r - file1 file2 file3', 'r');

// pick a bufsize that makes you happy (8192 has been suggested).
$bufsize = 8192;
$buff = '';
while( !feof($fp) ) {
   $buff = fread($fp, $bufsize);
   echo $buff;
}
pclose($fp);

Update: (2012-11-23) I have discovered that calling flush() within the read/echo loop can cause problems when working with very large files and/or very slow networks. At least, this is true when running PHP as cgi/fastcgi behind Apache, and it seems likely that the same problem would occur when running in other configurations too. The problem appears to result when PHP flushes output to Apache faster than Apache can actually send it over the socket. For very large files (or slow connections), this eventually causes an overrun of Apache's internal output buffer, which causes Apache to kill the PHP process -- and that, of course, causes the download to hang or complete prematurely, with only a partial transfer having taken place.

The solution is not to call flush() at all. I have updated the code examples above to reflect this, and I placed a note in the text at the top of the answer.

Lee
  • Thanks for this, I have tested it fairly extensively, and there is little overhead – this appears to be a Good Solution. – Benji XVI Dec 06 '10 at 19:51
  • 2
    A couple of minor issues: (1.) Per docs (and bug reports that describe how the documentation is wrong!), `fread` will read only up to 8192 bytes of data at a time from anything that is not a regular file. 8192 may therefore be a good choice of buffer size. (2.) According to comments on `fread` docs, a few caveats: magic quotes may interfere and must be turned off; setting mb_http_encoding('pass')` may be a good idea. (3.)Perhaps as this question is specifically about zip, (which is the only option for serving users cross-platform), change those parts of the code? – Benji XVI Dec 06 '10 at 20:00
  • 1
    Useful headers: `"Content-type: application/zip"` (or `application/octet-stream`), and `Content-disposition: attachment; filename="file.zip"`. One should also set Content-length, but this is hard with this technique as you don’t know the zip’s exact size in advance. – Benji XVI Dec 06 '10 at 20:03
  • 1
    One further thing: interestingly, `flush()` seems to be unnecessary,. (Tested with apache running mod_fastcgi.) I suspect the normal PHP and Apache buffering behaviours become irrelevant for large downloads. It seems to work as follows: PHP fills the buffer, and is suspended until Apache sends it. The operative aspects of this script are 1. PHP never holds more than 8192 bytes in memory, 2. `zip` works in streaming mode and also uses little memory, 3. execution is suspended while Apache clears (sends) its buffers. – Benji XVI Dec 06 '10 at 20:13
  • @Benji: great feedback! thanks for that. I've added all your info to the main body of the answer above. ...As for `flush()` being unnecessary -- I suspect that, in some configurations (eg. mod_php) it may be quite necessary *if* you want to minimize the degree to which things are buffered by the server. However, in most cases, the server's built-in buffering is going to be appropriate for the environment it's running in, and so it may be best to omit the flush in those scenarios too. – Lee Dec 06 '10 at 21:47
  • Thanks Lee. This is looking like a great answer & useful resource. I had had much the same thoughts about `flush()`. Given the complexities of the various buffers on the way from PHP to the user’s browser, it is probably something people should test on the basis of their individual case/config. – Benji XVI Dec 06 '10 at 22:46
  • I know this is an old post and this may not get answered, but is there any (easy) way of not zipping the whole folder structure, i.e. just adding the files to the archive and outputting them? We're serving up image archives and I don't want it to include the file paths in the final zip, just the actual image files. – niggles Aug 24 '11 at 05:00
  • @niggles: check the manpage for `zip` on your server. on my workstation (OSX) the `-j` option causes zip to discard the path info, so `zip -j foo/file1.jpg bar/file2.jpg` would give you a zip archive that contained `file1.jpg` and `file2.jpg` "bare" (with no path info). Of course, if you have gathered all the source files together in a single directory, then you can just change to that directory before calling zip. In that case, you'd have something like this: `cd /some/directory ; zip - file1.jpg file2.jpg file3.jpg`. (This would all go into the call to `popen`, as in the example above). – Lee Aug 30 '11 at 19:57
  • How can I rename the file names in that solution? – Utku Dalmaz Jul 10 '12 at 21:55
  • @Ahmetvardar: Are you asking how to rename the zip file that is downloaded? Do that in the `Content-Disposition` header, by changing `filename="file.zip"`. / Or are you asking how to rename individual files inside the zip file? The easiest way to do that would be to rename them on your server's filesystem before you create the zip file. As far as I know, there's no way to add a file into a zip archive using a name that's different from the file's name on disk. – Lee Jul 24 '12 at 16:46
  • 2
    to those who have used this approach: I just posted an update. The short summary is "don't use `flush()`". If you're using `flush()` in your implementation, please have a look at the info I've added above. – Lee Nov 23 '12 at 18:05
  • To zip an entire directory without having full path packed into the zip: `$fp = popen("cd /some/long/path && zip -r - ./", "r");` – Oliver Maksimovic Jan 23 '13 at 10:55
  • @lee I fondly remember working on this together 6 years ago – so dropping you a mention of the [remote-files followup](http://stackoverflow.com/q/40324268/196312). Crazy how the same problems come back around :) – Benji XVI Oct 29 '16 at 22:30
  • If you need to use the `find` command, it could be like: `find /path/ -iname "*.txt" -print | zip -@ -`. – Meetai.com Feb 14 '17 at 14:28

Another solution is my mod_zip module for Nginx, written specifically for this purpose:

https://github.com/evanmiller/mod_zip

It is extremely lightweight and does not invoke a separate "zip" process or communicate via pipes. You simply point to a script that lists the locations of files to be included, and mod_zip does the rest.
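
For illustration, here is a hedged sketch of such a backend script, based on the file-list format described in the mod_zip README (one file per line, as "<crc-32> <size> <location> <name>", where the CRC-32 may be given as "-" to skip it). The locations and docroot mapping here are assumptions:

<?php
// the X-Archive-Files header tells the mod_zip-enabled Nginx to
// assemble a ZIP from the file list printed below
header('X-Archive-Files: zip');
header('Content-Disposition: attachment; filename="file.zip"');

// hypothetical mapping of internal Nginx locations to archive names
$files = array(
    '/protected/file1.jpg' => 'file1.jpg',
    '/protected/file2.jpg' => 'file2.jpg',
);
foreach ($files as $location => $name) {
    $size = filesize('/var/www' . $location); // assumes docroot is /var/www
    echo "- {$size} {$location} {$name}\n";   // "-" skips the CRC-32
}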

Emiller

I wrote this S3 streaming file zipper microservice last weekend – it might be useful: http://engineroom.teamwork.com/how-to-securely-provide-a-zip-download-of-a-s3-file-bundle/


Trying to implement a dynamically generated download with lots of files of different sizes, I came across this solution, but I ran into various memory errors like "Allowed memory size of 134217728 bytes exhausted at ...".

After adding ob_flush(); right before the flush();, the memory errors disappeared.

Together with sending the headers, my final solution looks like this (just storing the files inside the zip without a directory structure):

<?php

// Sending headers
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="download.zip"');
header('Content-Transfer-Encoding: binary');
ob_clean();
flush();

// On the fly zip creation
$fp = popen('zip -0 -j -q -r - file1 file2 file3', 'r');

while (!feof($fp)) {
    echo fread($fp, 8192);
    ob_flush();
    flush();
}

pclose($fp);
Rico Sonntag

According to the PHP manual, the ZIP extension provides a zip:// wrapper.

I have never used it and I don't know its internals, but logically it should be able to do what you're looking for, assuming that ZIP archives can be streamed, which I'm not entirely sure of.
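
For reference, the wrapper is normally used to read an entry out of an existing archive (it requires the zip extension); whether it can write a streamed archive is exactly the open question here. A small read-side example:

<?php
// read a single entry from an existing archive via the zip:// wrapper
$fp = fopen('zip:///path/to/archive.zip#readme.txt', 'r');
if ($fp !== false) {
    fpassthru($fp);
    fclose($fp);
}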

As for your question about the "LAMP stack" it shouldn't be a problem as long as PHP is not configured to buffer output.


Edit: I'm trying to put a proof-of-concept together, but it seems non-trivial. If you're not experienced with PHP's streams, it might prove too complicated, if it's even possible.


Edit (2): Rereading your question after taking a look at ZipStream, I found what's going to be your main problem here when you say (emphasis added)

the operative Zipping should operate in streaming mode, ie processing files and providing data at the rate of the download.

That part will be extremely hard to implement because I don't think PHP provides a way to determine how full Apache's buffer is. So, the answer to your question is no, you probably won't be able to do that in PHP.

Josh Davis
  • In answer to your first question, yes, Zipping can be done in a streamed fashion, and in fact as per the bash pseudosnippet above, the standard unix tool can do so. – Benji XVI Dec 05 '10 at 04:37
  • For more background: the reason the bash snippet works so exquisitely is that the pipe has an integral buffer (of 64k on linux) – when this is full, the operating system suspends the providing process (zip in this case). – Benji XVI Dec 05 '10 at 04:39

It seems you can eliminate any output-buffer-related problems by using fpassthru(). I also use -0 to save CPU time, since my data is compact already. I use this code to serve a whole folder, zipped on the fly:

<?php
// $folder holds the absolute path of the directory to serve
chdir($folder);
$fp = popen('zip -0 -r - .', 'r');
header('Content-Type: application/octet-stream');
header('Content-disposition: attachment; filename="'.basename($folder).'.zip"');
fpassthru($fp);
pclose($fp);
Hermann

I just released a ZipStreamWriter class written in pure PHP userland here:

https://github.com/cubiclesoft/php-zipstreamwriter

Instead of using external applications (e.g. zip) or extensions like ZipArchive, it supports streaming data into and out of the class by implementing a full-blown ZIP writer.

The streaming aspect works by using the ZIP file format's "data descriptors", as described in section 4.3.5 of the PKWARE ZIP file specification:

4.3.5 File data MAY be followed by a "data descriptor" for the file. Data descriptors are used to facilitate ZIP file streaming.
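
Concretely, a data descriptor is a small record (CRC-32, compressed size, and uncompressed size, optionally preceded by the 0x08074b50 signature) written after each file's data, which is what lets a writer emit a file's bytes before knowing its final size. A sketch of packing a non-Zip64 descriptor (the values are placeholders):

<?php
// pack a data descriptor per spec section 4.3.5: optional signature,
// then CRC-32, compressed size, and uncompressed size, all as
// little-endian 32-bit values ('V')
$crc32            = 0x1A2B3C4D; // placeholder
$compressedSize   = 1024;       // placeholder
$uncompressedSize = 4096;       // placeholder
$descriptor = pack('VVVV', 0x08074B50, $crc32, $compressedSize, $uncompressedSize);
echo bin2hex($descriptor), "\n"; // 16 bytes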

There are some possible limitations to be aware of, though. Not every tool can read streaming ZIP files. Support for streaming Zip64 ZIP files may be even spottier, but with this class that is only a concern for files over 2GB. However, both 7-Zip and the Windows 10 built-in ZIP file reader seem to be fine with handling all of the crazy files that the ZipStreamWriter class threw at them. The hex editor I use got a good workout too.

When using the ZipStreamWriter class, I recommend allowing a buffer to build up to at least 4KB, but no more than 65KB, at a time before sending it on to the web server. Otherwise, for lots of really tiny files, you'll be flushing out tiny bits of piecemeal data and wasting a bunch of extra CPU cycles on the Apache callback end of things.
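
As a generic sketch of that buffering strategy -- produce_chunk() is a hypothetical stand-in for whatever yields the next piece of ZIP data (e.g. a ZipStreamWriter call), stubbed here so the sketch runs:

<?php
// hypothetical chunk source, stubbed so the sketch is runnable
function produce_chunk() {
    static $chunks = array('chunk-1', 'chunk-2', 'chunk-3');
    return array_shift($chunks); // returns null when exhausted
}

$pending = '';
while (($chunk = produce_chunk()) !== null) {
    $pending .= $chunk;
    if (strlen($pending) >= 4096) { // send once at least 4KB is queued
        echo $pending;
        $pending = '';
    }
}
echo $pending; // send whatever is left at the end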

When something doesn't exist or I don't like the existing options, I find both official and unofficial specifications, some examples to work with, and then I build it from scratch. It's a fairly solid approach to problem solving, if just a tad overkill.

CubicleSoft