101

If I have a URL that, when submitted in a web browser, pops up a dialog box to save a zip file, how would I go about catching and downloading this zip file in Python?

Keyur Potdar
user1229108
    I tried section **Downloading a binary file and writing it to disk** of [this page](http://www.compciv.org/practicum/shakefiles/b-downloading-the-shakespeare-zip/), which worked like a charm. – Zeinab Abbasimazar Oct 03 '18 at 13:23

8 Answers

228

As far as I can tell, the proper way to do this is:

import requests, zipfile, StringIO
r = requests.get(zip_file_url, stream=True)
z = zipfile.ZipFile(StringIO.StringIO(r.content))
z.extractall()

Of course, you'd want to check that the GET was successful with `r.ok`.

For Python 3+, substitute the `io` module for the `StringIO` module and use `io.BytesIO` in place of `StringIO.StringIO`; the Python 3 release notes mention this change.

import requests, zipfile, io
r = requests.get(zip_file_url)
z = zipfile.ZipFile(io.BytesIO(r.content))  # wrap the downloaded bytes in a file-like object
z.extractall("/path/to/destination_directory")
kamran kausar
yoavram
  • Thanks for this answer. I used it to solve [my issue getting a zip file with requests](http://stackoverflow.com/questions/36779870/python-requests-not-returning-same-header-as-browser-request-curl/36990934#36990934). – gr1zzly be4r May 02 '16 at 20:22
  • yoavram, in your code, where do I enter the URL of the webpage? – newGIS Jun 01 '16 at 05:33
  • 25
    If you'd like to save the downloaded file in a different location, replace `z.extractall()` with `z.extractall("/path/to/destination_directory")` – user799188 Oct 14 '16 at 08:14
  • @newGIS I hope you figured it out by now, but the url of the zip you want to download is `zip_file_url`. – yoavram Mar 08 '17 at 15:46
  • This is awesome. – AppleGate0 Nov 03 '17 at 06:07
  • @yoavram I was desperately looking for this answer. Can you tell me how to save the content as ".zip" file. If I do `extractall()` it extracts the content. I don't want that. – Anirban Nag 'tintinmj' Jan 01 '18 at 06:44
  • 1
    If you just want to save the file from the url you can do: `urllib.request.urlretrieve(url, filename)`. – yoavram Jan 02 '18 at 07:24
  • 5
    To help others connect the dots it took me 60minutes too long to, you can then use `pd.read_table(z.open('filename'))` with the above. Useful if you have a zip url link that contains multiple files and you're only interested in loading one. – Frikster Apr 20 '18 at 06:02
  • how to print the status of extracting? – Varadaraju G Mar 21 '19 at 06:29
  • @yoavram How can I test these 3 lines if I put it in a function using Mock? – Adil Blanco Aug 27 '19 at 00:43
  • not the right pattern according to https://2.python-requests.org/en/master/user/quickstart/#raw-response-content – karthik r Sep 09 '19 at 23:00
  • what if the .zip file is over 10GB, won't the get() mess up with the memory? – Yossarian42 Feb 25 '20 at 11:07
46

Most people recommend using `requests` if it is available, and the `requests` documentation recommends this for downloading and saving raw data from a URL:

import requests 

def download_url(url, save_path, chunk_size=128):
    r = requests.get(url, stream=True)
    with open(save_path, 'wb') as fd:
        for chunk in r.iter_content(chunk_size=chunk_size):
            fd.write(chunk)

Since the question asks about downloading and saving the zip file, I haven't gone into details regarding reading the zip file. See one of the many answers below for possibilities.
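
For example, a call might look like this (hypothetical URL and filename):

download_url('http://example.com/archive.zip', 'archive.zip')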

If for some reason you don't have access to requests, you can use urllib.request instead. It may not be quite as robust as the above.

import urllib.request

def download_url(url, save_path):
    with urllib.request.urlopen(url) as dl_file:
        with open(save_path, 'wb') as out_file:
            out_file.write(dl_file.read())

Finally, if you are still using Python 2, you can use `urllib2.urlopen`.

import urllib2
from contextlib import closing

def download_url(url, save_path):
    with closing(urllib2.urlopen(url)) as dl_file:
        with open(save_path, 'wb') as out_file:
            out_file.write(dl_file.read())
senderle
14

With the help of this blog post, I've got it working with just `requests`. The point of the weird `stream` thing is to avoid calling `content` on large requests, which would require the whole response to be processed at once, clogging the memory. Streaming avoids this by iterating through the data one chunk at a time.

import requests

url = 'https://www2.census.gov/geo/tiger/GENZ2017/shp/cb_2017_02_tract_500k.zip'
target_path = 'alaska.zip'

response = requests.get(url, stream=True)
with open(target_path, 'wb') as handle:
    for chunk in response.iter_content(chunk_size=512):
        if chunk:  # filter out keep-alive new chunks
            handle.write(chunk)
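
If you then want to unpack the downloaded archive, the `zipfile` module can open the saved file (a minimal sketch; the destination directory is hypothetical):

import zipfile

with zipfile.ZipFile(target_path) as z:
    z.extractall('alaska_tracts')  # extract into a destination directory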
Jeremiah England
  • 2
    Answers should not rely on links for the bulk of their content. Links can go dead, or the content on the other side can be changed to no longer answer the question. Please edit your answer to include a summary or explanation of the information you link points to. – mypetlion Jul 11 '18 at 19:57
  • What is `chunk_size` here? And can this parameter affect the speed of downloading? – ayush thakur Feb 10 '21 at 18:12
  • 1
    @ayushthakur Here are some links that may help: [`requests.Response.iter_content`](https://2.python-requests.org/en/master/api/#requests.Response.iter_content) and [wikipedia:Chunk Transfer Encoding](https://en.wikipedia.org/wiki/Chunked_transfer_encoding). Someone else could probably give a better answer, but I wouldn't expect `chunk_size` to make much of a difference for download speed if it's set large enough (reducing the #pings/content ratio). 512 bytes seems super small in retrospect. – Jeremiah England Feb 10 '21 at 19:46
11

Here's what I got to work in Python 3:

import zipfile, urllib.request, shutil

url = 'http://www....myzipfile.zip'
file_name = 'myzip.zip'

with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)
    with zipfile.ZipFile(file_name) as zf:
        zf.extractall()
Webucator
  • Hello. How can avoid this error: `urllib.error.HTTPError: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop.`? – Victor M Herasme Perez Jul 24 '19 at 07:36
  • @VictorHerasmePerez, an HTTP 302 response status code means that the page has been moved. I think the issue you're facing is addressed here: https://stackoverflow.com/questions/32569934/urlopen-returning-redirect-error-for-valid-links – Webucator Jul 24 '19 at 11:34
  • @Webucator What if the zipped folder contains several files? Then all those files will get extracted and stored in the system. I want to extract and get just one file from the zipped folder. Any way to achieve this? – Mujeebur Rahman Apr 28 '21 at 07:41
5

I came here searching for how to save a .bzip2 file. Let me paste the code for others who might come looking for this.

url = "http://api.mywebsite.com"
filename = "swateek.tar.gz"

response = requests.get(url, headers=headers, auth=('myusername', 'mypassword'), timeout=50)
if response.status_code == 200:
with open(filename, 'wb') as f:
   f.write(response.content)

I just wanted to save the file as is.

swateek
5

Either use urllib2.urlopen, or you could try using the excellent Requests module and avoid urllib2 headaches:

import requests
results = requests.get('url')
#pass results.content onto secondary processing...
aravenel
  • 1
    But how do you parse `results.content` into a zip? – 0atman Mar 09 '12 at 12:02
  • Use the `zipfile` module: `zip = zipfile.ZipFile(results.content)`. Then just parse through the files using `ZipFile.namelist()`, `ZipFile.open()`, or `ZipFile.extractall()` – aravenel Mar 10 '12 at 16:30
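
A fleshed-out version of that comment might look like this (a sketch; note that `zipfile.ZipFile` expects a file-like object, so the response bytes are wrapped in `io.BytesIO`, and the URL is hypothetical):

import io, zipfile
import requests

results = requests.get('http://example.com/archive.zip')
z = zipfile.ZipFile(io.BytesIO(results.content))
print(z.namelist())            # list the files in the archive
z.extractall('output_folder')  # or extract everything to a directory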
3

Thanks to @yoavram for the above solution. My URL path linked to a zipped folder, and I encountered a BadZipFile error (file is not a zip file). Strangely, if I tried several times, it would retrieve the URL and unzip it all of a sudden, so I amended the solution a little bit, using the `is_zipfile` method as per here:

import io, zipfile
import requests

r = requests.get(url, stream=True)
check = zipfile.is_zipfile(io.BytesIO(r.content))
while not check:  # keep re-requesting until the response is a valid zip file
    r = requests.get(url, stream=True)
    check = zipfile.is_zipfile(io.BytesIO(r.content))
else:
    z = zipfile.ZipFile(io.BytesIO(r.content))
    z.extractall()
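
As a side note, the `while`/`else` above retries forever if the URL never returns a valid zip; a bounded variant might look like this (a minimal sketch, assuming `url` is defined):

import io, zipfile
import requests

for attempt in range(5):  # give up after a few tries
    r = requests.get(url)
    buf = io.BytesIO(r.content)
    if zipfile.is_zipfile(buf):
        zipfile.ZipFile(buf).extractall()
        break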
hindamosh
0

Use the `requests`, `zipfile` and `io` Python packages.

In particular, `BytesIO` is used to keep the downloaded zip file in memory rather than saving it to disk.

import requests
from zipfile import ZipFile
from io import BytesIO

# zip_file_url, a_file_to_extract and path_to_save are placeholders to fill in
r = requests.get(zip_file_url)
z = ZipFile(BytesIO(r.content))
file = z.extract(a_file_to_extract, path_to_save)  # extract a single member
with open(file) as f:
    print(f.read())