222

I would like to know if a key exists in boto3. I can loop over the bucket contents and check whether the key matches.

But that seems longer and like overkill. The Boto3 official docs don't explicitly state how to do this.

Maybe I am missing the obvious. Can anybody point me to how I can achieve this?

Prabhakar Shanmugam

23 Answers

246

Boto 2's boto.s3.key.Key object used to have an exists method that checked if the key existed on S3 by doing a HEAD request and looking at the result, but it seems that it no longer exists. You have to do it yourself:

import boto3
import botocore

s3 = boto3.resource('s3')

try:
    s3.Object('my-bucket', 'dootdoot.jpg').load()
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == "404":
        # The object does not exist.
        ...
    else:
        # Something else has gone wrong.
        raise
else:
    # The object does exist.
    ...

load() does a HEAD request for a single key, which is fast, even if the object in question is large or you have many objects in your bucket.

Of course, you might be checking if the object exists because you're planning on using it. If that is the case, you can just forget about the load() and do a get() or download_file() directly, then handle the error case there.
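For instance, here is a minimal sketch of that approach (the helper name and bucket/key values are illustrative). Note that `get()` reports a missing key with error code `NoSuchKey`, while the HEAD request that `load()` performs reports a bare `404`:

```python
def object_is_missing(error_response):
    # get() / get_object reports a missing key as 'NoSuchKey';
    # load() / head_object reports it as a bare '404'.
    code = error_response.get('Error', {}).get('Code')
    return code in ('NoSuchKey', '404')

# Usage sketch, assuming the boto3 resource `s3` from above:
# try:
#     body = s3.Object('my-bucket', 'dootdoot.jpg').get()['Body'].read()
# except botocore.exceptions.ClientError as e:
#     if object_is_missing(e.response):
#         body = None  # the object does not exist
#     else:
#         raise
```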

Wander Nauta
  • 1
    Thanks for the quick reply Wander. I just need the same for boto3. – Prabhakar Shanmugam Nov 21 '15 at 12:19
  • Oh, I don't know how I missed that. Sorry about that. – Wander Nauta Nov 21 '15 at 12:27
  • 18
    For `boto3`, it seems the best you can do at the moment is to call [`head_object`](https://boto3.readthedocs.org/en/latest/reference/services/s3.html#S3.Client.head_object) to try and fetch the metadata for the key, then handle the resulting error if it doesn't exist. – Wander Nauta Nov 21 '15 at 12:35
  • Similar functionality using "head_object" function: http://stackoverflow.com/questions/26871884/how-can-i-easily-determine-if-a-boto-3-s3-bucket-resource-exists – markonovak Mar 21 '16 at 16:07
  • In case someone using boto2 comes across this thread, it is `k = Key(bucket)` then `k.key='hello'` then you can use `k.exists()` – Carson Ip Jun 22 '16 at 22:59
  • Within the except block you should use `raise` rather than `raise e` to preserve the stack trace. – Tim Jul 25 '16 at 07:25
  • I'm using boto3 and exceptions isn't loaded from "import botocore". Had to do "from botocore.exceptions import ClientError". – Alec McGail Aug 17 '16 at 22:23
  • I use `head_bucket`, given that the Boto3 documentation says: `head_bucket(**kwargs) This operation is useful to determine if a bucket exists and you have permission to access it.` Furthermore, the Boto3 documentation links to S3 documentation, which has almost the same explanation and states that `head_bucket` returns a 200 code if "the object exists and you have permission to access it." https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html – Iron Pillow Oct 09 '17 at 06:41
  • Oh, the above `head_bucket` suggestion works only for buckets, not for objects. I withdraw my suggestion. :-) – Iron Pillow Oct 09 '17 at 07:14
  • Do you really need the `else:` clause in addition to `try/except` ? Could not you just return `True` right before the line that contains the `except` clause? – Leonid Oct 30 '17 at 23:17
  • 1
    @Leonid You certainly could, but only if you wrapped this in a function or method, which is up to you. I've modified the example code a bit so that the `exists` boolean is gone, and it's clearer (I hope!) that people are supposed to adapt this to their situation. – Wander Nauta Nov 08 '17 at 18:15
  • 4
    -1; doesn't work for me. On boto3 version 1.5.26 I see `e.response['Error']['Code']` having a value like `"NoSuchKey"`, not `"404"`. I haven't checked whether this is due to a difference in library versions or a change in the API itself since this answer was written. Either way, in my version of boto3, a shorter approach than checking `e.response['Error']['Code']` is to catch only `s3.meta.client.exceptions.NoSuchKey` in the first place. – Mark Amery Feb 20 '18 at 17:02
  • 7
    if you are using an s3 `client` (as opposed to a `resource`) then do `s3.head_object(Bucket='my_bucket', Key='my_key')` instead of `s3.Object(...).load()` – user2426679 Apr 27 '20 at 23:20
  • I don't know why they have closed the issue on github while I see issue is still there in 1.13.24 version. https://github.com/boto/boto3/issues/759 – Jaspreet Jolly Jun 08 '20 at 09:48
  • When you depend on exception, there is always a drawback that you are dependent on the 3rd party library to throw an exception for you, of course, the implementation could change and your logic will fail in that case. I like EvilPuppetMaster's answer. – Vishrant Jun 18 '20 at 14:41
  • This can cause a problem if the file is huge. – CrabbyPete Oct 15 '20 at 14:06
153

I'm not a big fan of using exceptions for control flow. This is an alternative approach that works in boto3:

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
key = 'dootdoot.jpg'
objs = list(bucket.objects.filter(Prefix=key))
if any(w.key == key for w in objs):
    print("Exists!")
else:
    print("Doesn't exist")
EvilPuppetMaster
  • 2
    Thanks for the update EvilPuppetMaster. Unfortunately when I checked last I didn't have list bucket access rights. Your answer is apt for my question, so I have up voted you. But I had already marked the first reply as answer long before. Thanks for your help. – Prabhakar Shanmugam Jan 05 '16 at 07:19
  • 35
    Doesn't this count as a listing request (12.5x more expensive than get)? If you do this for 100 million objects, that could get a bit pricey... I have the feeling that the catching-exception method is unfortunately the best so far. – Pierre D May 05 '16 at 01:04
  • 27
    List may be 12.5x as expensive per request, but a single request can also return 100 million objects where a single get can only return one. So in your hypothetical case, it would be cheaper to fetch all 100 million with list and then compare locally, than to do 100m individual gets. Not to mention 1000x faster since you wouldn't need the http round trip for every object. – EvilPuppetMaster May 07 '16 at 09:31
  • It is not working when my file is inside folders within an S3 bucket – user3186866 Jan 31 '19 at 20:32
  • 2
    @user3186866 That's because S3 doesn't actually have "folders". All objects exist as files at their given paths. Folders are a tool to help us organize and understand the structure of our storage, but in reality, S3 buckets are just that, buckets. – ibtokin Dec 16 '19 at 17:39
  • 2
    Use ```list_objects_v2``` of S3 client and set ```MaxKeys``` to 1. – Fang Zhang Feb 27 '20 at 18:34
  • I got this error: AttributeError: 'S3' object has no attribute 'Bucket' – Keval Mar 13 '20 at 08:30
  • When checking existence of a folder (In the world of S3: The given key is prefix of some objects) containing a lot of objects, you may want to limit the response size by: `bucket.objects.filter(Prefix=key).limit(1)` for saving time – Benav Jul 24 '20 at 01:05
  • 3
    After running again with debug, looks like `bucket.objects.filter(Prefix=key).limit(1)` doesn't limit the actual response from S3, only the returned collection on the client side. Instead, you should `bucket.objects.filter(Prefix=key, MaxKeys=1)` as @FangZhang suggested above – Benav Jul 30 '20 at 01:03
  • @EvilPuppetMaster - it may be cheaper to fetch all 100 million with list and compare locally, but if you're listing each one by their specific key as in your example then you're going to have to List 100 million times. to efficiently use List you need to know a common prefix or list of common prefixes, which for 100 million items becomes its own N^2 nightmare if you have to calculate it yourself – zyd Oct 27 '20 at 15:26
  • @zyd yes my response to the cost of 100m lookups was more about the general approach of listing being cheaper than lookups since it can get 1000 objects at a time. You would obviously implement that differently to the example given for only one object. If the 100m objects were not a significant proportion of your bucket or a single prefix in your bucket, then perhaps it wouldn't be the best approach. But if that was the case I would suggest other alternatives like s3 inventory for your problem. – EvilPuppetMaster Dec 24 '20 at 00:58
  • @ibtokin, I didn't understand the folder part. If I have a file in a folder, key = 'dootdoot.jpg' will return not found, so do I need to type the key value differently? key = 'myfolder/dootdoot.jpg' – Manza Apr 17 '21 at 06:00
144

The easiest way I found (and probably the most efficient) is this:

import boto3
from botocore.errorfactory import ClientError

s3 = boto3.client('s3')
try:
    s3.head_object(Bucket='bucket_name', Key='file_path')
except ClientError:
    # Not found (note: ClientError covers any client error, not just 404;
    # inspect e.response['Error']['Code'] if you need to be precise)
    pass
o_c
  • 2
    Note: You don't have to pass aws_access_key_id/aws_secret_access_key etc. if using a role or you have the keys in your .aws config, you can simply do `s3 = boto3.client('s3')` – Andy Hayden Jun 21 '17 at 00:46
  • 27
    I think adding this test gives you a little more confidence the object really doesn't exist, rather than some other error raising the exception - note that 'e' is the ClientError exception instance: `if e.response['ResponseMetadata']['HTTPStatusCode'] == 404:` – Richard Jul 10 '17 at 16:17
  • @AndyHayden What would each try count as in terms of aws cost? – loop Dec 20 '17 at 12:06
  • 2
    @Taylor it's a get request but with no data transfer. – Andy Hayden Dec 20 '17 at 19:36
  • 1
    ClientError is a catch all for 400, not just 404 therefore it is not robust. – mickzer Jan 07 '20 at 16:43
  • An error occurred (403) when calling the HeadObject operation: Forbidden – ifti Feb 26 '20 at 17:42
  • @ifti 403 occurs if you (the iam user/role being used to create the s3 client) don't have permission to call head_object on the object, or if the object doesn't exist and you don't have list_bucket permission in that folder. You'll want to make sure you have these permissions first. – TheJKFever Feb 28 '20 at 20:16
  • 2
    @mickzer you are right. It is better to except a S3.Client.exceptions.NoSuchKey. – Neau Adrien Jul 10 '20 at 12:53
27

In Boto3, if you're checking for either a folder (prefix) or a file using list_objects, you can use the existence of 'Contents' in the response dict as a check for whether the object exists. It's another way to avoid the try/except catches, as @EvilPuppetMaster suggests:

import boto3
client = boto3.client('s3')
results = client.list_objects(Bucket='my-bucket', Prefix='dootdoot.jpg')
exists = 'Contents' in results
Lucian Thorr
  • 2
    Had a problem in this. list_objects("2000") will return keys like "2000-01", "2000-02" – G. Cheng Mar 27 '17 at 03:15
  • 4
    This only returns upto 1000 objects! https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.list_objects – RoachLord Dec 17 '18 at 19:42
  • This is the most efficient solution as this does not require `s3:GetObject` permissions just the `s3:ListBucket` permissions – Vishrant Jun 27 '20 at 23:35
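Given the 1000-object limit mentioned above, a paginator can be used when the prefix may match many keys. A hedged sketch (names are illustrative); the pure helper just inspects the response pages, and the boto3 calls are shown as usage comments:

```python
def any_page_has_contents(pages):
    """True if any page of a paginated list_objects_v2 result contains keys."""
    return any('Contents' in page for page in pages)

# Usage sketch, assuming a configured boto3 client:
# client = boto3.client('s3')
# pages = client.get_paginator('list_objects_v2').paginate(
#     Bucket='my-bucket', Prefix='dootdoot.jpg')
# exists = any_page_has_contents(pages)
```

For a single-key existence check, head_object or MaxKeys=1 remains cheaper; the paginator only matters when you actually need to scan a large prefix.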
17

You can use S3Fs, which is essentially a wrapper around boto3 that exposes typical file-system style operations:

import s3fs
s3 = s3fs.S3FileSystem()
s3.exists('my-bucket/myfile.txt')
VinceP
  • 3
    Although I think this would work, the question asks about how to do this with boto3; in this case, it is practical to solve the problem without installing an additional library. – paulkernfeld Jan 07 '20 at 15:13
11

This works not only with a client but with a bucket resource too:

import boto3
import botocore
bucket = boto3.resource('s3', region_name='eu-west-1').Bucket('my-bucket')

try:
  bucket.Object('my-file').get()
except botocore.exceptions.ClientError as ex:
  if ex.response['Error']['Code'] == 'NoSuchKey':
    print('NoSuchKey')
Vitaly Zdanevich
  • 4
    You may not want to get the object, but just see if it is there. You could use a method that heads the object like other examples here, such as `bucket.Object(key).last_modified`. – ryanjdillon Feb 26 '19 at 14:30
10

Assuming you just want to check if a key exists (instead of quietly over-writing it), do this check first:

import boto3

def key_exists(mykey, mybucket):
    s3_client = boto3.client('s3')
    response = s3_client.list_objects_v2(Bucket=mybucket, Prefix=mykey)
    # 'Contents' is absent when nothing matches the prefix
    for obj in response.get('Contents', []):
        if mykey == obj['Key']:
            return True
    return False

if key_exists('someprefix/myfile-abc123', 'my-bucket-name'):
    print("key exists")
else:
    print("safe to put new bucket object")
    # try:
    #     resp = s3_client.put_object(Body="Your string or file-like object",
    #                                 Bucket=mybucket,Key=mykey)
    # ...check resp success and ClientError exception for errors...
marvls
7

This can check both a prefix and a key, and it fetches at most one key.

def prefix_exists(bucket, prefix):
    s3_client = boto3.client('s3')
    res = s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
    return 'Contents' in res
Fang Zhang
6
import boto3

client = boto3.client('s3')
s3_key = 'Your file without bucket name e.g. abc/bcd.txt'
bucket = 'your bucket name'
# Note: head_object raises a ClientError if the key is missing,
# so in practice you also want a try/except around this call.
content = client.head_object(Bucket=bucket, Key=s3_key)
if content.get('ResponseMetadata', None) is not None:
    print("File exists - s3://%s/%s" % (bucket, s3_key))
else:
    print("File does not exist - s3://%s/%s" % (bucket, s3_key))
Vivek
6

FWIW, here are the very simple functions that I am using

import os

import boto3

def get_resource(config: dict={}):
    """Loads the s3 resource.

    Expects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be in the environment
    or in a config dictionary.
    Looks in the environment first."""

    s3 = boto3.resource(
        's3',
        aws_access_key_id=os.environ.get(
            "AWS_ACCESS_KEY_ID", config.get("AWS_ACCESS_KEY_ID")),
        aws_secret_access_key=os.environ.get(
            "AWS_SECRET_ACCESS_KEY", config.get("AWS_SECRET_ACCESS_KEY")))
    return s3


def get_bucket(s3, s3_uri: str):
    """Get the bucket from the resource.
    A thin wrapper, use with caution.

    Example usage:

    >>> bucket = get_bucket(get_resource(), s3_uri_prod)"""
    return s3.Bucket(s3_uri)


def isfile_s3(bucket, key: str) -> bool:
    """Returns T/F whether the file exists."""
    objs = list(bucket.objects.filter(Prefix=key))
    return len(objs) == 1 and objs[0].key == key


def isdir_s3(bucket, key: str) -> bool:
    """Returns T/F whether the directory exists."""
    objs = list(bucket.objects.filter(Prefix=key))
    return len(objs) > 1
Andy Reagan
  • 1
    this is the only response i saw that addressed checking for existence for a 'folder' as compared to a 'file'. that is super-important for routines that need to know if a specific folder exists, not the specific files in a folder. – dave campbell Dec 05 '18 at 17:21
  • While this is a careful answer, it is only useful if the user understands that the notion of a folder is misleading in this case. An empty 'folder' can exist in S3 inside a bucket, and if so, isdir_s3 will return False; it took me a couple of minutes to sort that out. I was thinking about editing the answer: if the expression is changed to >0, you will get the result you are expecting. – PyNEwbie Oct 19 '19 at 00:11
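Following the comments above, here is a sketch of the two tweaks suggested there (the helper name is made up): append a '/' so a prefix like 'data' cannot match 'database/...', and use `> 0` so empty console-created "folders" also count:

```python
def ensure_dir_prefix(key: str) -> str:
    """Append '/' so a prefix like 'data' cannot match 'database/...'."""
    return key if key.endswith('/') else key + '/'

# Usage sketch with a bucket resource as above:
# objs = list(bucket.objects.filter(Prefix=ensure_dir_prefix(key)).limit(1))
# exists = len(objs) > 0   # > 0, not > 1, so empty "folders" count too
```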
3

Try this simple approach:

import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('mybucket_name') # just Bucket name
file_name = 'A/B/filename.txt'      # full file path
obj = list(bucket.objects.filter(Prefix=file_name))
if len(obj) > 0:
    print("Exists")
else:
    print("Not Exists")
Alkesh Mahajan
2

There is one simple way to check if a file exists in an S3 bucket. We don't need to use an exception for this:

session = boto3.Session(aws_access_key_id, aws_secret_access_key)
s3 = session.client('s3')

object_name = 'filename'
bucket = 'bucketname'
obj_status = s3.list_objects(Bucket = bucket, Prefix = object_name)
if obj_status.get('Contents'):
    print("File exists")
else:
    print("File does not exist")
Mahesh Mogal
  • This will be incorrect if a file that starts with `object_name` exists in the bucket. E.g. `my_file.txt.oldversion` will return a false positive if you check for `my_file.txt`. A bit of an edge case for most, but for something as broad as "does the file exist" that you're likely to use throughout your application probably worth taking into consideration. – Andrew Schwartz Sep 11 '19 at 15:31
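To guard against the false positive described in the comment, one option is to compare keys exactly rather than relying on the prefix alone. A sketch (the helper name is made up); the pure function operates on the response dict, with the boto3 call shown as a usage comment:

```python
def key_in_listing(listing, key):
    """True only if `key` itself appears in a list_objects response,
    not merely some key that starts with it (e.g. 'my_file.txt.oldversion')."""
    return any(obj['Key'] == key for obj in listing.get('Contents', []))

# Usage sketch with the client from above:
# resp = s3.list_objects(Bucket='bucketname', Prefix='my_file.txt')
# exists = key_in_listing(resp, 'my_file.txt')
```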
2

If you seek a key that is equivalent to a directory, then you might want this approach:

session = boto3.session.Session()
resource = session.resource("s3")
bucket = resource.Bucket('mybucket')

key = 'dir-like-or-file-like-key'
objects = [o for o in bucket.objects.filter(Prefix=key).limit(1)]    
has_key = len(objects) > 0

This works for a parent key, a key that equates to a file, or a key that does not exist. I tried the favored approach above, and it failed on parent keys.

Peter Kahn
2

You can use Boto3 for this:

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
key = 'path/to/check'  # the key whose existence you want to check
objs = list(bucket.objects.filter(Prefix=key))
if len(objs) > 0:
    print("key exists!!")
else:
    print("key doesn't exist!")

Here `key` is the path you want to check for.

AshuGG
1

If you have fewer than 1,000 objects in a directory or bucket, you can build a set of their names and then check whether a key is in that set:

files_in_dir = {d['Key'].split('/')[-1]
                for d in s3_client.list_objects_v2(
                    Bucket='mybucket',
                    Prefix='my/dir').get('Contents') or []}

This code works even if my/dir does not exist.

http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.list_objects_v2

Vitaly Zdanevich
1
import boto3
from botocore.client import Config

S3_REGION = "eu-central-1"
bucket = "mybucket1"
name = "objectname"

client = boto3.client('s3', region_name=S3_REGION,
                      config=Config(signature_version='s3v4'))

def key_exists():
    response = client.list_objects_v2(Bucket=bucket, Prefix=name)
    for obj in response.get('Contents', []):
        if obj['Key'] == name:
            return True
    return False
1

For boto3, ObjectSummary can be used to check if an object exists.

Contains the summary of an object stored in an Amazon S3 bucket. This object doesn't contain the object's full metadata or any of its contents.

import boto3
from botocore.errorfactory import ClientError
def path_exists(path, bucket_name):
    """Check to see if an object exists on S3"""
    s3 = boto3.resource('s3')
    try:
        s3.ObjectSummary(bucket_name=bucket_name, key=path).load()
    except ClientError as e:
        if e.response['Error']['Code'] == "404":
            return False
        else:
            raise
    return True

path_exists('path/to/file.html', 'my-bucket')

In ObjectSummary.load

Calls s3.Client.head_object to update the attributes of the ObjectSummary resource.

This shows that you can use ObjectSummary instead of Object if you do not plan on using get(). The load() function does not retrieve the object; it only obtains the summary.

Veedka
1

I noticed that just to catch the exception using botocore.exceptions.ClientError, we need to install botocore, which takes up 36M of disk space. This is particularly impactful if we use AWS Lambda functions. If we just catch the generic Exception instead, we can skip the extra library!

  • I am validating that the file extension is '.csv'
  • This will not throw an exception if the bucket does not exist!
  • This will not throw an exception if the bucket exists but the object does not exist!
  • This throws an exception if the bucket is empty!
  • This throws an exception if the bucket has no permissions!

The code looks like this. Please share your thoughts:

import boto3
import traceback

def download4mS3(s3bucket, s3Path, localPath):
    s3 = boto3.resource('s3')

    print('Looking for the csv data file ending with .csv in bucket: ' + s3bucket + ' path: ' + s3Path)
    if s3Path.endswith('.csv') and s3Path != '':
        try:
            s3.Bucket(s3bucket).download_file(s3Path, localPath)
        except Exception as e:
            print(e)
            print(traceback.format_exc())
            # Only botocore's ClientError carries `.response`; a generic
            # Exception may not, hence the defensive getattr here.
            if getattr(e, 'response', {}).get('Error', {}).get('Code') == "404":
                print("Downloading the file from: [", s3Path, "] failed")
                exit(12)
            else:
                raise
        print("Downloading the file from: [", s3Path, "] succeeded")
    else:
        print("csv file not found in: [", s3Path, "]")
        exit(12)
user 923227
  • AWS says that python runtimes come with boto3 preinstalled: https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html – rinat.io Feb 21 '20 at 15:12
1

Here is a solution that works for me. One caveat is that I know the exact format of the key ahead of time, so I am only listing the single file:

import boto3

# The s3 base class to interact with S3
class S3(object):
  def __init__(self):
    self.s3_client = boto3.client('s3')

  def check_if_object_exists(self, s3_bucket, s3_key):
    response = self.s3_client.list_objects(
      Bucket = s3_bucket,
      Prefix = s3_key
      )
    # 'ETag' only appears in the response when 'Contents' is present
    return 'ETag' in str(response)

if __name__ == '__main__':
  s3 = S3()
  if s3.check_if_object_exists('my-bucket', 'my-key'):
    print("Found S3 object.")
  else:
    print("No object found.")
Rush S
1

Just following the thread, can someone conclude which one is the most efficient way to check if an object exists in S3?

I think head_object might win, as it just checks the metadata, which is lighter than the actual object itself.

Sai
  • Yes, `head_object` is the fastest way -- it is also how `s3.Object('my-bucket', 'dootdoot.jpg').load()` checks under the hood if the object exists. You can see this if you look at the error message of this method when it fails. – Marco Feb 18 '21 at 13:48
1

Use this concise one-liner; it's less intrusive when you have to throw it into an existing project without modifying much of the code.

s3_file_exists = lambda filename: bool(list(bucket.objects.filter(Prefix=filename)))

The above function assumes the bucket variable was already declared.

You can extend the lambda to support an additional parameter, like:

s3_file_exists = lambda filename, bucket: bool(list(bucket.objects.filter(Prefix=filename)))
nehem
0

Check out

bucket.get_key(
    key_name, 
    headers=None, 
    version_id=None, 
    response_headers=None, 
    validate=True
)

Check to see if a particular key exists within the bucket. This method uses a HEAD request to check for the existence of the key. Returns: An instance of a Key object or None

from Boto S3 Docs

You can just call bucket.get_key(keyname) and check if the returned object is None.

0

It's really simple with the get() method:

import botocore
from boto3.session import Session
session = Session(aws_access_key_id='AWS_ACCESS_KEY',
                aws_secret_access_key='AWS_SECRET_ACCESS_KEY')
s3 = session.resource('s3')
bucket_s3 = s3.Bucket('bucket_name')

def not_exist(file_key):
    try:
        file_details = bucket_s3.Object(file_key).get()
        # print(file_details) # This line prints the file details
        return False
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == "NoSuchKey": # or check e.response['ResponseMetadata']['HTTPStatusCode'] == 404
            return True
        return False # For any other error it's hard to determine whether it exists or not. so based on the requirement feel free to change it to True/ False / raise Exception

print(not_exist('hello_world.txt')) 
isambitd
  • Not robust, exception could be thrown for many reasons e.g. HTTP 500 and this code would assume a 404. – mickzer Jan 07 '20 at 16:42
  • But we need info about whether the file is accessible or not. If it exists but cannot be accessed, then it is equivalent to not existing, right? – isambitd Jan 07 '20 at 17:41
  • @mickzer check the changes now. – isambitd Jan 10 '20 at 16:47
  • 1
    To reply to you previous comment, No, the behavior, on a HTTP 500 might be to retry, a 401/403 to fix auth etc. Its important to check for the actual error code. – mickzer Jan 13 '20 at 14:37