75

I'm trying to set up an Amazon Linux AMI (ami-f0091d91) and have a script that runs a copy command to copy from an S3 bucket.

 aws --debug s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .

This script works perfectly on my local machine but fails with the following error on the EC2 instance launched from that AMI:

2016-03-22 01:07:47,110 - MainThread - botocore.auth - DEBUG - StringToSign:
HEAD


Tue, 22 Mar 2016 01:07:47 GMT
x-amz-security-token:AQoDYXdzEPr//////////wEa4ANtcDKVDItVq8Z5OKms8wpQ3MS4dxLtxVq6Om1aWDhLmZhL2zdqiasNBV4nQtVqwyPsRVyxl1Urq1BBCnZzDdl4blSklm6dvu+3efjwjhudk7AKaCEHWlTd/VR3cksSNMFTcI9aIUUwzGW8lD9y8MVpKzDkpxzNB7ZJbr9HQNu8uF/st0f45+ABLm8X4FsBPCl2I3wKqvwV/s2VioP/tJf7RGQK3FC079oxw3mOid5sEi28o0Qp4h/Vy9xEHQ28YQNHXOBafHi0vt7vZpOtOfCJBzXvKbk4zRXbLMamnWVe3V0dArncbNEgL1aAi1ooSQ8+Xps8ufFnqDp7HsquAj50p459XnPedv90uFFd6YnwiVkng9nNTAF+2Jo73+eKTt955Us25Chxvk72nAQsAZlt6NpfR+fF/Qs7jjMGSF6ucjkKbm0x5aCqCw6YknsoE1Rtn8Qz9tFxTmUzyCTNd7uRaxbswm7oHOdsM/Q69otjzqSIztlwgUh2M53LzgChQYx5RjYlrjcyAolRguJjpSq3LwZ5NEacm/W17bDOdaZL3y1977rSJrCxb7lmnHCOER5W0tsF9+XUGW1LMX69EWgFYdn5QNqFk6mcJsZWrR9dkehaQwjLPcv/29QcM+b5u/0goazCtwU=
/aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm
2016-03-22 01:07:47,111 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [HEAD]>
2016-03-22 01:07:47,111 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - INFO - Starting new HTTPS connection (1): aws-codedeploy-us-west-2.s3.amazonaws.com
2016-03-22 01:07:47,151 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - DEBUG - "HEAD /latest/codedeploy-agent.noarch.rpm HTTP/1.1" 403 0
2016-03-22 01:07:47,151 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': '0mRvGge9ugu+KKyDmROm4jcTa1hAnA5Ax8vUlkKZXoJ//HVJAKxbpFHvOGaqiECa4sgon2F1kXw=', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': '6204CD88E880E5DD', 'date': 'Tue, 22 Mar 2016 01:07:46 GMT', 'content-type': 'application/xml'}
2016-03-22 01:07:47,152 - MainThread - botocore.parsers - DEBUG - Response body:

2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f421075bcd0>
2016-03-22 01:07:47,152 - MainThread - botocore.retryhandler - DEBUG - No retry needed.
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <function enhance_error_msg at 0x7f4211085758>
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <awscli.errorhandler.ErrorHandler object at 0x7f421100cc90>
2016-03-22 01:07:47,152 - MainThread - awscli.errorhandler - DEBUG - HTTP Response Code: 403
2016-03-22 01:07:47,152 - MainThread - awscli.customizations.s3.s3handler - DEBUG - Exception caught during task execution: A client error (403) occurred when calling the HeadObject operation: Forbidden
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 100, in call
    total_files, total_parts = self._enqueue_tasks(files)
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 178, in _enqueue_tasks
    for filename in files:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/fileinfobuilder.py", line 31, in call
    for file_base in files:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 142, in call
    for src_path, extra_information in file_iterator:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 314, in list_objects
    yield self._list_single_object(s3_path)
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 343, in _list_single_object
    response = self._client.head_object(**params)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 228, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 488, in _make_api_call
    model=operation_model, context=request_context
  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 226, in emit
    return self._emit(event_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 209, in _emit
    response = handler(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/awscli/errorhandler.py", line 70, in __call__
    http_status_code=http_response.status_code)
ClientError: A client error (403) occurred when calling the HeadObject operation: Forbidden
2016-03-22 01:07:47,153 - Thread-1 - awscli.customizations.s3.executor - DEBUG - Received print task: PrintTask(message='A client error (403) occurred when calling the HeadObject operation: Forbidden', error=True, total_parts=None, warning=None)
A client error (403) occurred when calling the HeadObject operation: Forbidden

However, when I run it with the --no-sign-request option, it works perfectly:

 aws --debug --no-sign-request s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .

Can someone please explain what is going on?

MojoJojo
    It *looks like* you're (maybe implicitly) using the instance's IAM role to make the request (that would explain `x-amz-security-token` -- temporary credentials from the role) and your role denies access to S3... or the bucket (not yours, I take it?) doesn't allow access with credentials -- though if it's public, that's strange. As always, make sure your system clock is correct, since with `HEAD` the error body is always suppressed. – Michael - sqlbot Mar 22 '16 at 01:55
  • Hi, thank you for the quick response. The bucket that I'm trying to access is, indeed, public. Not sure why it is complaining about a signed request then. It fails with a similar error on my own bucket as well without the --no-sign-request option. – MojoJojo Mar 22 '16 at 02:01
  • You do have an IAM role on this instance, right? It sounds as if that role may be restricting things, perhaps in unexpected ways. – Michael - sqlbot Mar 22 '16 at 02:14
  • My answer for a similar situation. https://stackoverflow.com/a/56743569/577652 – shlomiLan Jun 24 '19 at 20:34

24 Answers

31

I figured it out. I had an error in my CloudFormation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the CodeDeploy buckets above were in a different region (not us-west-2). It seems the access policies on those buckets (owned by Amazon) only allow access from the region they belong to. When I fixed the error in my template (it was a wrong parameter map), the error disappeared.
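
A quick way to confirm a mismatch like this is to check which region a bucket you own lives in, and to pass the region explicitly on the copy (my-own-bucket below is a placeholder; for the AWS-owned CodeDeploy buckets the region is already part of the bucket name):

# For a bucket in your own account, ask S3 which region it lives in
aws s3api get-bucket-location --bucket my-own-bucket

# Retry the copy with the bucket's region stated explicitly
aws s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm . --region us-west-2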

MojoJojo
    You wrote: "access policies on the buckets (owned by Amazon) only allow access from the region they belong in." Buckets don't "belong to a region". They are Global. Wish I understood what fixed your error. – LeslieK Jun 17 '17 at 17:07
    Buckets actually are defined in a region. – dmohr Aug 23 '17 at 17:07
  • Passing the bucket's region as parameter worked for me. – Giovane Jul 12 '18 at 17:20
  • I was getting something similar on boto3 in a cross-region request, and discovered a bug in my policy in the process. [in this answer](https://stackoverflow.com/a/53465600/5556676). Can't say it's the same fix here, but maybe that's a hint? @LeslieK – init_js Nov 27 '18 at 06:44
  • I have checked to know if `us-west-2a` would be different from `us-west-2b` and it turns out that it works either way. It does not contradict with your answer but it adds to it. Thanks. – Yevgeniy Afanasyev Nov 28 '18 at 07:12
  • @LeslieK AWS S3 bucket names are global, yes, but buckets themselves are always regional. To get the bucket region see https://docs.aws.amazon.com/cli/latest/reference/s3api/get-bucket-location.html Only in Google Cloud buckets can be multi-regional (but still confined to one multi-region like US or EU). – Dzmitry Lazerka Feb 19 '19 at 00:56
  • Basically what happens is if the API command does not specify the region, it will try to use the profile connected default so the bucket name will not be formed properly [see accessing bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro), as the URL includes the region. – Efren Jan 14 '20 at 06:50
29

In my case the problem was the Resource statement in the user's access policy.

First we had "Resource": "arn:aws:s3:::BUCKET_NAME", but in order to have access to objects within a bucket you need a /* at the end: "Resource": "arn:aws:s3:::BUCKET_NAME/*".

From the AWS documentation:

Bucket access permissions specify which users are allowed access to the objects in a bucket and which types of access they have. Object access permissions specify which users are allowed access to the object and which types of access they have. For example, one user might have only read permission, while another might have read and write permissions.

Nick Brady
trudolf
23

Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD operation requires the ListBucket permission. I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.
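
If you want to see whether the IAM side is what is blocking you, the policy simulator can evaluate the caller against both actions a copy needs (the role ARN and bucket name below are placeholders; this only checks identity-based policies, so the bucket policy still needs a separate look):

# Simulate s3:GetObject and s3:ListBucket for the role or user in question
aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:role/my-instance-role \
    --action-names s3:GetObject s3:ListBucket \
    --resource-arns "arn:aws:s3:::mybucket" "arn:aws:s3:::mybucket/*"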

andrew lorien
    yes, this is the algorithm... https://youtu.be/YQsK4MtsELU?t=808 . you need to make sure resource policy does not conflict with IAM policy – overexchange Jul 31 '19 at 16:34
    This should be the accepted answer - see also [this AWS support article](https://aws.amazon.com/premiumsupport/knowledge-center/s3-access-denied-listobjects-sync/). For some reason, this permission is required for `aws s3 cp` of an object from a bucket. – RichVel Mar 10 '21 at 12:02
7

I was getting the error A client error (403) occurred when calling the HeadObject operation: Forbidden for my AWS CLI copy command aws s3 cp s3://bucket/file file. I was using an IAM role which had full S3 access via an inline policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

If I give it full S3 access from the managed policies instead, the command works. I think this must be a bug on Amazon's side, because the policies in both cases were exactly the same.
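
When this happens it is worth dumping what is actually attached to the role, since inline and managed policies are listed by different calls (the role and policy names below are placeholders):

# Managed policies attached to the role
aws iam list-attached-role-policies --role-name MyInstanceRole

# Inline policies embedded in the role, and the body of one of them
aws iam list-role-policies --role-name MyInstanceRole
aws iam get-role-policy --role-name MyInstanceRole --policy-name MyInlineS3Policy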

Shadi
  • Btw, I was trying to use [goofys](https://github.com/kahing/goofys/) to mount an s3 bucket in my ubuntu server filesystem via an IAM user attached to a similar policy to the above but with `Resource: "example"` set (instead of `*`), and that caused the inability to create files there ([similar issue](https://github.com/kahing/goofys/issues/165)). I just changed it to the `managed policy` of `AmazonS3FullAccess` – Shadi Jul 02 '17 at 13:31
    This is a bad answer - you should never allow policies that allow access to everything – Marco de Abreu May 07 '18 at 13:44
    It's a workaround as long as the original bug is not fixed – Shadi May 07 '18 at 15:07
    I can't believe I landed on my own post 3 years later again :/ – Shadi Oct 01 '19 at 10:36
  • It only allows access to all S3 buckets that the account has, and may be denied by bucket policies anyhow. However you should always deny public access to buckets unless you really know what you are doing ! ;-D – MikeW Oct 26 '20 at 12:12
6

Check the object's owner if you copied the file from another AWS account.

In my case, I copied the file from another AWS account without an ACL, so the file's owner was the other AWS account, meaning the file still belonged to the origin account.

To fix it, copy or sync the S3 files with an ACL, for example:

aws s3 cp --acl bucket-owner-full-control s3://bucket1/key s3://bucket2/key
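
To confirm that ownership is the problem, you can check who S3 reports as the owner of the copied object (bucket and key below are placeholders):

# The Owner block in the output shows which account owns the object
aws s3api get-object-acl --bucket bucket2 --key key
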
dm03514
王信凱
    out of date because it has been replaced by `--acl` as per the help page ( accepts values of private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control and log-delivery-write). Thanks for putting me on the right track – Tom Mar 13 '20 at 12:34
5
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::your_bucket_name",
                "arn:aws:s3:::your_bucket_name/*"
            ]
        }
    ]
}

Adding both "arn:aws:s3:::your_bucket_name" and "arn:aws:s3:::your_bucket_name/*" to the policy configuration fixed the issue for me.

Nisal Perera
4

One of the reasons for this could be that you are trying to access buckets in a region which requires V4 signing. Try explicitly providing the region, e.g. --region cn-north-1.
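
On older versions of the CLI you can also force Signature Version 4 in the config in addition to passing the region (newer versions sign with V4 by default, so treat this as a sketch for old installs; the bucket and key are placeholders):

# Force SigV4 for S3 requests (only needed on older awscli versions)
aws configure set default.s3.signature_version s3v4

# And be explicit about the bucket's region
aws s3 cp s3://mybucket/key . --region cn-north-1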

metame
Saurabh
4

I've had this issue; adding --recursive to the command will help.

At this point it doesn't quite make sense, as you (like me) are only trying to copy a single file down, but it does the trick!

Scott Bennett-McLeish
4

In my case, I got this error trying to get an object from a folder in an S3 bucket. But my object was not in that folder (I had put the wrong folder), so S3 sent this message. Hope it helps you too.
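
Assuming your credentials allow listing, a quick way to check that the key really exists under the prefix you expect (bucket and key below are placeholders):

# List what is actually under the prefix (requires s3:ListBucket)
aws s3 ls s3://mybucket/myfolder/

# Or ask for the exact object; with list permission a missing key returns 404 instead of 403
aws s3api head-object --bucket mybucket --key myfolder/myfile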

Vince
3

403 means: I know who you are, but you are not authorized to do what you are asking.

In my case, the problem was in a policy - I didn't choose an object when specifying the policy in the Visual Editor.


SAndriy
2

I was getting this error message due to my EC2 instance's clock being out of sync.

I was able to fix it on Ubuntu using this:

sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
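
Before installing anything, you can check how far the clock has drifted by comparing the instance's idea of UTC with the time S3 reports (assuming curl is available):

# The two timestamps should only differ by a few seconds
date -u
curl -sI https://s3.amazonaws.com | grep -i '^date'
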
Tatsu
2

The minimal permissions that worked for me when running HeadObject on any object in mybucket:

        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::mybucket/*",
                "arn:aws:s3:::mybucket"
            ]
        }
Anemone
1

I got this error with a misconfigured test event. I changed the source bucket's ARN but forgot to edit the default S3 bucket name.

I.e. make sure that in the bucket section of the test event both the ARN and the bucket name are set correctly:

"bucket": {
  "arn": "arn:aws:s3:::your_bucket_name",
  "name": "your_bucket_name",
  "ownerIdentity": {
    "principalId": "EXAMPLE"
  }
quax
1

I also experienced that behaviour. In my case I found that if the IAM policy doesn't have permission to read the object (s3:GetObject), the same error is raised.

I agree with you that the error raised by the AWS console & CLI is not really well explained and may cause confusion.

Adrian Antunez
1

I had a Lambda function doing the same thing, copying from bucket to bucket.

The Lambda had permission to use the source bucket as a trigger.

Configuration tab


But it also needs permissions to operate on the buckets.

Permissions tab


If S3 is not there, then you need to edit the role used by the Lambda and add it (e.g. S3 full access).

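As a sketch of that last step from the CLI (the role name below is a placeholder, and the managed AmazonS3FullAccess policy is broader than most functions need, so prefer a scoped policy in production):

# Attach S3 access to the Lambda's execution role
aws iam attach-role-policy \
    --role-name my-lambda-execution-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess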

Rub
0

I was getting a 403 on HEAD requests while GET requests were working. It turned out to be the CORS config in the S3 permissions. I had to add HEAD:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
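
If you manage the bucket from the CLI, note that s3api's put-bucket-cors takes the same rules as JSON rather than the XML shown above; a rough equivalent (the bucket name is a placeholder):

# JSON equivalent of the XML rules above
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["HEAD", "GET", "PUT", "POST"],
      "AllowedHeaders": ["*"]
    }
  ]
}
EOF
aws s3api put-bucket-cors --bucket mybucket --cors-configuration file://cors.json
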
Ioannis Tsiokos
0

I have also experienced this scenario.

I have a bucket with a policy that uses AWS4-HMAC-SHA256. It turned out my awscli was not updated to the latest version; mine was aws-cli/1.10.8. Upgrading it solved the problem.

pip install awscli --upgrade --user

https://docs.aws.amazon.com/cli/latest/userguide/installing.html

Renzo Sunico
0

If running in an environment where the credentials/role are not clear, be sure to include the --profile=yourprofile flag so the CLI knows which credentials to use. For example:

aws s3 cp s3://yourbucket destination.txt --profile=yourprofile

will succeed, while the following yields the HeadObject error:

aws s3 cp s3://yourbucket destination.txt

The profile settings reference entries in your config and credentials files.
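
To see exactly which profile and credentials the CLI will pick up, and what identity they resolve to, something like this helps:

# Show where the CLI is reading credentials from for this profile
aws configure list --profile yourprofile

# Show the account and user/role those credentials actually resolve to
aws sts get-caller-identity --profile yourprofile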

rictionaryFever
0

When it comes to cross-account S3 access:

An IAM user policy will not override the policy defined for the bucket in the foreign account.

s3:GetObject must be allowed for accountA/user as well as on accountB/bucket.
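
When debugging this it helps to read the bucket policy on the foreign account's bucket, using credentials from that account (the bucket name below is a placeholder):

# Print the bucket policy attached in the bucket-owning account
aws s3api get-bucket-policy --bucket mybucket --query Policy --output text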

dank
0

I got this fixed by setting the system time correctly.

Ensure the AWS bucket region is right and that your system time is in sync with the actual time.

praveen
0

When I faced this issue, I discovered that the files in the 'Source Account' had been copied there by a third party, so the owner was not the source account.

I had to re-copy the objects onto themselves in the same bucket with --metadata-directive REPLACE

Detailed explanation in Amazon Documentation

codaddict
0

Permissions

You need the s3:GetObject permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.

If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 ("no such key") error.

If you don’t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.

The following operation is related to HeadObject:

GetObject

Source: https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html

paulopontesm
0

It's a terrible practice to give away access to all of S3 (all actions, all buckets) just to unblock yourself.

The 403 error above is usually due to the lack of "Read" permission on the files. The Read action for reading a file in S3 is s3:GetObject.

        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::mybucketname/path/*",
                "arn:aws:s3:::mybucketname"
            ]
        }

Solution 1: A new Policy in IAM (grant the Role/User access to S3)

You can create a Policy (e.g. MY_S3_READER) with the following, and attach it to the user or role that's doing the job. (e.g. EC2 Instance's IAM role)

Here is the exact JSON for your Policy: (just replace mybucketname and path)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::mybucketname/path/*",
                "arn:aws:s3:::mybucketname"
            ]
        }
    ]
}

Create this Policy. Then, go to IAM > Roles > Attach Policy and attach it.

Solution 2: Edit the Bucket Policy in S3 (tell S3 to allow the User/Role)

Go to your bucket in S3, then add the following example: (replace mybucketname and myip)

{
    "Version": "2012-10-17",
    "Id": "SourceIP",
    "Statement": [
        {
            "Sid": "ValidIpAllowRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::mybucketname",
                "arn:aws:s3:::mybucketname/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "myip/32"
                }
            }
        }
    ]
}

If you want to change this read permission to be granted by User or Role (instead of IP address), remove the Condition part and change "Principal" to "Principal": { "AWS": "<IAM User/Role's ARN>" }.

Additional Notes

  1. Check the permissions via aws s3 cp or aws s3 ls manually for faster debugging.

  2. It sometimes takes up to 30 seconds for the permission change to be effective. Be patient.

  3. Note that for doing "ls" (e.g. aws s3 ls s3://mybucket/mypath) you need s3:ListBucket access.

  4. IMPORTANT: Accessing files by their HTTP(S) URL via cURL or similar tools (e.g. axios in AJAX calls) requires you to either grant IP access, supply the proper headers manually, or get a signed URL from the SDK first (see the example after this list).
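
For point 4, a pre-signed URL can be generated from the CLI as well as the SDK; a quick sketch (the bucket and key below are placeholders):

# Generate a URL valid for one hour that curl/axios can fetch without AWS credentials
aws s3 presign s3://mybucketname/path/file.txt --expires-in 3600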

Aidin
0

Maybe this will help someone. In my case, I was running a CodeBuild job and the CodeBuild execution role had full access to S3. I was trying to list keys in an S3 bucket via the CLI. However, I was using the CLI from within the aws-cli Docker image and passing the credentials via environment variables per this article:

https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html#troubleshooting-versions

No matter what I tried, any call using aws s3api ... would fail with that same 403 error. The solution for me was to switch to a plain s3 CLI call (as opposed to an s3api call):

aws s3 ls s3://bucket/key --recursive

This change worked. The call using aws s3api did not.

GoForth