
I have Lambdas in one region (us-west-2) receiving 403s for S3 operations (HeadObject, PutObject, CopyObject) against objects in a bucket in a different region (ca-central-1). The policy simulator assured me that the operations should work under my policy, but clearly something else is at play. The policy is attached to a role, and there is a trust relationship between the Lambda and that role.

One attempt I made at solving the problem was to specify the region name by appending it to the bucket name.

i.e., changing:

head_object(Bucket="foo", ...)

to the (slightly) more qualified naming:

head_object(Bucket="foo.us-west-2", Key="bar")

Interestingly, this would change the 403 to a 404.
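For completeness, here is roughly how I'm observing those status codes (bucket and key names are placeholders):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    s3.head_object(Bucket="foo", Key="bar")
except ClientError as e:
    # HEAD responses carry no error body, so the error code is the HTTP status
    print(e.response["Error"]["Code"])                       # "403" or "404"
    print(e.response["ResponseMetadata"]["HTTPStatusCode"])  # 403 or 404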

I stumbled upon this workaround (?) through guesswork, based on the required structure of the Host header and the introductory docs on working with buckets. But it's a stretch.

I can't find a reference in the docs listing the various accepted forms of bucket names (e.g., from the simple name to a fully qualified ARN). Is the list of supported formats for specifying bucket and key names readily available?

Appending .<region> to the bucket name changes HeadObject's behavior (the 403 becomes a 404), but PutObject and CopyObject fail with NoSuchBucket if I try the same trick. Perhaps each S3 API call has a different syntax for specifying source and destination regions? If explicit regions are the answer, I'd expect something like the sketch below.
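For instance (bucket names made up; I haven't confirmed this is the right approach):

import boto3

# one client per region, instead of encoding the region in the bucket name
src = boto3.client("s3", region_name="us-west-2")
dst = boto3.client("s3", region_name="ca-central-1")

src.head_object(Bucket="my-source-bucket", Key="bar")

# CopyObject goes to the destination bucket's endpoint; CopySource names the source
dst.copy_object(
    Bucket="my-dest-bucket",
    Key="bar",
    CopySource={"Bucket": "my-source-bucket", "Key": "bar"},
)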

I'm including the policy attached to my Lambda's role. Maybe there's something specific to it that hinders cross-region operations, as was suggested in the comments? My source and destination buckets do not have any bucket policy attached. The Lambda and the two buckets are owned by the same account.

The lambda has a role with the following policy attached to it:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3Ops",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObjectTagging",
                "s3:DeleteObjectVersion",
                "s3:GetObjectVersionTagging",
                "s3:DeleteObjectVersionTagging",
                "s3:GetObjectVersionTorrent",
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:GetObjectTorrent",
                "s3:AbortMultipartUpload",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectTagging",
                "s3:GetObjectVersionForReplication",
                "s3:DeleteObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::a-specific-bucket-1/*",
                "arn:aws:s3:::a-specific-bucket-2/*",
                "arn:aws:s3:::*/*",
                "arn:aws:logs:*:*:*"
            ]
        },
        {
            "Sid": "AllowLogging",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Sid": "AllowPassingRoleToECSTaskRoles",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*"
        },
        {
            "Sid": "AllowStartingECSTasks",
            "Effect": "Allow",
            "Action": "ecs:RunTask",
            "Resource": "*"
        },
        {
            "Sid": "AllowCreatingLogGroups",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}

Note: I've used both wildcards and specific bucket names in the list of resources. I originally had only the specific names, then threw in the wildcards for testing.

Note: This is closely related to this question on S3 403s. Even though the accepted answer there attributes the problem to policy adjustment, I think it's just a matter of resource name qualification.

  • When in doubt, I look at the low-level API reference: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html. The bucket name should be globally unique, and the API accepts `BucketName.s3.amazonaws.com`. No region is mentioned anywhere. The referenced question about 403s was because of the access policy. IAM is a global service as well. Can you provide the Lambda policy? Possibly it does not cover cross-region access. – petrch Nov 24 '18 at 11:28
  • I've added the policy, and some notes on PutObject and CopyObject which need to be handled differently. I'll consult the homologous references to the link you provide. I'm using boto3 to run the code, and I'm thinking that perhaps I need to start passing explicit `region=X` in different calls, or create separate client sessions. `CopyObject` between two regions might be tricky. – init_js Nov 25 '18 at 05:25
  • You mention a trust relationship; is the bucket in a different account? If so, there must also be a policy in that account. S3 permissions work as the intersection of both. How is it set up in the other account? – petrch Nov 25 '18 at 08:18
  • There's just one account, which owns the two buckets, the Lambda, and all the IAM pieces. Lambdas need to assume a role, and there's a trust relationship between the Lambda and that IAM role (sts:AssumeRole, https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html). Maybe I'm getting the nomenclature mixed up. – init_js Nov 27 '18 at 01:11

1 Answer


There's a multilevel answer to this.

Low-level documentation such as the Amazon S3 REST API reference is useful, but how region names are specified varies between the different language libraries.

In Boto3 (Python), for instance, bucket names can always be specified in their short form, regardless of the region they are in.

The fact that client.head_object(Bucket=short_name, Key="foo") returned a 403, while client.head_object(Bucket=short_name + ".us-west-2", Key="foo") returned a 404, is somewhat of a red herring. Boto3 performs poor validation here, in my opinion: adding the region suffix causes it to parse the parameters differently, and part of the bucket name ends up in the request path:

# short form ("my-bucket") -- 403 Forbidden
[INFO] Starting new HTTPS connection (1): my-bucket.s3.ca-central-1.amazonaws.com
[DEBUG] "HEAD /foo HTTP/1.1" 403 0

# short form + region ("my-bucket.us-west-2") -- 404 Not Found
# the bucket name has moved into the request path (wrong!)
[INFO] Starting new HTTPS connection (1): s3.us-west-2.amazonaws.com
[DEBUG] "HEAD /my-bucket.us-west-2/foo HTTP/1.1" 404 0

I've also discovered one root problem with the policy. Starting from the original resource block:

"Resource": [
            "arn:aws:s3:::a-specific-bucket-1/*",
            "arn:aws:s3:::a-specific-bucket-2/*",
            "arn:aws:s3:::*/*",
            "arn:aws:logs:*:*:*"
]

adding a bucket-level arn:aws:s3:::* entry fixes the cross-region issue. That is, I added one line to the previous block to obtain this:

"Resource": [
            "arn:aws:s3:::a-specific-bucket-1/*",
            "arn:aws:s3:::a-specific-bucket-2/*",
            "arn:aws:s3:::*/*",
            "arn:aws:s3:::*"       <--- *this line*
            "arn:aws:logs:*:*:*"
]

This modification allowed the cross-region requests to go through successfully. Playing around a bit more with the policy simulator afterwards, I noticed that the added line is also necessary for bucket-level operations such as HeadBucket and ListBucket.

Also, the first two entries in that resource block are redundant once the wildcard entries are in place; they can be omitted without any effect, producing the final version:

"Resource": [
            "arn:aws:s3:::*/*",
            "arn:aws:s3:::*"
            "arn:aws:logs:*:*:*"
]

Note: I haven't checked whether :::* includes :::*/*. It could very well be that :::* makes :::*/* redundant; my suspicion is that */* matches anything within a bucket, but not the bucket itself. The simulator sketch below would be one way to check.
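A minimal check using boto3's IAM policy simulator (the bucket name is made up, and this is a sketch rather than something I've run):

import json
import boto3

iam = boto3.client("iam")

# an Allow statement with only the bucket-level wildcard
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::*",
    }],
})

resp = iam.simulate_custom_policy(
    PolicyInputList=[policy],
    ActionNames=["s3:ListBucket", "s3:GetObject"],
    ResourceArns=[
        "arn:aws:s3:::my-bucket",      # bucket-level ARN
        "arn:aws:s3:::my-bucket/foo",  # object-level ARN
    ],
)
for result in resp["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])
    for rr in result.get("ResourceSpecificResults", []):
        print("  ", rr["EvalResourceName"], rr["EvalResourceDecision"])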

Note: I think I also jumped too quickly to the (wrong) conclusion that this was a cross-region problem, because of the status-code change. I initially did some testing against a-specific-bucket-1 and a-specific-bucket-2, which worked fine (because they were hardcoded in the policy), and it just so happened that the first new bucket (different from those two) I got errors on was in a different region. A third bucket in the same region would probably also have given me 403s.
