I have Lambdas in one region (us-west-2) receiving 403s for S3 operations (HeadObject, PutObject, CopyObject) against objects in a bucket in a different region (ca-central-1). The policy simulator assured me that the operations should work under my policy, but clearly something else is at play. The policy is attached to a role, and I have a trust relationship between the Lambda and that role.
One attempt I made at solving the problem was to specify the region name by appending it to the bucket name.
i.e., changing:
head_object(Bucket="foo", ...)
to the (slightly) more qualified naming:
head_object(Bucket="foo.us-west-2", Key="bar")
Interestingly, this would change the 403 to a 404.
I stumbled upon this workaround (?) through guesswork, based on the required structure of the Host header and the "Working with buckets" introduction in the docs. But it's a stretch.
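For context on where the guess came from: S3's virtual-hosted-style addressing puts the bucket name into the hostname. A minimal sketch of that structure (the real endpoint resolution is done by the SDK; this just illustrates the naming):

```python
def virtual_hosted_host(bucket: str, region: str) -> str:
    """Build the virtual-hosted-style S3 host for a bucket in a region."""
    return f"{bucket}.s3.{region}.amazonaws.com"

# A bucket "foo" in us-west-2 maps to:
print(virtual_hosted_host("foo", "us-west-2"))
# -> foo.s3.us-west-2.amazonaws.com

# Appending the region to the Bucket parameter instead produces a host
# for a bucket literally named "foo.us-west-2":
print(virtual_hosted_host("foo.us-west-2", "ca-central-1"))
# -> foo.us-west-2.s3.ca-central-1.amazonaws.com
```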
I can't find a reference in the docs listing the various accepted forms of bucket names (e.g., from the simple name to a fully qualified ARN). Is the list of supported formats for specifying bucket and key names readily available?
Appending .<region> to the bucket name changes how HeadObject behaves, but PutObject and CopyObject fail with NoSuchBucket if I try the same trick. Perhaps each S3 API call has a different syntax for specifying source and destination regions?
I'm including the policy attached to my Lambda's role. Maybe there's something specific to it that hinders cross-region operations, as was suggested in the comments? My source and destination buckets do not have any bucket policy attached. The Lambda and the two buckets are owned by the same account.
The Lambda has a role with the following policy attached to it:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3Ops",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObjectTagging",
                "s3:DeleteObjectVersion",
                "s3:GetObjectVersionTagging",
                "s3:DeleteObjectVersionTagging",
                "s3:GetObjectVersionTorrent",
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:GetObjectTorrent",
                "s3:AbortMultipartUpload",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectTagging",
                "s3:GetObjectVersionForReplication",
                "s3:DeleteObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::a-specific-bucket-1/*",
                "arn:aws:s3:::a-specific-bucket-2/*",
                "arn:aws:s3:::*/*",
                "arn:aws:logs:*:*:*"
            ]
        },
        {
            "Sid": "AllowLogging",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Sid": "AllowPassingRoleToECSTaskRoles",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*"
        },
        {
            "Sid": "AllowStartingECSTasks",
            "Effect": "Allow",
            "Action": "ecs:RunTask",
            "Resource": "*"
        },
        {
            "Sid": "AllowCreatingLogGroups",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}
Note: I've used both wildcards and specific bucket names in the resource list. Originally I had only the specific names; I threw in the wildcards for testing.
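To double-check that point, IAM-style wildcard matching (where `*` matches any character sequence and `?` a single character) can be simulated to confirm that `arn:aws:s3:::*/*` covers objects in any bucket. This is an illustration of the matching semantics, not IAM's actual evaluator:

```python
import re

def iam_wildcard_match(pattern: str, value: str) -> bool:
    """Approximate IAM resource matching: '*' = any run, '?' = one char."""
    regex = "".join(
        ".*" if ch == "*" else "." if ch == "?" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, value) is not None

# Object-level ARN in any bucket is covered:
print(iam_wildcard_match("arn:aws:s3:::*/*", "arn:aws:s3:::a-specific-bucket-1/some/key"))  # True
# Bucket-level ARN (no '/') is NOT covered by this pattern:
print(iam_wildcard_match("arn:aws:s3:::*/*", "arn:aws:s3:::a-specific-bucket-1"))  # False
```

The second case is worth noting: the listed resources only cover object ARNs, so actions that are evaluated against the bucket ARN itself (such as s3:ListBucket) would not be allowed by this statement.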
Note: This is closely related to this question on S3 403s. Although the accepted answer there attributes the problem to policy adjustments, I think it's just a matter of resource-name qualification.