
I'm trying to copy a file from a private S3 bucket to my EC2 instance via the CLI. The EC2 instance is in the same region as the bucket and has the following IAM role attached (AmazonS3FullAccess):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

But the command aws s3 cp s3://[BUCKETNAME]/index.html fails with the error:

A client error (400) occurred when calling the HeadObject operation: Bad Request Completed 1 part(s) with ... file(s) remaining.

I already double checked the spelling of the bucket name...

Anthony Neace
shootoke
  • Is that the full cp command line you were running? cp also needs an argument for the local path name. – Karen B May 28 '16 at 02:59
  • sorry i forgot this part in the posting, but it was there: ' aws s3 cp s3://[bucketname]/index.html /var/www/html/ ' – shootoke May 28 '16 at 07:20
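One way to rule out a region mismatch is to ask S3 where the bucket actually lives with `aws s3api get-bucket-location --bucket [BUCKETNAME]`. Note that the returned `LocationConstraint` has two documented legacy quirks: it is `null` for us-east-1 and `"EU"` for eu-west-1, so it needs normalising before you can feed it to `--region`. A minimal sketch (the helper name is just for illustration):

```python
# Sketch: normalise the LocationConstraint value returned by
#   aws s3api get-bucket-location --bucket [BUCKETNAME]
# into a region name usable with `aws s3 cp ... --region`.
def location_to_region(location_constraint):
    if location_constraint is None:    # legacy response for us-east-1
        return "us-east-1"
    if location_constraint == "EU":    # legacy alias for eu-west-1
        return "eu-west-1"
    return location_constraint

print(location_to_region(None))            # us-east-1
print(location_to_region("EU"))            # eu-west-1
print(location_to_region("eu-central-1"))  # eu-central-1
```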

5 Answers


I added the --region option to the command and everything is working now:

aws s3 cp s3://[BUCKETNAME]/ . --recursive --region [REGION]
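If you would rather not pass --region on every call, the same region can be set once as the CLI default in the config file (a sketch; substitute your own region for the example value):

```ini
# ~/.aws/config
[default]
region = eu-central-1
```

With this in place, plain `aws s3 cp` picks up the region automatically.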
shootoke

This error also happens when your session has expired, if you are using temporary security credentials with an assumed role. It is not the Forbidden or unknown-ID error you would expect.
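A quick way to check whether this is your problem is to compare the credentials' `Expiration` timestamp (returned by `aws sts assume-role`) with the current time. A minimal sketch, with a hypothetical helper name and a hard-coded timestamp for illustration:

```python
from datetime import datetime, timezone

# Sketch: decide whether a set of temporary credentials has expired.
# `expiration` is the Expiration field from `aws sts assume-role`
# (ISO 8601 UTC, e.g. "2016-05-28T03:00:00Z").
def credentials_expired(expiration, now=None):
    now = now or datetime.now(timezone.utc)
    expires = datetime.strptime(expiration, "%Y-%m-%dT%H:%M:%SZ")
    expires = expires.replace(tzinfo=timezone.utc)
    return now >= expires

# With an expiry in the past, the session is stale and S3 calls can
# fail with 400 Bad Request rather than 403.
print(credentials_expired("2016-05-28T03:00:00Z"))  # True (timestamp is in the past)
```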


My problem was fixed as soon as I upgraded to the latest version of the AWS CLI. Here is how to upgrade: pip install --upgrade --user awscli

Specifying the region wasn't helpful in my case.

Tushar Goswami

The --region parameter did not work for me.

I tried using --profile, and it worked all fine.

aws s3api head-bucket --bucket xxxx --profile dev-profile
Jameson

When I changed the same policy from an inline policy to a managed policy, it worked. See my answer at https://stackoverflow.com/a/37532132/4126114

Shadi
  • My bad. I meant "inline" to "managed". Will edit. Blame it on programming late at night – Shadi May 31 '16 at 05:53
  • What's your output for `aws --version`? Mine is `aws-cli/1.10.16 Python/2.7.3 Linux/3.13.0-86-generic botocore/1.4.7` – Shadi May 31 '16 at 11:19
  • Mine is: aws-cli/1.10.8 Python/2.7.10 Linux/4.4.10-22.54.amzn1.x86_64 botocore/1.3.30 – shootoke May 31 '16 at 13:35
  • if i run the cp command with the option --region it works. Strange because the bucket as well as the ec2 instance remain to the same region. I'm confused... – shootoke Jun 01 '16 at 19:46
  • Looks like your awscli configuration is missing the `AWS_DEFAULT_REGION` parameter. Check [here](https://github.com/aws/aws-cli#accessing-services-with-global-endpoints). You can try using `aws configure` as described [here](https://github.com/aws/aws-cli#getting-started) and [here](https://github.com/aws/aws-cli#examples). Otherwise you can just upgrade your awscli with `pip install --upgrade awscli` to the most recent version. – Shadi Jun 02 '16 at 05:35
  • i added the option --region in the user data script so everything is just working fine now. – shootoke Jun 02 '16 at 15:01