
I get the error AWS::S3::Errors::InvalidRequest (The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.) when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.

Script:

backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'
s3 = AWS::S3.new(
    access_key_id:     AMAZONS3['access_key_id'],
    secret_access_key: AMAZONS3['secret_access_key']
)

s3_bucket = s3.buckets['test-frankfurt']

# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"

file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)

aws-sdk (1.56.0)

How to fix it?

Thank you.

Pat Myron
Alexey
  • This answer solved my problem: http://stackoverflow.com/questions/34483795/cant-access-s3-pre-signed-url-due-to-authorization/34495454#34495454 – Bahadir Tasdemir Jul 15 '16 at 11:12

21 Answers


AWS4-HMAC-SHA256, also known as Signature Version 4 ("V4"), is one of two authentication schemes supported by S3.

All regions support V4, but US-Standard¹, and many -- but not all -- other regions, also support the other, older scheme, Signature Version 2 ("V2").

According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html ... new S3 regions deployed after January, 2014 will only support V4.

Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.
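To make the difference concrete, here is a minimal Ruby sketch of the V4 signing-key derivation (the chained HMAC-SHA256 steps described in the public AWS Signature Version 4 documentation); the credential and string-to-sign values below are made-up examples, not real ones:

```ruby
require 'openssl'

# Signature Version 4 derives a signing key by chaining HMAC-SHA256 over
# the date, region, and service -- which is why a key derived for one
# region is rejected by another.
def sigv4_signing_key(secret_key, date_stamp, region, service)
  k_date    = OpenSSL::HMAC.digest('sha256', 'AWS4' + secret_key, date_stamp)
  k_region  = OpenSSL::HMAC.digest('sha256', k_date, region)
  k_service = OpenSSL::HMAC.digest('sha256', k_region, service)
  OpenSSL::HMAC.digest('sha256', k_service, 'aws4_request')
end

# The final signature is a hex-encoded HMAC of the canonical "string to sign".
def sigv4_signature(signing_key, string_to_sign)
  OpenSSL::HMAC.hexdigest('sha256', signing_key, string_to_sign)
end

key = sigv4_signing_key('EXAMPLE-SECRET', '20141023', 'eu-central-1', 's3')
puts sigv4_signature(key, 'example-string-to-sign')
```

Note the region baked into the key scope: a V2 signature carries no such scope, which is part of why V4-only regions like Frankfurt reject it.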

http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.

I would speculate that some older versions of the SDKs might not support this option, so if the above doesn't help, you may need a newer release of the SDK you are using.
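For the v1 Ruby aws-sdk gem used in the question, a hedged sketch: later 1.x releases accept a signature-version option at construction time. The exact option name used here, :s3_signature_version, and its availability are assumptions that depend on your gem release; check its changelog.

```ruby
require 'aws-sdk'  # v1 gem

# Assumption: a 1.x release new enough to know about eu-central-1 and V4
# signing; :s3_signature_version may not exist in older releases.
s3 = AWS::S3.new(
  access_key_id:        AMAZONS3['access_key_id'],
  secret_access_key:    AMAZONS3['secret_access_key'],
  region:               'eu-central-1',
  s3_signature_version: :v4
)
```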


¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written, "Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it's only a change in naming.

Michael - sqlbot
  • This stumps s3cmd-1.5.0-0.alpha3.fc20.noarch which comes with Fedora 20. And apparently also [stumps 1.5.0-rc1](https://github.com/s3tools/s3cmd/issues/402), the latest for now. – David Tonhofer Nov 22 '14 at 13:54
  • @DavidTonhofer that seems right. It looks like the s3cmd developers don't have `AWS4-HMAC-SHA256` implemented yet: https://github.com/s3tools/s3cmd/issues/402 – Michael - sqlbot Nov 23 '14 at 22:23
  • @"Michael - sqlbot" well I switched to "awscli" for now. For those in a hurry: yum install python-pip; pip install awscli; aws configure; aws --region=eu-central-1 s3 ls s3://$BUCKET etc... – David Tonhofer Nov 24 '14 at 00:23
  • aws-sdk v2 seems to support AWS4-HMAC-SHA256 "V4" authentication nicely (related [issue](https://github.com/aws/aws-sdk-ruby/issues/664)) – Jeewes Mar 28 '15 at 14:18
  • thnx.. this is useful for me – Manish Vadher Mar 16 '18 at 09:25
  • Here's a timeline of when AWS launched various regions: https://en.wikipedia.org/wiki/Timeline_of_Amazon_Web_Services – William Turrell Nov 27 '19 at 14:06

With Node, try:

var s3 = new AWS.S3( {
    endpoint: 's3-eu-central-1.amazonaws.com',
    signatureVersion: 'v4',
    region: 'eu-central-1'
} );
morris4

You should set signatureVersion: 'v4' in the config to use the new signature version:

AWS.config.update({
    signatureVersion: 'v4'
});

This works for the JS SDK.

praveen_programmer
Denis Rizun

For people using boto3 (the Python SDK), use the code below:

import boto3
from botocore.client import Config


s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    config=Config(signature_version='s3v4')
)
Penkey Suresh
  • I get error `AuthorizationQueryParametersError: Error parsing the X-Amz-Credential parameter; the region 'us-east-1' is wrong; expecting 'us-east-2'` So I added `region_name='us-east-2'` to the above code – Aseem Apr 30 '19 at 05:28

I have been using Django, and I had to add these extra config variables to make this work (in addition to the settings mentioned in https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html):

AWS_S3_REGION_NAME = "ap-south-1"

Or, prior to boto3 version 1.4.4:

AWS_S3_REGION_NAME = "ap-south-1"

AWS_S3_SIGNATURE_VERSION = "s3v4"
Allan Veloso
SuperNova

I had a similar issue with the PHP SDK; this works:

$s3Client = S3Client::factory(array(
    'key'       => YOUR_AWS_KEY,
    'secret'    => YOUR_AWS_SECRET,
    'signature' => 'v4',
    'region'    => 'eu-central-1'
));

The important bits are the signature and the region.

Soviut
Pascal
AWS_S3_REGION_NAME = "ap-south-1"

AWS_S3_SIGNATURE_VERSION = "s3v4"

This saved me time after searching for 24 hours.

Jaimil Patel

In Java, I had to set a property:

System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true")

and add the region to the s3Client instance:

s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))
GameScripting

With boto3, this is the code:

s3 = boto3.resource('s3', region_name='eu-central-1')

or

s3_client = boto3.client('s3', region_name='eu-central-1')
Benoit

For thumbor-aws, which uses the boto config, I needed to put this in the $AWS_CONFIG_FILE:

[default]
aws_access_key_id = (your ID)
aws_secret_access_key = (your secret key)
s3 =
    signature_version = s3v4

So for anything that uses boto directly without changes, this may be useful.

higuita

Code for Flask (boto3)

Don't forget to import Config. Also, if you have your own Config class, rename it to avoid the clash.

import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    config=Config(signature_version='s3v4'),
    region_name=app.config["AWS_REGION"],
    aws_access_key_id=app.config['AWS_ACCESS_KEY'],
    aws_secret_access_key=app.config['AWS_SECRET_KEY']
)
s3.upload_fileobj(file, app.config["AWS_BUCKET_NAME"], file.filename)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': app.config["AWS_BUCKET_NAME"], 'Key': file.filename},
    ExpiresIn=10000
)
P.Gupta

For the Android SDK, setEndpoint solves the problem, although it has been deprecated:

CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
                context, "identityPoolId", Regions.US_EAST_1);
AmazonS3 s3 = new AmazonS3Client(credentialsProvider);
s3.setEndpoint("s3.us-east-2.amazonaws.com");
Ian Darke

Basically, the error occurred because I was updating from an old version of aws-sdk.

In my case, with Node.js, I had placed signatureVersion inside the params object, like this:

const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    signatureVersion: 'v4',
    region: process.env.AWS_S3_REGION
  }
});

Then I moved signatureVersion out of the params object and it worked like a charm:

const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    region: process.env.AWS_S3_REGION
  },
  signatureVersion: 'v4'
});
Salahudin Malik

Check your AWS S3 bucket's region and pass the proper region in the connection request.

In my scenario, I set RegionEndpoint.APSouth1 for Asia Pacific (Mumbai):

using (var client = new AmazonS3Client(awsAccessKeyId, awsSecretAccessKey, RegionEndpoint.APSouth1))
{
    GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
    {
        BucketName = bucketName,
        Key = keyName,
        Expires = DateTime.Now.AddMinutes(50),
    };
    urlString = client.GetPreSignedURL(request1);
}
Ravi Oza

For boto3, use this code:

import boto3
from botocore.client import Config


s3 = boto3.resource('s3',
        aws_access_key_id='xxxxxx',
        aws_secret_access_key='xxxxxx',
        region_name='us-south-1',
        config=Config(signature_version='s3v4')
        )
Pushplata

In my case, the request type was wrong: I was using GET when it must be PUT.


SuperNova's answer for django/boto3/django-storages worked for me:

AWS_S3_REGION_NAME = "ap-south-1"

Or, prior to boto3 version 1.4.4:

AWS_S3_REGION_NAME = "ap-south-1"

AWS_S3_SIGNATURE_VERSION = "s3v4"

Just add them to your settings.py and change the region code accordingly. You can find the list of AWS regions in the AWS documentation.

Rezan Moh

Here is the function I used with Python (settings and logger come from the surrounding application):

import os

import boto3

def uploadFileToS3(filePath, s3FileName):
    s3 = boto3.client('s3', 
                    endpoint_url=settings.BUCKET_ENDPOINT_URL,
                    aws_access_key_id=settings.BUCKET_ACCESS_KEY_ID,
                    aws_secret_access_key=settings.BUCKET_SECRET_KEY,
                    region_name=settings.BUCKET_REGION_NAME
                    )
    try:
        s3.upload_file(
            filePath, 
            settings.BUCKET_NAME, 
            s3FileName
            )

        # remove file from local to free up space
        os.remove(filePath)

        return True
    except Exception as e:
        logger.error('uploadFileToS3@Error')
        logger.error(e)
        return False
CodeMask

Sometimes the default version will not update. Add this setting to settings.py:

AWS_S3_SIGNATURE_VERSION = "s3v4"

Nick Roz

Try this combination:

const s3 = new AWS.S3({
  endpoint: 's3-ap-south-1.amazonaws.com',       // Bucket region
  accessKeyId: 'A-----------------U',
  secretAccessKey: 'k------ja----------------soGp',
  Bucket: 'bucket_name',
  useAccelerateEndpoint: true,
  signatureVersion: 'v4',
  region: 'ap-south-1'             // Bucket region
});
Ankit Kumar Rajpoot

I was stuck for 3 days and finally, after reading a ton of blogs and answers, I was able to configure an Amazon S3 bucket.

On the AWS Side

I am assuming you have already

  1. Created an s3-bucket
  2. Created a user in IAM

Steps

  1. Configure CORS settings

    your bucket > Permissions > CORS configuration

    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
    </CORSConfiguration>
    
    
  2. Generate A bucket policy

your bucket > permissions > bucket policy

It should be similar to this one

 {
     "Version": "2012-10-17",
     "Id": "Policy1602480700663",
     "Statement": [
         {
             "Sid": "Stmt1602480694902",
             "Effect": "Allow",
             "Principal": "*",
             "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::harshit-portfolio-bucket/*"
         }
     ]
 }
PS: Bucket policy should say `public` after this 
  3. Configure Access Control List

your bucket > Permissions > Access Control List

give public access

PS: Access Control List should say public after this

  4. Unblock public access

your bucket > permissions > Block Public Access

Edit and turn all options Off

On a side note, if you are working with Django, add the following lines to the settings.py file of your project:

#S3 BUCKETS CONFIG

AWS_ACCESS_KEY_ID = '****not to be shared*****'
AWS_SECRET_ACCESS_KEY = '*****not to be shared******'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'

AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

# look for files first in aws 
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

# In India these settings work
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"

Harshit Gangwar