
Due to certain enterprise limitations, I'm only able to access AWS through the command line, and I cannot set environment variables. I was wondering if there is any way to pass in my keys with the command in a manner like this:

aws s3 cp <file> s3://testbucket --aws-access-key <accesskey> --aws-secret-key <secretkey>

I noticed that a similar question exists, although its answers are either not applicable to my situation or reference the `ec2din` command, which I could not translate into copying files to S3. When I run the command above, I just get the response `Unknown options: --aws-access-key,--aws-secret-key`.

Dan Mandel
  • Why is it OK to use the CLI, but not the SDK? They both essentially do the same thing (make HTTPS calls to AWS endpoints). – jarmod Jul 31 '17 at 20:37

4 Answers


Try this:

AWS_ACCESS_KEY_ID=AAAA AWS_SECRET_ACCESS_KEY=BBB aws s3 cp <file> s3://testbucket

This will set the keys for this command only. If you need the keys for the session, export them like below:

export AWS_ACCESS_KEY_ID=AAAA ; export AWS_SECRET_ACCESS_KEY=BBB ; aws s3 cp <file> s3://testbucket
krishna_mee2004
  • 2
    @DanMandel If you want the command to be unavailable in bash history, just add a space before it. – gargsms Jul 31 '17 at 19:57
  • I receive the error `java.io.IOException: Cannot run program "AWS_ACCESS_KEY_ID=": error=2, No such file or directory` and `java.io.IOException: Cannot run program "export": error=2, No such file or directory`. I think this is because I am running these commands through a scala program, which is why I am unable to set environment variables by exporting. – Dan Mandel Jul 31 '17 at 20:03
  • 1
    The "program" you want to run, then, is another shell to interpret the command as intended: `/bin/bash -c 'AWS_ACCESS_KEY_ID=AAAA AWS_SECRET_ACCESS_KEY=BBB aws s3 cp s3://testbucket'` Note the `'` quotes, they are needed. – Michael - sqlbot Aug 01 '17 at 02:18
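Since the commands here are being launched from a JVM program rather than a shell, an alternative to wrapping everything in `/bin/bash -c` is to set the variables through `ProcessBuilder.environment()`. A minimal sketch in Java (the file name, bucket, and key values are placeholders):

```java
import java.util.Map;

public class AwsCopy {
    // Build a ProcessBuilder that runs the AWS CLI with credentials
    // injected as environment variables (no shell export needed).
    public static ProcessBuilder awsCopy(String file, String bucket,
                                         String accessKey, String secretKey) {
        ProcessBuilder pb = new ProcessBuilder(
            "aws", "s3", "cp", file, "s3://" + bucket);
        Map<String, String> env = pb.environment();
        env.put("AWS_ACCESS_KEY_ID", accessKey);
        env.put("AWS_SECRET_ACCESS_KEY", secretKey);
        pb.inheritIO();  // forward the CLI's output to this process's console
        return pb;
    }

    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = awsCopy("report.csv", "testbucket", "AAAA", "BBB");
        // pb.start().waitFor();  // uncomment to actually run the upload
        System.out.println(String.join(" ", pb.command()));
    }
}
```

The child process inherits the parent's environment plus the two entries added above, so the AWS CLI picks the keys up exactly as it would from an exported shell variable.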

Are you allowed to save the AK/SK to a file? (very much like an SSH private key would be saved in ~/.ssh/id_rsa for example)

If so, you can run the command aws configure, which will prompt for your AK and SK (plus default region and default output format). The credentials will be saved to ~/.aws/credentials, and the region and output (if you chose to specify them) will be saved to ~/.aws/config.

If you are not allowed to write your credentials to a file, be careful with commands that pass credentials on the command line - those credentials might end up in a "command history" file! In some shells (e.g. bash with `HISTCONTROL=ignorespace`), you can configure things so that adding a space in front of a command prevents it from being written to the history file.
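For reference, the files that `aws configure` writes look roughly like this (the `default` profile name is the CLI's standard, but the key values, region, and output format below are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```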

Bruno Reis

Add environment variables in the project settings, as described at https://circleci.com/docs/2.0/env-vars/


Then configure `.circleci/config.yml`:

# deploy to aws s3
  deploy:
    docker: 
      - image: cibuilds/aws:1.15.73
    # Project-level env vars (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
    # are injected into the job automatically, so no mapping is needed here.
    steps:
      - attach_workspace:
          at: ./workspace
      - run: 
          name: Deploy to S3 if tests pass and branch is develop
          command: aws s3 sync workspace/public s3://your.bucket/ --delete
Anja Ishmukhametova

You can set the keys via `aws configure` itself (note that this writes them to `~/.aws/credentials`, just like the interactive prompt does); below is an example:

aws configure set aws_access_key_id <accessKeyID>
aws configure set aws_secret_access_key <secretAccessKey>

then run your command:

aws <command> help
aws <command> <subcommand> help

If you want to have all of them in one line:

aws configure set aws_access_key_id "xxx" && \
aws configure set aws_secret_access_key "yyy" && \
aws s3 ls 
Muhammad Soliman