
I'm developing a web application and currently have the following policy assigned to the AWS account it uses to access its data:

{
  "Statement": [
    {
      "Sid": "xxxxxxxxx", // don't know if this is supposed to be confidential
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::cdn.crayze.com/*"
      ]
    }
  ]
}

However I'd like to make this a bit more restrictive so that if our AWS credentials were ever compromised, an attacker could not destroy any data.

From the documentation, it looks like I want to allow just the following actions: s3:GetObject and s3:PutObject, but I specifically want the account to only be able to create objects that don't exist already - i.e. a PUT request on an existing object should be denied. Is this possible?
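For reference, the narrower policy I have in mind looks roughly like this; the sketch below attaches it with boto3 to a hypothetical IAM user named "app-user" (I haven't verified this covers everything the app needs):

```python
import json
import boto3

iam = boto3.client("iam")

# Narrower policy: only read and write objects under the CDN bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetAndPutOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::cdn.crayze.com/*"],
        }
    ],
}

# "app-user" and the policy name are placeholders for whatever the app uses.
iam.put_user_policy(
    UserName="app-user",
    PolicyName="cdn-get-put-only",
    PolicyDocument=json.dumps(policy),
)
```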

Jake Petroules

4 Answers


This is not possible in Amazon S3 as you probably envision it; however, you can work around this limitation by Using Versioning, which is a means of keeping multiple variants of an object in the same bucket and was developed with use cases like this in mind:

You might enable versioning to prevent objects from being deleted or overwritten by mistake, or to archive objects so that you can retrieve previous versions of them.

There are a couple of related FAQs as well, for example:

  • What is Versioning? - Versioning allows you to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. Once you enable Versioning for a bucket, Amazon S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.

  • Why should I use Versioning? - Amazon S3 provides customers with a highly durable storage infrastructure. Versioning offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects. This allows you to easily recover from unintended user actions and application failures. You can also use Versioning for data retention and archiving. [emphasis mine]

  • How does Versioning protect me from accidental deletion of my objects? - When a user performs a DELETE operation on an object, subsequent default requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored. Only the owner of an Amazon S3 bucket can permanently delete a version. [emphasis mine]

If you are really worried about the AWS credentials of the bucket owner (who can of course be different from the accessing users), you can even take this one step further; see How can I ensure maximum protection of my preserved versions?:

Versioning’s MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security. [...] If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to permanently delete a version of an object: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession. [...]
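For completeness, a minimal boto3 sketch of turning this on for the bucket from the question (the commented-out MFA Delete call must be made by the bucket owner's root credentials, and the MFA device serial and code shown are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning so overwrites and deletes only add new versions or
# delete markers instead of destroying data.
s3.put_bucket_versioning(
    Bucket="cdn.crayze.com",
    VersioningConfiguration={"Status": "Enabled"},
)

# Optional hardening: also require MFA to permanently delete a version.
# s3.put_bucket_versioning(
#     Bucket="cdn.crayze.com",
#     VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
#     MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
# )
```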

Steffen Opel
  • It is unfortunate that this is the only solution available for a very common and obvious backup requirement ("write new only"). If you use S3 versioning, it precludes using S3's lifecycle management policies. So now you are forced to choose between having solid backup security, or having a convenient way to remove old backups. I don't think it's too much to expect both. – Tom Wilson Feb 03 '14 at 23:45
  • I use both versioning and the lifecycle system within the same bucket quite often where it is needed - using one does not preclude the other. From the description of versioning within the S3 interface: "You can use Lifecycle rules to manage all versions of your objects as well as their associated costs. Lifecycle rules enable you to automatically archive your objects to the Glacier Storage Class and/or remove them after a specified time period." – keba Feb 11 '15 at 21:33
  • Sounds good. Is it possible for an attacker to disable versioning? Or does it not matter, because they wouldn't be able to delete the already-versioned objects anyway? – z0r Aug 06 '15 at 04:19
  • "Is it possible for an attacker to disable versioning?" If an attacker has the "PutBucketVersioning" permission, they can disable versioning, which stops the creation of versions going forward but does not delete previously created versions. But whatever has write-only access to the bucket probably shouldn't have "PutBucketVersioning" access. – Thayne Nov 20 '19 at 17:24

If it is accidental overwrites you are trying to avoid, and your business requirements allow a short window of inconsistency, you can do the rollback in a Lambda function:

  1. Make it a policy that no new objects may reuse an existing name. Most of the time this will not happen anyway. To enforce it:
  2. Listen for s3:ObjectCreated:Put events in an AWS Lambda function (see the sketch after this list).
  3. When the event fires, check whether more than one version of the object is present.
  4. If there is more than one version present, delete all but the newest one.
  5. Notify the uploader of what happened (it's useful to record the original uploader in the object's x-amz-meta-* metadata; more info here).
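A minimal sketch of such a handler with boto3 (versioning must be enabled on the bucket; note that, as the comment below points out, it is the oldest version that should be kept if the goal is to prevent overwrites, so that is what this sketch does; pagination and step 5 are omitted):

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by s3:ObjectCreated:Put; rolls back overwrites by deleting
    every version of the key except the original (oldest) upload."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Collect all versions of this exact key.
        resp = s3.list_object_versions(Bucket=bucket, Prefix=key)
        versions = [v for v in resp.get("Versions", []) if v["Key"] == key]

        if len(versions) <= 1:
            continue  # Only one version: nothing was overwritten.

        # Keep the oldest version, remove every later one.
        versions.sort(key=lambda v: v["LastModified"])
        for v in versions[1:]:
            s3.delete_object(Bucket=bucket, Key=key, VersionId=v["VersionId"])
```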
Motiejus Jakštys
  • If you are trying to prevent overwrite, shouldn't #4 be "delete all but the *oldest* one"? – TTT May 16 '19 at 20:43

You can now lock versions of objects with S3 Object Lock. It is enabled per bucket and lets you place one of two kinds of WORM locks on individual object versions.

  • "retention period" - can't be changed
  • "legal hold" - can be changed by the bucket owner at any time

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html

As mentioned by @Kijana Woodard below, this does not prevent creation of new versions of objects.
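For illustration, placing the two kinds of locks with boto3 might look like the sketch below; the bucket must have been created with Object Lock enabled, and the bucket name, key, and date are placeholders:

```python
import datetime
import boto3

s3 = boto3.client("s3")

# Retention period: the locked version cannot be permanently deleted
# until the retain-until date has passed (COMPLIANCE mode cannot be lifted early).
s3.put_object_retention(
    Bucket="cdn.crayze.com",
    Key="images/logo.png",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.datetime(2030, 1, 1, tzinfo=datetime.timezone.utc),
    },
)

# Legal hold: stays in place until someone with permission turns it off.
s3.put_object_legal_hold(
    Bucket="cdn.crayze.com",
    Key="images/logo.png",
    LegalHold={"Status": "ON"},
)
```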

Dan Pritts
  • This is a great solution to the often requested functionality from S3. – David J Eddy Jan 21 '19 at 14:54
  • You can still get new versions created. "Placing a retention period or legal hold on an object protects only the version specified in the request, and doesn't prevent new versions of the object from being created." – Kijana Woodard Feb 21 '19 at 17:11

Edit: Applicable if you came here from this question.

Object Locks only work on versioned buckets. If you cannot enable versioning for your bucket, but you can tolerate brief inconsistencies (S3 is only eventually consistent, so an object may still appear to exist while a DELETE on it is in flight; a PUT-after-DELETE may therefore fail intermittently in a tight loop, or, conversely, successive PUTs may falsely succeed), then the following solution may be appropriate.

Given the object path, read the object's Content-Length header from its metadata (a HeadObject request). Write the object only if that request succeeds and, where applicable, the reported length is greater than zero.
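A minimal boto3 sketch of that check-then-write pattern, adapted to the "create only if it does not already exist" goal of the question at the top: the PUT is attempted only when the HEAD reports the key absent, and the race window between the two calls remains, per the caveat above (bucket, key, and body are placeholders):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def put_if_absent(bucket: str, key: str, body: bytes) -> bool:
    """Write the object only if no object with this key exists yet.
    Returns True if the object was written. Not atomic: another writer
    can still slip in between the HEAD and the PUT."""
    try:
        s3.head_object(Bucket=bucket, Key=key)  # Succeeds only if the key exists.
        return False  # Refuse to overwrite an existing object.
    except ClientError as err:
        if err.response["Error"]["Code"] not in ("404", "NoSuchKey"):
            raise  # Propagate unrelated failures, e.g. access denied.

    s3.put_object(Bucket=bucket, Key=key, Body=body)
    return True
```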

toasterpic