
After all the hard work of the migration, I've just realised that if I need to serve the content using a CNAME (e.g. media.abc.com), the bucket needs to be named media.abc.com, so that media.abc.com.s3.amazonaws.com resolves to it and everything works perfectly.

I've also just realised that S3 doesn't allow renaming a bucket directly from the console.

Is there any way to work around this?

sashoalm
Carson Lee

3 Answers

274

Solution

aws s3 mb s3://[new-bucket]
aws s3 sync s3://[old-bucket] s3://[new-bucket]
aws s3 rb --force s3://[old-bucket]

Explanation

There's no rename-bucket functionality in S3 because there are technically no folders in S3, so we have to handle every file within the bucket ourselves.

The code above will (1) create a new bucket, (2) copy the files over, and (3) delete the old bucket. That's it.

If you have lots of files in your bucket and you're worried about the costs, then read on. Behind the scenes, every file in the bucket is first copied and then deleted. This should cost an insignificant amount if you have a few thousand files; otherwise check this answer to see how it would impact you.
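If you want an idea of those request charges up front, you can summarise the bucket first. A small sketch, assuming the AWS CLI is configured and using a placeholder bucket name:

```shell
# Placeholder name -- substitute your own bucket.
bucket="old-bucket"

# Summarise the bucket before syncing: the "Total Objects" count tells you
# roughly how many COPY and DELETE requests the move will make.
aws s3 ls "s3://$bucket" --recursive --summarize --human-readable
```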

Example

In the following example we create and populate the old bucket and then sync the files to the new one. Check the output of the commands to see what AWS does.

> # bucket suffix so we keep it unique
> suffix="ieXiy2"  # used `pwgen -1 -6` to get this
>
> # populate old bucket
> echo "asdf" > asdf.txt
> echo "yxcv" > yxcv.txt
> aws s3 mb s3://old-bucket-$suffix
make_bucket: old-bucket-ieXiy2
> aws s3 cp asdf.txt s3://old-bucket-$suffix/asdf.txt
upload: ./asdf.txt to s3://old-bucket-ieXiy2/asdf.txt
> aws s3 cp yxcv.txt s3://old-bucket-$suffix/yxcv.txt
upload: ./yxcv.txt to s3://old-bucket-ieXiy2/yxcv.txt
>
> # "rename" to new bucket
> aws s3 mb s3://new-bucket-$suffix
make_bucket: new-bucket-ieXiy2
> aws s3 sync s3://old-bucket-$suffix s3://new-bucket-$suffix
copy: s3://old-bucket-ieXiy2/yxcv.txt to s3://new-bucket-ieXiy2/yxcv.txt
copy: s3://old-bucket-ieXiy2/asdf.txt to s3://new-bucket-ieXiy2/asdf.txt
> aws s3 rb --force s3://old-bucket-$suffix
delete: s3://old-bucket-ieXiy2/asdf.txt
delete: s3://old-bucket-ieXiy2/yxcv.txt
remove_bucket: old-bucket-ieXiy2
duality_
  • This answer is the same as the accepted answer, except this posting gives a very helpful step-by-step example of how to do it. (The example should be shortened, though. There's no need to show creation of an example old bucket and using a suffix variable.) The explanation part of this answer doesn't satisfy me, though. It says the lack of folders in S3 is why this awkward procedure is required. Since the original question didn't mention folders, I don't understand how that explains the inability to rename S3 buckets. – Mr. Lance E Sloan Jul 12 '17 at 13:21
  • Tried this method... appeared to work... however for some weird reason I can't view any of the items (images). I can navigate via the browser through the items on the S3 dashboard... but can't view them via a URL or download them. Any ideas why? Permissions seem to be identical. Is there some special permission to look out for? – bmiskie Jan 22 '18 at 02:00
  • Note that, as far as I'm aware of, when copying objects from one bucket to another, it is currently not possible to preserve their history. That is, you cannot copy an object with all its versions together with their creation date, in case versioning was enabled in the source bucket. – Igor Akkerman Jan 28 '18 at 13:01
  • @duality_ can you add a note to your answer about permissions, and that you can copy files with `--acl bucket-owner-full-control` (like in that [answer](https://stackoverflow.com/a/43728025/1514072))? – Евгений Масленков Feb 07 '18 at 09:18
  • Note that you need to specify the region as well, otherwise your new bucket will be in US East (N. Virginia). E.g. `aws --region ap-southeast-2 s3 mb s3://new-bucket` – cobberboy Feb 08 '18 at 11:38
  • I set my region in `~/.aws/config`. – duality_ Feb 09 '18 at 11:02
  • If you have versioning on, you'll need to use lifecycle management to expire the versions... apparently: https://docs.aws.amazon.com/AmazonS3/latest/dev/delete-or-empty-bucket.html#delete-bucket-lifecycle – carlin.scott Apr 17 '18 at 19:31
  • How much will this cost? – Aleksandr Dubinsky May 30 '18 at 08:32
  • @AleksandrDubinsky as explained in the answer you will pay for copying and deleting each file in the source directory, but still this should be really cheap, well below $1. Check [this answer](https://serverfault.com/questions/349460/how-to-move-files-between-two-s3-buckets-with-minimum-cost/349813#349813) for the nitty-gritty. – duality_ May 31 '18 at 05:48
  • @AleksandrDubinsky If you have objects in glacier or deep archive, notice that there is a minimal time for each object, and if you delete the objects from the old bucket before the minimal time (90 or 180 days), you will be charged for the whole time. Therefore, it may cost you more if you move objects to a new bucket and then delete the old bucket. – Uri Oct 26 '20 at 17:01
  • Hi guys, I did it but the object ACL wasn't copied. How can I solve it? – lovecoding Nov 25 '20 at 03:36
118

I think the only way is to create a new bucket with the correct name and then copy all your objects from the old bucket to the new one. You can do it using the AWS CLI.
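As a sketch of what that looks like with the CLI (bucket names are placeholders; note that `sync` copies but does not delete the source):

```shell
# Placeholder names -- substitute your own buckets.
old="old-bucket"
new="new-bucket"

aws s3 mb "s3://$new"                # create the correctly named bucket
aws s3 sync "s3://$old" "s3://$new"  # copy every object across
aws s3 rb --force "s3://$old"        # optionally empty and remove the old bucket
```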

liferacer
  • @Tashows pavitran was asking about chaRges, not chaNges. As far as I know there are indeed charges for copying bucket items; I believe 1 GET and 1 PUT operation cost for each item. – guival Aug 29 '17 at 10:06
  • @Tashows Actually there's an entry for COPY operations on the [pricing table for S3](https://aws.amazon.com/s3/pricing/); it's the same cost as doing a PUT (so there's no extra GET cost). – guival Aug 29 '17 at 17:53
  • Note you can also cut and paste using the web console, for people who don't want to do this via the CLI. – mahemoff Jul 25 '18 at 09:52
  • @pavitran If you have objects in glacier or deep archive, notice that there is a minimal time for each object, and if you delete the objects from the old bucket before the minimal time (90 or 180 days), you will be charged for the whole time. Therefore, it may cost you more if you move objects to a new bucket and then delete the old bucket. – Uri Oct 26 '20 at 16:59
46

It seems a later version of the AWS CLI toolkit added the mv option.

$ aws --version
aws-cli/1.15.30 Python/3.6.5 Darwin/17.6.0 botocore/1.10.30

I'm renaming buckets using the following command:

aws s3 mv s3://old-bucket s3://new-bucket --recursive
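Keep in mind that `mv` neither creates the destination nor removes the emptied source bucket, so a full "rename" is still three commands. A sketch with placeholder bucket names:

```shell
# Placeholder names -- substitute your own buckets.
old="old-bucket"
new="new-bucket"

aws s3 mb "s3://$new"                          # destination must exist first
aws s3 mv "s3://$old" "s3://$new" --recursive  # copies then deletes each object
aws s3 rb "s3://$old"                          # old bucket is left empty, not deleted
```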
Richard A Quadling
  • This worked for me. However, it is important to note that the `new-bucket` must be created first before running the command. Also, the `old-bucket` will then be empty but NOT deleted. If you want to delete it following the transfer of all files, use the following command (replacing the angle-bracket placeholders): ```aws s3api delete-bucket --bucket <bucket-name> --region <region>``` – Mabyn Oct 01 '18 at 01:01
  • If the old bucket is in use anywhere, it is obviously good practice to copy the bucket, test with the new destination, and only then delete the old bucket. `aws s3 mv` actually copies and deletes, so the financial costs should be the same (I think). – Flimm Oct 31 '19 at 20:28