
My Laravel application needs to work with files in several S3 buckets within a single session. I couldn't find a way to switch the active bucket more than once, since my .env file looks like this:

S3_KEY='MY-KEY'
S3_SECRET='MySeCret'
S3_REGION='us-east-1'
S3_BUCKET='my-first-used-bucket'

I found somewhere that I could do this:

Config::set('filesystems.disks.s3.bucket', 'another-bucket');

but it only works once. What I need is something like:

Storage::disk('s3')->put('/bucket-name/path/filename.jpg', $file, 'public');

Where /bucket-name/ could be any bucket I have already created. What can I do? Thanks a lot!

Leandro Ferreira
  • What do you mean only works once? `Config::set('foo', 'bar'); Config::set('foo', 'baz'); echo Config::get('foo'); // baz` will work... – Ben Swinburne Feb 05 '16 at 14:54
  • @BenSwinburne It works like a first configuration: if I set a bucket using Config::set, it works and the files are stored in the correct location, but if I try to change the bucket again later using the same method, the current bucket stays the same and the files are stored in the first bucket. – Leandro Ferreira Feb 05 '16 at 15:27
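(A plausible explanation for the behaviour described above: Storage resolves each disk once per request and caches the instance, so a later Config::set never reaches the already-built adapter. On Laravel 5.8+ you can purge the cached disk with forgetDisk — a sketch, assuming the default s3 disk from the question:)

```php
// Storage caches resolved disks, so a later Config::set() is ignored by the
// already-instantiated 's3' disk. Purging the cached instance makes the next
// Storage::disk('s3') call rebuild it with the new bucket.
// (Storage::forgetDisk() is available from Laravel 5.8 onward.)
Config::set('filesystems.disks.s3.bucket', 'another-bucket');
Storage::forgetDisk('s3');                            // drop the cached instance
Storage::disk('s3')->put('path/filename.jpg', $file); // uses 'another-bucket'
```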

3 Answers


You are correct that Config::set() only takes effect once per request. My guess is that this is intentional, to prevent exactly the kind of mid-request reconfiguration you are attempting in your code example.

In config/filesystems.php you can list any number of "disks". These are locations of your file repositories. It looks like so:

'disks' => [

    'local' => [
        'driver' => 'local',
        'root'   => storage_path('app'),
    ],

    'ftp' => [
        'driver'   => 'ftp',
        'host'     => 'ftp.example.com',
        'username' => 'your-username',
        'password' => 'your-password',

        // Optional FTP Settings...
        // 'port'     => 21,
        // 'root'     => '',
        // 'passive'  => true,
        // 'ssl'      => true,
        // 'timeout'  => 30,
    ],

    's3' => [
        'driver' => 's3',
        'key'    => env('S3_KEY',''),
        'secret' => env('S3_SECRET',''),
        'region' => env('S3_REGION',''),
        'bucket' => env('S3_BUCKET',''),
    ],
]

The Solution

The solution is to create a new disk for each extra bucket you want to use. Treat your buckets like different disks.

Note: The IAM user that S3_KEY belongs to needs permission to perform your required actions on each of the S3 buckets you are setting up as additional 'disks'.
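(For illustration, an IAM policy along these lines would let a single key pair read and write all three buckets; the bucket names are the placeholders used below, and the action list is only a minimal example — trim it to what your application actually needs:)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-first-used-bucket", "arn:aws:s3:::my-first-used-bucket/*",
        "arn:aws:s3:::myOtherBucketName", "arn:aws:s3:::myOtherBucketName/*",
        "arn:aws:s3:::yetAnotherBucketName", "arn:aws:s3:::yetAnotherBucketName/*"
      ]
    }
  ]
}
```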

'disks' => [

    //All your other 'disks'
    ...

    //My default bucket details.
    's3' => [
        'driver' => 's3',
        'key'    => env('S3_KEY',''),
        'secret' => env('S3_SECRET',''),
        'region' => env('S3_REGION',''),
        'bucket' => env('S3_BUCKET',''),
    ],

    's3MyOtherBucketName' => [
        'driver' => 's3',
        'key'    => env('S3_KEY',''),
        'secret' => env('S3_SECRET',''),
        'region' => env('S3_REGION',''),
        'bucket' => 'myOtherBucketName',
    ],

    's3YetAnotherBucketName' => [
        'driver' => 's3',
        'key'    => env('S3_KEY',''),
        'secret' => env('S3_SECRET',''),
        'region' => env('S3_REGION',''),
        'bucket' => 'yetAnotherBucketName',
    ],
]

Then whenever you want to access the bucket of your choice call it like so:

Storage::disk('s3')->put($fileName, $data);
Storage::disk('s3MyOtherBucketName')->put($anotherFileName, $moreData);
Storage::disk('s3YetAnotherBucketName')->put($yetAnotherFileName, $evenMoreData);
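(If you want something closer to the put('/bucket-name/path') call from the question, a thin helper can translate a bucket name into the matching disk. The helper name and the bucket-to-disk map below are hypothetical, assuming the disk names from the config above:)

```php
// Hypothetical helper: route a write to the disk configured for a given
// bucket. The map assumes the disk names from the config sketch above.
function putToBucket(string $bucket, string $path, $contents, $visibility = 'public')
{
    $disks = [
        'my-first-used-bucket' => 's3',
        'myOtherBucketName'    => 's3MyOtherBucketName',
        'yetAnotherBucketName' => 's3YetAnotherBucketName',
    ];

    if (! isset($disks[$bucket])) {
        throw new InvalidArgumentException("No disk configured for bucket {$bucket}");
    }

    return Storage::disk($disks[$bucket])->put($path, $contents, $visibility);
}

// Usage, mirroring the call shape the question asked for:
putToBucket('myOtherBucketName', 'path/filename.jpg', $file);
```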
Yoram de Langen
Samuel Hawksby-Robinson
    the most elegant way to deal with multiple s3 buckets. Thanks – Michael Nguyen Dec 08 '18 at 19:27
  • what if the bucket is the only one that's different but all of it has the same key, secret, and region? isn't this a redundant code? – PinoyStackOverflower Nov 25 '20 at 06:38
  • There are a lot of good reasons to have redundant code, especially when the repeated lines are near each other so other developers can notice the duplication. This amount of redundancy shouldn't set off any alarm bells – dankuck May 17 '21 at 21:34

If you have dynamic buckets, you can also create a new Storage instance like this:

$storage = Storage::createS3Driver([
    'driver' => 's3',
    'key'    => 'your-key',
    'secret' => 'your-secret',
    'region' => 'us-east-1',
    'bucket' => $bucketName,
]);

$storage->put('path/to/file.png', $content);
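(Note for readers on recent Laravel versions: createS3Driver was removed in later releases. As far as I know, the documented replacement from roughly Laravel 8.x onward is the on-demand Storage::build method, which accepts the same config array — a sketch:)

```php
// On recent Laravel versions (roughly 8.x and later), build an on-demand
// disk from a config array instead of calling createS3Driver():
$storage = Storage::build([
    'driver' => 's3',
    'key'    => 'your-key',
    'secret' => 'your-secret',
    'region' => 'us-east-1',
    'bucket' => $bucketName,
]);

$storage->put('path/to/file.png', $content);
```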
Mario Campa
  • So could you use this as sort of a separator for different 'customers' or 'clients' ? Would the duplicate not be stored, or would it store duplicates? How would you check against an existing s3 'bucket'? – lzoesch Nov 24 '20 at 09:47

You can add the buckets to the filesystems config like so:

'disks' => [
    's3' => [
        'bucket1' => [
            'driver' => 's3',
            'key' => env('AWS_BUCKET1_ACCESS_KEY_ID'),
            'secret' => env('AWS_BUCKET1_SECRET_ACCESS_KEY'),
            'region' => env('AWS_BUCKET1_DEFAULT_REGION'),
            'bucket' => env('AWS_BUCKET1_BUCKET'),
            'url' => env('AWS_BUCKET1_URL'),
        ],
        'bucket2' => [
            'driver' => 's3',
            'key' => env('AWS_BUCKET2_ACCESS_KEY_ID'),
            'secret' => env('AWS_BUCKET2_SECRET_ACCESS_KEY'),
            'region' => env('AWS_BUCKET2_DEFAULT_REGION'),
            'bucket' => env('AWS_BUCKET2_BUCKET'),
            'url' => env('AWS_BUCKET2_URL'),
        ],
    ],
],

Then you can access each bucket using the nested key with dot notation:

\Storage::disk('s3.bucket1')->put('path/to/file.png', $content);
Wayne Travers