Using Minio server to simulate S3 locally

The other day, I was building a small development environment to test Laravel with S3. As usual, I didn't want to use production credentials or hit S3 directly, so after a quick search I found Minio:

Minio is an open source object storage server with Amazon S3 compatible API. 
Build cloud-native applications portable across all major public and private clouds.

And I was so happy: I'd be able to use S3 locally without any requests leaving my local network. I pulled the Docker image, and then I realized I'd have to specify the command to run plus a few environment variables. That's fine for local development, but it won't work with Bitbucket Pipelines, so my tests wouldn't run there.
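For reference, running the stock image means typing something like this every time (the credential values here are placeholders; newer Minio releases use MINIO_ROOT_USER/MINIO_ROOT_PASSWORD instead):

```sh
docker run -p 9000:9000 \
  -e "MINIO_ACCESS_KEY=minio_dev_key" \
  -e "MINIO_SECRET_KEY=minio_dev_secret" \
  minio/minio server /data
```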

You can read this tweet and the thread of replies under it for the details.

So what I did was build a new Docker image that has everything up and running out of the box.
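The Dockerfile was a thin wrapper around the official image, roughly like this sketch (the credential values are placeholders, not real ones):

```dockerfile
FROM minio/minio

# Hardcoded development-only credentials; never reuse real ones here
ENV MINIO_ACCESS_KEY=minio_dev_key
ENV MINIO_SECRET_KEY=minio_dev_secret

# Start the server against a fixed data directory
CMD ["server", "/data"]
```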

This is simple and nice, but still not that helpful, as I need two default buckets created automatically for me. So I altered the file to create new directories under the /data directory.
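Minio exposes each top-level directory under the data root as a bucket, so two mkdir calls are enough. A sketch (assuming the base image ships a shell for RUN; the bucket names match the develop/test pair mentioned below):

```dockerfile
FROM minio/minio

ENV MINIO_ACCESS_KEY=minio_dev_key
ENV MINIO_SECRET_KEY=minio_dev_secret

# Each directory under /data shows up as a bucket at startup
RUN mkdir -p /data/develop /data/test

CMD ["server", "/data"]
```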

And that was nice: now whenever we run the image, it will always have those two buckets. But what about the policy? How can I add a default one? After some searching, I found out that each bucket keeps its policy under the .minio.sys/buckets directory.
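That is, the policy for a bucket lives at a path like this inside the data directory (the policy.json file name is what Minio's filesystem backend used at the time; verify it against your Minio version):

```
/data/.minio.sys/buckets/<BUCKETNAME>/policy.json
```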

So I created the following policy, which makes the bucket public.
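It's a regular S3 bucket policy document. The JSON below is a sketch of the standard anonymous read-only variant; add write actions such as s3:PutObject if the bucket should be publicly writable as well:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["*"] },
      "Action": ["s3:GetBucketLocation", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<BUCKETNAME>"]
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["*"] },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::<BUCKETNAME>/*"]
    }
  ]
}
```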

PS: remember to replace <BUCKETNAME> with the bucket name you created. I have two policy files, one for develop and one for test.

I then updated the Dockerfile to ADD those two files to the image. The final result is:
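Something like this; the policy file names (develop-policy.json, test-policy.json) are illustrative, so match them to whatever you called yours:

```dockerfile
FROM minio/minio

ENV MINIO_ACCESS_KEY=minio_dev_key
ENV MINIO_SECRET_KEY=minio_dev_secret

# Create the two default buckets
RUN mkdir -p /data/develop /data/test

# Ship a public policy for each bucket alongside its metadata
ADD develop-policy.json /data/.minio.sys/buckets/develop/policy.json
ADD test-policy.json /data/.minio.sys/buckets/test/policy.json

CMD ["server", "/data"]
```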

So now, whenever I run the Docker image, I always get two public buckets ready for me to work with.
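Building and running it becomes a one-liner with no flags to remember (the image tag here is just an example):

```sh
docker build -t local-minio .
docker run -p 9000:9000 local-minio
```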
