The other day, I was building a small development environment to test Laravel with S3, but as usual, I didn't want to use production credentials or hit S3 directly. After a quick search I found Minio:
Minio is an open source object storage server with Amazon S3 compatible API.
Build cloud-native applications portable across all major public and private clouds.
And I was so happy, as I'd be able to use S3 locally without worrying about requests leaving my local network. I pulled the Docker image, and then I realized that I'd have to specify the command to run along with some environment variables. That's okay for local development, but it won't work with Bitbucket Pipelines, so my tests won't run there.
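To make the problem concrete, running the stock image by hand looks something like the sketch below. The image name and `server` command come from Minio's Docker docs; the port, data path, and credential values here are just illustrative:

```shell
# Everything after the image name must be supplied manually on
# every run: the server command, the data path, and both credentials.
docker run -p 9000:9000 \
  -e MINIO_ACCESS_KEY=minio \
  -e MINIO_SECRET_KEY=miniostorage \
  minio/minio server /data
```

Bitbucket Pipelines service containers don't give you that kind of control over the command line, which is why baking it all into a custom image helps.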
You can read this tweet and the thread of replies under it.
So what I did was build a new Docker image that has everything up and running. The Dockerfile sets the default credentials:

```
ENV MINIO_ACCESS_KEY minio
ENV MINIO_SECRET_KEY miniostorage
```
which is simple and nice, but again it won't be that helpful, as I need two default buckets created automatically for me. So I altered the file to create new directories under the
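Put together, the altered Dockerfile might look like this sketch. The base image and the `server /data` command follow Minio's Docker quickstart; the bucket names (`local` and `testing`) and the `/data` path are assumptions for illustration, relying on the fact that Minio treats each top-level directory under its data path as a bucket:

```
FROM minio/minio

ENV MINIO_ACCESS_KEY minio
ENV MINIO_SECRET_KEY miniostorage

# Each directory under the data path shows up as a bucket,
# so pre-creating them gives us default buckets on startup.
# (Bucket names here are hypothetical.)
RUN mkdir -p /data/local /data/testing

# Bake the server command in so no arguments are needed at run time.
CMD ["server", "/data"]
```

With this, `docker run -p 9000:9000 <your-image>` starts a ready-to-use Minio with both buckets in place, and CI can use the image as a plain service container.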