
Volume on Host / Containers not reflecting Bucket Contents #42

Closed
logicalor opened this issue Jan 21, 2023 · 10 comments


@logicalor

OS: Ubuntu 22.04
Docker Version: 20.10.22

Sample Docker-Compose:

version: "3.6"

services:
  php-fpm:
    container_name: "php-fpm"
    build:
      context: ./services/php-fpm
      dockerfile: Dockerfile
    volumes:
      ...
      - $VOLUME_S3FS_PUBLIC:/var/www/html/sites/default/files
      ...
    depends_on:
      - s3fs-public
  ...
  s3fs-public:
    container_name: "s3fs-public"
    image: efrecon/s3fs:1.91
    environment:
      AWS_S3_BUCKET: $MEDIA_S3_BUCKET_PUBLIC
      AWS_S3_ACCESS_KEY_ID: $MEDIA_S3_KEY
      AWS_S3_SECRET_ACCESS_KEY: $MEDIA_S3_SECRET
      AWS_S3_MOUNT: '/opt/s3fs/bucket'
      S3FS_DEBUG: 1
      S3FS_ARGS: ''
    devices:
      - /dev/fuse
    cap_add:
      - SYS_ADMIN
    security_opt:
      - "apparmor:unconfined"
    volumes:
      - '${VOLUME_S3FS_PUBLIC}:/opt/s3fs/bucket:rshared'

The issue I'm having is that when I run docker compose up against the above config (some other containers and env vars omitted), the s3fs volumes don't appear to be shared with the host or with other containers.

This is the output from a docker compose log for the s3fs-public container:

s3fs-public   | Mounting bucket dev-website-public onto /opt/s3fs/bucket, owner: 0:0
s3fs-public   | FUSE library version: 2.9.9
s3fs-public   | nullpath_ok: 0
s3fs-public   | nopath: 0
s3fs-public   | utime_omit_ok: 1
s3fs-public   | unique: 2, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
s3fs-public   | INIT: 7.34
s3fs-public   | flags=0x33fffffb
s3fs-public   | max_readahead=0x00020000
s3fs-public   |    INIT: 7.19
s3fs-public   |    flags=0x00000039
s3fs-public   |    max_readahead=0x00020000
s3fs-public   |    max_write=0x00020000
s3fs-public   |    max_background=0
s3fs-public   |    congestion_threshold=0
s3fs-public   |    unique: 2, success, outsize: 40

If I docker exec s3fs-public sh and navigate to ./bucket I can see the contents of the remote s3 bucket. But if I am on the host and navigate to $VOLUME_S3FS_PUBLIC (which the container creates - in this case /media/s3fs-public) then I can't see the contents of the remote s3 bucket. Similarly, if I docker exec php-fpm bash and navigate to /var/www/html/sites/default/files I can't see the contents of the remote s3 bucket either.

I have also tried cloning this repo, setting my S3 credentials in a .env, and running docker compose up against the untouched docker-compose.yml file, but am getting the same result - i.e. can't see the remote s3 files in ./bucket.

Is there additional configuration I need to make in order for the mounted s3fs to be shared with the host and other containers?
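As a quick diagnostic for this kind of setup (the host path below is the one from the config above; adjust it to your own), you can ask the kernel what propagation mode the host mount point has, and whether the FUSE mount reached the host at all:

```shell
# Show the mount target and its propagation mode (util-linux findmnt).
# /media/s3fs-public is the host path from the setup above; adjust as needed.
findmnt -o TARGET,PROPAGATION /media/s3fs-public

# Also check whether the s3fs FUSE mount is visible on the host at all:
mount | grep s3fs
```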

Thanks.

@efrecon
Owner

efrecon commented Feb 3, 2023

Are you running this against something other than AWS? In that case, you would need to specify the URL at which to contact the S3 API, e.g. https://s3.yourprovider.com or similar. Let me know if that helps.
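For reference, with this image that configuration goes in the environment; a minimal sketch for a non-AWS, S3-compatible provider (the endpoint URL is a placeholder, and both variables appear later in this thread):

```env
# Endpoint of the S3-compatible API (placeholder URL)
AWS_S3_URL=https://s3.yourprovider.com
# Many non-AWS providers also require path-style requests
S3FS_ARGS=use_path_request_style
```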

@nrukavkov

nrukavkov commented Apr 27, 2023

@efrecon I had the same problem. Inside the container the files exist, but the volume shows nothing. I tried both named Docker volumes and bind-mounting to the host machine; the behaviour is the same.

.env:

AWS_S3_BUCKET=MYBUCKET
AWS_S3_ACCESS_KEY_ID=MYID
AWS_S3_SECRET_ACCESS_KEY=MYKEY
AWS_DEFAULT_REGION=MYREGION
AWS_S3_URL=https://s3.provider
S3FS_ARGS=use_path_request_style

docker-compose.yml (excerpt):

  s3fs:
    cap_add:
    - SYS_ADMIN
    security_opt:
      - apparmor:unconfined
    privileged: true
    image: efrecon/s3fs:1.91
    restart: unless-stopped
    env_file: .env
    volumes:
      - s3data:/opt/s3fs/bucket:rshared
  test:
    image: bash:latest
    restart: unless-stopped
    depends_on:
      - s3fs
    # Just so this container won't die and you can test the bucket from within
    command: sleep infinity
    volumes:
      - s3data:/data:rshared
volumes:
  s3data:

@nrukavkov

nrukavkov commented Apr 27, 2023

I did an experiment. I opened a shell in the s3fs container and ran umount /opt/s3fs/bucket. Then I created a folder, and that folder showed up in the other container.

Then I deleted the 'test' folder and ran tini -g -- docker-entrypoint.sh again, and the second container once again showed nothing.

I also built a new image from ubuntu:latest, and it has the same problem.

@truesteps

@logicalor Heya! Did you manage to figure out a fix for this? I have the same issue: when I exec into the container and modify the contents of the bucket folder, it works, but not when I mount it to the host and then mount things from the host into the other containers.

@truesteps

It seems as if the volume from the container is not getting mapped to the host from what I'm seeing, because all the other services mount properly to the host except s3fs.

@truesteps

@logicalor I figured out the issue with the help of a friend and some trial and error. There seems to be a bug in the docker compose plugin (docker/compose#9380): when you run docker compose up, the mount propagation doesn't get applied no matter what you put in docker-compose.yml. You can verify this by running docker inspect {container_name} and checking Propagation under the Mounts section.
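Concretely, that docker inspect check can be done with a Go template (container name s3fs-public is taken from the first post in this thread; substitute your own):

```shell
# Print each mount's destination and propagation mode for the container.
# With the compose-plugin bug, Propagation shows up empty or "rprivate"
# instead of the expected "rshared".
docker inspect -f \
  '{{range .Mounts}}{{.Destination}} -> {{.Propagation}}{{"\n"}}{{end}}' \
  s3fs-public
```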

Fixed by uninstalling docker-compose-plugin and installing the standalone docker-compose.
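For anyone landing here, a sketch of that swap on a Debian/Ubuntu host (the version number is a placeholder; check the docker/compose releases page for the current one):

```shell
# Remove the compose plugin installed via apt
sudo apt-get remove docker-compose-plugin

# Install the standalone binary (replace v2.20.2 with the current release)
sudo curl -L \
  "https://github.com/docker/compose/releases/download/v2.20.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Note: the standalone binary is invoked as "docker-compose" (with a hyphen),
# not "docker compose".
docker-compose version
```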

@TheNexter

@truesteps
Copy link

@TheNexter thanks :) I already figured it out though; the issue was me using the compose plugin for Docker instead of the standalone docker-compose. Unfortunately, setting propagation just plain didn't work with the compose plugin.

@tab10

tab10 commented Aug 4, 2023

Just some tips from my experience:

  • Mount propagation doesn't work on Docker Desktop (on Windows machines, I believe). The project will still build and run fine, but the mount will only be accessible within the s3fs container.
  • If you're deploying this image as a container in a Compose project to a cloud service, be careful! Using AWS EC2 and Elastic Beanstalk, I could not for the life of me get the bucket to propagate into other containers. EC2 and services that involve deploys have their own procedures for building and running containers, which involve many intermediate steps that might not be documented.

Possible workarounds:

  • Build a single Dockerfile (no point in using Compose) with a multi-stage build that pulls FROM this docker-s3fs-client image on Docker Hub, then copy the needed files into your container. You will need entrypoint.sh and its related files, so I'd copy them into your root folder and add the run command as CMD. This results in a longer first build, but BuildKit will cache it for future builds. A single-container approach works on AWS and when built locally on machines running Docker Desktop.
  • aws s3 sync CLI command
  • AWS DataSync service
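The aws s3 sync workaround can be as simple as a periodic one-liner (the bucket name and target path below are illustrative, borrowed from the setup in the first post):

```shell
# Mirror the bucket into the directory the app reads from.
aws s3 sync s3://my-public-bucket /var/www/html/sites/default/files

# Add --delete to also remove local files that no longer exist in the bucket.
aws s3 sync s3://my-public-bucket /var/www/html/sites/default/files --delete
```

Unlike the FUSE mount, this is an eventually-consistent copy, so it only suits read-mostly content such as public media files.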

Thanks to the package contributors for your efforts!

@efrecon
Owner

efrecon commented Aug 11, 2023

Thanks for figuring this out. I have added a mention of this issue in the main README.

@efrecon efrecon closed this as completed Aug 11, 2023