[Fargate] [request]: container files not copied to volume in 1.4.0 #863
Hello, thanks for reporting this. The behavior of preserving the contents of the directory to share data between containers was inadvertently supported in platform version 1.3 and hence not documented. We recognize that you have taken a dependency on this. However, there is an easy path forward in platform version 1.4: you can get the desired behavior by specifying the directories you want exported with a `VOLUME` directive in your Dockerfile. Please note that the behavior where the contents of directories are automatically exported as volumes works in Docker using the 'volumes' feature; however, per ECS's Docker-volume documentation, that feature is not available on Fargate.
Fargate does support attaching bind mounts to containers. You can use the combination of the `VOLUME` Dockerfile directive and a bind mount defined in the task definition. For example, you can run a container whose Dockerfile declares a `VOLUME` and observe the exported contents in the container's output.
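A minimal sketch of that approach (the image name and paths are illustrative, not from the original example, which was attached as images):

```dockerfile
# Sketch only: any base image works; alpine keeps it small.
FROM public.ecr.aws/docker/library/alpine:3.18

# Bake the files we want to share into the image.
RUN mkdir -p /shared && echo "hello from the image" > /shared/data.txt

# On platform version 1.4.0, declaring the path as a VOLUME tells Fargate
# to export its contents into the task volume mounted at the same path.
VOLUME ["/shared"]

# Show what ends up visible at the mount point.
CMD ["ls", "-l", "/shared"]
```

The task definition would pair this with a `volumes` entry (no `host.sourcePath`) and a `mountPoints` entry whose `containerPath` is also `/shared`, as later comments in this thread confirm.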
Thanks for your update. I have tested the volumes-from technique and I can confirm it works. However, regarding "You can use the combination of the VOLUME Dockerfile directive and either of bind-mounts": maybe my mistake is that the `containerPath` in my task definition does not match the `VOLUME` path. I'll try using the same directory and get back to you.
Hello, thanks for confirming that the workaround works.
We have a similar use-case where we're not really interested in the *content* of the VOLUME, but rather in its OWNER and PERMISSIONS. See the following example Dockerfile:
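The example Dockerfile referenced here did not survive this capture; a hedged reconstruction of the pattern being described (the `lowuser` name comes from the comment, everything else is illustrative):

```dockerfile
FROM public.ecr.aws/docker/library/alpine:3.18

# Unprivileged user the application will run as.
RUN adduser -D lowuser \
    # Create the scratch dir and hand it to lowuser at build time.
    && mkdir -p /scratch \
    && chown lowuser:lowuser /scratch

# On 1.3.0 the owner/group/permissions of /scratch carried over to the
# mounted volume at run time; that is the behavior this comment relies on.
VOLUME ["/scratch"]

USER lowuser
# The app only needs /scratch to be writable, not the root filesystem.
CMD ["sh", "-c", "touch /scratch/ok && echo /scratch is writable"]
```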
Such a container would run on FARGATE platform 1.3.0, but would fail on platform 1.4.0. The additional trick here is that on platform 1.3.0, the Docker 'hack' also copied the owner/group/permissions of the /scratch VOLUME to the mount at run time, making the folder writable by the application running as 'lowuser'. On 1.4.0, using an external bind mount might indeed fix the problem of mounting a writable volume onto the /scratch dir within the container, but the dir would then be owned by 'root' and not be writable by 'lowuser'... and I really don't want to start my container as root just to 'chown' the folder before dropping privileges (that sounds like added complexity and a security risk). I should mention that our ECS Task Definition sets readonlyRootFilesystem to true.
Hi, I see the same issue as reported by @fischaz. We relied on setting the owner of the shared volume via the Dockerfile, similar to the example shared by @fischaz, and the permissions were propagated to the mount directories of all containers using that shared volume. This is no longer the case in 1.4.0, and the shared volume is always mounted with the directory owned by 'root'. Is this expected? If yes, is there a workaround to get the same behavior in 1.4.0?
I am seeing the same issue as well with Fargate 1.4.0. |
Can you provide an example of what you have done? I am struggling with an apache/php-fpm combination.
Hello! To run a container image as non-root, you can export the path that you want as a VOLUME and run chown in your Dockerfile. To understand how this can be achieved, let us look at the following Dockerfile:
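The Dockerfile that accompanied this comment is missing from the capture; a sketch of the suggested pattern, with hypothetical names (`appuser`, `/data`):

```dockerfile
FROM public.ecr.aws/docker/library/alpine:3.18

RUN adduser -D appuser

# Create the path, seed it, and chown it BEFORE declaring it a VOLUME,
# so the exported contents carry the non-root ownership.
RUN mkdir -p /data \
    && echo "seed" > /data/seed.txt \
    && chown -R appuser:appuser /data

VOLUME ["/data"]

USER appuser
CMD ["ls", "-l", "/data"]
```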
Please let us know if it works for you. We look forward to your feedback. Thank you!
Hello, to confirm: AWS has recently (in Nov/Dec 2020) fixed that issue in FARGATE 1.4.0, and my sample above is now working (alongside all my Fargate services, now upgraded to 1.4.0 without any major issue). One little CAVEAT of Fargate 1.4.0 (compared to 1.3.0) is when the VOLUME is not on a canonical path, but on a symlinked path. To explain, take the following example:
(Yes, I am missing a lot of statements like the ENTRYPOINT and such; I'm just trying to show the file structure.) In this case, imagine the squid process (running as the squid user) trying to write a PID file under that path. In FARGATE 1.3.0 (Docker), this would somehow work: the folder/volume owner/permissions would be right and the process would start. In FARGATE 1.4.0 (containerd), this would fail to start and the folder within the container would still be owned by root:root. I suspect that's because the VOLUME is referring not to a 'real' absolute path on disk but to a symlinked path (the real path on disk would be /run/squid if you resolve all the links). The solution here is simply to change the volume to be:
(even if your squid configuration might still try to write to the symlinked path).
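The squid snippets above were lost in this capture; a hedged sketch of the two variants described, assuming a base image where /var/run is a symlink to /run:

```dockerfile
FROM public.ecr.aws/ubuntu/ubuntu:22.04

RUN apt-get update \
    && apt-get install -y --no-install-recommends squid \
    && rm -rf /var/lib/apt/lists/*

# Fails on 1.4.0 as described above: /var/run is a symlink to /run, and
# the VOLUME export does not appear to resolve symlinked paths.
# VOLUME ["/var/run/squid"]

# Works on 1.4.0: declare the canonical path instead.
VOLUME ["/run/squid"]
```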
I'm stumbling on this very same problem.
Hello, we have recently updated our documentation with some examples of how to use bind mounts.
The link to our updated documentation can be found here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bind-mounts.html#bind-mount-examples If this does not solve your use-case, please reach out to AWS Support.
Hi @manugupt1, I am not sure the documentation is correct: following the instructions from the "To expose a path and its contents in a Dockerfile to a container" section to the letter still results in an empty directory. Our use-case is very similar to the OP's: we have an Nginx container reading static assets from a Django container, and we share the static directory from the Django container with the Nginx container. When we inspect the containers, the directory is correctly shared, but for some reason it has been cleared out. Is there a way to prevent the clearing? I have an active case with support (8168103861) about this, and they said to also post here...
I can confirm that it works by setting the mount-point paths in the task definition to the same values as defined in the Dockerfile VOLUME statements. I've been able to remove the VolumesFrom workaround (which also worked). Should I close this issue?
Yes, @guillaumesmo, thank you for validating. Closing it based on your confirmation. |
I'm having the same issue; I have done the same configuration and nothing is working.
same !! |
@SaloniSonpal - I was unable to get this to work even with a VOLUME directive specifying the mount path in the Dockerfile and a matching containerPath in the task definition.
Hi @SaloniSonpal and community. Regarding copying data to a bind volume, I found a workaround to get data inside a bind volume; it is not pretty, but it works like a charm.

**Solution**

In the Dockerfile, add an ENTRYPOINT command to copy the files to the right folders, like so:

Example 1:

```dockerfile
COPY ./data/with/data/to/copy /data/with/data/to/copy
VOLUME ["/bind/volume/folder"]
ENTRYPOINT cp -r /data/with/data/to/copy /bind/volume/folder
```

Example 2: Let's assume you want to test a .deb file generated by yourself, and also assume the files are copied to /data/with/data/to/copy:

```dockerfile
COPY ./data/with/data/to/copy /data/with/data/to/copy
VOLUME ["/opt/"]
ENTRYPOINT apt install /data/with/data/to/copy/installer.deb
```
I have the same problem, but it works when rewriting the Dockerfile as below.

Not working:

```dockerfile
# "containerPath": "/var/log/exported" <- adjust to task definition
VOLUME ["/var/log/exported"]
RUN mkdir -p /var/log/exported
RUN touch /var/log/exported/examplefile
```

Working:

```dockerfile
# "containerPath": "/var/log/exported" <- adjust to task definition
RUN mkdir -p /var/log/exported
RUN touch /var/log/exported/examplefile
# Write the VOLUME command after file copying
VOLUME ["/var/log/exported"]
```
Have the same problem. I have tried to add different configuration options in both the container definition and the task definition's `volume` block.
Fargate: 1.4.0. The only solution which I see is to just copy the content to EFS...
Tell us about your request
tldr: Keep feature to copy files from mount point into volume on initial creation
https://cloudonaut.io/how-to-dockerize-your-php-application-for-aws-fargate/
I have a task with 2 containers: nginx and php
php contains scripts which are called by nginx through php-fpm FastCGI
nginx serves those php requests but also static files which, for performance reasons, are not served through fpm but are on a shared volume served directly by nginx
In Fargate 1.3.0, mounting an empty volume (without host configuration) in both containers magically shared files from one container with the other. I understand this may have been a "hack", though.
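A sketch of the PHP side of that setup (image name and paths are illustrative, not from the original task):

```dockerfile
FROM public.ecr.aws/docker/library/php:8.2-fpm-alpine

# Application code, including the static assets nginx should serve.
COPY ./public /var/www/html/public

# Exporting the directory as a VOLUME is what seeded the shared task
# volume on 1.3.0, letting the nginx container mount the same volume
# and serve the static files directly.
VOLUME ["/var/www/html/public"]
```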
Which service(s) is this request for?
Fargate
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
In Fargate 1.3.0, when mounting a volume in a container, the initial data at the mount point was copied into the new volume (which is as expected according to the Docker documentation: https://docs.docker.com/storage/volumes/#populate-a-volume-using-a-container)
That is not the case in 1.4.0 anymore; the volume is empty.
This is a breaking change that prevents one of my nginx-php applications from being migrated to 1.4.0.
It's good that I discovered this before AWS set the LATEST tag to 1.4.0.
Are you currently working around this issue?
Hardcoded platform version 1.3.0 in service definition
still looking for a workaround but hard to debug since I cannot open a console in Fargate tasks
I could also workaround the issue by copying the files from the php image into the nginx image before pushing to ECR, but that, again, is extra work in the CD pipeline
Additional context
I suspect this is a Docker feature which was lost since Fargate 1.4.0 moved to Containerd
this issue is very related in another non-Docker environment:
containers/podman#3945