
[Fargate] [request]: container files not copied to volume in 1.4.0 #863

Closed
guillaumesmo opened this issue Apr 28, 2020 · 20 comments
Labels
Fargate PV1.4 Fargate AWS Fargate Proposed Community submitted issue

Comments

@guillaumesmo

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request
tldr: Keep feature to copy files from mount point into volume on initial creation

https://cloudonaut.io/how-to-dockerize-your-php-application-for-aws-fargate/
I have a task with 2 containers: nginx and php
the php container holds the scripts, which nginx calls through php-fpm FastCGI
nginx serves those php requests, but also static files which, for performance reasons, are not served through fpm but live on a shared volume served directly by nginx

In Fargate 1.3.0, mounting an empty volume (without host configuration) in both containers magically shared files from one container with the other. I understand this may have been a "hack", though.
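For illustration, a setup like the one described might look like the following task-definition fragment (all names, images, and paths here are hypothetical, not taken from the original task):

```json
{
  "volumes": [{ "name": "static" }],
  "containerDefinitions": [
    {
      "name": "php",
      "image": "example/php-fpm-app:latest",
      "mountPoints": [{ "sourceVolume": "static", "containerPath": "/var/www/static" }]
    },
    {
      "name": "nginx",
      "image": "example/nginx:latest",
      "mountPoints": [{ "sourceVolume": "static", "containerPath": "/var/www/static" }]
    }
  ]
}
```

On platform 1.3.0, the static files baked into the php image at the mount path appeared in the shared volume and were visible to nginx; on 1.4.0 the volume starts out empty.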

Which service(s) is this request for?
Fargate

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
In Fargate 1.3.0, when mounting a volume in a container, the initial data at the mount point was copied into the new volume (which is the expected behavior according to the Docker documentation: https://docs.docker.com/storage/volumes/#populate-a-volume-using-a-container)
that is no longer the case in 1.4.0; the volume is empty
this is a breaking change that prevents one of my nginx-php applications from being migrated to 1.4.0
it's good that I discovered this before AWS sets the LATEST tag to 1.4.0

Are you currently working around this issue?
Hardcoded platform version 1.3.0 in service definition
still looking for a workaround, but it's hard to debug since I cannot open a console in Fargate tasks

I could also work around the issue by copying the files from the php image into the nginx image before pushing to ECR, but that, again, is extra work in the CD pipeline

Additional context
I suspect this is a Docker feature which was lost when Fargate 1.4.0 moved to containerd

a closely related issue was reported in another non-Docker environment:
containers/podman#3945


@guillaumesmo guillaumesmo added the Proposed Community submitted issue label Apr 28, 2020
@SaloniSonpal SaloniSonpal added the Fargate AWS Fargate label May 4, 2020
@aaithal

aaithal commented May 14, 2020

Hello, thanks for reporting this.

The behavior of preserving the contents of the directory to share data between containers was inadvertently supported in platform version 1.3 and hence not documented. We recognize that you have taken a dependency on this. However, there is an easy path forward in platform version 1.4.

On platform version 1.4, you can get the desired behavior by specifying directories you want exported with a VOLUME directive in the Dockerfile (See example below).

Please note that the behavior where contents of directories are automatically exported as volumes works in Docker using the 'volumes' feature. However, as per ECS's 'Docker-volume' documentation,

Docker volumes are only supported when using the EC2 launch type.

Fargate does support attaching bind-mounts to containers. You can use the combination of the VOLUME Dockerfile directive and either of bind-mounts or volumes-from flags to achieve what you're looking for.

For example, running a container that uses the volumes-from directive along with the container built from the Dockerfile pasted below produces the following result:

Dockerfile

FROM ubuntu:16.04
RUN mkdir /scratch && touch /scratch/file.txt
# The VOLUME directive is needed to expose the directory as a volume
# (note: a trailing # on the VOLUME line is not a Dockerfile comment)
VOLUME /scratch
CMD ["sh", "-c", "ls -l /scratch >&1"]

Output from the container that does volumes-from

total 68
-rw-r--r--   1 root   root   0 May 13  2020 file.txt
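For completeness, the volumes-from side can be sketched as a container-definitions fragment like this (container names and the exporting image tag are hypothetical):

```json
{
  "containerDefinitions": [
    {
      "name": "exporter",
      "image": "example/scratch-exporter:latest",
      "essential": false
    },
    {
      "name": "consumer",
      "image": "ubuntu:16.04",
      "essential": true,
      "volumesFrom": [{ "sourceContainer": "exporter", "readOnly": false }],
      "command": ["sh", "-c", "ls -l /scratch"]
    }
  ]
}
```

The consumer container sees /scratch, including file.txt created in the exporter's image, because volumesFrom mounts every volume declared by the source container.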

Please let us know if this helps resolve the issue that you're running into.

Thanks,
Anirudh

@guillaumesmo
Author

Thanks for your update

I have tested the volumes-from technique and I can confirm it works. However, volumes-from was deprecated in the docker-compose version 2 file format and removed in version 3, so I would rather not use it.

"You can use the combination of the VOLUME Dockerfile directive and either of bind-mounts"
that is exactly what was being done before but it stopped working in Fargate 1.4.0

maybe my mistake is that the VOLUME directive is on a top-level directory (e.g. /var) while the bind mount is on a subdirectory (e.g. /var/static)?

I'll try using the same directory and get back to you

@aaithal

aaithal commented May 18, 2020

Hello, thanks for confirming that volumes-from and VOLUME works for this use-case. I've corrected my original post to reflect that the combination of VOLUME and bind-mount will work for this scenario. As per the bind-mount documentation,

If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount. This can be beneficial, such as when you want to test a new version of your application without building a new image. However, it can also be surprising and this behavior differs from that of docker volumes.

volumes-from would be the way to do this in the current scenario.

@fischaz

fischaz commented Jun 11, 2020

we have a similar use case where we're not really interested in the 'content' of the VOLUME, but rather in its OWNER and PERMISSIONS

See the following example Dockerfile:

FROM ubuntu:16.04
RUN mkdir /scratch
RUN useradd lowuser
RUN chmod 700 /scratch
RUN chown lowuser /scratch
USER lowuser
# The VOLUME directive is needed to expose the directory as a volume
VOLUME /scratch
CMD ["sh", "-c", "echo hello world > /scratch/data.txt"]

such a container would run in FARGATE platform 1.3.0, but would fail in platform 1.4.0.

the additional trick here is that on platform 1.3.0, the Docker 'hack' also copied the owner/group/permissions of the /scratch VOLUME to the mount at run time, making the folder writable by the application running as 'lowuser'.

In 1.4.0, using an external bind mount might indeed fix the problem of mounting a writable volume at the /scratch dir within the container, but the dir would then be owned by 'root' and not writable by 'lowuser'... and I really don't want to start my container as root just to 'chown' the folder before dropping privileges (that sounds like added complexity and a security risk).

I should mention that our ECS task definition sets readonlyRootFilesystem to true, as per https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_storage
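For reference, a container definition combining readonlyRootFilesystem with a writable mount point might look like this sketch (names are hypothetical):

```json
{
  "name": "app",
  "image": "example/app:latest",
  "readonlyRootFilesystem": true,
  "mountPoints": [{ "sourceVolume": "scratch", "containerPath": "/scratch" }]
}
```

The problem described above is that on 1.4.0 the mounted /scratch was owned by root:root, so a non-root user inside the container could not write to its only writable path.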

@jitendra29

Hi, I see the same issue as reported by @fischaz. We relied on setting the owner of the shared volume via the Dockerfile, similar to the example shared by @fischaz, and the permissions were propagated to the mount directories of all containers using that shared volume. This is no longer the case in 1.4.0; the shared volume is always mounted with owner 'root'. Is this expected? If yes, is there a workaround to get the same behavior in 1.4.0?

@ErikOwen

ErikOwen commented Jul 9, 2020

I am seeing the same issue as well with Fargate 1.4.0.

@sky4git

sky4git commented Sep 25, 2020

> Thanks for your update
>
> I have tested the volumes-from technique and I can confirm it works. However, volumes-from was deprecated in the docker-compose version 2 file format and removed in version 3, so I would rather not use it.
>
> "You can use the combination of the VOLUME Dockerfile directive and either of bind-mounts"
> that is exactly what was being done before but it stopped working in Fargate 1.4.0
>
> maybe my mistake is that the VOLUME directive is on a top-level directory (eg /var) and the bind mount is on a subdirectory (eg /var/static)?
>
> I'll try using the same directory and get back to you

Can you provide example of what you have done? I am struggling with apache/php-fpm combination.

@manugupt1

manugupt1 commented Dec 16, 2020

Hello!

To run a container image as non-root, you can export the path you want as a VOLUME and run chown on it in your own Dockerfile.

As an example, consider an image with node as the base image (a Node.js environment) that wants /var/log/exported to be owned by the node user and group. If you specify a VOLUME directive for /var/log/exported, the permissions will be reflected in the task volumes.

To understand how this can be achieved, let us look at the following Dockerfile.

# A Node.js base image
FROM node:12-slim
# Create the directory and change its owner from root to node
RUN mkdir -p /var/log/exported && chown node:node /var/log/exported
# Specifying a VOLUME directive applies the permissions
VOLUME ["/var/log/exported"]
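The matching task-definition side might look like the following sketch (the volume name is hypothetical); the containerPath should match the VOLUME path from the Dockerfile:

```json
{
  "volumes": [{ "name": "logs" }],
  "containerDefinitions": [
    {
      "name": "node-app",
      "image": "example/node-app:latest",
      "user": "node",
      "mountPoints": [{ "sourceVolume": "logs", "containerPath": "/var/log/exported" }]
    }
  ]
}
```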

Please let us know if it works for you. We look forward to your feedback.

Thank you

@fischaz

fischaz commented Jan 23, 2021

Hello,

to confirm, AWS has recently (in Nov/Dec 2020) fixed that issue in Fargate 1.4.0, and my sample above now works (all my Fargate services are now upgraded to 1.4.0 without any major issue)...

One little CAVEAT of Fargate 1.4.0 (compared to 1.3.0) is when the VOLUME is not on a canonical path, but a symlinked path...

to explain, take the following example:

FROM scratch

RUN mkdir /run
RUN mkdir /var
RUN ln -s /run /var/run
RUN mkdir /var/run/squid
RUN chown squid:squid /var/run/squid

VOLUME ["/var/run/squid"]

(yes, I am missing a lot of statements like the ENTRYPOINT and such, just trying to show the file structure)... in this case, imagine the squid process (running as the squid user) trying to write a PID file at /var/run/squid/squid.pid...

in Fargate 1.3.0 (Docker), this would somehow work: the folder/volume owner/permissions were right and the process would start. in Fargate 1.4.0 (containerd), this would fail to start and the folder within the container would still be owned by root:root...

I suspect that's because the VOLUME refers not to a 'real' absolute path on disk but to a symlinked path (the real path on disk would be /run/squid if you resolve all the links)...

the solution here is simply to change the volume to be:

VOLUME ["/run/squid"]

(even if your squid configuration might still try to write to /var/run/squid and follow the symlink). Ideally, I guess we'll want to simply avoid symlinks in Docker when possible to keep it simple for everyone.

@babaMar

babaMar commented Feb 11, 2021

I'm stumbling on this very same problem and the VOLUME ... directive in Dockerfile doesn't seem to solve it.

@manugupt1

Hello,

We have recently updated our documentation with some examples on how to use bind-mounts. These examples include:

  1. Getting an empty data-volume for one or more containers.
  2. To expose a path and its contents from a Dockerfile to a container.
  3. To run a particular data-volume in a non-root environment.

The link to our updated documentation can be found here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bind-mounts.html#bind-mount-examples

If this does not solve your use-case, please reach out to AWS Support.
Thanks
Manu

@Sjeanpierre

Sjeanpierre commented Mar 31, 2021

Hi @manugupt1,

I am not sure the documentation is correct; even when the instructions from the "To expose a path and its contents in a Dockerfile to a container" section are followed to the letter, the result is an empty directory.

Our use case is very similar to the OP's: we have an Nginx container reading static assets from a Django container. We share the static directory from the Django container with the Nginx container. When we inspect the containers, the directory is correctly shared, but for some reason it has been cleared out. Is there a way to prevent the clearing?

I have an active case with support (8168103861) about this and they said to also post here...

@guillaumesmo
Author

I can confirm that it works, by setting the mount definition paths in the task definition to the same values as defined in the Dockerfile VOLUME statements.

e.g.
VOLUME /home and ContainerPath: /home/test -> does not work
VOLUME /home/test and ContainerPath: /home/test -> works! files are copied into the mounted volume

I've been able to remove the VolumesFrom workaround (which also worked)
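Concretely, the working combination can be sketched like this (image and file names are hypothetical; only the path relationship matters):

```dockerfile
FROM nginx:alpine
# Create the content that should be copied into the volume
RUN mkdir -p /home/test && echo "hello" > /home/test/index.html
# Declare the VOLUME on exactly the path used as containerPath in the task definition
VOLUME /home/test
```

With a task-definition mount point of { "sourceVolume": "web", "containerPath": "/home/test" }, the files baked into /home/test are copied into the volume; declaring VOLUME /home while mounting at /home/test does not trigger the copy.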

should I close this issue?

@SaloniSonpal

Yes, @guillaumesmo, thank you for validating. Closing it based on your confirmation.

@FANMixco

FANMixco commented May 6, 2021

I'm having the same issue and have done the same configurations, and nothing is working.

@AffiTheCreator

> I'm having the same issue and have done the same configurations and nothing is working.

same!!

@yehudacohen

@SaloniSonpal - I was unable to get this to work, even with a VOLUME directive specifying the mount path in the Dockerfile and a matching containerPath in the task definition.

@AffiTheCreator

Hi @SaloniSonpal and Community

Regarding copying data to a bind volume, I found a workaround to get data inside a bind volume; it's not pretty, but it works like a charm.

Solution

In the Dockerfile, add an ENTRYPOINT command to copy the files to the right folders, like so (I have a similar setup working with AWS Fargate):

example1

COPY ./data/with/data/to/copy  /data/with/data/to/copy
VOLUME ["/bind/volume/folder" ]
ENTRYPOINT cp -r /data/with/data/to/copy /bind/volume/folder

example2

Let's assume you want to test a deb file generated by yourself and also assume the files are copied to /opt/

COPY ./data/with/data/to/copy  /data/with/data/to/copy
VOLUME ["/opt/" ]
ENTRYPOINT apt install   /data/with/data/to/copy/installer.deb 

@yukimura1227

> I'm having the same issue and have done the same configurations and nothing is working.

I had the same problem, but it works when rewriting the Dockerfile as below. Per the Dockerfile reference, build steps that change data within a volume after it has been declared are discarded, so the VOLUME directive has to come after the files are created.

Not Working

#  "containerPath": "/var/log/exported" <- adjust to task definition
VOLUME ["/var/log/exported"]

RUN mkdir -p /var/log/exported
RUN touch /var/log/exported/examplefile

Working

#  "containerPath": "/var/log/exported" <-  adjust to task definition

RUN mkdir -p /var/log/exported
RUN touch /var/log/exported/examplefile

# Write the VOLUME command after copying the files
VOLUME ["/var/log/exported"]

@mykolaov

mykolaov commented Apr 5, 2023

Have the same problem.

I have tried adding different configuration options:

Container:
mountPoints = [{ containerPath = "/my/path/web", sourceVolume = "web" }]

Task definition:

volume {
  name = "web"

  efs_volume_configuration {
    file_system_id = "fs-xxxxxxxxx"
    root_directory = "/my/path/web"
  }
}

Fargate: 1.4.0

The only solution I see is to just copy the content to EFS...
Meanwhile, GCP and Azure have no such problem...
