s3 bucket deployment time outs with many files #7571
I saw that we can configure memory on the deployment resource to increase the Lambda's performance. Is that the way to go?
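The memory knob mentioned above can be set via the `memory_limit` parameter of `BucketDeployment`. A minimal sketch, assuming CDK v1-style Python imports; the stack name, bucket, and asset path are hypothetical, and note that more memory speeds up the copy but the 15-minute Lambda timeout remains a hard ceiling:

```python
from aws_cdk import core as cdk
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_s3_deployment as s3deploy


class SiteStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "SiteBucket")

        # memory_limit (MiB) raises the deployment handler Lambda's memory;
        # on Lambda, more memory also means more CPU and network throughput,
        # so the copy runs faster -- but it cannot exceed the 15-minute cap.
        s3deploy.BucketDeployment(
            self, "DeploySite",
            sources=[s3deploy.Source.asset("./site-contents")],
            destination_bucket=bucket,
            memory_limit=1024,
        )
```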
I'm facing the same; it logs copying files until this line
@jeshan I just tried increasing memory to
@jeshan Wrote:

I am not versed enough in how the CLI optimizations work, but it's possible. In any case, we already have a feature request for toggling the

Feel free to 👍 it!

@armandosoriano Wrote:

Don't think this is related to Python per se. In any case, the 15-minute time limit is a hard Lambda limit, so no matter how many optimizations or configurations we provide, if the

I am leaning towards closing this issue as it seems to relate to already existing issues. Thoughts?
Before closing, let's share ideas. That's beyond the scope of #953. Instead of using
Yes, but at least we'll be able to make progress in the right direction by quite a bit.
@iliapolo our deployment weighs less than 300 MB, so I'm not sure it is really related to that limit. I'm just guessing, but it looks like the Python process gets stuck during the copy-files operation while it still reports a 5-8 MB/s speed, which is indeed faulty. Maybe the copy itself is fine, but for some reason the process is not ending. Without more knowledge about how that is managed, it is difficult to know. I hope these guesses can at least help your investigation.
@armandosoriano @jeshan thanks for the feedback. Keeping this on our docket as we discuss what the best course of action here would be.
Closing in favor of #7950
Deploying a website with `BucketDeployment` that has thousands of files times out, as it needs to run longer than the Lambda function timeout of 15 minutes.

Reproduction Steps
Error Log
The serverless function
The CloudFormation stack
Environment
Other
Suggestions
I think it's because `s3 sync` checks thousands of times whether the files exist.
aws-cdk/packages/@aws-cdk/aws-s3-deployment/lambda/src/index.py
Lines 133 to 134 in b30add8
Use the `cp` command instead, or: run `aws s3 cp --recursive` when creating the deployment and continue using `sync` for subsequent updates. That way, at least the CDK stack will stabilise (but it postpones dealing with the issue).

This is 🐛 Bug Report
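The suggested strategy (bulk `cp --recursive` on the first deployment, `sync` afterwards) can be sketched as a small helper. A sketch only: the function name and the first-deploy flag are hypothetical; the AWS CLI invocations themselves come from the suggestion above.

```python
from typing import List


def build_copy_command(src: str, dest: str, first_deploy: bool) -> List[str]:
    """Pick the AWS CLI invocation for deploying site assets.

    On the first deployment the destination bucket is empty, so the
    per-file existence/metadata checks done by `aws s3 sync` are pure
    overhead; `aws s3 cp --recursive` just uploads everything. On later
    updates, `sync` transfers only the files that changed.
    """
    if first_deploy:
        return ["aws", "s3", "cp", "--recursive", src, dest]
    return ["aws", "s3", "sync", src, dest]
```

For example, `build_copy_command("./site", "s3://my-bucket", True)` yields the recursive-copy invocation, while passing `False` falls back to `sync`.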