Feature request: make it possible to keep docker container warm #239
Comments
+1 Python container takes too long to start for simple debugging...
+1. This currently makes local automated testing painful at best. Thanks for the continued work on this project!
Have there been any eyes on this? The benefit would be so huge.
+1
+1
+1
+1
+1, even a simple hello world Java 8 lambda takes 3/4 seconds for each request!
My sketch proposal to make warm containers work while maintaining all the existing nice hot reload/memory-usage functionality around them: currently, the container is simply run with the handler argument and the event passed in via an environment variable. The container's logs are then piped to the console stdout/stderr, and SAM just records how much memory is used. Instead, we can start the container with
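(The comment is cut off above; one plausible reading of the proposal, sketched with the docker-py SDK. The image, keep-alive entrypoint, code path, and handler module are all illustrative assumptions, not SAM CLI internals.)

```python
import docker

client = docker.from_env()

# Start the container once with a keep-alive process instead of running the
# handler directly and letting the container exit after one invocation.
container = client.containers.run(
    "lambci/lambda:python3.8",               # assumed runtime image
    entrypoint=["tail", "-f", "/dev/null"],  # placeholder long-running process
    volumes={"/path/to/code": {"bind": "/var/task", "mode": "ro"}},
    detach=True,
)

def invoke(event_json: str) -> bytes:
    # Each request re-runs only the handler inside the already-warm container.
    # Assumes `python` is on the image's PATH and the code exposes app.handler.
    exit_code, output = container.exec_run(
        ["python", "-c",
         "import sys, json, app; print(app.handler(json.loads(sys.argv[1]), None))",
         event_json],
    )
    return output
```

Hot reload could still work under this scheme, since the code directory stays volume-mounted; only the process inside the container is reused.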
+1
+1 Very interested in this feature
+1 Yes please!
+1, throwing my hat in the ring on this too
As a note: please use the reaction feature on the top comment. We do look at issues sorted by thumbs-up (as well as other reactions). Commenting +1 does no good for that and adds noise to the issue.
@jfuss I agree (and had done this). Any feedback from your team would be helpful here, though. The closest thing we had to knowing if this is on your radar (before your comment) was duplicate issue consolidation and labeling.
+1, this would be very beneficial for people using Java + Spring Boot.
+1, around 1s for the Go case
I did an experiment with container reuse. This is just with a lambda in Python; I'm developing on Ubuntu 16.04. In summary, spinning up the docker container only takes an extra second, so to me it did not seem worth building the feature for container reuse. Link to my code: https://github.com/kevanpng/aws-sam-local. For a fixed query, both my colleague and I see a 4s invocation time on sam local (his is a Windows machine). My colleague on Mac tried the same query with lambda reuse and the profile flag, and he still saw 11-14 seconds per run. Maybe docker is slow on Mac?
One second is a world of difference when building an API where you expect to serve more than one request. I think it's well worth the feature.
@kevanpng Hey, I was looking through your code to understand what exactly you did. So basically, you create the container once with a fixed name, run the function, and on the next invocation look for the container with the same name and simply reuse it. I am super surprised that Docker container creation makes this big of a difference. We can certainly look deeper into this if it is becoming a usability blocker.
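If I read it right, the reuse boils down to something like this (docker-py sketch; the container name and image are assumptions, and kevanpng's fork may differ in detail):

```python
import docker
from docker.errors import NotFound

client = docker.from_env()
CONTAINER_NAME = "sam-warm-python3.8"  # hypothetical fixed name

def get_or_create_container():
    try:
        container = client.containers.get(CONTAINER_NAME)
        if container.status != "running":
            container.start()  # restart a stopped container instead of recreating it
        return container       # warm path: no create/pull cost
    except NotFound:
        # Cold path: pay the creation cost exactly once.
        return client.containers.run(
            "lambci/lambda:python3.8",
            entrypoint=["tail", "-f", "/dev/null"],
            name=CONTAINER_NAME,
            detach=True,
        )
```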
@sanathkr Thanks for looking at this. FWIW, it's a huge usability blocker for me: a vanilla Flask version of the app responds in well under a second, and the sam local equivalent takes about 6 seconds per request.
@scoates Thanks for the comparison. It's not apples-to-apples to compare vanilla Flask to a Docker-based app, but the 6 second duration with SAM CLI is definitely not what I would expect.
Thinking ahead: |
I did some more profiling by crudely commenting out parts of the codebase. This was not run multiple times, so the numbers are ballpark estimates. Platform: MacOSX. WARNING: very crude measurements.

Based on the above numbers, I arrived at a rough estimate for each step of the invoke path by assuming:

Total execution = SAM CLI overhead + Docker image pull + Create container + Run container + Run function

Here is how much each step took: the SAM CLI overhead was 0.045 seconds, and running the container took about 0.85 seconds. The most interesting part is the Create vs Run container durations: Run is 5x Create, so we are better off optimizing the Run duration. If we were to do a warm start, we would be saving some fraction of the 0.85 seconds it takes to run the container. We should keep the runtime process up and running inside the container and re-run just the function in place; otherwise we aren't going to save much.
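For anyone who wants to reproduce the split, a crude harness along these lines separates the create and start costs (illustrative only, not the code path SAM CLI itself uses):

```python
import time
import docker

client = docker.from_env()

t0 = time.perf_counter()
container = client.containers.create(
    "lambci/lambda:python3.8",               # assumed image, already pulled
    entrypoint=["tail", "-f", "/dev/null"],  # keep-alive so we time startup only
)
t1 = time.perf_counter()

container.start()
t2 = time.perf_counter()

print(f"create: {t1 - t0:.3f}s  start: {t2 - t1:.3f}s")
container.remove(force=True)  # clean up the experiment
```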
Hi. Sorry for the late reply. I was traveling last week and forgot to get to this when I returned. I agree absolutely that apigw and flask aren't apples-to-apples, and crude measurements are definitely where we're at right now. With
The
I acknowledge that I'm not sure how to measure much deeper than that. More info:
I also agree that if I can get this down to sub-1s request times, it's probably usable. 5s+ is still painful, though. (Edit: adding this in case anyone looking for Zappa info stumbles on this thread: I'm using an experimental fork of the Zappa handler runtime. This doesn't really apply to Zappa proper, at least not right now.)
When I hardcode the environment variable in my template.yaml like this:
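(The snippet didn't survive the formatting here; hardcoding a variable in a SAM template typically looks like the following, where the logical ID, handler, and variable name/value are placeholders:)

```yaml
Resources:
  MyFunction:                            # placeholder logical ID
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler               # placeholder handler
      Runtime: python3.8
      Environment:
        Variables:
          MY_VARIABLE: hardcoded-value   # placeholder name and value
```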
The whole thing crashes, giving me this error message:
Are there any updates or is there a timeline on this? This is the single biggest blocker for us (and I can imagine for many others) to do more with AWS Lambda, because it makes it almost impossible to develop and test stuff locally. Even with --skip-pull-image, a delay of ~5 seconds for each request makes it just unusable. Also, there is the problem with global context not being preserved. I understand that features must be prioritized, but I am having a hard time understanding why the fact that everything running on Lambda cannot be tested locally is not a high-priority issue. Or am I missing something?
I have solved this trouble by moving away from Lambda to Node Express.
Update: The team is working on other priorities at the moment. We know the time it takes to invoke locally is a pain point for many, and we have plans to address it in the future. We do not have an ETA as of now.
@flache I have built a whole GraphQL service like that, and have run it for a few weeks on AWS now. Seems to be fine.
For those who are very comfortable with Docker and docker-compose, I created a proxy image that works with the underlying SAM (lambci) images and can bring your lambda function into existing docker-compose workflows as a long-lived function: https://github.com/elthrasher/http-lambda-invoker
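For context, the lambci images that tool builds on can themselves be kept warm: they document a DOCKER_LAMBDA_STAY_OPEN mode that keeps an invoke API listening on port 9001. A minimal docker-compose sketch (handler, paths, and service name are placeholders, and the proxy image's own configuration may differ):

```yaml
version: "3"
services:
  lambda:
    image: lambci/lambda:nodejs12.x
    command: index.handler            # placeholder handler
    environment:
      DOCKER_LAMBDA_STAY_OPEN: "1"    # keep the runtime API running between invokes
    ports:
      - "9001:9001"                   # lambci's invoke API port
    volumes:
      - ./src:/var/task:ro            # mount your function code
```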
I have personally switched from AWS Lambda to NodeJS+Express+nodemon, and my productivity and happiness got a boost.
Spent the last week writing a CLI tool to help with this issue; just 2 days ago I published the first version. It's available on npm for download and installation. I think the tool is easy to use (it takes one command to run your API locally), but it's at a very early stage. It works very well for my APIs, but I'm pretty sure I didn't take all use cases into consideration. So give it a go, report any issues you find, and please leave some feedback.
@duartemendes that tool is amazing! Congratulations, and let me know if you need any help. Does your tool currently support layers?
Thanks @S-Cardenas. It doesn't, but it's something I'm happy to take a look at 👍
This is really a roadblock for this technology for us. Too painful. It is not sustainable to wait 10 seconds for each request during development. Without any action on this, I think we will have to reconsider our approach to this technology.
Update: We have prioritized some work that will help with the slow request time and provide a better warm invoke experience. I do not have timelines or ETAs to share at this point, but wanted to communicate that we are starting to look at what we can do in this space.
@jfuss any updates?
I'm very excited to see this feature.
@jfuss any news?
Ditto. Would be great if this were officially released. Currently using https://github.com/elthrasher/http-lambda-invoker as a substitute.
🤞 Let's hope we can see this soon
Seems like it's getting very close to being approved and merged. Would love to get a notification when/if it does.
Fingers crossed this is soon added
This feature has been added to the newest release (https://github.com/aws/aws-sam-cli/releases/tag/v1.14.0) 🎉
(As @kaarejoergensen mentioned 😄) Happy to report that this has been released with v1.14, resolving the issue.
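For anyone landing here later, the released option (per the v1.14.0 release notes) is a --warm-containers flag on the local commands, e.g.:

```sh
# EAGER starts a container per function at startup and reuses it;
# LAZY creates each container on first invoke and keeps it warm after.
sam local start-api --warm-containers EAGER
```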
I understand from other issues that a new docker container is started for each request. This makes some experiments or automated tests undoable in practice. SAM Local is much too slow in any context where more than one request is to be handled.
I suspect that hot reloading depends on this feature.
I think it would be a good idea, while this project evolves further, to make it possible to choose to forego hot reloading but keep the docker container warm.
Something like
This would broaden the applicability of sam local enormously.
Thank you for considering this suggestion. This looks like an awesome project.