Lambda package size #2162
I've got some more scientific info. Here is a flame chart showing a cold start on a beefy 512mb lambda:
I tried to skip it. These are some of the pain points that I think should be moved out of core, or provided with some way to keep them from loading inside the lambda package:
I'm looking at the contents of the
File uploads with GraphQL are also opt-in. Those two changes would drop my bundle size by nearly 1/3.
@hotgazpacho yeah, ideally those wouldn’t be in the bundle. However, changing the dependencies like that is a breaking change, and probably couldn’t be done before the next major version.
Totally understand and appreciate that. My hope is that it does indeed get placed and committed to on the roadmap 😄
Per my comment, #2324 (comment), this should have been improved in 2.6.0, and I hope that you have noticed a substantial reduction. We'll continue to work on improving this (see #2360), but I just want to thank you very much for your very clear investigative approach here!
Hi @abernix - It looks like this issue was resolved and released, but I still see similar issues. I couldn't find a current open issue for this, but wanted to make sure it wasn't something in my setup before raising one.
Sorry, but this has not improved the package size.
@aaronplummeridgelumira @hotgazpacho Yep, looks like this has regressed. I don't have the time right now to re-run this investigation. Is anyone else available to fix this?
Looks like the regression happened in #2900, per the blame: https://github.com/apollographql/apollo-server/blame/master/packages/apollo-server-types/src/index.ts
Upon further digging, it looks like the 2.7.0
2.7.1
BTW, the 2.6.0 release did not address the issue:
Here it is for
Hopefully I'll get time to look at this at some point next week.
I think it would be prudent to open a new issue, though I would like to ask that we draw a distinction between bundle size and runtime cost. The actual bundle size is unlikely to have ever changed, but we specifically shifted the evaluation of the module into a conditional to avoid it in environments where Engine is not used. That is to say, I believe we only previously fixed the runtime cost.

As an explanation for the new state of affairs: with the new configuration, it is certainly plausible to believe that the importing of

I believe that a workaround worth trying is to switch to importing

Anyhow, still, same request for opening a new issue and referencing this one from it, but please do try to take a shot at my suggestion above. Also, we should avoid evaluating any of these changes based on
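For illustration, the conditional-import workaround hinted at above might look something like the sketch below. This is only an assumption about the shape of the fix; `heavy-reporting-module` is a hypothetical placeholder, not the actual package discussed in this thread.

```javascript
// Sketch: defer a heavy dependency until it is actually needed, so its
// parse/evaluate cost is not paid on every cold start.
// "heavy-reporting-module" is a hypothetical placeholder package name.
let cachedReporter = null;

function getReporter(engineEnabled) {
  // When the feature is disabled, never touch the heavy module at all.
  if (!engineEnabled) return null;
  // Lazily require on first use, then reuse the cached module object.
  if (!cachedReporter) {
    cachedReporter = require('heavy-reporting-module');
  }
  return cachedReporter;
}

module.exports = { getReporter };
```

With this shape, environments that never enable the feature never evaluate the module, which addresses the runtime cost; the files still ship in the bundle, so it does not change the package size on disk.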
…obuf`. This is an experiment, but hopefully will resolve the issue noted in: #2162 (comment)
@abernix is exactly right, the tools I used to test only measure whether the package was loaded at runtime, since it is measuring how long it takes to load. Lambda is not impacted by bundle size, only by the code that is loaded.
Lambda is impacted by package size. There’s a 50mb limit to the package size. Packaging something that isn’t going to be used by your function increases not only deployment time, but also cold start time. So, yes.
@hotgazpacho unused code does not impact cold start time. I have verified this with dozens of tests. Even the article you linked draws this conclusion:
I’m sorry, but you are mistaken. Downloading the package from S3 is the very first thing that happens during a cold start. A 3mb package downloads faster than a 30mb one. Therefore, reducing deployment package size reduces cold start time. https://lumigo.io/blog/how-to-improve-aws-lambda-cold-start-performance/
I don’t upload my packages to s3, so perhaps that has affected my tests. I deploy directly to lambda.
I have run tests that measure cold start with paths that don’t load any dependencies. I have also verified that a lambda with a 20mb video in the bundle had no impact at all on cold start. Size doesn’t matter, only what is loaded into memory.
I have run these tests regularly, on several versions of node lambda. I am confident in this conclusion.
Have you actually run a test to verify this?
You may not think that you upload your packages to S3, but that’s exactly what “deploying to lambda” does... it stores your package in an S3 bucket, and when the lambda service invokes it from a cold start, it pulls it down from that bucket. At least, that is how it used to work. I’ve asked around some Lambda experts, who seem to think optimizations have been put in place to mitigate this. I apologize if my tone was a bit harsh. At the end of the day, I just want to get my deployment package smaller. If I have to manually strip the
It may silently use s3 as a storage layer, but it definitely doesn't create an s3 bucket. The file mode and s3 mode are mutually exclusive, perhaps you are only aware of the latter?
In either case, whatever secret sauce, black magic, or operational wizardry AWS uses, unloaded bundle size doesn't seem to have an impact; only what is loaded into memory does. I won't pretend to know how lambda works under the hood, but I can reliably reproduce those measurements. I can share the test harness I use if you are interested (it's not as automated as it could be).
I really don’t care to argue this further. As I said in my last comment:
I’m really unconcerned with how this unused code gets removed from my final deployment package. I just want it gone.
@hotgazpacho I can understand if you want to be able to control how that unused code gets removed in an environment that you can control. Since the runtime cost and the bundle size appear to actually have remained unchanged, if removing the

I will note, though, that to truly achieve the precision you seem to desire, you'll likely want to introduce a more complicated build step that allows you to do precise dead-code elimination, because there is likely more to remove. It might even make sense for you to fork Apollo Server and change it to emit ECMAScript modules so you can use a build tool like

I do think it's worth noting that, in my opinion, you could have exercised a bit more discretion in your choice of references, since it makes for a confusing discussion when both of the articles you've cited include notes that there are no discernible differences in cold-boot time corresponding to bundle size. https://read.acloud.guru/does-coding-language-memory-or-package-size-affect-cold-starts-of-aws-lambda-a15e26d12c76 states as a conclusion:
https://lumigo.io/blog/how-to-improve-aws-lambda-cold-start-performance/ states that:
Of course, without some concrete examples demonstrating your own increased Lambda boot times, speculating about Amazon's internal operational dynamics is not going to do us much good, though I would be a bit surprised if the 732KB uncompressed size of

But regardless, the safe suggestion to you is to just lop off
you’re right, cold start appears not to be affected by this. What is affected, however, is the time to package and deploy, and the extra attack surface presented by code that has no use in this context. I should have focused my discussion on those points. Since I don’t seem to be able to communicate effectively why reducing the package size is important, I’ll drop off this thread and, as you suggest, look for other ways to easily prune megabytes of unused files from the deployment package.
AWS Lambda containers are especially sensitive to their startup time. The scaling model of lambda (one-container-per-request) means containers are often starting, unlike "normal" node servers that start once.
A lot of performance testing has gone into measuring the "cold start" time of Lambdas. Code parsing and loading plays a significant role in this time. A dependency-free node Lambda running on 256mb can cold-start in ~250ms. A node lambda with 2mb of dependencies running on 256mb can cold-start in ~2 seconds.
apollo-server-lambda has some rather large dependencies, given its task of receiving and sending JSON objects.
Full report
busboy: 539kb
apollo-engine-reporting-protobuf: 177kb
lodash: 69kb
The Challenge
Of course, none of these are direct dependencies of apollo-server-lambda; they are transitive dependencies. They all come from apollo-server-core. Fixing these transitive dependencies will require work on the core to break out these "not-actually-core" dependencies. However, from the performance tests we've done, a drop of just 500kb (busboy) could save as much as 500ms during cold starts. Hopefully that kind of improvement is worth the effort.