I'm still using your great library (v3.0.19). I'm now using an M2 Mac for development, which doesn't play nicely with 3.0.7 but works well with 3.0.19.
Similarly, 3.0.19 works great on AWS.
However, I'd very much like to use the AWS arm64 architecture rather than x86_64.
Any chance you could provide a binary for Node 18 and AWS arm64?
Describe the bug
Camaro 6.1.0 is orders of magnitude slower on AWS Lambda than 3.0.7
I have a complex application that uses camaro to decode XML we receive from external systems when creating train tickets. We've been using v3.0.7 for the past year and a half (the current version when we built the system).
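For context, the calls look roughly like this. It's a simplified sketch against the v6 async API with made-up field names – the real templates map far more fields:

```js
const { transform } = require('camaro')

// Illustrative template only; keys map to XPath expressions.
const template = {
    bookingRef: '/Reservation/BookingReference',
    passengers: ['//Passenger', {
        name: 'Name',
        seat: 'SeatNumber',
    }],
}

async function decodeReservation(xml) {
    // transform() returns a promise in v4 and later.
    return transform(xml, template)
}
```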
We've been updating various aspects of the system, including upgrading camaro to v6.1.0.
Running locally on Macs, everything seemed fine.
Yesterday we deployed the update to AWS Lambda.
Prior to the update, the overall execution time for a seat reservation function was around 3 seconds. After the deploy that went up to 20-120 seconds.
Increasing the memory allocation from 128 MB to 1576 MB brought the time back down to around 5-6 seconds.
Today we have tried rolling back various aspects of the upgrade. The only one that made a difference is camaro.
We've extensively tested two builds where the only difference is the version of camaro. We've used the same reservation request to ensure both sets of code have the exact same task to perform.
The version with 3.0.7 is taking 3 seconds with 128 MB of memory allocated to Lambda.
The version with 6.1.0 is taking 28 seconds with the same 128 MB of memory allocated to Lambda.
I have logging which counts up timings for the main operations. It shows the number of times an operation was called, the total time for all calls combined, and the average time per call.
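The counters are accumulated roughly like this (a simplified sketch, not the production code):

```js
// Per-operation call count and total elapsed time, in µs.
const stats = new Map()

async function timed(name, fn) {
    const start = process.hrtime.bigint()
    try {
        return await fn()
    } finally {
        const elapsedUs = Number(process.hrtime.bigint() - start) / 1000
        const entry = stats.get(name) || { calls: 0, totalUs: 0 }
        entry.calls += 1
        entry.totalUs += elapsedUs
        stats.set(name, entry)
    }
}

function logStats() {
    for (const [name, { calls, totalUs }] of stats) {
        console.log(`${name}: ${calls} calls, ${Math.round(totalUs)} µs total, ${Math.round(totalUs / calls)} µs avg`)
    }
}

// e.g. const result = await timed('camaro.transform', () => transform(xml, template))
```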
With v3.0.7 on AWS Lambda, the logs for camaro show (timings in µs):
With 6.1.0 on AWS Lambda they are:
Running locally on OSX, 6.1.0 timings are:
So it is MUCH faster running on the Mac than on AWS Lambda, but still far slower than 3.0.7 on the Mac:
Here is one example of XML and template which took 3,559,851 µs with 6.1.0 and 73 µs with 3.0.7:
and another that took 2,579,765 µs with 6.1.0 and 443 µs with 3.0.7.
Over repeated runs, both of those examples sometimes took hundreds of µs and sometimes millions. However, out of the 12 calls there were always at least two in the millions.
I can speed things up a fair bit by adding some locking around the camaro calls to ensure camaro is never processing more than one XML file at a time.
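The locking is nothing fancy – a shared promise chain so only one transform is ever in flight (my own workaround, not part of camaro):

```js
const { transform } = require('camaro')

let chain = Promise.resolve()

// Queue each transform behind the previous one so camaro only ever
// processes a single document at a time.
function transformSerialized(xml, template) {
    const result = chain.then(() => transform(xml, template))
    // Keep the chain alive even if a transform rejects.
    chain = result.catch(() => {})
    return result
}
```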
Digging through past issues I can see that the slowdown came in v4 with the introduction of WebAssembly. I saw mention of examples using piscina to create worker pools for processing files in parallel. However, those examples have since been removed, so I presume the need for them has gone too?
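For reference, the worker-pool pattern those examples described was roughly this (reconstructed from memory, so the file names and details are mine, not the removed example's):

```js
// camaro-worker.js – runs inside a piscina worker thread
const { transform } = require('camaro')
module.exports = ({ xml, template }) => transform(xml, template)
```

```js
// caller – dispatches transforms to the pool instead of calling camaro directly
const path = require('path')
const Piscina = require('piscina')

const pool = new Piscina({ filename: path.resolve(__dirname, 'camaro-worker.js') })

async function parse(xml, template) {
    return pool.run({ xml, template })
}
```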
Is the speed I am seeing still expected with v6? Is there any solution for getting v3-like speeds other than sticking with v3?
Our other dependencies are:
Expected behaviour
I'd expect 6.1.0 to be comparable in speed to 3.0.7.