
Performance Issues with executor #297

Open
benMain opened this issue Nov 2, 2022 · 5 comments

benMain commented Nov 2, 2022

We are in the process of implementing NCQA HEDIS dQMs behind a web service. NCQA references this project as the CQL + ELM execution engine they had used, and they release the CQL and ELM directly for us to run against our patients' FHIR R4 resource bundles. Our goal is to be able to synchronously process measures in a request/response lifecycle. What we're seeing in the AWS Lambda runtime is that executor.exec(patientSource) is taking approximately 15 seconds per patient measure. To our organization that seems unacceptably slow regardless of any overhead involved in interpreting the ELM, especially considering that we're feeding the executor all the resources it needs in RAM and the patient source represents one patient with approximately 1 MB of resource data.

I recognize that you are not responsible for the code that gets fed into your engine, but in this instance, to ensure compliance with NCQA, neither are we. What I'm seeking are diagnostic tools, i.e., a stopwatch at the expression level that might emit traces to help us pinpoint what exactly is causing performance issues and take those issues back to NCQA.
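
In the meantime, a rough stand-in for that stopwatch is to time each engine call individually. This is a sketch only: executor.exec(patientSource) is the call named above, while the Library/Executor construction follows my reading of the cql-execution README and may need adapting for your setup (code service, parameters, or an async exec in newer versions).

```ts
// Coarse, call-level timing around the engine. elmJson and patientSource are
// assumed to be prepared elsewhere (ELM JSON for the measure, patient source
// loaded with the patient's FHIR bundle).
import { performance } from 'perf_hooks';
import * as cql from 'cql-execution';

declare const elmJson: any;
declare const patientSource: any;

async function timed<T>(label: string, fn: () => T | Promise<T>): Promise<T> {
  const start = performance.now();
  const result = await fn(); // works whether fn is sync or async
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
  return result;
}

async function run() {
  const library = await timed('Load Library', () => new cql.Library(elmJson));
  const executor = await timed('Executor Build', () => new cql.Executor(library));
  return timed('Execute', () => executor.exec(patientSource));
}

run().catch(err => console.error(err));
```

This only measures whole steps, not individual expressions; true expression-level traces would need hooks inside the engine itself, which is the ask here.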

benMain commented Nov 2, 2022

If you're going to write a DSL execution engine, you need to provide mechanisms for users to tune their code in your engine.

birick1 (Contributor) commented Nov 2, 2022

Hi @benMain - I've been looking at performance in the engine, and I currently have a pull request under review that has some performance improvements. The pull request is at #278.

I would be interested in feedback on whether that pull request provides an improvement in your scenario. If you'd like to try it, it is a drop-in update for the current engine. The source code is public at https://github.com/projecttacoma/cql-execution/tree/perf-optimizations.
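
For anyone wanting to try the branch before it's published: assuming a standard npm setup, it can be installed directly from GitHub using the branch name in the URL above, for example:

```sh
npm install projecttacoma/cql-execution#perf-optimizations
```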

benMain commented Nov 2, 2022

Thanks @birick1, I see your PR. Let me give that a shot :)

benMain commented Jan 4, 2023

@birick1 Just came back to this after a couple of months working other projects. Anecdotally, I'm seeing a significant performance improvement executing locally and in AWS Lambda utilizing your perf-optimizations branch. For context, I work at a Midwest regional health carrier. Our use case is to bundle up the $everything FHIR endpoint that contains everything we know about a patient: Conditions, Procedures, Medications, Claims, all of it, and run several dozen CQL-defined Clinical Decision Support measures utilizing that bundle. Now, that $everything bundle is inherently not small; we're averaging about 8 MB uncompressed and serialized as JSON. Running the measure Statin Therapy for the Prevention of CVD, published on CDS by MITRE, averages about 4800 ms on your branch in an ARM Lambda function with 3009 MB of RAM.

perf-optimizations branch:

  ms      %  Task name
  15   0.29  Load Library
   0   0.00  Message Listener
  12   0.23  Code Service
   1   0.02  Executor Build
 324   6.29  Patient Source
4800  93.17  Execute

The average execution time for the cql-execution 2.4.3 release of the Statin Therapy measure was around 6450 milliseconds.

cql-execution 2.4.3 release:

  ms      %  Task name
   7   0.11  Load Library
   0   0.00  Message Listener
   7   0.11  Code Service
   1   0.02  Executor Build
 161   2.43  Patient Source
6459  97.35  Execute

These numbers were calculated across 100 executions of warm Lambda functions, using the same de-identified person's 8 MB bundle, which I would be happy to share with any MITRE resources maintaining this repository.
So while I see a marked improvement over the previous doDistinct functionality, it still isn't as fast as I would expect a wholly in-memory compute engine to be.
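
The breakdowns above suggest a simple labeled stopwatch around each step; a hypothetical reconstruction of that kind of harness (the one actually used isn't shown in this thread) might look like this:

```ts
// Hypothetical task timer that produces a "ms / % / Task name" report in the
// same shape as the tables above. Not part of cql-execution itself.
import { performance } from 'perf_hooks';

class TaskTimer {
  private tasks: { name: string; ms: number }[] = [];

  async measure<T>(name: string, fn: () => T | Promise<T>): Promise<T> {
    const start = performance.now();
    const result = await fn();
    this.tasks.push({ name, ms: performance.now() - start });
    return result;
  }

  report(): void {
    const total = this.tasks.reduce((sum, t) => sum + t.ms, 0) || 1;
    console.log('  ms      %  Task name');
    for (const { name, ms } of this.tasks) {
      const pct = (100 * ms) / total;
      console.log(`${ms.toFixed(0).padStart(4)}  ${pct.toFixed(2).padStart(5)}  ${name}`);
    }
  }
}

// Usage sketch, mirroring the task names in the tables above:
// const timer = new TaskTimer();
// const library = await timer.measure('Load Library', () => new cql.Library(elmJson));
// const executor = await timer.measure('Executor Build', () => new cql.Executor(library));
// const results = await timer.measure('Execute', () => executor.exec(patientSource));
// timer.report();
```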

@cmoesel I see tags and published versions for 3.0.0-beta out there but they don't incorporate this change. Are you still planning on merging in this branch?

cmoesel (Member) commented Jan 4, 2023

@benMain - Coincidentally, I reached out to Brian and other team members this morning to try to push Brian's PR forward. There are a couple of things we need to agree upon first, but I think we're likely close to merging. Stay tuned to #278.

On a side note, if you see specific opportunities for further performance improvements, please feel free to propose concrete changes. I can't promise that we will have capacity to implement them, but if not, maybe you do?
