Commit
* Switch to the non-native Postgres client, and add a "streaming" API for making database queries, which streams results from the database to Node as they are generated by Postgres. This lets Node process the rows one by one (and garbage collect in between), which is much easier on the VM when we need to do big queries that summarize data, or just format it and incrementally spit it out as an HTTP response. (A sketch of this pattern follows the list.)
* Mostly refactoring. This moves the handle_GET_reportExport route into its own file, which necessitated refactoring some other things (zinvite and pca) out of server.ts as well. Chipping away at the monolith. This also converts the votes.csv report to use the streaming query from Postgres, mostly as a smoke test. It seems to work, so the next step is to stream the results incrementally to the HTTP response as well.
* Split each report into a separate function.
* Count up comment votes in a single pass over the votes table. The old SQL actually had a bug: it aggregated votes from _all_ conversations instead of just the conversation in question, which is why it took 30 seconds to run. With that bug fixed, even the super slow "do a full subquery for each comment row" approach was quite fast, but the single-pass version is much cheaper and faster. (See the aggregation sketch after this list.)
* Add participant-votes.csv export.
* Flip vote polarity. In the raw votes table, -1 means agree and 1 means disagree, so we need to count accordingly. And when exporting participant votes, we flip the sign so that 1 means agree and -1 means disagree. (A sketch follows the list.)
* Properly escape comment text. (A CSV-escaping sketch follows the list.)
* Add votes matrix; show data license on preprod; logging.

---------

Co-authored-by: Michael Bayne <[email protected]>
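For illustration, here is a minimal sketch of the streaming-query pattern described in the first bullet, assuming the standard non-native `pg` client together with `pg-query-stream`. The helper name `streamQuery` and its callback signature are hypothetical, not the actual Polis API:

```ts
import { Pool } from "pg";
import QueryStream from "pg-query-stream";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Hypothetical helper: run a query and hand each row to a callback as it
// arrives, instead of buffering the entire result set in memory.
async function streamQuery(
  text: string,
  values: unknown[],
  onRow: (row: Record<string, unknown>) => void
): Promise<void> {
  const client = await pool.connect();
  try {
    const stream = client.query(new QueryStream(text, values));
    for await (const row of stream) {
      // Rows arrive one at a time, so V8 can garbage collect between rows
      // rather than holding every row of a big report query at once.
      onRow(row);
    }
  } finally {
    client.release();
  }
}
```

Because the stream yields one row at a time, a report endpoint can format each row and write it straight to the HTTP response, which is the incremental-streaming follow-up the second bullet mentions.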
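A sketch of what the single-pass vote aggregation could look like. The column names (`zid` for conversation, `tid` for comment, `vote` for the raw value) and the exact query shape are assumptions based on the commit message, not verified schema:

```ts
// Single pass over the votes table, grouped by comment. The WHERE clause on
// zid is the conversation filter whose absence was the old 30-second bug.
const commentVoteCountsSql = `
  SELECT tid,
         COUNT(*) FILTER (WHERE vote < 0) AS agrees,    -- raw -1 means agree
         COUNT(*) FILTER (WHERE vote > 0) AS disagrees, -- raw  1 means disagree
         COUNT(*) FILTER (WHERE vote = 0) AS passes
    FROM votes
   WHERE zid = $1
   GROUP BY tid`;
```

This replaces a per-comment subquery with one grouped scan, so the cost is proportional to the conversation's votes rather than comments times votes.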
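The polarity flip amounts to a sign change at export time; a trivial sketch (the function name is hypothetical):

```ts
// Raw storage uses -1 = agree, 1 = disagree; exports use the intuitive
// 1 = agree, -1 = disagree, so we negate on the way out.
function exportVoteValue(rawVote: number): number {
  return -rawVote;
}
```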
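Comment text can contain commas, quotes, and newlines, so CSV fields need RFC 4180-style quoting. A minimal sketch, which assumes nothing about the actual helper used in the commit:

```ts
// Quote a CSV field only when it contains a delimiter, quote, or newline,
// doubling any embedded quotes per RFC 4180.
function escapeCsvField(text: string): string {
  if (/[",\r\n]/.test(text)) {
    return '"' + text.replace(/"/g, '""') + '"';
  }
  return text;
}
```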