Analytics Database and Build Time Processor #594
Conversation
- Rename Phase to TimerPhase
- Add constraints to Timer hash, name, and cache fields
- Add constraints to TimerPhase path and timer fields
- Update the upload script to accommodate these changes
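The actual models are not reproduced in this thread. As an illustration only, the uniqueness constraints described in the commit message can be mimicked with plain SQLite (the table and column names here are guesses based on the wording above, not the real schema):

```python
import sqlite3

# Illustrative schema only: the real models live in the analytics app.
# UNIQUE constraints stand in for the constraints on Timer
# (hash, name, cache) and TimerPhase (path, timer).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE timer (
    id INTEGER PRIMARY KEY,
    hash TEXT NOT NULL,
    name TEXT NOT NULL,
    cache BOOLEAN NOT NULL,
    UNIQUE (hash, name, cache)
);
CREATE TABLE timer_phase (
    id INTEGER PRIMARY KEY,
    timer_id INTEGER NOT NULL REFERENCES timer(id),
    path TEXT NOT NULL,
    UNIQUE (timer_id, path)  -- one row per phase path per timer
);
""")

conn.execute(
    "INSERT INTO timer (hash, name, cache) VALUES (?, ?, ?)",
    ("abc123", "zlib", False),
)
try:
    # A second identical row violates the (hash, name, cache) constraint.
    conn.execute(
        "INSERT INTO timer (hash, name, cache) VALUES (?, ?, ?)",
        ("abc123", "zlib", False),
    )
except sqlite3.IntegrityError as exc:
    print("duplicate rejected:", exc)
```

With constraints like these, re-running the upload script against the same job data cannot create duplicate timer rows.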
a84e78d
to
3dd088b
Compare
Okay, I've backed out all of the changes involving running migrations from the GitHub workflow (I'll address that in a follow-up), and have retained only the very simple workflow step that checks the migrations. So for the time being, any new migrations will need to be applied manually. I've also backed out the changes that existed solely for testing, so this PR is ready to go.
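For reference, a migration check of the kind described here is typically a single Django command in CI; this is a hedged sketch assuming a standard Django layout, not the exact workflow step from this PR:

```shell
# Exits non-zero if model changes exist without a matching migration,
# failing the CI step without writing any migration files.
python manage.py makemigrations --check --dry-run

# Until application is automated, new migrations are applied by hand:
python manage.py migrate
```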
This is working for me, so I'm approving 😆 But it might make sense for @mvandenburgh to take a look from a kube/terraform perspective as well.
One question, well, maybe with two parts: will this just work when we merge Ryan's timing statistics PR and set up the webhook from spack/spack? And will there be a way to keep triggering and testing it from scott/pipeline-experiments at that point as well?
I believe the answer to both is yes. Flux should deploy the required services (it might hiccup at first, since the images will need to be built), and the webhook makes no distinction as to which project invokes it.
I've been disconnected from the timing-info work, so I'll defer to Scott on the specifics of the DB schema and the build processor server. But the Terraform for the database and the k8s resources looks good to me.
This PR adds an analytics database, in which arbitrary data can be stored and queried, as well as a build timing processor, which processes succeeded jobs and their respective phase timings. To accomplish this, several components have been added/configured:
- analytics
- gitlab-error-processor. This HTTP server listens for webhooks received from GitLab and dispatches a Kubernetes job to run the upload_build_timings analytics management command.

Note: The Terraform changes are already applied to production and running correctly.
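The dispatch step described above can be pictured as the webhook server building a Kubernetes Job that runs the management command. The sketch below is illustrative only: the image, namespace, and command-line flag are invented placeholders, and only the upload_build_timings command name comes from this PR.

```python
import json

def build_timing_job_manifest(project: str, gitlab_job_id: int) -> dict:
    """Construct a Kubernetes Job manifest that runs the
    upload_build_timings management command for one GitLab job.

    The image, namespace, labels, and --gitlab-job-id flag are
    hypothetical; the real server's code lives in this repository.
    """
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {
            "name": f"upload-build-timings-{gitlab_job_id}",
            "namespace": "analytics",  # assumed namespace
            "labels": {"project": project.replace("/", "-")},
        },
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [
                        {
                            "name": "upload-build-timings",
                            "image": "example.org/analytics:latest",  # placeholder
                            "command": [
                                "python", "manage.py",
                                "upload_build_timings",
                                "--gitlab-job-id", str(gitlab_job_id),
                            ],
                        }
                    ],
                }
            }
        },
    }

# The server would submit this manifest via the Kubernetes API;
# here we just print it to show its shape.
manifest = build_timing_job_manifest("spack/spack", 12345)
print(json.dumps(manifest, indent=2))
```

Because the project name is only carried along as a label, the same dispatch path works whether the webhook comes from spack/spack or scott/pipeline-experiments, matching the discussion above.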