
🐛Clusters-keeper: calling >100 times for the same new cluster now returns cached information #5201

Conversation

@sanderegg (Member) commented Dec 21, 2023

What do these changes do?

When running many jobs through the public API, we create as many projects as there are jobs for one and the same user. When all of them start at the same time, they each need to get_or_create the very same cluster, which results in as many identical RPC calls to the clusters-keeper, each acquiring a distributed lock and then calling into AWS.

This PR ensures that:

  • calls for the very same cluster are sent in sequence instead of concurrently
  • calls for the very same cluster are cached for 5 seconds

In effect, this prevents calls from reaching AWS more often than once every 5 seconds for the same user_id/wallet_id combination.

Related issue/s

How to test

Dev Checklist

DevOps Checklist

@sanderegg sanderegg added a:director-v2 issue related with the director-v2 service a:clusters-keeper labels Dec 21, 2023
@sanderegg sanderegg added this to the Kobayashi Maru milestone Dec 21, 2023
@sanderegg sanderegg self-assigned this Dec 21, 2023

codecov bot commented Dec 21, 2023

Codecov Report

Attention: 6 lines in your changes are missing coverage. Please review.

Comparison is base (8e25931) 89.4% compared to head (c1f8620) 86.9%.

Additional details and impacted files

Impacted file tree graph

@@           Coverage Diff            @@
##           master   #5201     +/-   ##
========================================
- Coverage    89.4%   86.9%   -2.5%     
========================================
  Files        1121    1031     -90     
  Lines       45899   44422   -1477     
  Branches     1057     603    -454     
========================================
- Hits        41049   38631   -2418     
- Misses       4619    5650   +1031     
+ Partials      231     141     -90     
Flag Coverage Δ
integrationtests 64.9% <ø> (+<0.1%) ⬆️
unittests 84.3% <0.0%> (-2.6%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

Files Coverage Δ
...abbitmq/rpc_interfaces/clusters_keeper/clusters.py 0.0% <0.0%> (ø)

... and 388 files with indirect coverage changes

@bisgaard-itis (Contributor) left a comment


Cool! Thanks a lot.
Looks good to me

@sanderegg force-pushed the comp-backend/allow-to-start-many-pipelines branch from 52048d6 to 0495f0d on December 21, 2023 15:53
@sanderegg force-pushed the comp-backend/allow-to-start-many-pipelines branch from 5cd3b46 to c1f8620 on December 22, 2023 14:18

sonarcloud bot commented Dec 22, 2023

Quality Gate passed

Kudos, no new issues were introduced!

0 New issues
0 Security Hotspots
No data about Coverage
0.0% Duplication on New Code

See analysis details on SonarCloud

@sanderegg sanderegg changed the title 🐛Clusters-keeper: calling >100 times for a new cluster now returns cached information 🐛Clusters-keeper: calling >100 times for the same new cluster now returns cached information Dec 22, 2023
@sanderegg sanderegg merged commit 9b5cf5a into ITISFoundation:master Dec 22, 2023
54 checks passed
@sanderegg sanderegg deleted the comp-backend/allow-to-start-many-pipelines branch December 22, 2023 15:02
@matusdrobuliak66 matusdrobuliak66 mentioned this pull request Feb 14, 2024
39 tasks