feat(applications): add scalers for cpu and memory #1295
Conversation
📝 Walkthrough

This pull request introduces a new `scale` variable in the GraphQL and web API application templates, together with a matching `scale` parameter in the container app module, enabling CPU- and memory-based autoscaling.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
.azure/modules/containerApp/main.bicep (1)

Lines 34-38: LGTM! Consider clarifying the comment about disabling scaling.

The new `scale` parameter is well-defined and aligns with the PR objectives. However, the comment "temp disable scaling by default for outbox scheduling" might benefit from additional context. Could you provide more information about why scaling is temporarily disabled by default for outbox scheduling? This context could be valuable for future maintainers.
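For illustration only, a module-level default that pins the app to a single replica (which is what "disable scaling" appears to mean here, given the later comment that apps still get a default 1:1) might look like the sketch below. The decorator text and default values are assumptions, not the PR's actual code:

```bicep
// Sketch (assumption): a scale parameter whose default keeps exactly one
// replica, effectively disabling autoscaling until a caller overrides it.
@description('Scaling configuration. Defaults to a fixed single replica while outbox scheduling requires it.')
param scale object = {
  minReplicas: 1
  maxReplicas: 1
  rules: []
}
```

A default like this keeps existing deployments unchanged: callers that do not pass `scale` continue to run one fixed replica.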
.azure/applications/graphql/main.bicep (1)

Lines 109-132: LGTM! Consider parameterizing scale values for flexibility.

The `scale` variable declaration looks good and aligns with the PR objectives. It sets the CPU usage limit to 70% as specified and includes a similar rule for memory usage. The minimum of 2 replicas ensures high availability, and the maximum of 10 replicas matches the PR requirements.

To improve flexibility, consider parameterizing the scale values:

```bicep
param minReplicas int = 2
param maxReplicas int = 10
param cpuThreshold int = 70
param memoryThreshold int = 70

var scale = {
  minReplicas: minReplicas
  maxReplicas: maxReplicas
  rules: [
    {
      custom: {
        type: 'cpu'
        metricType: 'Utilization'
        metadata: {
          value: string(cpuThreshold)
        }
      }
    }
    {
      custom: {
        type: 'memory'
        metricType: 'Utilization'
        metadata: {
          value: string(memoryThreshold)
        }
      }
    }
  ]
}
```

This approach would allow easier adjustments to the scaling configuration without modifying the template itself.
.azure/applications/web-api-eu/main.bicep (1)

Lines 80-103: LGTM! Consider parameterizing threshold values for flexibility.

The `scale` variable declaration aligns well with the PR objectives. It sets the CPU usage limit to 70% as specified, includes a similar threshold for memory, and sets the maximum replicas to 10. The minimum of 2 replicas ensures high availability.

Consider parameterizing the threshold values (currently set to 70%) to allow for easier adjustments in different environments:

```diff
 var scale = {
   minReplicas: 2
   maxReplicas: 10
   rules: [
     {
       custom: {
         type: 'cpu'
         metricType: 'Utilization'
         metadata: {
-          value: '70'
+          value: '${cpuThreshold}'
         }
       }
     }
     {
       custom: {
         type: 'memory'
         metricType: 'Utilization'
         metadata: {
-          value: '70'
+          value: '${memoryThreshold}'
         }
       }
     }
   ]
 }
```

Then, add these parameters at the top of the file:

```bicep
@description('CPU utilization threshold for scaling (percentage)')
@minValue(1)
@maxValue(100)
param cpuThreshold int = 70

@description('Memory utilization threshold for scaling (percentage)')
@minValue(1)
@maxValue(100)
param memoryThreshold int = 70
```

This change would make it easier to adjust thresholds for different environments or future tuning.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
- .azure/applications/graphql/main.bicep (2 hunks)
- .azure/applications/web-api-eu/main.bicep (2 hunks)
- .azure/modules/containerApp/main.bicep (2 hunks)
🧰 Additional context used
🔇 Additional comments (6)
.azure/modules/containerApp/main.bicep (2)
Lines 1-89: Overall, the changes look good and align with the PR objectives.

The implementation of the `scale` parameter and its usage in the `containerApp` resource provides the desired flexibility for configuring automatic scaling based on CPU and memory usage. This aligns well with the PR objectives for the GraphQL and web API services.

A few points to consider:
- The comment about temporarily disabling scaling by default could use more context.
- Ensure that the scaling configuration (including the 70% CPU usage limit mentioned in the PR objectives) is properly set when this module is used.
- Verify that the maximum of 10 revisions for scaling configurations (mentioned in the PR objectives) is implemented, possibly in another file or when this module is used.
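As a sketch of the second point above, the consuming templates would pass their scaling settings into the module when invoking it. Only the `scale` parameter is confirmed by this review; the module path follows the file layout in this PR, and everything else is an illustrative placeholder:

```bicep
// Sketch: wiring a scale configuration into the containerApp module.
// Everything except the 'scale' parameter is a hypothetical placeholder;
// the real module invocation will have its own required parameters.
module containerApp '../../modules/containerApp/main.bicep' = {
  name: 'containerApp'
  params: {
    scale: {
      minReplicas: 2
      maxReplicas: 10
      rules: [] // CPU/memory rules as defined in the application template
    }
  }
}
```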
Line 72: LGTM! Scaling configuration implemented as intended.

The `scale` parameter is correctly used in the `containerApp` resource, allowing for flexible scaling configuration. This implementation aligns with the PR objectives of adding automatic scaling capabilities for the GraphQL and web API services.

To ensure the scaling configuration is correctly implemented across all relevant files, let's run the following verification script:
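The script itself was collapsed in this export. Based on the similar scripts recorded later in this review, a plausible reconstruction (an assumption, not the original) is:

```shell
#!/bin/bash
# Sketch (reconstruction): list every Bicep file under .azure that mentions
# a scale configuration, to spot hardcoded scaling blocks.
grep -rn --include='*.bicep' 'scale' .azure
```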
This script will help us confirm that the `scale` parameter is consistently used across the project and that no hardcoded scaling configurations remain.

.azure/applications/graphql/main.bicep (2)
Line 154: LGTM! Correct implementation of the `scale` parameter.

The `scale` variable is correctly passed as a parameter to the `containerApp` module. This implementation allows the container app to utilize the defined scaling configuration during deployment, which aligns with the PR objectives.
Lines 109-154: Summary: Changes align well with PR objectives.

The modifications in this file are focused and effectively implement the automatic scaling capabilities for the GraphQL service as outlined in the PR objectives. The `scale` variable introduces CPU and memory-based scaling rules with a 70% threshold, and sets appropriate minimum (2) and maximum (10) replicas. These changes are correctly integrated into the `containerApp` module.

A few points to consider:
- The PR mentions that the CPU usage limit of 70% is adjustable. The suggested parameterization in the earlier comment would facilitate this adjustment.
- The PR notes that the maximum of 10 revisions might need to be increased in the yt01 environment. Ensure that this is addressed if necessary, possibly through environment-specific parameters.
- Remember to update the relevant documentation as mentioned in the PR objectives, either in the `docs` directory or through a separate PR in the Altinn Studio documentation repository.

To ensure consistency across the codebase, let's check if similar changes have been applied to other services:
✅ Verification successful
Verification Complete: Scaling Configuration is Isolated and Correct
The scaling settings have been exclusively applied to the GraphQL service, aligning with the PR objectives. No similar scaling configurations were found in other Bicep files, ensuring that the changes do not inadvertently impact other services.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check for similar scale configurations in other Bicep files

# Test: Search for scale configurations in other Bicep files
rg --type bicep -g 'var scale = \{' .azure
```

Length of output: 74
Script:

```shell
#!/bin/bash
# Description: Check for similar scale configurations in other Bicep files

# Find all .bicep files in the .azure directory and search for 'var scale = {'
fd --extension bicep .azure | xargs grep -H 'var scale = {'
```

Length of output: 61
.azure/applications/web-api-eu/main.bicep (2)

Line 155: LGTM! Proper integration of scaling configuration.

The `scale` variable is correctly passed to the `containerApp` module, enabling dynamic scaling based on the defined criteria. This implementation aligns with the PR objectives of adding automatic scaling capabilities for the web API service.
Lines 1-174: Overall, the changes successfully implement the scaling configuration.

The implementation aligns well with the PR objectives, adding automatic scaling capabilities based on CPU and memory usage for the web API service. The code is clean, focused, and follows best practices.
To ensure these changes work as expected, please verify the following:
- Test the deployment in a non-production environment to confirm that the scaling rules are applied correctly.
- Monitor the CPU and memory usage of the deployed container app under various load conditions to ensure it scales as intended.
- Verify that the maximum number of replicas (10) is sufficient for your expected load, especially in the yt01 environment as mentioned in the PR objectives.
Run the following script to check for any other files that might need similar scaling configurations:
If other files are found, consider applying similar scaling configurations to maintain consistency across your infrastructure.
This should be held off a bit, unless scale.minReplicas = scale.maxReplicas = 1, until the service is up and running.

I may have misunderstood, but isn't it only serviceowner-web-api that needs scale: 1? That's the only one with RUN_OUTBOX_SCHEDULER=true. Or does something else determine that?

Nope, that's right. This PR works fine; SO still gets the default 1:1.
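One way to encode the distinction discussed above (purely illustrative; the `runOutboxScheduler` parameter is hypothetical and not part of this PR) is to select between a fixed 1:1 scale and the autoscaling rules per application:

```bicep
// Sketch: keep the app that runs the outbox scheduler at exactly one replica,
// and let every other app use the CPU/memory-based rules from this PR.
param runOutboxScheduler bool = false // hypothetical flag, not in this PR

var fixedScale = {
  minReplicas: 1
  maxReplicas: 1
  rules: []
}

var autoScale = {
  minReplicas: 2
  maxReplicas: 10
  rules: [
    {
      custom: {
        type: 'cpu'
        metricType: 'Utilization'
        metadata: { value: '70' }
      }
    }
    {
      custom: {
        type: 'memory'
        metricType: 'Utilization'
        metadata: { value: '70' }
      }
    }
  ]
}

var scale = runOutboxScheduler ? fixedScale : autoScale
```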
Quality Gate passed
🤖 I have created a release *beep* *boop*

## [1.25.0](v1.24.0...v1.25.0) (2024-10-17)

### Features

* **applications:** add scalers for cpu and memory ([#1295](#1295)) ([eb0f19b](eb0f19b))
* **infrastructure:** create new yt01 app environment ([#1291](#1291)) ([1a1ccc0](1a1ccc0))
* **service:** add permissions for service-bus ([#1305](#1305)) ([7bf4177](7bf4177))
* **service:** deploy application in container apps ([#1303](#1303)) ([a309044](a309044))

### Bug Fixes

* **applications:** add missing property for scale configuration ([3ffb724](3ffb724))
* **applications:** use correct scale configuration ([#1311](#1311)) ([b8fb3cc](b8fb3cc))
* Fix ID-porten acr claim parsing ([#1299](#1299)) ([8b8862f](8b8862f))
* **service:** ensure default credentials work ([#1306](#1306)) ([b1e6a14](b1e6a14))

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
Description
Related Issue(s)
Verification
Documentation
- Documentation is updated (in the `docs` directory, Altinnpedia, or a separate linked PR in altinn-studio-docs, if applicable)

Summary by CodeRabbit