
chore(application-system-api-worker): Add missing user notification url #17140

Merged
merged 2 commits into main from worker-usernotifurl on Dec 5, 2024

Conversation

@obmagnusson (Member) commented Dec 5, 2024

...

Attach a link to issue if relevant

What

Specify what you're trying to achieve

Why

Specify why you need to achieve this

Screenshots / Gifs

Attach Screenshots / Gifs to help reviewers understand the scope of the pull request

Checklist:

  • I have performed a self-review of my own code
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • Formatting passes locally with my changes
  • I have rebased against main before asking for a review

Summary by CodeRabbit

  • New Features
    • Introduced a new environment variable USER_NOTIFICATION_API_URL across multiple services to enhance connectivity with the user notification API.
  • Configuration Updates
    • Adjusted health check paths for improved monitoring consistency.
    • Modified resource limits and requests for various services to optimize performance.
    • Updated Horizontal Pod Autoscaler (HPA) settings for better scalability under load.
    • Enhanced ingress configurations for improved traffic routing.

@obmagnusson obmagnusson requested a review from a team as a code owner December 5, 2024 11:42
coderabbitai bot (Contributor) commented Dec 5, 2024

Walkthrough

The pull request introduces changes primarily to the workerSetup function in the application-system-api.ts file, adding a parameter for userNotificationService and a new environment variable USER_NOTIFICATION_API_URL. Similar updates are made across various YAML configuration files, where the new environment variable is added for the application-system-api-worker service, along with adjustments to Horizontal Pod Autoscaler settings, health check paths, and resource limits. These changes enhance service connectivity and optimize resource management across the application ecosystem.
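In outline, the wiring described in this walkthrough can be sketched as follows. This is illustrative only: the `ServiceHandle` type and the way the dependency's URL is resolved are assumptions standing in for the real island.is infra DSL (service builders, the `ref` helper), which is not reproduced here.

```typescript
// Illustrative-only sketch of the shape of the change; the actual island.is
// infra DSL differs. "ServiceHandle" is a hypothetical stand-in type.

interface ServiceHandle {
  // In the real DSL, a ref helper resolves this to the dependency's
  // cluster-internal URL; here it is just a plain string.
  url: string
}

// After the change: the user-notification service is threaded in as a
// parameter and surfaced to the worker as an environment variable.
function workerSetup(services: { userNotificationService: ServiceHandle }) {
  return {
    env: {
      USER_NOTIFICATION_API_URL: services.userNotificationService.url,
    },
  }
}

// Caller side (mirrors the update in infra/src/uber-charts/islandis.ts):
const userNotificationService: ServiceHandle = {
  url: 'http://web-user-notification.user-notification.svc.cluster.local',
}
const worker = workerSetup({ userNotificationService })

console.log(worker.env.USER_NOTIFICATION_API_URL)
// http://web-user-notification.user-notification.svc.cluster.local
```

The point of threading the dependency through the setup function, rather than hard-coding the URL in each values file, is that the generated charts stay consistent across environments.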

Changes

  • apps/application-system/api/infra/application-system-api.ts: Updated the workerSetup function to accept a userNotificationService parameter and added the USER_NOTIFICATION_API_URL environment variable.
  • charts/islandis/values.dev.yaml: Added USER_NOTIFICATION_API_URL to user-notification-worker and application-system-api-worker, updated HPA settings for multiple services, and standardized health check paths.
  • charts/islandis/values.prod.yaml: Added USER_NOTIFICATION_API_URL to application-system-api-worker, updated the health check path for the web service, and modified resource limits for various services.
  • charts/islandis/values.staging.yaml: Added USER_NOTIFICATION_API_URL to application-system-api-worker, updated the health check path for the api service, and adjusted resource limits and HPA settings.
  • charts/services/application-system-api-worker/values.dev.yaml: Added the USER_NOTIFICATION_API_URL environment variable.
  • charts/services/application-system-api-worker/values.prod.yaml: Added the USER_NOTIFICATION_API_URL environment variable.
  • charts/services/application-system-api-worker/values.staging.yaml: Added the USER_NOTIFICATION_API_URL environment variable.
  • infra/src/uber-charts/islandis.ts: Updated appSystemApiWorkerSetup to accept a userNotificationService parameter.

Possibly related PRs

Suggested labels

automerge, deploy-feature

Suggested reviewers

  • thordurhhh
  • lodmfjord
  • baering

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 55aacf7 and efa36c3.

📒 Files selected for processing (8)
  • apps/application-system/api/infra/application-system-api.ts (1 hunks)
  • charts/islandis/values.dev.yaml (1 hunks)
  • charts/islandis/values.prod.yaml (1 hunks)
  • charts/islandis/values.staging.yaml (1 hunks)
  • charts/services/application-system-api-worker/values.dev.yaml (1 hunks)
  • charts/services/application-system-api-worker/values.prod.yaml (1 hunks)
  • charts/services/application-system-api-worker/values.staging.yaml (1 hunks)
  • infra/src/uber-charts/islandis.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
apps/application-system/api/infra/application-system-api.ts (1)

Pattern apps/**/*: "Confirm that the code adheres to the following:

  • NextJS best practices, including file structure, API routes, and static generation methods.
  • Efficient state management and server-side rendering techniques.
  • Optimal use of TypeScript for component and utility type safety."
🔇 Additional comments (10)
charts/services/application-system-api-worker/values.prod.yaml (2)

44-44: Consider security implications of using HTTP for internal service communication

While using HTTP for internal Kubernetes service communication is common, consider using HTTPS for enhanced security, especially for sensitive user notification data. Also, verify that this URL is consistent with other internal service URLs in the cluster.

✅ Verification successful

HTTP is the standard protocol for internal Kubernetes service communication

Based on the comprehensive search results, HTTP is consistently used for all internal service-to-service communication within the Kubernetes cluster (.svc.cluster.local endpoints). This is a standard practice in Kubernetes environments since:

  1. Traffic between services never leaves the cluster's internal network
  2. The Kubernetes service mesh provides built-in security and encryption at the network level
  3. Using HTTPS internally would add unnecessary TLS overhead and complexity

The current configuration is consistent with Kubernetes best practices and aligns with all other internal service URLs in the cluster.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if other services use HTTPS for internal communication
rg "http://.+\.svc\.cluster\.local" -A 1 -B 1
rg "https://.+\.svc\.cluster\.local" -A 1 -B 1

Length of output: 57270


Line range hint 1-6: Warning: This file is auto-generated

The file header indicates that manual changes will be overwritten. Please ensure you're updating the source configuration that generates these files by running "yarn charts".

✅ Verification successful

Warning: Confirmed auto-generated file - update source configuration

Based on the verification of generate-chart-values.ts, this file is indeed auto-generated. The source configuration should be updated in the infra/src/uber-charts/ directory, followed by running yarn charts to regenerate the values files. Since we couldn't locate the exact source configuration file, please work with your team to identify the correct source file to modify.

  • The file is generated by the script at infra/src/cli/generate-chart-values.ts
  • Changes should be made to the source configuration, not directly to values.prod.yaml
  • Run yarn charts from the root directory to regenerate all chart values
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if there's a source configuration file that generates these values
fd -g "*application-system-api-worker*" -t f | grep -v "values\."

Length of output: 67


Script:

#!/bin/bash
# Let's check for any files related to chart generation or configuration
fd -g "*.yaml" -t f charts/services/application-system-api-worker/

# Also check for any package.json that might contain the "charts" script
fd "package.json" -t f | xargs rg "charts"

Length of output: 487


Script:

#!/bin/bash
# Let's check the chart generation script to understand the source of truth
cat infra/src/cli/generate-chart-values.ts

# Also check if there's a configuration file that serves as input
fd -g "*.ts" -t f infra/src/ | xargs rg "application-system-api-worker"

Length of output: 2623


Script:

#!/bin/bash
# Let's check the configuration in uber-charts directory
fd -g "*.ts" infra/src/uber-charts/ | xargs rg -A 5 "application-system-api-worker"

# Also check the environments configuration
fd -g "*.ts" infra/src/environments/ | xargs cat

Length of output: 245

charts/services/application-system-api-worker/values.staging.yaml (1)

44-44: Verify environment-specific configuration

The user notification service URL is identical across all environments (dev/staging/prod). Typically, different environments would have environment-specific configurations. Please verify if this is intentional.

✅ Verification successful

Using identical service URL across environments is intentional and correct

The identical URL http://web-user-notification.user-notification.svc.cluster.local across environments is actually the correct Kubernetes service discovery pattern. This is a cluster-internal DNS name that automatically resolves to the appropriate service in each environment's cluster. The environment separation is handled by the Kubernetes infrastructure, not through different URLs.

Looking at the context:

  • The pattern is consistently used across multiple services (application-system-api, services-auth-delegation-api, etc.)
  • Other environment-specific configurations like XROAD_BASE_PATH do have different values (e.g., securityserver.dev01.devland.is vs securityserver.staging01.devland.is vs securityserver.island.is)
  • The .svc.cluster.local suffix is a standard Kubernetes DNS pattern for in-cluster service discovery
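The environment-invariant naming described above follows the standard Kubernetes form <service>.<namespace>.svc.cluster.local. A minimal illustrative helper (not part of this repository) shows how such a URL is composed:

```typescript
// Compose a cluster-internal Kubernetes service URL. Because the name is
// resolved by each cluster's own DNS, the same string points at the right
// service in dev, staging, and prod without any per-environment value.
function clusterLocalUrl(service: string, namespace: string): string {
  return `http://${service}.${namespace}.svc.cluster.local`
}

const url = clusterLocalUrl('web-user-notification', 'user-notification')
console.log(url)
// http://web-user-notification.user-notification.svc.cluster.local
```

By contrast, values that cross cluster boundaries (such as XROAD_BASE_PATH above) must differ per environment, which is why those entries do vary between the values files.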
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for environment-specific service URLs in other configurations
rg "web-user-notification" -A 1 -B 1

Length of output: 8710

charts/services/application-system-api-worker/values.dev.yaml (1)

44-44: Verify service discovery configuration

The URL uses Kubernetes service discovery pattern (.svc.cluster.local). Please verify:

  1. The service name 'web-user-notification' exists in the 'user-notification' namespace
  2. The service is accessible from the 'application-system' namespace
✅ Verification successful

Service discovery configuration is correctly set up

The verification confirms:

  1. The service 'web-user-notification' in the 'user-notification' namespace exists and is consistently referenced across multiple environments (dev, staging, prod)
  2. The 'application-system' namespace has explicit access to the 'user-notification' namespace as shown in the namespace grants:
    • Found in user-notification/values.*.yaml: grantNamespaces: ['application-system']
    • Cross-namespace communication is properly configured with grantNamespacesEnabled set to true
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for references to the user-notification service and namespace
rg "user-notification\.svc\.cluster\.local" -A 1 -B 1

# Check for namespace grants
rg "grantNamespaces:" -A 5 | grep "user-notification"

Length of output: 15930

infra/src/uber-charts/islandis.ts (1)

88-90: LGTM! Service integration looks good.

The addition of userNotificationService to appSystemApiWorkerSetup is properly implemented and follows the established pattern for service integration in the codebase.

apps/application-system/api/infra/application-system-api.ts (2)

52-54: LGTM! Function signature update is correct.

The workerSetup function signature has been properly updated to accept the userNotificationService parameter with the correct type.


95-97: LGTM! Environment variable configuration looks good.

The USER_NOTIFICATION_API_URL is properly configured using the ref helper to construct the service URL, following the established pattern in the codebase.

charts/islandis/values.staging.yaml (1)

852-852: LGTM! Environment variable value is correctly configured.

The USER_NOTIFICATION_API_URL is properly set to use internal service communication with the correct service and namespace.

charts/islandis/values.prod.yaml (1)

843-843: LGTM! Verify service connectivity.

The addition of USER_NOTIFICATION_API_URL follows the proper Kubernetes service naming pattern.

Let's verify the service exists and is accessible:

✅ Verification successful

Service configuration verified in Helm charts

The web-user-notification service is properly configured across all environments (dev, staging, and prod) in the Helm charts. The service:

  • Is defined in the correct namespace (user-notification)
  • Has consistent URL pattern across all environments
  • Has corresponding deployments (web service, worker, and cleanup worker) with proper configurations
  • Is referenced as a dependency in other services
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify user-notification service exists and is accessible

# Check if the service exists in the user-notification namespace
kubectl get service web-user-notification -n user-notification 2>/dev/null

# Check if there are any endpoints for the service
kubectl get endpoints web-user-notification -n user-notification 2>/dev/null

Length of output: 131


Script:

#!/bin/bash
# Let's verify the service configuration in the Helm charts

# Check if the service is defined in Helm charts
fd -e yaml -e yml . charts/islandis --exec grep -l "web-user-notification"

# Check for service configuration in kubernetes manifests
fd -e yaml -e yml . charts/islandis --exec grep -A5 -B5 "web-user-notification"

# Look for related configuration or references
rg "user-notification" charts/islandis/

Length of output: 19407

charts/islandis/values.dev.yaml (1)

855-855: LGTM! Configuration matches production.

The USER_NOTIFICATION_API_URL configuration in development environment matches the pattern used in production.



codecov bot commented Dec 5, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 35.74%. Comparing base (f3a5069) to head (2123f4a).
Report is 1 commits behind head on main.

Additional details and impacted files


@@           Coverage Diff           @@
##             main   #17140   +/-   ##
=======================================
  Coverage   35.74%   35.74%           
=======================================
  Files        6925     6925           
  Lines      147569   147569           
  Branches    42010    42039   +29     
=======================================
  Hits        52747    52747           
  Misses      94822    94822           
Flag | Coverage Δ
web | 2.43% <ø> (ø)

Flags with carried forward coverage won't be shown.

see 5 files with indirect coverage changes


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update f3a5069...2123f4a.


datadog-island-is bot commented Dec 5, 2024

Datadog Report

Branch report: worker-usernotifurl
Commit report: 0a0152c
Test service: web

✅ 0 Failed, 84 Passed, 0 Skipped, 25.39s Total Time
➡️ Test Sessions change in coverage: 1 no change

@brynjarorng (Member) left a comment:


LGTM

@obmagnusson obmagnusson added the automerge Merge this PR as soon as all checks pass label Dec 5, 2024
@kodiakhq kodiakhq bot merged commit 4ff3865 into main Dec 5, 2024
38 checks passed
@kodiakhq kodiakhq bot deleted the worker-usernotifurl branch December 5, 2024 13:05
thorhildurt pushed a commit that referenced this pull request Dec 11, 2024
…rl (#17140)

Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>