kube-prometheus-stack stuck in OutOfSync #11074

Comments
I got the same issue, and I verified it works without `ServerSideApply=true`. Not sure if it should be tracked in a separate issue, but I also had a problem with Loki when using it.
Thank you very much.
Yes, I think so. At least I want to use `ServerSideApply` because it fixes other problems, like being able to apply large CRDs.
I have just reinstalled from scratch; unfortunately:

```
one or more objects failed to apply, reason: CustomResourceDefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes, resource mapping not found for name: "system-kube-prometheus-sta-prometheus" namespace: "system" from "/dev/shm/3789263989": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1" ensure CRDs are installed first. Retrying attempt #5 at 2:58PM.
```
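For context, the "Too long" part of that error comes from client-side apply, which stores the entire object in the `kubectl.kubernetes.io/last-applied-configuration` annotation; the prometheuses CRD is large enough to exceed the 262144-byte annotation limit. Server-side apply tracks field ownership in `managedFields` instead, so the same manifest goes through. A rough sketch of the difference (file name assumed for illustration):

```sh
# Client-side apply writes the whole CRD into the
# last-applied-configuration annotation -> "Too long" error.
kubectl apply -f prometheus-crd.yaml

# Server-side apply records ownership in managedFields instead,
# so no oversized annotation is created.
kubectl apply --server-side -f prometheus-crd.yaml
```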
I am not sure, but I'd like to understand whether this is a problem with Helm charts only, or if it can be reproduced with Kustomize as well. If so, can someone provide a Kustomize-based example?
@Cowboy-coder Can you provide the full YAML of your live resource? Please include the `managedFields`.
@leoluz I have the same issue / background. Currently syncing prometheus-operator-kubelet.

Live manifest (after recreating it with ServerSideApply=true on 2.5.0):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
creationTimestamp: "2022-10-27T16:03:32Z"
generation: 1
labels:
app: prometheus-operator-kubelet
app.kubernetes.io/instance: prometheus-operator
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: prometheus-operator
app.kubernetes.io/version: 40.5.0
chart: kube-prometheus-stack-40.5.0
heritage: Helm
release: prometheus-operator
name: prometheus-operator-kubelet
namespace: services
resourceVersion: "97862637"
uid: f8e4315e-ff4c-46ae-b86b-9a3c51cfd9c1
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
port: https-metrics
relabelings:
- action: replace
sourceLabels:
- __metrics_path__
targetLabel: metrics_path
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
metricRelabelings:
- action: drop
regex: container_cpu_(cfs_throttled_seconds_total|load_average_10s|system_seconds_total|user_seconds_total)
sourceLabels:
- __name__
- action: drop
regex: container_fs_(io_current|io_time_seconds_total|io_time_weighted_seconds_total|reads_merged_total|sector_reads_total|sector_writes_total|writes_merged_total)
sourceLabels:
- __name__
- action: drop
regex: container_memory_(mapped_file|swap)
sourceLabels:
- __name__
- action: drop
regex: container_(file_descriptors|tasks_state|threads_max)
sourceLabels:
- __name__
- action: drop
regex: container_spec.*
sourceLabels:
- __name__
- action: drop
regex: .+;
sourceLabels:
- id
- pod
path: /metrics/cadvisor
port: https-metrics
relabelings:
- action: replace
sourceLabels:
- __metrics_path__
targetLabel: metrics_path
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
path: /metrics/probes
port: https-metrics
relabelings:
- action: replace
sourceLabels:
- __metrics_path__
targetLabel: metrics_path
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
jobLabel: k8s-app
namespaceSelector:
matchNames:
- kube-system
selector:
matchLabels:
app.kubernetes.io/name: kubelet
      k8s-app: kubelet
```

Desired resource definition (generated locally via `helm template`, but synced via Kustomize by Argo CD):

```yaml
---
# Source: kube-prometheus-stack/templates/exporters/kubelet/servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: prometheus-operator-kubelet
namespace: services
labels:
app: prometheus-operator-kubelet
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: prometheus-operator
app.kubernetes.io/version: "40.5.0"
app.kubernetes.io/part-of: prometheus-operator
chart: kube-prometheus-stack-40.5.0
release: "prometheus-operator"
heritage: "Helm"
spec:
endpoints:
- port: https-metrics
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
relabelings:
- sourceLabels:
- __metrics_path__
targetLabel: metrics_path
- port: https-metrics
scheme: https
path: /metrics/cadvisor
honorLabels: true
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
metricRelabelings:
- action: drop
regex: container_cpu_(cfs_throttled_seconds_total|load_average_10s|system_seconds_total|user_seconds_total)
sourceLabels:
- __name__
- action: drop
regex: container_fs_(io_current|io_time_seconds_total|io_time_weighted_seconds_total|reads_merged_total|sector_reads_total|sector_writes_total|writes_merged_total)
sourceLabels:
- __name__
- action: drop
regex: container_memory_(mapped_file|swap)
sourceLabels:
- __name__
- action: drop
regex: container_(file_descriptors|tasks_state|threads_max)
sourceLabels:
- __name__
- action: drop
regex: container_spec.*
sourceLabels:
- __name__
- action: drop
regex: .+;
sourceLabels:
- id
- pod
relabelings:
- sourceLabels:
- __metrics_path__
targetLabel: metrics_path
- port: https-metrics
scheme: https
path: /metrics/probes
honorLabels: true
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
relabelings:
- sourceLabels:
- __metrics_path__
targetLabel: metrics_path
jobLabel: k8s-app
namespaceSelector:
matchNames:
- kube-system
selector:
matchLabels:
app.kubernetes.io/name: kubelet
      k8s-app: kubelet
```

I suspect the default value from the CRD plays a role here: the live manifest has `action: replace` in each `relabelings` entry (the CRD default), while the desired manifest omits it.
I also saw this old issue #4126, related to what looks like the same problem. Apart from that, I also now see another issue using the Loki Helm chart in a ServiceMonitor; probably the same issue as above.

Live for loki servicemonitor
Desired for loki servicemonitor
Currently, as a workaround, I just added the sync-options annotation to the dashboards that were too long and to the resources that needed it.
@hsharrison This is actually expected. There is currently a limitation in Argo CD, only for CRDs, that prevents default values from being considered during diff calculation. This affects the Argo CD Application status: it thinks the app is out-of-sync when in fact it isn't. All of the Application's resources in the cluster are correctly applied; the limitation only affects the Argo CD diff logic for CRDs. While there is no fix for this CRD diff limitation yet, the suggested workaround is configuring `ignoreDifferences`.
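As a minimal sketch of that workaround, assuming the defaulted field is the `relabelings` `action` seen in the manifests above (the full Application example appears later in this thread):

```yaml
ignoreDifferences:
  - group: monitoring.coreos.com
    kind: ServiceMonitor
    jqPathExpressions:
      - .spec.endpoints[]?.relabelings[]?.action
```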
I've resolved the issue by basically using the same approach as mentioned here; I just replaced `Replace` with `ServerSideApply`.
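Presumably that means a per-resource annotation along these lines (a sketch; the exact patch this commenter used wasn't shown):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: prometheuses.monitoring.coreos.com
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true
```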
I installed it this way:

```yaml
helmCharts:
- name: kube-prometheus-stack
repo: https://prometheus-community.github.io/helm-charts
version: 41.7.0
releaseName: kube-prometheus-stack
namespace: kube-prometheus-stack
includeCRDs: true
valuesFile: values.yml
patches:
- patchAnnotationTooLong.yml
```

Where `patchAnnotationTooLong.yml` is:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
argocd.argoproj.io/sync-options: Replace=true
  name: prometheuses.monitoring.coreos.com
```

It fixes the "annotation too long" error because `Replace=true` makes Argo CD use `kubectl replace` semantics, so the full object is never stored in the last-applied-configuration annotation.
As mentioned above, there are a few approaches that can be used to address this issue in Argo CD.
All of the approaches above will fix the problem but require some amount of work. In Argo CD 2.5 you can now use ServerSideApply to avoid the error with big CRDs while syncing. However, Argo CD is unable to consider CRD default values during diff calculation, which causes it to show resources as out-of-sync when in fact they aren't. To address this with the minimal amount of work, users can leverage `ignoreDifferences`.

To deploy the Prometheus stack with Argo CD you can apply this Application:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: monitoring
namespace: argocd
spec:
project: default
source:
chart: kube-prometheus-stack
repoURL: https://prometheus-community.github.io/helm-charts
targetRevision: 41.6.1
destination:
namespace: monitoring
server: https://kubernetes.default.svc
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- ServerSideApply=true
- CreateNamespace=true
ignoreDifferences:
- group: monitoring.coreos.com
kind: ServiceMonitor
jqPathExpressions:
    - .spec.endpoints[]?.relabelings[]?.action
```

With this approach users don't need to create an additional project to patch CRDs; everything can be configured from within the Application manifest. Ideally, Argo CD should be able to retrieve all schemas from the target cluster with the proper structure so they can be used to consider CRD default values during diff calculation. I created the following issue to track this enhancement: #11139. Please vote for it if you want to see it implemented. Closing this issue for now.
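To confirm the workaround took effect, something like the following should show no remaining drift (app name taken from the example Application above):

```sh
# Diff desired vs. live state; with the ignoreDifferences entry in
# place, the defaulted relabelings fields should no longer appear.
argocd app diff monitoring

# The app should now report a "Synced" sync status.
argocd app get monitoring
```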
But how come this works without any diff when using client-side apply? Does client-side apply have some special handling for these issues that server-side apply doesn't?
@Cowboy-coder Can you confirm whether this diff is in a StatefulSet? If so, this is another edge case with StatefulSets, and it is better tracked as a separate issue.
@leoluz Yes, this is a StatefulSet. Do you want me to create the issue?
@Cowboy-coder Yes, please. Just copy/paste the StatefulSet details from your previous comment into the new ticket.
The `ignoreDifferences` solution above worked for me, except I had to specify something different to match.
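The exact expression was lost from this comment; purely as a hypothetical illustration of the shape such a variant takes (the `metricRelabelings` path here is an assumption, not the commenter's actual field):

```yaml
ignoreDifferences:
  - group: monitoring.coreos.com
    kind: ServiceMonitor
    jqPathExpressions:
      # Hypothetical: metricRelabelings uses the same RelabelConfig
      # type as relabelings, so its action field is also defaulted.
      - .spec.endpoints[]?.metricRelabelings[]?.action
```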
The issue also occurs with relabeling via the `remoteWrite.writeRelabelConfigs` section:

```yaml
ignoreDifferences:
- group: monitoring.coreos.com
kind: Prometheus
jqPathExpressions:
    - .spec.remoteWrite[]?.writeRelabelConfigs[]?.action
```
Checklist:

- argocd version

Describe the bug

Hi,
I am deploying the kube-prometheus-stack Helm chart using Argo CD. It creates all the resources, but the application stays in "Current sync status: OutOfSync" due to one resource. If I click the resource in the Argo CD web UI, under Summary and Diff I get:

Expected behavior

Current sync status: Synced

Version

Logs

From the argocd-application-controller-0 pod it shows:

Thanks