Expected behavior and actual behavior:
Harbor's performance degrades whenever a retention job runs: the database CPU usage immediately soars to almost 100%, and the retention job itself runs slowly.
Our Harbor instance holds a huge amount of image data, almost 37 TiB.
Steps to reproduce the problem:
Run a retention job on a project with many tags to delete.
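To reproduce without clicking through the portal, the same run can be started via Harbor's v2.0 REST API. A minimal sketch in Python (requests); the host, credentials, and retention policy ID below are placeholders (the policy ID for a project is shown in its tag retention settings):

# Minimal sketch: trigger a retention run through Harbor's v2.0 REST API.
# HARBOR, AUTH and RETENTION_ID are placeholders for this environment.
import requests

HARBOR = "https://harbor.example.com"   # placeholder host
AUTH = ("admin", "Harbor12345")         # placeholder credentials
RETENTION_ID = 1                        # placeholder policy ID

# dry_run=True only evaluates the rules; False performs the deletions
resp = requests.post(
    f"{HARBOR}/api/v2.0/retentions/{RETENTION_ID}/executions",
    json={"dry_run": False},
    auth=AUTH,
)
resp.raise_for_status()
print("retention execution started:", resp.status_code)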
Versions:
Please specify the versions of the following systems.
Installed via Helm, using external Redis, PostgreSQL, and EFS on AWS:
harbor version: v2.5.1
K8S version: v1.21.10
Helm: v3.7.1
Redis: v5.0.6, 6.38 GiB memory, up to 10 Gb network performance
PostgreSQL: v11.13, 4 vCPU + 16 GiB memory
Additional context:
Harbor config files:
values.yaml
expose:
  # Set how to expose the service. Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer"
  # and fill the information in the corresponding section
  type: ingress
  tls:
    # Enable TLS or not.
    # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
    # Note: if the "expose.type" is "ingress" and TLS is disabled,
    # the port must be included in the command when pulling/pushing images.
    # Refer to https://github.com/goharbor/harbor/issues/5291 for details.
    enabled: true
    # The source of the tls certificate. Set as "auto", "secret"
    # or "none" and fill the information in the corresponding section
    # 1) auto: generate the tls certificate automatically
    # 2) secret: read the tls certificate from the specified secret.
    # The tls certificate can be generated manually or by cert manager
    # 3) none: configure no tls certificate for the ingress. If the default
    # tls certificate is configured in the ingress controller, choose this option
    certSource: secret
    auto:
      # The common name used to generate the certificate, it's necessary
      # when the type isn't "ingress"
      commonName: ""
    secret:
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      secretName: "harbor.XXX"
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      # Only needed when the "expose.type" is "ingress".
      notarySecretName: "notary.XXX"
  ingress:
    hosts:
      core: harbor.XXX
      notary: notary.XXX
    # set to the type of ingress controller if it has specific requirements.
    # leave as `default` for most ingress controllers.
    # set to `gce` if using the GCE ingress controller
    # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
    controller: default
    ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress
    kubeVersionOverride: ""
    className: ""
    annotations:
      # note different ingress controllers may require a different ssl-redirect annotation
      # for Envoy, use ingress.kubernetes.io/force-ssl-redirect: "true" and remove the nginx lines below
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
      kubernetes.io/ingress.class: "nginx"
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    notary:
      # notary ingress-specific annotations
      annotations: {}
      # notary ingress-specific labels
      labels: {}
    harbor:
      # harbor ingress-specific annotations
      annotations: {}
      # harbor ingress-specific labels
      labels: {}
  clusterIP:
    # The name of ClusterIP service
    name: harbor
    # Annotations on the ClusterIP service
    annotations: {}
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled
      # is set to true
      notaryPort: 4443
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving HTTP
        port: 80
        # The node port Harbor listens on when serving HTTP
        nodePort: 30002
      https:
        # The service port Harbor listens on when serving HTTPS
        port: 443
        # The node port Harbor listens on when serving HTTPS
        nodePort: 30003
      # Only needed when notary.enabled is set to true
      notary:
        # The service port Notary listens on
        port: 4443
        # The node port Notary listens on
        nodePort: 30004
  loadBalancer:
    # The name of LoadBalancer service
    name: harbor
    # Set the IP if the LoadBalancer supports assigning IP
    IP: ""
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled
      # is set to true
      notaryPort: 4443
    annotations: {}
    sourceRanges: []

# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands showed on portal
# 2) populate the token service URL returned to docker/notary client
#
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be
# the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
# the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
# the IP address of k8s node
#
# If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: https://harbor.XXX

# The internal TLS used for harbor components secure communicating. In order to enable https
# in each components tls cert files need to provided in advance.
internalTLS:
  # If internal TLS enabled
  enabled: false
  # There are three ways to provide tls
  # 1) "auto" will generate cert automatically
  # 2) "manual" need provide cert file manually in following value
  # 3) "secret" internal certificates from secret
  certSource: "auto"
  # The content of trust ca, only available when `certSource` is "manual"
  trustCa: ""
  # core related cert configuration
  core:
    # secret name for core's tls certs
    secretName: ""
    # Content of core's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of core's TLS key file, only available when `certSource` is "manual"
    key: ""
  # jobservice related cert configuration
  jobservice:
    # secret name for jobservice's tls certs
    secretName: ""
    # Content of jobservice's TLS key file, only available when `certSource` is "manual"
    crt: ""
    # Content of jobservice's TLS key file, only available when `certSource` is "manual"
    key: ""
  # registry related cert configuration
  registry:
    # secret name for registry's tls certs
    secretName: ""
    # Content of registry's TLS key file, only available when `certSource` is "manual"
    crt: ""
    # Content of registry's TLS key file, only available when `certSource` is "manual"
    key: ""
  # portal related cert configuration
  portal:
    # secret name for portal's tls certs
    secretName: ""
    # Content of portal's TLS key file, only available when `certSource` is "manual"
    crt: ""
    # Content of portal's TLS key file, only available when `certSource` is "manual"
    key: ""
  # chartmuseum related cert configuration
  chartmuseum:
    # secret name for chartmuseum's tls certs
    secretName: ""
    # Content of chartmuseum's TLS key file, only available when `certSource` is "manual"
    crt: ""
    # Content of chartmuseum's TLS key file, only available when `certSource` is "manual"
    key: ""
  # trivy related cert configuration
  trivy:
    # secret name for trivy's tls certs
    secretName: ""
    # Content of trivy's TLS key file, only available when `certSource` is "manual"
    crt: ""
    # Content of trivy's TLS key file, only available when `certSource` is "manual"
    key: ""

ipFamily:
  # ipv6Enabled set to true if ipv6 is enabled in cluster, currently it affected the nginx related component
  ipv6:
    enabled: true
  # ipv4Enabled set to true if ipv4 is enabled in cluster, currently it affected the nginx related component
  ipv4:
    enabled: true

# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you already have existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  # (this does not apply for PVCs that are created for internal database
  # and redis components, i.e. they are never deleted automatically)
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used (the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteMany
      size: 1Gi # no effect for nfs
      efsName: <efs-address>
    chartmuseum:
      existingClaim: ""
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteMany
      size: 1Gi # no effect for nfs
      efsName: <efs-address>
    jobservice:
      existingClaim: ""
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
    trivy:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
      annotations: {}
  # Define which storage backend is used for registry and chartmuseum to store
  # images and charts. Refer to
  # https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
  # for the detail.
  imageChartStorage:
    # Specify whether to disable `redirect` for images and chart storage, for
    # backends which not supported it (such as using minio for `s3` storage type), please disable
    # it. To disable redirects, simply set `disableredirect` to `true` instead.
    # Refer to
    # https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
    # for the detail.
    disableredirect: false
    # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
    # The secret must contain keys named "ca.crt" which will be injected into the trust store
    # of registry's and chartmuseum's containers.
    # caBundleSecretName:
    # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
    # "oss" and fill the information needed in the corresponding section. The type
    # must be "filesystem" if you want to use persistent volumes for registry
    # and chartmuseum
    type: filesystem
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      #realm: core.windows.net
    gcs:
      bucket: bucketname
      # The base64 encoded json file which contains the key
      encodedkey: base64-encoded-json-key-file
      #rootdirectory: /gcs/object/name/prefix
      #chunksize: "5242880"
    s3:
      region: us-west-1
      bucket: bucketname
      #accesskey: awsaccesskey
      #secretkey: awssecretkey
      #regionendpoint: http://myobjects.local
      #encrypt: false
      #keyid: mykeyid
      #secure: true
      #skipverify: false
      #v4auth: true
      #chunksize: "5242880"
      #rootdirectory: /s3/object/name/prefix
      #storageclass: STANDARD
      #multipartcopychunksize: "33554432"
      #multipartcopymaxconcurrency: 100
      #multipartcopythresholdsize: "33554432"
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      #region: fr
      #tenant: tenantname
      #tenantid: tenantid
      #domain: domainname
      #domainid: domainid
      #trustid: trustid
      #insecureskipverify: false
      #chunksize: 5M
      #prefix:
      #secretkey: secretkey
      #accesskey: accesskey
      #authversion: 3
      #endpointtype: public
      #tempurlcontainerkey: false
      #tempurlmethods:
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      #endpoint: endpoint
      #internal: false
      #encrypt: false
      #secure: true
      #chunksize: 10M
      #rootdirectory: rootdirectory

imagePullPolicy: IfNotPresent

# Use this set to assign a list of default pullSecrets
imagePullSecrets:
# - name: docker-registry-secret
# - name: internal-registry-secret

# The update strategy for deployments with persistent volumes(jobservice, registry
# and chartmuseum): "RollingUpdate" or "Recreate"
# Set it as "Recreate" when "RWM" for volumes isn't supported
updateStrategy:
  type: RollingUpdate

# debug, info, warning, error or fatal
logLevel: info

# The initial password of Harbor admin. Change it from portal after launching Harbor
harborAdminPassword: "Harbor12345"

# The name of the secret which contains key named "ca.crt". Setting this enables the
# download link on portal to download the CA certificate when the certificate isn't
# generated automatically
caSecretName: ""

# The secret key used for encryption. Must be a string of 16 chars.
secretKey: "not-a-secure-key"

# The proxy settings for updating trivy vulnerabilities from the Internet and replicating
# artifacts from/to the registries that cannot be reached directly
proxy:
  httpProxy:
  httpsProxy:
  noProxy: 127.0.0.1,localhost,.local,.internal
  components:
    - core
    - jobservice
    - trivy

# Run the migration job via helm hook
enableMigrateHelmHook: false

# The custom ca bundle secret, the secret must contain key named "ca.crt"
# which will be injected into the trust store for chartmuseum, core, jobservice, registry, trivy components
# caBundleSecretName: ""

## UAA Authentication Options
# If you're using UAA for authentication behind a self-signed
# certificate you will need to provide the CA Cert.
# Set uaaSecretName below to provide a pre-created secret that
# contains a base64 encoded CA Certificate named `ca.crt`.
# uaaSecretName:

# If service exposed via "ingress", the Nginx will not be used
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v2.5.1
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:

portal:
  image:
    repository: goharbor/harbor-portal
    tag: v2.5.1
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:

core:
  image:
    repository: goharbor/harbor-core
    tag: v2.5.1
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  ## Startup probe values
  startupProbe:
    enabled: true
    initialDelaySeconds: 10
  resources:
    requests:
      memory: 300Mi
      cpu: 300m
    limits:
      memory: 4G
      cpu: 2
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used when core server communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # Fill the name of a kubernetes secret if you want to use your own
  # TLS certificate and private key for token encryption/decryption.
  # The secret must contain keys named:
  # "tls.crt" - the certificate
  # "tls.key" - the private key
  # The default key pair will be used if it isn't set
  secretName: ""
  # The XSRF key. Will be generated automatically if it isn't specified
  xsrfKey: ""
  ## The priority class to run the pod as
  priorityClassName:
  # The time duration for async update artifact pull_time and repository
  # pull_count, the unit is second. Will be 10 seconds if it isn't set.
  # eg. artifactPullAsyncFlushDuration: 10
  artifactPullAsyncFlushDuration:

jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v2.5.1
  replicas: 1
  revisionHistoryLimit: 10
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  maxJobWorkers: 20
  # The logger for jobs: "file", "database" or "stdout"
  jobLoggers:
    - file
    # - database
    # - stdout
  # The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
  loggerSweeperDuration: 14 #days
  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used when job service communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  ## The priority class to run the pod as
  priorityClassName:

registry:
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.5.1
    resources:
      requests:
        memory: 60G
        cpu: 500m
      limits:
        memory: 60G
        cpu: 2
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v2.5.1
    # resources:
    #   requests:
    #     memory: 256Mi
    #     cpu: 100m
  replicas: 2
  revisionHistoryLimit: 10
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:
  # Secret is used to secure the upload state from client
  # and registry storage backend.
  # See: https://github.com/docker/distribution/blob/master/docs/configuration.md#http
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL.
  relativeurls: false
  credentials:
    username: "harbor_registry_user"
    password: "harbor_registry_password"
    # Login and password in htpasswd string format. Excludes `registry.credentials.username` and `registry.credentials.password`. May come in handy when integrating with tools like argocd or flux. This allows the same line to be generated each time the template is rendered, instead of the `htpasswd` function from helm, which generates different lines each time because of the salt.
    htpasswd: "XXX"
  middleware:
    enabled: false
    type: cloudFront
    cloudFront:
      baseurl: example.cloudfront.net
      keypairid: KEYPAIRID
      duration: 3000s
      ipfilteredby: none
      # The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key
      # that allows access to CloudFront
      privateKeySecret: "my-secret"
  # enable purge _upload directories
  upload_purging:
    enabled: true
    # remove files in _upload directories which exist for a period of time, default is one week.
    age: 168h
    # the interval of the purge operations
    interval: 24h
    dryrun: false

chartmuseum:
  enabled: true
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  # Harbor defaults ChartMuseum to returning relative urls, if you want using absolute url you should enable it by change the following value to 'true'
  absoluteUrl: false
  image:
    repository: harbor.XXX/eureka/chartmuseum-code
    tag: 0.0.15-hotfix
  replicas: 1
  resources:
    requests:
      memory: 1G
      cpu: 1
    limits:
      memory: 4G
      cpu: 2
  revisionHistoryLimit: 10
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:
  ## limit the number of parallel indexers
  indexLimit: 0

trivy:
  # enabled the flag to enable Trivy scanner
  enabled: false
  image:
    # repository the repository for Trivy adapter image
    repository: goharbor/trivy-adapter-photon
    # tag the tag for Trivy adapter image
    tag: v2.5.1
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  # replicas the number of Pod replicas
  replicas: 1
  # debugMode the flag to enable Trivy debug mode with more verbose scanning log
  debugMode: false
  # vulnType a comma-separated list of vulnerability types. Possible values are `os` and `library`.
  vulnType: "os,library"
  # severity a comma-separated list of severities to be checked
  severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
  # ignoreUnfixed the flag to display only fixed vulnerabilities
  ignoreUnfixed: false
  # insecure the flag to skip verifying registry certificate
  insecure: false
  # gitHubToken the GitHub access token to download Trivy DB
  #
  # Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
  # It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
  # in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). In addition, the database contains the update
  # timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.
  # Currently, the database is updated every 12 hours and published as a new release to GitHub.
  #
  # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  # https://developer.github.com/v3/#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  gitHubToken: ""
  # skipUpdate the flag to disable Trivy DB downloads from GitHub
  #
  # You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
  # `/home/scanner/.cache/trivy/db/trivy.db` path.
  skipUpdate: false
  # The offlineScan option prevents Trivy from sending API requests to identify dependencies.
  #
  # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
  # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
  # exist in the local repositories. It means a number of detected vulnerabilities might be fewer in offline mode.
  # It would work if all the dependencies are in local.
  # This option doesn't affect DB download. You need to specify skipUpdate as well as offlineScan in an air-gapped environment.
  offlineScan: false
  # The duration to wait for scan completion
  timeout: 5m0s
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1
      memory: 1Gi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:

notary:
  enabled: true
  server:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/notary-server-photon
      tag: v2.5.1
    replicas: 1
    # resources:
    #   requests:
    #     memory: 256Mi
    #     cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## Additional deployment annotations
    podAnnotations: {}
    ## The priority class to run the pod as
    priorityClassName:
  signer:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/notary-signer-photon
      tag: v2.5.1
    replicas: 1
    # resources:
    #   requests:
    #     memory: 256Mi
    #     cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## Additional deployment annotations
    podAnnotations: {}
    ## The priority class to run the pod as
    priorityClassName:
  # Fill the name of a kubernetes secret if you want to use your own
  # TLS certificate authority, certificate and private key for notary
  # communications.
  # The secret must contain keys named ca.crt, tls.crt and tls.key that
  # contain the CA, certificate and private key.
  # They will be generated if not set.
  secretName: ""

database:
  # if external database is used, set "type" to "external"
  # and fill the connection informations in "external" section
  type: external
  internal:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/harbor-db
      tag: v2.5.1
    # The initial superuser password for internal database
    password: "changeit"
    # The size limit for Shared memory, pgSQL use it for shared_buffer
    # More details see:
    # https://github.com/goharbor/harbor/issues/15034
    shmSizeLimit: 512Mi
    # resources:
    #   requests:
    #     memory: 256Mi
    #     cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## The priority class to run the pod as
    priorityClassName:
    initContainer:
      migrator: {}
      # resources:
      #   requests:
      #     memory: 128Mi
      #     cpu: 100m
      permissions: {}
      # resources:
      #   requests:
      #     memory: 128Mi
      #     cpu: 100m
  external:
    host: "rds on AWS"
    port: "5432"
    username: "postgres"
    coreDatabase: "registry"
    notaryServerDatabase: "notaryserver"
    notarySignerDatabase: "notarysigner"
    # "disable" - No SSL
    # "require" - Always SSL (skip verification)
    # "verify-ca" - Always SSL (verify that the certificate presented by the
    # server was signed by a trusted CA)
    # "verify-full" - Always SSL (verify that the certification presented by the
    # server was signed by a trusted CA and the server host name matches the one
    # in the certificate)
    sslmode: "disable"
  # The maximum number of connections in the idle connection pool per pod (core+exporter).
  # If it <=0, no idle connections are retained.
  maxIdleConns: 100
  # The maximum number of open connections to the database per pod (core+exporter).
  # If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgre of harbor.
  maxOpenConns: 900
  ## Additional deployment annotations
  podAnnotations: {}
  ## used AWS EFS need run container with ROOT user
  securityContext:
    runAsUser: 0

redis:
  # if external Redis is used, set "type" to "external"
  # and fill the connection informations in "external" section
  type: external
  internal:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/redis-photon
      tag: v2.5.1
    # resources:
    #   requests:
    #     memory: 256Mi
    #     cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## The priority class to run the pod as
    priorityClassName:
  external:
    # support redis, redis+sentinel
    # addr for redis: <host_redis>:<port_redis>
    # addr for redis+sentinel: <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
    addr: "redis on aws"
    # The name of the set of Redis instances to monitor, it must be set to support redis+sentinel
    sentinelMasterSet: ""
    # The "coreDatabaseIndex" must be "0" as the library Harbor
    # used doesn't support configuring it
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    chartmuseumDatabaseIndex: "3"
    trivyAdapterIndex: "5"
    password: ""
  ## Additional deployment annotations
  podAnnotations: {}

exporter:
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 100m
  podAnnotations: {}
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  image:
    repository: goharbor/harbor-exporter
    tag: v2.5.1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  cacheDuration: 23
  cacheCleanInterval: 14400
  ## The priority class to run the pod as
  priorityClassName:

metrics:
  enabled: true
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  jobservice:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001
  ## Create prometheus serviceMonitor to scrape harbor metrics.
  ## This requires the monitoring.coreos.com/v1 CRD. Please see
  ## https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md
  ##
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    # Scrape interval. If not set, the Prometheus default scrape interval is used.
    interval: ""
    # Metric relabel configs to apply to samples before ingestion.
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]
    # Relabel configs to apply to samples before ingestion.
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

trace:
  enabled: false
  # trace provider: jaeger or otel
  # jaeger should be 1.26+
  provider: jaeger
  # set sample_rate to 1 if you wanna sampling 100% of trace data; set 0.5 if you wanna sampling 50% of trace data, and so forth
  sample_rate: 1
  # namespace used to differentiate different harbor services
  # namespace:
  # attributes is a key value dict contains user defined attributes used to initialize trace provider
  # attributes:
  #   application: harbor
  jaeger:
    # jaeger supports two modes:
    # collector mode(uncomment endpoint and uncomment username, password if needed)
    # agent mode(uncomment agent_host and agent_port)
    endpoint: http://hostname:14268/api/traces
    # username:
    # password:
    # agent_host: hostname
    # export trace data by jaeger.thrift in compact mode
    # agent_port: 6831
  otel:
    endpoint: hostname:4318
    url_path: /v1/traces
    compression: false
    insecure: true
    timeout: 10s
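Two settings in the config above bear directly on the reported DB CPU spike: database.maxOpenConns: 900 lets each core/exporter pod open up to 900 connections to the external PostgreSQL, while the RDS instance here has only 4 vCPUs to serve them as the retention job fans out tag queries and deletes. A minimal monitoring sketch (Python with psycopg2; the host and credentials are placeholders for this environment) to sample connection states on the Harbor core database while the job runs:

# Minimal sketch, not part of Harbor: sample connection states on the
# external PostgreSQL during a retention run. Host/credentials are
# placeholders; requires psycopg2 (pip install psycopg2-binary).
import time
import psycopg2

conn = psycopg2.connect(
    host="rds-host.example.com",  # placeholder RDS endpoint
    port=5432,
    user="postgres",
    password="secret",            # placeholder
    dbname="registry",            # Harbor core database from values.yaml
)
conn.autocommit = True

with conn.cursor() as cur:
    for _ in range(10):           # sample every 5 seconds, ~50 seconds total
        cur.execute(
            "SELECT state, count(*) FROM pg_stat_activity "
            "WHERE datname = current_database() GROUP BY state"
        )
        print(dict(cur.fetchall()))
        time.sleep(5)
conn.close()

If most sampled connections sit in the "active" state during the run, the database is CPU-bound on query work rather than idling on the connection pool.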
Log files: you can get them by packaging the contents of /var/log/harbor/.
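For reference, an equivalent of packaging that directory (a small sketch; assumes the path is readable from where it runs):

# bundle /var/log/harbor/ into harbor-logs.tar.gz for attaching to the issue
import shutil
shutil.make_archive("harbor-logs", "gztar", "/var/log/harbor")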
Metrics of DB
Metrics of Harbor Core and Jobservice
Logs of retention job
The retention job logs contain many occurrences of the error "the object has been modified; please apply your changes to the latest version and try again".
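That message is the standard optimistic-locking conflict: a writer reads a record at one version, a concurrent writer commits first, and the stale update is rejected so the caller must re-read and retry. A generic illustration of the pattern (hypothetical store helpers, not Harbor's actual code):

# Generic illustration of the optimistic-lock retry pattern behind this
# error message; "store" and its methods are hypothetical helpers.
class ConflictError(Exception):
    """Raised when the stored version no longer matches the one we read."""

def update_with_retry(store, key, mutate, max_attempts=5):
    for attempt in range(max_attempts):
        obj, version = store.get(key)          # read current value + version
        try:
            # compare-and-swap: succeeds only if the version is unchanged
            store.put(key, mutate(obj), expected_version=version)
            return
        except ConflictError:
            continue                           # another writer won the race; re-read
    raise ConflictError(f"gave up after {max_attempts} attempts on {key}")

Under heavy contention (many workers deleting tags from the same project), these conflicts repeat, which would amplify both the retry traffic and the database load.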
This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.