
incorrect count of components (found: 4; expected: 3), this is a bug! #12202

Closed
ClementGautier opened this issue Apr 25, 2022 · 3 comments
Labels
bug c-di Internal Customer Reference regression

Comments


Description

What happened:
Teleport won't start after a fresh reinstall; it reports that the number of components doesn't match the expected state, and the pod never reaches a running state as a result. I don't know how to give you more details, since I didn't use any specific flags other than setting the listener mode to multiplex.

What you expected to happen:
Teleport cluster to start properly

Reproduction Steps

helm install --set proxyListenerMode=multiplex
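
For reference, the same setting can be expressed in a values file instead of a `--set` flag. The release name, chart name, and cluster name below are illustrative assumptions, not taken from this report:

```yaml
# values.yaml — minimal values reproducing the reported setup
# clusterName is a placeholder; proxyListenerMode=multiplex is the
# setting the reporter identifies as the trigger.
clusterName: teleport.example.com
proxyListenerMode: multiplex
```

A fuller invocation would then be along the lines of `helm install teleport-cluster teleport/teleport-cluster -f values.yaml` (chart and release names assumed).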

Server Details

  • Teleport version (run teleport version): 9.1.1
  • Server OS (e.g. from /etc/os-release): ubuntu
  • Where are you running Teleport? (e.g. AWS, GCP, Dedicated Hardware): gcp

Debug Logs

2022-04-25T09:40:50Z WARN             both kubernetes_service and proxy_service are enabled, but proxy_service doesn't set kube_listen_addr; consider setting kube_listen_addr on proxy_service, to handle incoming Kubernetes requests config/configuration.go:1036
2022-04-25T09:40:50Z INFO             Generating new host UUID: 94a461ae-8722-4a78-8598-73cca8875d90. service/service.go:689
2022-04-25T09:40:50Z INFO [PROC:1]    Service diag is creating new listener on 0.0.0.0:3000. service/signals.go:213
2022-04-25T09:40:50Z INFO [DIAG:1]    Starting diagnostic service on 0.0.0.0:3000. service/service.go:2301
2022-04-25T09:40:50Z INFO [AUTH]      Updating cluster networking configuration: Kind:"cluster_networking_config" Version:"v2" Metadata:<Name:"cluster-networking-config" Namespace:"default" Labels:<key:"teleport.dev/origin" value:"config-file" > > Spec:<KeepAliveInterval:300000000000 KeepAliveCountMax:3 ProxyListenerMode:Multiplex > . auth/init.go:434
2022-04-25T09:40:50Z INFO [AUTH]      Updating session recording configuration: Kind:"session_recording_config" Version:"v2" Metadata:<Name:"session-recording-config" Namespace:"default" Labels:<key:"teleport.dev/origin" value:"config-file" > > Spec:<Mode:"node" ProxyChecksHostKeys:"\010\001" > . auth/init.go:452
2022-04-25T09:40:50Z INFO [AUTH]      Updating cluster auth preference: AuthPreference(Type="local",SecondFactor="otp"). auth/init.go:416
2022-04-25T09:40:50Z INFO [AUTH]      Updating cluster configuration: StaticTokens([]). auth/init.go:309
2022-04-25T09:40:50Z INFO [AUTH]      Created namespace: "default". auth/init.go:316
2022-04-25T09:40:50Z INFO [AUTH]      First start: generating host certificate authority. auth/init.go:326
2022-04-25T09:40:50Z INFO [AUTH]      First start: generating user certificate authority. auth/init.go:326
2022-04-25T09:40:51Z INFO [AUTH]      First start: generating jwt certificate authority. auth/init.go:326
2022-04-25T09:40:51Z INFO [AUTH]      Auth server is running periodic operations. auth/init.go:394
2022-04-25T09:40:51Z INFO [CA]        Generating TLS certificate {0x84debf0 0xc000847170 1.3.9999.1.7=#131167616e64616c662e6f776b696e2e636f6d,CN=94a461ae-8722-4a78-8598-73cca8875d90.xxxx.xxxxx.xxx,O=Admin,POSTALCODE=null,STREET= 2032-04-22 09:40:51.833284301 +0000 UTC [teleport-cluster-5ccd97748c-zzkc2 *.teleport.cluster.local teleport.cluster.local] [] 5 []}. common_name:94a461ae-8722-4a78-8598-73cca8875d90.xxxx.xxxxx.xxx dns_names:[teleport-cluster-5ccd97748c-zzkc2 *.teleport.cluster.local teleport.cluster.local] locality:[] not_after:2032-04-22 09:40:51.833284301 +0000 UTC org:[Admin] org_unit:[] tlsca/ca.go:800
2022-04-25T09:40:51Z INFO [PROC:1]    Admin has obtained credentials to connect to the cluster. service/connect.go:429
2022-04-25T09:40:51Z INFO [PROC:1]    The process successfully wrote the credentials and state of Admin to the disk. service/connect.go:470
2022-04-25T09:40:51Z INFO [AUTH:1:CA] Cache "auth" first init succeeded. cache/cache.go:713
2022-04-25T09:40:51Z INFO [PROC:1]    Service auth is creating new listener on 0.0.0.0:3025. service/signals.go:213
2022-04-25T09:40:51Z INFO [AUTH:1]    Starting Auth service with PROXY protocol support. service/service.go:1341
2022-04-25T09:40:51Z WARN [AUTH:1]    Configuration setting auth_service/advertise_ip is not set. guessing 10.252.0.67:3025. service/service.go:1421
2022-04-25T09:40:51Z INFO [AUTH:1]    Auth service 9.1.1:v9.1.1-0-gb0129ff5e is starting on 10.252.0.67:3025. utils/cli.go:275
2022-04-25T09:40:52Z INFO [CA]        Generating TLS certificate {0x84debf0 0xc0010c41b0 1.3.9999.1.7=#131167616e64616c662e6f776b696e2e636f6d,CN=94a461ae-8722-4a78-8598-73cca8875d90.xxxx.xxxxx.xxx,O=Kube,POSTALCODE=null,STREET= 2032-04-22 09:40:52.031248519 +0000 UTC [teleport-cluster-5ccd97748c-zzkc2 localhost 127.0.0.1 ::1 remote.kube.proxy.teleport.cluster.local *.teleport.cluster.local teleport.cluster.local] [] 5 []}. common_name:94a461ae-8722-4a78-8598-73cca8875d90.xxxx.xxxxx.xxx dns_names:[teleport-cluster-5ccd97748c-zzkc2 localhost 127.0.0.1 ::1 remote.kube.proxy.teleport.cluster.local *.teleport.cluster.local teleport.cluster.local] locality:[] not_after:2032-04-22 09:40:52.031248519 +0000 UTC org:[Kube] org_unit:[] tlsca/ca.go:800
2022-04-25T09:40:52Z INFO [PROC:1]    Kube has obtained credentials to connect to the cluster. service/connect.go:429
2022-04-25T09:40:52Z INFO [PROC:1]    The process successfully wrote the credentials and state of Kube to the disk. service/connect.go:470
2022-04-25T09:40:52Z INFO [PROC:1]    Kube: features loaded from auth server: Kubernetes:true App:true DB:true Desktop:true  service/connect.go:76
2022-04-25T09:40:52Z INFO [KUBERNETE] Cache "kube" first init succeeded. cache/cache.go:713
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload/sessions. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload/sessions/default. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload/streaming. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload/streaming/default. service/service.go:2083
2022-04-25T09:40:52Z INFO [PROC:1]    Service kubernetes is creating new listener on 0.0.0.0:443. service/signals.go:213
2022-04-25T09:40:52Z INFO [KUBERNETE] Starting Kube service on [::]:443. service/kubernetes.go:269
2022-04-25T09:40:52Z INFO [KUBERNETE] Kubernetes service 9.1.1:v9.1.1-0-gb0129ff5e is starting on [::]:443. utils/cli.go:275
2022-04-25T09:40:52Z INFO [CA]        Generating TLS certificate {0x84debf0 0xc000214580 1.3.9999.1.7=#131167616e64616c662e6f776b696e2e636f6d,CN=94a461ae-8722-4a78-8598-73cca8875d90.xxxx.xxxxx.xxx,O=Proxy,POSTALCODE=null,STREET= 2032-04-22 09:40:52.544272801 +0000 UTC [teleport-cluster-5ccd97748c-zzkc2 xxxx.xxxxx.xxx 127.0.0.1 localhost ::1 remote.kube.proxy.teleport.cluster.local *.teleport.cluster.local teleport.cluster.local *.xxxx.xxxxx.xxx] [] 5 []}. common_name:94a461ae-8722-4a78-8598-73cca8875d90.xxxx.xxxxx.xxx dns_names:[teleport-cluster-5ccd97748c-zzkc2 xxxx.xxxxx.xxx 127.0.0.1 localhost ::1 remote.kube.proxy.teleport.cluster.local *.teleport.cluster.local teleport.cluster.local *.xxxx.xxxxx.xxx] locality:[] not_after:2032-04-22 09:40:52.544272801 +0000 UTC org:[Proxy] org_unit:[] tlsca/ca.go:800
2022-04-25T09:40:52Z INFO [PROC:1]    Proxy has obtained credentials to connect to the cluster. service/connect.go:429
2022-04-25T09:40:52Z INFO [PROC:1]    The process successfully wrote the credentials and state of Proxy to the disk. service/connect.go:470
2022-04-25T09:40:52Z INFO [PROC:1]    Proxy: features loaded from auth server: Kubernetes:true App:true DB:true Desktop:true  service/connect.go:76
2022-04-25T09:40:52Z INFO [PROXY:1:C] Cache "proxy" first init succeeded. cache/cache.go:713
2022-04-25T09:40:52Z INFO [PROC:1]    Service proxy:web is creating new listener on 0.0.0.0:3080. service/signals.go:213
2022-04-25T09:40:52Z INFO [REVERSE:G] Cache "remote-proxy" first init succeeded. cache/cache.go:713
2022-04-25T09:40:52Z INFO             Managing certs using ACME https://datatracker.ietf.org/doc/rfc8555/. service/service.go:3305
2022-04-25T09:40:52Z INFO [PROXY:SER] Reverse tunnel service 9.1.1:v9.1.1-0-gb0129ff5e is starting on . utils/cli.go:275
2022-04-25T09:40:52Z INFO [PROXY:SER] Starting 9.1.1:v9.1.1-0-gb0129ff5e on  using in-memory cache service/service.go:2838
2022-04-25T09:40:52Z WARN [WEB]       Invalid SSH proxy address "", will use default port 3023. error:[missing port in address] web/apiserver.go:261
2022-04-25T09:40:52Z INFO [PROXY:PRO] Starting Kube proxy on . service/service.go:3060
2022-04-25T09:40:52Z INFO [PROXY:AGE] Starting reverse tunnel agent pool. service/service.go:3001
2022-04-25T09:40:52Z INFO [DB:SERVIC] Starting Postgres proxy server on 0.0.0.0:3080. service/service.go:3129
2022-04-25T09:40:52Z INFO [DB:SERVIC] Starting Database TLS proxy server on 0.0.0.0:3080. service/service.go:3147
2022-04-25T09:40:52Z INFO [PROXY:SER] Web proxy service 9.1.1:v9.1.1-0-gb0129ff5e is starting on 0.0.0.0:3080. utils/cli.go:275
2022-04-25T09:40:52Z INFO [PROXY:SER] Web proxy service 9.1.1:v9.1.1-0-gb0129ff5e is starting on 0.0.0.0:3080. service/service.go:2927
2022-04-25T09:40:52Z INFO [PROXY:SER] SSH proxy service 9.1.1:v9.1.1-0-gb0129ff5e is starting on . utils/cli.go:275
2022-04-25T09:40:52Z INFO [PROXY:SER] SSH proxy service 9.1.1:v9.1.1-0-gb0129ff5e is starting on {  } service/service.go:2968
2022-04-25T09:40:52Z INFO [PROC:1]    The new service has started successfully. Starting syncing rotation status with period 10m0s. service/connect.go:482
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload/sessions. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload/sessions/default. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload/streaming. service/service.go:2083
2022-04-25T09:40:52Z INFO [AUDIT:1]   Creating directory /var/lib/teleport/log/upload/streaming/default. service/service.go:2083
2022-04-25T09:40:52Z INFO [PROXY:SER] Starting proxy gRPC server on [::]:3080. service/service.go:3181
2022-04-25T09:40:52Z INFO             Service has started successfully. service/service.go:523
2022-04-25T09:40:52Z INFO [PROXY:SER] Starting TLS ALPN SNI proxy server on [::]:3080. service/service.go:3210
2022-04-25T09:40:52Z ERRO [PROC:1]    incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
2022-04-25T09:40:54Z WARN [PROXY:1]   Restart watch on error: empty proxy list. resource-kind:proxy services/watcher.go:218
2022-04-25T09:40:56Z ERRO [PROC:1]    incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
2022-04-25T09:40:57Z ERRO [PROC:1]    incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
2022-04-25T09:40:57Z ERRO [PROC:1]    incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
2022-04-25T09:40:59Z ERRO [PROC:1]    incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
2022-04-25T09:41:01Z ERRO [PROC:1]    incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
pschisa added the c-di Internal Customer Reference label on Apr 26, 2022

Valien (Contributor) commented Apr 26, 2022

I can confirm this happens outside of Helm as well. I upgraded a cluster from v9.1.0 to v9.1.1 and rebooted, and I'm getting the following errors in the logs:

Apr 26 13:57:55 ip-172-16-10-81 teleport[1140]: 2022-04-26T13:57:55Z [PROC:1]    ERRO incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
Apr 26 13:57:55 ip-172-16-10-81 teleport[1140]: 2022-04-26T13:57:55Z [PROC:1]    ERRO incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
Apr 26 13:57:55 ip-172-16-10-81 teleport[1140]: 2022-04-26T13:57:55Z [PROC:1]    ERRO incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
Apr 26 13:58:00 ip-172-16-10-81 teleport[1140]: 2022-04-26T13:58:00Z [PROC:1]    ERRO incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
Apr 26 13:58:00 ip-172-16-10-81 teleport[1140]: 2022-04-26T13:58:00Z [PROC:1]    ERRO incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:164
Apr 26 13:58:00 ip-172-16-10-81 teleport[1140]: 2022-04-26T13:58:00Z [PROC:1]    ERRO incorrect count of components (found: 4; expected: 3), this is a bug! service/state.go:16

Incidentally, despite the errors, the cluster is operating with no issues after the reboot. Desktop access, SSH, etc. are all working with no problems that I could find.


r0mant (Collaborator) commented Apr 26, 2022

We are reverting the offending changes in the following PRs:

We will publish patch releases after they get merged.


r0mant (Collaborator) commented Apr 27, 2022

The fix was released in 9.1.2.
