Feature Request
Is your feature request related to a problem? Please describe:
Deploy a TiDB cluster with only 1 PD Pod and a bad configuration; the PD Pod goes into CrashLoopBackOff. After the configuration is corrected, the PD cluster cannot recover:
```
E1126 14:20:22.228535 1 tidb_cluster_controller.go:240] TidbCluster: csn/hot-new, sync failed tidbcluster: [csn/hot-new]'s pd status sync failed,can not to be upgraded, requeuing
E1126 14:20:47.802457 1 pd_member_manager.go:196] failed to sync TidbCluster: [csn/hot-new]'s status, error: Get http://hot-new-pd.csn:2379/pd/api/v1/cluster: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1126 14:20:47.803078 1 tidb_cluster_controller.go:240] TidbCluster: csn/hot-new, sync failed tidbcluster: [csn/hot-new]'s pd status sync failed,can not to be upgraded, requeuing
E1126 14:21:13.599951 1 pd_member_manager.go:196] failed to sync TidbCluster: [csn/hot-new]'s status, error: Get http://hot-new-pd.csn:2379/pd/api/v1/cluster: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1126 14:21:13.600222 1 tidb_cluster_controller.go:240] TidbCluster: csn/hot-new, sync failed tidbcluster: [csn/hot-new]'s pd status sync failed,can not to be upgraded, requeuing
E1126 14:21:42.100753 1 pd_member_manager.go:196] failed to sync TidbCluster: [csn/hot-new]'s status, error: Get http://hot-new-pd.csn:2379/pd/api/v1/cluster: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1126 14:21:42.101092 1 tidb_cluster_controller.go:240] TidbCluster: csn/hot-new, sync failed tidbcluster: [csn/hot-new]'s pd status sync failed,can not to be upgraded, requeuing
```
The user has to manually add the force-delete annotation to recover, which is tedious.
Describe the feature you'd like:
Perform a force upgrade when the replica count is 1. There is no need to be "graceful" here because a single replica has no peers to transfer its leader to. A minimal sketch of the decision logic is shown below.
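A rough sketch of the proposed check, in Go since that is what tidb-operator is written in. The function and parameter names here are illustrative assumptions, not the actual tidb-operator upgrader API:

```go
package main

import "fmt"

// shouldForceUpgradePD reports whether the PD upgrade should skip the
// graceful path. With a single replica there is no peer to transfer the
// PD leader to, so waiting for a healthy PD cluster can never succeed
// while the lone Pod is crash-looping on a bad configuration.
// (Hypothetical helper; the real upgrader operates on TidbCluster objects.)
func shouldForceUpgradePD(replicas int32, pdHealthy bool) bool {
	if replicas > 1 {
		// Normal path: upgrade members one by one, transferring the
		// leader away from each member before restarting it.
		return false
	}
	// Single replica and unhealthy: force the upgrade so the Pod is
	// recreated with the corrected spec instead of requeuing forever.
	return !pdHealthy
}

func main() {
	fmt.Println(shouldForceUpgradePD(1, false)) // true: force upgrade
	fmt.Println(shouldForceUpgradePD(3, false)) // false: graceful path
}
```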
/cc @DanielZhangQD