Warnings when importing a freshly created GKE cluster with default parameters #844
Comments
@Frassle I'd love your thoughts on this...
Even though these were just warnings and the import was successful, I had to remove a bunch of conflicting fields, and I still have no idea how the cluster was actually configured on Google Cloud. Did it use this or did it use that? Pulumi said it couldn't use both, but that was precisely what it imported.
@stack72 This will be based on whatever the provider's Read returns during import. I'd try making a GKE cluster and then seeing what the provider reports for it.
I also get these conflicts when importing an existing Google GKE cluster.
pulumi import gcp:container/cluster:Cluster my-cluster com-my-dev-760a2504/us-central1/com-my-us-gke-dev
Then execute pulumi up with my code:
The only way to resolve these errors is to edit the imported YAML to address the conflicts. For instance, ipAllocationPolicy has conflicting entries, so I remove the CIDR blocks from it. Then I have to remove the remaining conflicting fields. This fixes the conflicts, but it causes state drift against the remote API.
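To make the workaround concrete, here is a minimal sketch (in TypeScript rather than the YAML mentioned above; the resource name and CIDR values are illustrative, not taken from this issue) of an import result where `clusterIpv4Cidr` conflicts with the explicit CIDR blocks inside `ipAllocationPolicy`, and the kind of edit that silences the warning:

```typescript
import * as gcp from "@pulumi/gcp";

// As imported (hypothetical): `clusterIpv4Cidr` conflicts with the explicit
// CIDR block nested inside `ipAllocationPolicy`, which is what triggers the warning.
//
// const cluster = new gcp.container.Cluster("my-cluster", {
//     location: "us-central1",
//     clusterIpv4Cidr: "10.80.0.0/14",
//     ipAllocationPolicy: {
//         clusterIpv4CidrBlock: "10.80.0.0/14",
//         servicesIpv4CidrBlock: "10.84.0.0/20",
//     },
// }, { protect: true });

// After the manual edit: the CIDR blocks are removed from `ipAllocationPolicy`,
// so only one side of the conflicting pair remains in the program.
const cluster = new gcp.container.Cluster("my-cluster", {
    location: "us-central1",
    clusterIpv4Cidr: "10.80.0.0/14",
    ipAllocationPolicy: {},
}, { protect: true });
```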
This issue needs ownership.
I used @jondkelley's fixes and do not experience state drift with the remote API; however, it still seems odd that I'd have to manually edit an import of the actual state of the GCP cluster. How can the current state be in conflict with itself...?
Toward #1225 - this fixes the special case of ConflictsWith warnings. This fixes spurious warnings on `pulumi import`, including popular bugs such as:
- pulumi/pulumi-aws#2318
- pulumi/pulumi-aws#3670
- pulumi/pulumi-gitlab#293
- pulumi/pulumi-gcp#844
- pulumi/pulumi-linode#373

TF does not guarantee that Read results are compatible with a subsequent Check call; in particular, Read can return results that run afoul of a ConflictsWith constraint. This change compensates by arbitrarily dropping data from the Read result until it passes the ConflictsWith checks. This affects `pulumi refresh` as well, as I think it should, although I have not seen cases in the wild where refresh is affected, since refresh typically will not copy these properties to the input bag unless they are present in the old inputs, which are usually correct with respect to ConflictsWith.
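For intuition only (the actual fix lives in pulumi-terraform-bridge and is written in Go; the types and function below are made up for illustration), the compensation amounts to something like:

```typescript
// Illustrative sketch: drop properties from the Read result until no
// ConflictsWith constraint is violated. Not the bridge's real API.
type PropertyBag = Record<string, unknown>;
type ConflictsWith = Record<string, string[]>; // property -> properties it conflicts with

function dropConflicts(readResult: PropertyBag, conflicts: ConflictsWith): PropertyBag {
    const result: PropertyBag = { ...readResult };
    for (const [prop, others] of Object.entries(conflicts)) {
        if (result[prop] === undefined) {
            continue;
        }
        // Both sides of a ConflictsWith pair came back from Read: arbitrarily
        // keep `prop` and drop the others so that Check no longer warns.
        for (const other of others) {
            if (result[other] !== undefined) {
                delete result[other];
            }
        }
    }
    return result;
}

// Example shaped after this issue: Read returns both clusterIpv4Cidr and the
// explicit CIDR block nested under ipAllocationPolicy.
const cleaned = dropConflicts(
    {
        clusterIpv4Cidr: "10.80.0.0/14",
        "ipAllocationPolicy.clusterIpv4CidrBlock": "10.80.0.0/14",
    },
    { clusterIpv4Cidr: ["ipAllocationPolicy.clusterIpv4CidrBlock"] },
);
// `cleaned` keeps clusterIpv4Cidr and no longer trips the ConflictsWith check.
```

The arbitrariness mentioned above is visible in the sketch: whichever property the loop visits first is the one that survives.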
Given this cluster:
I can now do an import without warnings:
Importing (dev2)
View in Browser (Ctrl+O): https://app.pulumi.com/anton-pulumi-corp/pulumi-gcp-844/dev2/updates/2
Type Name Status
pulumi:pulumi:Stack pulumi-gcp-844-dev2
= └─ gcp:container:Cluster c2 imported (0.95s)
Outputs:
clusterId: "projects/pulumi-development/locations/us-central1/clusters/my-gke-cluster"
Resources:
= 1 imported
2 unchanged
Duration: 2s
Please copy the following code into your Pulumi application. Not doing so
will cause Pulumi to report that an update will happen on the next update command.
Please note that the imported resources are marked as protected. To destroy them
you will need to remove the `protect` option and run `pulumi update` *before*
the destroy will take effect.
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
const c2 = new gcp.container.Cluster("c2", {
addonsConfig: {
gcePersistentDiskCsiDriverConfig: {
enabled: true,
},
networkPolicyConfig: {
disabled: true,
},
},
clusterIpv4Cidr: "10.80.0.0/14",
clusterTelemetry: {
type: "ENABLED",
},
databaseEncryption: {
state: "DECRYPTED",
},
defaultMaxPodsPerNode: 110,
defaultSnatStatus: {
disabled: false,
},
initialNodeCount: 1,
location: "us-central1",
loggingConfig: {
enableComponents: [
"SYSTEM_COMPONENTS",
"WORKLOADS",
],
},
masterAuth: {
clientCertificateConfig: {
issueClientCertificate: false,
},
},
monitoringConfig: {
advancedDatapathObservabilityConfigs: [{
enableMetrics: false,
enableRelay: false,
}],
enableComponents: ["SYSTEM_COMPONENTS"],
managedPrometheus: {
enabled: true,
},
},
name: "my-gke-cluster",
network: "projects/pulumi-development/global/networks/default",
networkPolicy: {
enabled: false,
provider: "PROVIDER_UNSPECIFIED",
},
networkingMode: "VPC_NATIVE",
nodeLocations: [
"us-central1-b",
"us-central1-c",
"us-central1-a",
],
nodePoolDefaults: {
nodeConfigDefaults: {
loggingVariant: "DEFAULT",
},
},
nodeVersion: "1.29.4-gke.1043002",
notificationConfig: {
pubsub: {
enabled: false,
},
},
podSecurityPolicyConfig: {
enabled: false,
},
privateClusterConfig: {
masterGlobalAccessConfig: {
enabled: false,
},
},
project: "pulumi-development",
protectConfig: {
workloadConfig: {
auditMode: "BASIC",
},
workloadVulnerabilityMode: "WORKLOAD_VULNERABILITY_MODE_UNSPECIFIED",
},
releaseChannel: {
channel: "REGULAR",
},
securityPostureConfig: {
mode: "BASIC",
vulnerabilityMode: "VULNERABILITY_MODE_UNSPECIFIED",
},
serviceExternalIpsConfig: {
enabled: false,
},
subnetwork: "projects/pulumi-development/regions/us-central1/subnetworks/default",
}, {
protect: true,
});
This is accomplished by dropping conflicting properties in pulumi-terraform-bridge during import. The dropping logic is not very intelligent, but it attempts to resolve the conflicts.
Versions:
I will close this as fixed, but please feel free to open another issue if something is not working as expected.
What happened?
Pulumi outputs warnings when importing a freshly created GKE cluster from Google Cloud console.
Steps to reproduce
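Pieced together from the thread (project, location, and cluster name below are placeholders; substitute your own), the reproduction is roughly:

```sh
# Create a GKE cluster with default parameters in the Google Cloud console, then:
pulumi import gcp:container/cluster:Cluster my-gke-cluster <project>/<location>/<cluster-name>
pulumi up
```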
Expected Behavior
No warnings
Actual Behavior
Versions used
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).