[import] AWS autoscaling group import and subsequent up tries to fix defaults to actual default values #4457

Closed
rulatir opened this issue Sep 12, 2024 · 10 comments · Fixed by #4510
Labels: area/import, awaiting-upstream, impact/regression, kind/bug, p1, resolution/fixed

Comments

rulatir commented Sep 12, 2024

What happened?

I imported an autoscaling group with pulumi import, added the generated code to the pulumi program, and issued pulumi preview.

Expected: no changes.

Actual: pulumi insists on fixing some properties that are set to "Default" in the cloud reality to their respective actual default values. This is undesirable.

      + forceDelete                  : false
      + forceDeleteWarmPool          : false
      + ignoreFailedScalingActivities: false
      + waitForCapacityTimeout       : "10m"

I inspected the state using pulumi stack export: all of these properties are null in the state after import, and they are not specified in the code either. If the state says null, the cloud reality says "Default" (an option selected in a dropdown in the AWS Console), and the program says nothing, then where do false and "10m" even come from?

Example

Import command:

pulumi import --generate-code 'aws:autoscaling/group:Group' platform Infra-ECS-Cluster-zaffre-37cd4e5b-ECSAutoScalingGroup-982zexeihkOe

Generated code:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const platform = new aws.autoscaling.Group("platform", {
    availabilityZones: ["eu-central-1a"],
    defaultCooldown: 300,
    healthCheckGracePeriod: 0,
    healthCheckType: "EC2",
    launchTemplate: {
        id: "lt-0789d9b35b0a9d959",
        version: "$Latest",
    },
    maxSize: 2,
    minSize: 0,
    name: "Infra-ECS-Cluster-zaffre-37cd4e5b-ECSAutoScalingGroup-982zexeihkOe",
    serviceLinkedRoleArn: "arn:aws:iam::164629628951:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling",
    tags: [
        {
            key: "AmazonECSManaged",
            propagateAtLaunch: true,
            value: "",
        },
        {
            key: "Name",
            propagateAtLaunch: true,
            value: "ECS Instance - zaffre",
        },
    ],
}, {
    protect: true,
});

Output of pulumi about

CLI          
Version      3.131.0
Go Version   go1.23.0
Go Compiler  gc

Plugins
KIND      NAME    VERSION
resource  aws     6.51.0
resource  aws     6.51.0
resource  awsx    2.14.0
resource  awsx    2.14.0
resource  docker  4.5.5
resource  docker  4.5.5
resource  docker  3.6.1
resource  docker  3.6.1
language  nodejs  unknown

Host     
OS       arch
Version  "rolling"
Arch     x86_64

This project is written in nodejs: executable='/home/rulatir/projects/zaffre/internment/nave/store/installed/node-modern/bin/node' version='v22.2.0'

Current Stack: organization/zuu-iac/permanent

TYPE                                   URN
[REDACTED]

Found no pending operations associated with permanent

Backend        
Name           berbelek
URL            s3://zuu-iac/attempts/PPM
User           rulatir
Organizations  
Token type     personal

Pulumi locates its logs in /tmp by default
warning: Failed to get information about the Pulumi program's dependencies: could not find either /home/rulatir/projects/zaffre/cloud/attempts/PPM/yarn.lock or /home/rulatir/projects/zaffre/cloud/attempts/PPM/package-lock.json

Note about the last warning: I chose pnpm during pulumi new; it seems pulumi about can't handle that yet.

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

@rulatir rulatir added kind/bug Some behavior is incorrect or out of spec needs-triage Needs attention from the triage team labels Sep 12, 2024
Frassle (Member) commented Sep 13, 2024

I'm going to move this to the AWS repo, as it's the provider that determines what the input values look like after import. The engine currently has no real concept of defaults.

Having said that, there are a few issues of this shape, and providers may need to sync up with core to work out how to support defaults like this in a consistent way.

Frassle transferred this issue from pulumi/pulumi Sep 13, 2024
rulatir (Author) commented Sep 13, 2024

Can this be worked around somehow? I must emphasize that by "work around" I don't mean "give up", i.e. "just let pulumi fix those defaults".

(Ceterum censeo, "real concept of defaults" is a fundamental domain concept for software like pulumi, i.e. software that manages configurations).

flostadler added the area/import label Sep 13, 2024
flostadler (Contributor) commented Sep 13, 2024

Hey @rulatir, sorry you're running into this! This is caused by a recent change that was rolled out in v6.51.0.
It is tracked here: pulumi/pulumi-terraform-bridge#2372.
What's notable is that this aligns Pulumi with the upstream Terraform provider: importing the ASG there also yields this diff in defaults. We should try fixing up the resource upstream; I opened hashicorp/terraform-provider-aws#39308 for it.

As a workaround you could roll back to v6.50.1 of the provider.
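
As a sketch of how that rollback could look (not verified against this exact program): besides pinning @pulumi/aws in package.json, the provider plugin can be pinned per resource with the standard version resource option:

import * as aws from "@pulumi/aws";

// Sketch: pin this resource to the pre-regression provider plugin.
const platform = new aws.autoscaling.Group("platform", {
    availabilityZones: ["eu-central-1a"],
    maxSize: 2,
    minSize: 0,
    // ... remaining arguments as in the generated code above ...
}, {
    protect: true,
    version: "6.50.1", // provider plugin version used for this resource
});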

Alternatively, running pulumi up should align the state without modifying the cloud resource. None of those parameters are part of the cloud state of that resource:

  • forceDelete and forceDeleteWarmPool are inputs for the DeleteAutoScalingGroup and DeleteWarmPool API calls.
  • ignoreFailedScalingActivities and waitForCapacityTimeout are provider-level settings for how to handle updates to Auto Scaling Groups
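
If rolling back isn't an option, another sketch, using the generally available ignoreChanges resource option (untested against this specific diff), is to suppress diffs on these four provider-side properties:

import * as aws from "@pulumi/aws";

const platform = new aws.autoscaling.Group("platform", {
    availabilityZones: ["eu-central-1a"],
    maxSize: 2,
    minSize: 0,
    // ... remaining arguments as in the generated code above ...
}, {
    protect: true,
    // Suppress diffs on the properties that only exist provider-side.
    ignoreChanges: [
        "forceDelete",
        "forceDeleteWarmPool",
        "ignoreFailedScalingActivities",
        "waitForCapacityTimeout",
    ],
});

Since these properties never reach the cloud API, ignoring changes on them should be safe here.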

flostadler added impact/regression, awaiting-upstream, and p1 labels and removed the needs-triage label Sep 13, 2024
t0yv0 self-assigned this Sep 23, 2024
t0yv0 added a commit that referenced this issue Sep 23, 2024
Fix a regression in the import experience on aws.autoscaling.Group. Specifically, the conflict between undefined and default values that `pulumi import` used to detect is no longer reported, thanks to automatically injected DiffSuppressFuncs for the relevant properties.

Fixes #4457
t0yv0 added a commit that referenced this issue Sep 24, 2024
pulumi-bot added the resolution/fixed label Sep 24, 2024
pulumi-bot (Contributor) commented

This issue has been addressed in PR #4510 and shipped in release v6.53.0.

rulatir (Author) commented Sep 28, 2024

I upgraded to 6.54.0 and tried to run pulumi preview on the code/state/cloud-reality combo that hasn't been touched since I ran into this issue and got stuck. I expected to get unstuck. I did not: I got the exact same update plan as before, with pulumi declaring that it WOULD, as opposed to ABSOLUTELY WOULDN'T, attempt to immediately modify ASGs that had just been imported and whose generated code had been added to the program. Therefore the issue is not fixed. Please reopen.

  pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:permanent::zuu-iac::pulumi:pulumi:Stack::zuu-iac-permanent]
    ~ aws:autoscaling/group:Group: (update)
        [id=Infra-ECS-Cluster-zaffre-37cd4e5b-ECSAutoScalingGroup-982zexeihkOe]
        [urn=urn:pulumi:permanent::zuu-iac::aws:autoscaling/group:Group::platform]
        [provider: urn:pulumi:permanent::zuu-iac::pulumi:providers:aws::default_6_51_0::3a397c39-393f-49e9-8fee-b34c39d39065 => urn:pulumi:permanent::zuu-iac::pulumi:providers:aws::default_6_54_0::output<string>]
      + forceDelete                  : false
      + forceDeleteWarmPool          : false
      + ignoreFailedScalingActivities: false
      + waitForCapacityTimeout       : "10m"
    ~ aws:autoscaling/group:Group: (update)
        [id=Infra-ECS-Cluster-zaffre-mongodb-682cc63e-ECSAutoScalingGroup-tznhtknUgPkL]
        [urn=urn:pulumi:permanent::zuu-iac::aws:autoscaling/group:Group::mongodb]
        [provider: urn:pulumi:permanent::zuu-iac::pulumi:providers:aws::default_6_51_0::3a397c39-393f-49e9-8fee-b34c39d39065 => urn:pulumi:permanent::zuu-iac::pulumi:providers:aws::default_6_54_0::output<string>]
      + forceDelete                  : false
      + forceDeleteWarmPool          : false
      + ignoreFailedScalingActivities: false
      + waitForCapacityTimeout       : "10m"

EDIT: I tried to run pulumi refresh before pulumi preview, and refresh did show that it "removed" these four properties from the imported ASGs:

Previewing refresh (permanent):
     Type                       Name               Plan       Info
     pulumi:pulumi:Stack        zuu-iac-permanent             
     ├─ aws:ec2:LaunchTemplate  platform                      
     ├─ aws:ec2:LaunchTemplate  mongodb                       
 ~   ├─ aws:autoscaling:Group   mongodb            update     [diff: -forceDelete,forceDeleteWarmPool,ignoreFailedScalingActivities,waitForCapacityTimeout]
 ~   └─ aws:autoscaling:Group   platform           update     [diff: -forceDelete,forceDeleteWarmPool,ignoreFailedScalingActivities,waitForCapacityTimeout]

However, this had no effect on subsequent pulumi preview; it still wants to re-add these properties, and I'm still stuck.

t0yv0 reopened this Sep 28, 2024
t0yv0 (Member) commented Sep 28, 2024

[provider: urn:pulumi:permanent::zuu-iac::pulumi:providers:aws::default_6_51_0

Pulumi is using provider version 6.51.0 here, which doesn't have the fix yet. Would it be possible to upgrade to 6.54.0 before proceeding?

rulatir (Author) commented Sep 28, 2024

I assumed that was just a recorded reference to the version the resource's representation in the state was created with.

I updated the provider with pnpm, and @pulumi/aws is now at 6.54 in package.json. Additionally, when I first ran pulumi preview after doing that, it downloaded and installed something with 6.54 in it. I honestly thought I'd done due diligence upgrading the provider at that point. What else does it take? Does Pulumi keep all previous versions of the provider, with a resource's state entry forever tied to the version that created it, until I delete the resource from the state and re-import it?

t0yv0 (Member) commented Sep 28, 2024

In some cases Pulumi uses the provider version that's written into your state to manage the resource in question. Do you have references to 6.51.0 in pulumi stack export? You can edit that manually, or possibly just do pulumi up on the new version, accepting the diff above; that should move things forward.

rulatir (Author) commented Sep 28, 2024

The diff above tries to assign concrete values to properties that are either not even present in the AWS console for this resource, or set to "Default". "Default" is semantically distinct from any concrete value. It must be assumed to mean "Auto", i.e. its effective value may depend, in documented or undocumented ways, on other (documented or undocumented) properties of this or related resources.

In the output of pulumi stack export, the reference to the provider version is not just the raw version number but a complicated URN with UUIDs, and I have no idea how to sculpt an equivalent URN for the new version. The preview shows something that looks like the new URN, but it has a very different form from the old one, and that's suspicious.

t0yv0 (Member) commented Sep 28, 2024

For this particular resource, and these particular values, there will be no difference to your cloud in accepting the diff, as the TF code will populate these values anyway before making Create/Update calls. I'm starting to think the evidence points to the bug indeed being fixed in the latest version of the provider, but since stack editing is not working for you, I'd recommend accepting the diff with pulumi up --yes.

      + forceDelete                  : false
      + forceDeleteWarmPool          : false
      + ignoreFailedScalingActivities: false
      + waitForCapacityTimeout       : "10m"

There is also pulumi/pulumi#9878, which may be helpful to upvote to prioritize making upgrades easier. I think there isn't much else we can do for this case in pulumi-aws at the moment, unfortunately.

t0yv0 closed this as completed Sep 28, 2024