
BuilderTrait nodeSelector configuration not working for jvm builds #5838

Closed
lsergio opened this issue Sep 10, 2024 · 9 comments · Fixed by #5862
Labels: kind/bug (Something isn't working)
lsergio (Contributor) commented Sep 10, 2024

What happened?

The builder trait's nodeSelector configuration is not being applied to JVM builder pods.

Steps to reproduce

  • Configure the IntegrationPlatform to use the pod build strategy, and also to add a nodeSelector to the builder pods:
  spec:
    build:
      buildConfiguration:
        strategy: pod
    traits:
      builder:
        nodeSelector:
          kubernetes.io/arch: amd64
  • Create the following Integration:
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: rest
spec:  
  sources:
  - name: main.yaml
    content: |-
      - from:
          uri: rest:get:/demo
          steps:
          - setBody:
              constant: "It worked"
  traits:
    knative-service:
      enabled: true
      minScale: 1
  • Check the generated builder pod spec. There will be no nodeSelector.
  • Add the quarkus trait to the Integration spec:
    quarkus:
      buildMode:
        - native
  • Recreate the Integration and recheck the builder pod. Now the nodeSelector will be present.

Relevant log output

No response

Camel K version

2.4.0

lsergio added the kind/bug (Something isn't working) label on Sep 10, 2024
lsergio (Contributor, Author) commented Sep 10, 2024

I had previously worked on this area in #4968. I may have missed something at that time...

squakez (Contributor) commented Sep 11, 2024

It seems that, for some reason, when the build configuration is inherited from the platform it is not applied to the Build object, as it is when set via the trait. As a workaround, you can also set strategy: pod on the builder trait, which should solve it. However, it would be good to find the root cause and fix it.
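The suggested workaround can be sketched as an IntegrationPlatform fragment like the following (a minimal, hypothetical example; the metadata name is a placeholder and the amd64 selector is taken from the reproduction steps):

```yaml
# Sketch of the workaround: set the build strategy on the builder trait
# itself, next to the nodeSelector, instead of relying only on
# spec.build.buildConfiguration.strategy.
apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k          # placeholder name
spec:
  build:
    buildConfiguration:
      strategy: pod      # original platform-level setting
  traits:
    builder:
      strategy: pod      # workaround: duplicate the strategy here
      nodeSelector:
        kubernetes.io/arch: amd64
```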

lsergio (Contributor, Author) commented Sep 11, 2024

@squakez I can work on finding the root cause and fixing this one.

squakez added this to the 2.5.0 milestone on Sep 11, 2024
squakez (Contributor) commented Sep 11, 2024

> @squakez I can work on finding the root cause and fixing this one.

Nice, thanks! I've set 2.5.0 as milestone. I think we could do a release by the end of the month, beginning of next one.

lsergio (Contributor, Author) commented Sep 11, 2024

After some initial investigation, I found that strategy: pod and nodeSelector can each be defined in a few locations. They can be set at:

  • the Integration spec builder trait
  • the integrationplatform spec builder trait
  • the integrationplatform spec build.buildConfiguration field

If I set both the build strategy and the node selector at the same location, they work as expected.

But this gets a little confusing when we mix the locations:

  • If I set the pod strategy on the Integration spec and the nodeSelector on the platform builder trait, it works.
  • If I set the pod strategy on the Integration spec and the nodeSelector on the platform buildConfiguration, it doesn't work.

I think we may have to define rules for handling this configuration.
If it's a requirement that both (and maybe other configs) are set in the same location, we might not need any fix.
If it's expected that the configs can be mixed, we probably need some rules for what takes precedence over what.
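For reference, the failing mix can be sketched with minimal, hypothetical fragments like these (resource names are placeholders): the strategy comes from the Integration's builder trait while the nodeSelector comes from the platform's buildConfiguration, and the resulting builder pod ends up without a nodeSelector:

```yaml
# Sketch of the non-working combination: nodeSelector on the platform's
# buildConfiguration, strategy on the Integration's builder trait.
apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k          # placeholder name
spec:
  build:
    buildConfiguration:
      nodeSelector:
        kubernetes.io/arch: amd64
---
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: rest
spec:
  traits:
    builder:
      strategy: pod      # pod strategy set here, not on the platform
```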

@squakez How does that sound?

squakez (Contributor) commented Sep 12, 2024

Thanks for providing this analysis. I think we need to settle on a single place where this configuration is expected, to avoid creating such confusion. I tend to think the builder trait is the right place, as it also allows the setting to work individually on an Integration. If we agree, we can deprecate the rest and keep this working as it is.

lsergio (Contributor, Author) commented Sep 13, 2024

Yes, that sounds reasonable. I'll evaluate the change in the next few days and decide whether I can tackle it in my available time, as it may require more effort than I anticipated. I'll let you know.

lsergio (Contributor, Author) commented Sep 18, 2024

Hi @squakez. I tried working on this for the last few days, but it would definitely require more effort and a deeper understanding of the code than I have.

I see that BuildConfiguration is used in both the IntegrationPlatform spec and the Build spec. In the Build, it's used both as a Build property and at the Task level.
Also, as I have mentioned before, this has a lot in common with the BuilderTrait.

I tried using specific structs for each location where BuildConfiguration is currently used, keeping only what makes sense at each location, but the change had a lot of impact across the code base, and that's where I think someone with a deeper understanding of the project might handle it better than me.

I filed this as a bug, but maybe it should change to refactoring work for 2.6.0+, as the 2.5.0 date is very close.

squakez (Contributor) commented Sep 19, 2024

Thanks @lsergio, I may have a look to see how the change impacts the rest of the code. As for the release, we need to wait for Camel Quarkus 3.15 so we can default the runtime to the new Camel LTS (4.8). We don't have any strict date, so if we manage to do the work before then, we can include it in the next minor; otherwise we can defer it to the one after.
