[question] Constraint "${attr.vault.version} version >= 0.6.1" filtered 1 nodes #4276
Comments
@karma0, This is the intended behavior. If no client node meets this constraint, then the cluster is unable to run the vault-enabled job. Is there an element to your issue report that I am missing?
Where is this constraint coming from? Is Nomad not compatible with Vault >= 0.6.1?
"Constraint "${attr.vault.version} version >= 0.6.1" filtered 1 nodes" indicates that Nomad believes that you don't have Vault greater than or equal to v0.6.1. The constraint text is the expectation. More conversationally, this would be read as: "One node was filtered from the eligibility because it does not meet the constraint that ${attr.vault.version} is greater than or equal to 0.6.1." Nomad dynamically generates this constraint on vault-enabled jobs. When this constraint filters nodes, it often indicates a misconfiguration in or other problem with your Vault configuration for Nomad. |
@karma0 I will say that I was just fighting this in my local lab cluster. Turned out that I had accidentally set the address in my vault stanza to https://active.vault.service.consul:8200 when I wasn't running TLS on my Vault server.
@angrycub What is the difference between vault.service.consul and active.vault.service.consul? Also, here is the vault config:
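A minimal sketch of the kind of Nomad agent vault stanza under discussion, using the address quoted in the reply below; the remaining settings are assumptions, not the poster's actual file:

```hcl
# Nomad agent configuration (HCL): vault stanza sketch.
vault {
  enabled = true

  # Address as quoted in the reply below; note the extra "s" in
  # "services", which turns out to be part of the problem.
  address = "https://vault.services.consul:8200"

  # Assumed here: servers get a Vault token out of band (e.g. the
  # VAULT_TOKEN environment variable) and derive task tokens from a
  # role; "nomad-cluster" is a placeholder role name.
  create_from_role = "nomad-cluster"
}
```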
I think I see your issue. You have "https://vault.services.consul:8200" and I believe it should be "https://vault.service.consul:8200" — no trailing s on service. The only other thing I noticed in the configurations is that you don't supply […].

As to active.vault.service.consul: a Vault cluster running in an HA configuration tags the currently active node with "active" in Consul, so that name resolves only to the current leader, while vault.service.consul resolves to any healthy Vault node.

Hope this gets you unjammed!
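For reference, the difference between the two names is easy to see with Consul's DNS interface (8600 is Consul's default DNS port; adjust the resolver address for your setup):

```shell
# Resolves to every healthy Vault node, active and standbys:
dig +short @127.0.0.1 -p 8600 vault.service.consul

# Resolves only to the node Vault has tagged "active" (the HA leader):
dig +short @127.0.0.1 -p 8600 active.vault.service.consul
```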
The issue turned out to be a bit more convoluted. The certs weren't set up with the correct IP addresses and/or DNS names. I was using […]. Adding […] fixed it. Thanks for the help!
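For anyone hitting a similar certificate mismatch: the vault stanza in the Nomad agent configuration has TLS settings aimed at exactly this situation. These options exist in Nomad; the paths and hostname below are placeholders:

```hcl
vault {
  enabled = true
  address = "https://active.vault.service.consul:8200"

  # CA bundle used to verify Vault's server certificate.
  ca_file = "/etc/nomad.d/tls/vault-ca.pem"

  # Name to use for TLS verification/SNI when the certificate's
  # SANs don't include the address you dial.
  tls_server_name = "vault.example.com"

  # Lab-only escape hatch that skips certificate verification:
  # tls_skip_verify = true
}
```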
@karma0 Thanks for posting your solution! That's a great note for future explorers.
Should have Nomad and Consul deployed and configured with mTLS. ACLs are currently not enabled on Consul, only Nomad. This should provide a minimal working example using mTLS to get the countdash dashboard working, after a ton of tinkering. 😭 The links I used during my investigation/debugging session:
* hashicorp/nomad#6463
* https://learn.hashicorp.com/nomad/consul-integration/nomad-connect-acl#run-a-connect-enabled-job
* hashicorp/nomad#6594
* hashicorp/nomad#4276
* hashicorp/nomad#7715
* https://www.consul.io/docs/agent/options ⭐
* hashicorp/nomad#7602
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
Nomad version
Nomad v0.8.3 (c85483d)
Vault v0.10.1 ('756fdc4587350daf1c65b93647b2cc31a6f119cd')
Operating system and Environment details
Terraform modules in AWS:
terraform-aws-vault
terraform-aws-nomad
These modules both launch Auto Scaling Groups, totaling three EC2 clusters: Vault, Nomad servers, and Nomad clients.
Issue
Attempting to plan or execute a job that uses Vault results in a hidden constraint that filters nodes with Vault version >= 0.6.1.
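For context, the implicit constraint Nomad injects is equivalent to writing this in the job yourself, using Nomad's standard constraint stanza:

```hcl
# Equivalent explicit form of the constraint Nomad adds to
# vault-enabled jobs.
constraint {
  attribute = "${attr.vault.version}"
  operator  = "version"
  value     = ">= 0.6.1"
}
```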
Reproduction steps
Executing nomad job plan test.nomad for my attempt at the given job reveals the placement failure quoted in the title.
Nomad Server logs (if appropriate)
None.
Nomad Client logs (if appropriate)
None.
Job file (if appropriate)
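A minimal job that exercises the Vault integration, and therefore picks up the implicit version constraint, would look roughly like this; all names, the image, and the policy are placeholders:

```hcl
job "test" {
  datacenters = ["dc1"]

  group "app" {
    task "web" {
      driver = "docker"

      config {
        image = "nginx:alpine"
      }

      # The presence of a vault stanza is what causes Nomad to add
      # the "${attr.vault.version} version >= 0.6.1" constraint.
      vault {
        policies = ["app-policy"]
      }
    }
  }
}
```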