In the ndb-chef recipe, I want to generate a public/private keypair on the mgmd-server node. Then, in the subsequent ndbd recipes (which come after the mgmd node in the DAG), I would like to pass in the mgmd node's public key as a parameter (to update .ssh/authorized_keys with the mgmd's public key), so that the mgmd node can ssh into those machines without a password.
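For context, the two halves of this could be sketched as Chef resources roughly like the following. This is only an illustration: the attribute names (`node[:ndb][:user]`, `node[:ndb][:mgmd][:public_key]`) and paths are assumptions, not the actual ndb-chef ones.

```ruby
# mgmd-side recipe: generate a keypair once, if it does not already exist.
execute 'generate-mgmd-keypair' do
  command "ssh-keygen -t rsa -N '' -f /home/#{node[:ndb][:user]}/.ssh/id_rsa"
  user node[:ndb][:user]
  not_if { ::File.exist?("/home/#{node[:ndb][:user]}/.ssh/id_rsa") }
end

# ndbd-side recipe: authorize the mgmd key, assuming it has somehow been
# passed in as node[:ndb][:mgmd][:public_key] -- which is exactly the
# cross-recipe variable this issue is asking for.
bash 'authorize-mgmd-key' do
  code <<-EOH
    echo "#{node[:ndb][:mgmd][:public_key]}" >> /home/#{node[:ndb][:user]}/.ssh/authorized_keys
  EOH
  not_if "grep -qF '#{node[:ndb][:mgmd][:public_key]}' /home/#{node[:ndb][:user]}/.ssh/authorized_keys"
end
```

The ndbd side is trivial to write; the missing piece is getting the key from the mgmd recipe into that attribute.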
However, there is currently no way to pass variables between recipes.
It would be nice to have a variable defined in the cluster file that is bound during recipe execution and becomes available to subsequent recipes in the DAG. This would mean writing the role.json file immediately before each recipe runs, rather than once at the start as we do now.
One way of implementing this would be for the recipe to generate an output file containing modifications to the DAG. For example, if I want to set the mgmd's public key as a variable for the next recipe, the mgmd recipe would generate an output file ndb___mgmd.output:
node[:ndb][:mgmd][:public_key] = "........................"
Karamel can then read in this file after the recipe executes, update the attributes in the DAG, and execute the next recipe.
SirOibaf pushed a commit to SirOibaf/karamel that referenced this issue on Sep 11, 2023:
[HWORKS-680] In case of parallelism constraints, if a recipe has failed do not proceed to the next node in the pipeline but wait until the recipe has run successfully (karamelchef#10)
* [HWORKS-680] Blocked status
* [HWORKS-680] Remove type cast