Add deprecation warning for ExternalArtifact non-value features #2375
Conversation
…://github.com/zenml-io/zenml into feature/OSSK-356-scalable-lazy-client-methods
Co-authored-by: Alex Strick van Linschoten <[email protected]>
…ture/OSSK-357-deprecate-external-artifact-with-non-value-inputs
When I was reviewing the changes, I happened to look at the old pages as well. I left a few small adjustments. Otherwise, everything looks good.
# Late materialization in pipelines
Suggested change:
- Often ZenML pipeline steps consume artifacts produced by one another directly in the pipeline code, but there are scenarios where you need to pull external data into your steps. Such external data could be artifacts produced by non-ZenML codes. For those cases it is advised to use [ExternalArtifact](../../../user-guide/starter-guide/manage-artifacts.md#consuming-external-artifacts-within-a-pipeline), but what if we plan to exchange data created with other ZenML pipelines?
+ Often ZenML pipeline steps consume artifacts produced by one another directly in the pipeline code, but there are scenarios where you need to pull external data into your steps. Such external data could be artifacts produced by non-ZenML codes. For those cases, it is advised to use [ExternalArtifact](../../../user-guide/starter-guide/manage-artifacts.md#consuming-external-artifacts-within-a-pipeline), but what if we plan to exchange data created with other ZenML pipelines?
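For context, a minimal sketch of how external data can be wrapped in `ExternalArtifact` and fed into a step. The step and pipeline names and the example value are illustrative assumptions, not taken from this PR:

```python
from zenml import pipeline, step
from zenml.artifacts.external_artifact import ExternalArtifact


@step
def print_data(data: list) -> None:
    # Consume the externally produced data like any other step input.
    print(data)


@pipeline
def printing_pipeline():
    # Wrap data produced outside of ZenML; it is uploaded as an artifact
    # when the pipeline runs and passed to the step as a regular input.
    external_data = ExternalArtifact(value=[1, 2, 3])
    print_data(data=external_data)


if __name__ == "__main__":
    printing_pipeline()
```

Passing anything other than `value` to `ExternalArtifact` is what this PR deprecates, in favour of the lazy client methods shown further down.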
Suggested change:
- ZenML pipelines are first compiled and only executed at some later point. During the compilation phase all function calls are executed and this data is fixed as step input parameters. Given all this, the late materialization of dynamic objects, like data artifacts, is crucial. Without late materialization it would not be possible to pass not-yet-existing artifacts as step inputs, or their metadata, which is often the case in a multi-pipeline setting.
+ ZenML pipelines are first compiled and only executed at some later point. During the compilation phase, all function calls are executed and this data is fixed as step input parameters. Given all this, the late materialization of dynamic objects, like data artifacts, is crucial. Without late materialization, it would not be possible to pass not-yet-existing artifacts as step inputs, or their metadata, which is often the case in a multi-pipeline setting.
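A minimal sketch of a `do_predictions` pipeline using the model context, as described in the excerpt below. The `load_data` and `predict` step bodies and the model name are illustrative assumptions:

```python
from typing import Any

from zenml import Model, get_pipeline_context, pipeline, step


@step
def load_data() -> dict:
    # Illustrative stand-in for loading real inference data.
    return {"feature": [1, 2, 3]}


@step
def predict(model: Any, data: dict) -> dict:
    # Illustrative stand-in for running the trained model on the data.
    return {"predictions": data["feature"]}


@pipeline(
    model=Model(
        # Assumed model name; the "production" stage is resolved lazily at run time.
        name="my_model",
        version="production",
    )
)
def do_predictions():
    # Not evaluated at compilation time: the reference is stored in the
    # step configuration and materialized only when the step runs.
    model = get_pipeline_context().model
    predict(
        model=model.get_model_artifact("trained_model"),
        data=load_data(),
    )


if __name__ == "__main__":
    do_predictions()
```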
Suggested change:
- In the example above we used `get_pipeline_context().model` property to acquire the model context in which the pipeline is running. During pipeline compilation this context will not yet have been evaluated, because `Production` model version is not a stable version name and another model version can become `Production` before it comes to the actual step execution. The same applies to calls like `model.get_model_artifact("trained_model")`; it will get stored in the step configuration for delayed materialization which will only happen during the step run itself.
+ In the example above we used `get_pipeline_context().model` property to acquire the model context in which the pipeline is running. During pipeline compilation, this context will not yet have been evaluated, because `Production` model version is not a stable version name and another model version can become `Production` before it comes to the actual step execution. The same applies to calls like `model.get_model_artifact("trained_model")`; it will get stored in the step configuration for delayed materialization which will only happen during the step run itself.
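A minimal sketch of bypassing the model context and loading a specific artifact version directly through the `Client`; the step bodies are illustrative assumptions, while the artifact name and version come from the excerpt below:

```python
from typing import Any

from zenml import pipeline, step
from zenml.client import Client


@step
def load_data() -> dict:
    # Illustrative stand-in for loading real inference data.
    return {"feature": [1, 2, 3]}


@step
def predict(model: Any, data: dict) -> dict:
    # Illustrative stand-in for running the loaded model on the data.
    return {"predictions": data["feature"]}


@pipeline
def do_predictions():
    # The client call is recorded lazily at compilation time and resolved
    # to the concrete artifact version only when the step runs.
    model_42 = Client().get_artifact_version("trained_model", version="42")
    predict(
        model=model_42,
        data=load_data(),
    )


if __name__ == "__main__":
    do_predictions()
```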
Suggested change:
- In this case the evaluation of the actual artifact will happen only when the step is actually running.
+ In this case, the evaluation of the actual artifact will happen only when the step is actually running.
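A minimal sketch of the metric-based variant the excerpt below discusses, comparing a pinned artifact version against the latest one by their `MSE` run metadata. The step bodies and the comparison logic are illustrative assumptions:

```python
from typing import Any

from zenml import pipeline, step
from zenml.client import Client


@step
def load_data() -> dict:
    # Illustrative stand-in for loading real inference data.
    return {"feature": [1, 2, 3]}


@step
def predict(
    model1: Any,
    model2: Any,
    model1_metric: float,
    model2_metric: float,
    data: dict,
) -> dict:
    # Pick the model with the lower MSE; illustrative comparison only.
    best_model = model1 if model1_metric < model2_metric else model2
    return {"predictions": data["feature"], "model": str(best_model)}


@pipeline
def do_predictions():
    # None of these calls are evaluated at compilation time; the artifact
    # versions and their metadata are resolved when the step runs, so
    # "latest" really is the latest at execution time.
    model_42 = Client().get_artifact_version("trained_model", version="42")
    metric_42 = model_42.run_metadata["MSE"].value
    model_latest = Client().get_artifact_version("trained_model")
    metric_latest = model_latest.run_metadata["MSE"].value
    predict(
        model1=model_42,
        model2=model_latest,
        model1_metric=metric_42,
        model2_metric=metric_latest,
        data=load_data(),
    )


if __name__ == "__main__":
    do_predictions()
```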
Suggested change:
- Here we also enriched the `predict` step logic with a metric comparison by MSE metric, so predictions are done on the best possible model. As before, calls like `Client().get_artifact_version("trained_model", version="42")` or `model_latest.run_metadata["MSE"].value` are not evaluating the actual objects behind them at pipeline compilation time. Rather, they do so only at the point of step execution. By doing so we ensure that latest version is actually the latest at the moment and not just the latest at the point of pipeline compilation.
+ Here we also enriched the `predict` step logic with a metric comparison by MSE metric, so predictions are done on the best possible model. As before, calls like `Client().get_artifact_version("trained_model", version="42")` or `model_latest.run_metadata["MSE"].value` are not evaluating the actual objects behind them at pipeline compilation time. Rather, they do so only at the point of step execution. By doing so, we ensure that the latest version is actually the latest at the moment and not just the latest at the point of pipeline compilation.
Documentation for [ZenML Models](model-management.md#linking-artifacts-to-models) describes in great detail how you can link various artifacts produced within pipelines to the model. Here we will focus more on the part that relates to consumption.
First, let's have a look at a two-pipeline project, where the first pipeline is running training logic and the second runs batch inference leveraging trained model artifact(s):
This sentence was a bit misleading as I was reading through the page. After reading this, I expected the code block to display the training pipeline first and then the inference pipeline. We can perhaps change the phrasing slightly to make the flow better.
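For reference, a minimal sketch of such a two-pipeline setup, with training shown first and batch inference second as the comment above suggests. Model, step, and artifact names are illustrative assumptions, and the inference pipeline assumes a model version promoted to the `production` stage:

```python
from typing import Any

from typing_extensions import Annotated

from zenml import Model, get_pipeline_context, pipeline, step


@step
def train_model() -> Annotated[dict, "trained_model"]:
    # Illustrative training step; the output artifact is named
    # "trained_model" and linked to the model version of this run.
    return {"weights": [0.1, 0.2]}


@step
def load_data() -> dict:
    # Illustrative stand-in for loading batch inference data.
    return {"feature": [1, 2, 3]}


@step
def predict(model: Any, data: dict) -> dict:
    # Illustrative stand-in for running the trained model on batch data.
    return {"predictions": data["feature"]}


@pipeline(model=Model(name="my_model"))
def training_pipeline():
    train_model()


@pipeline(model=Model(name="my_model", version="production"))
def batch_inference_pipeline():
    # Resolved lazily: the artifact behind "trained_model" is looked up on
    # the production model version only when the step actually runs.
    model = get_pipeline_context().model
    predict(
        model=model.get_artifact("trained_model"),
        data=load_data(),
    )
```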
…nml-io#2375)

* generic `Client()` getters lazy loading
* Auto-update of Starter template
* coderabbitai
* Auto-update of E2E template
* lint, after coderabbitai
* Auto-update of E2E template
* Auto-update of NLP template
* update test signatures
* add `get_model` and `get_model_version` to lazy loaders
* update test signature
* add `evaluate_all_lazy_load_args`
* lint up again
* make IDE great again
* Auto-update of Starter template
* lint again with new ruff
* DOCS!
* update toc
* update link
* Apply suggestions from code review Co-authored-by: Alex Strick van Linschoten <[email protected]>
* add MCP link
* fix misuse of static methods
* fix wrapping/evaluation
* fix misuse of static methods
* gentle handle static methods
* check for ClientLazyLoader instances
* docs update
* bump templates
* add deprecation note
* Auto-update of Starter template
* Auto-update of E2E template
* Auto-update of Starter template
* Auto-update of E2E template

---------

Co-authored-by: GitHub Actions <[email protected]>
Co-authored-by: Alex Strick van Linschoten <[email protected]>
Describe changes

I updated docs and added a long-shot deprecation note for users about `ExternalArtifact` with params besides `value`.

Reminder: merge `2024.01.22` of E2E and Starter before merging this.

Pre-requisites

Please ensure you have done the following:

* My branch is based on `develop` and the open PR is targeting `develop`. If your branch wasn't based on `develop`, read the Contribution guide on rebasing branch to `develop`.

Types of changes