
Releases: zenml-io/zenml

0.56.0 [YANKED]

21 Mar 09:54
75f5ece

[NOTICE] This version introduced a services class that causes a bug for users migrating from older versions. 0.56.3 will be released shortly in place of this release. For now, this release has been yanked.

ZenML 0.56.0 introduces a wide array of new features, enhancements, and bug fixes,
with a strong emphasis on elevating the user experience and streamlining machine
learning workflows. Most notably, you can now deploy models using Hugging Face inference endpoints, thanks to an open-source community contribution of this model deployer stack component!

This release also comes with a breaking change to the services
architecture.

Breaking Change

A significant change in this release is the migration of Service (ZenML's technical term for a deployment)
registration and deployment from local or remote environments to the ZenML server.
This change will surface in an upcoming dashboard tab that lets users explore
deployed models along with their live status and metadata. The architectural shift
also simplifies the model deployer abstraction and streamlines the model deployment
process by moving from limited built-in steps to a more documented and flexible approach.

Important note: If you have models that you previously deployed with ZenML, you might
want to redeploy them to have them stored in the ZenML server and tracked by ZenML,
ensuring they appear in the dashboard.

Additionally, the find_model_server method now retrieves models (services) from the
ZenML server instead of local or remote deployment environments. As a result, any
usage of find_model_server will only return newly deployed models stored in the server.

It is also no longer recommended to call service functions like service.start().
Instead, use model_deployer.start_model_server(service_id), which will allow ZenML
to update the changed status of the service in the server.

Starting a service

Old syntax:

from zenml import step
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    service.start(timeout=10)

New syntax:

from zenml import step
from zenml.integrations.bentoml.model_deployers import BentoMLModelDeployer
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    model_deployer = BentoMLModelDeployer.get_active_model_deployer()
    model_deployer.start_model_server(service_id=service.service_id, timeout=10)

Enabling continuous deployment

Previously, the deploy_model method took a parameter that replaced the existing
service whenever it matched the exact same pipeline name and step name, without
taking other parameters or configurations into account. This has been superseded
by a new parameter, continuous_deployment_mode, which enables continuous deployment
for the service: if a deployment targets the same pipeline and step and the service
is not already running, the existing service is updated to the latest version.
Any new deployment with a different configuration creates a new service.

from typing import Optional

from zenml import step, get_step_context
from zenml.client import Client
from zenml.constants import DEFAULT_SERVICE_START_STOP_TIMEOUT
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT,
    )
    service = model_deployer.deploy_model(
        mlflow_deployment_config, continuous_deployment_mode=True
    )
    logger.info(
        f"The deployed service info: {model_deployer.get_model_server_info(service)}"
    )
    return service

Major Features and Enhancements:

  • A new Huggingface Model Deployer has been introduced, allowing you to seamlessly
    deploy your Huggingface models using ZenML. (Thank you so much @dudeperf3ct for the contribution!)
  • Faster Integration and Dependency Management: ZenML now leverages the uv library,
    significantly improving the speed of integration installations and dependency
    management for a more streamlined and efficient workflow.
  • Enhanced Logging and Status Tracking: Logging has been improved, providing better
    visibility into the state of your ZenML services.
  • Improved Artifact Store Isolation: ZenML now prevents unsafe operations that access
    data outside the scope of the artifact store, ensuring better isolation and security.
  • Added an admin user notion for user accounts, and certain operations performed
    via the REST interface are now restricted to admin users only.
  • Rate limiting for the login API to prevent abuse and protect the server from
    potential security threats.
  • The LLM template is now supported in ZenML and can be used for your pipelines.
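To illustrate the rate-limiting idea mentioned above, here is a minimal token-bucket limiter sketch. This is not ZenML's actual server implementation; the `TokenBucket` class and its parameters are hypothetical, shown only to convey the concept:

```python
import time


class TokenBucket:
    """A minimal token-bucket rate limiter (illustrative sketch only)."""

    def __init__(self, capacity: int, refill_per_second: float) -> None:
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_second = refill_per_second
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to the elapsed time, up to capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last_refill) * self.refill_per_second,
        )
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# With no refill, only the first `capacity` attempts are allowed.
bucket = TokenBucket(capacity=3, refill_per_second=0.0)
results = [bucket.allow() for _ in range(5)]
```

A real login limiter would track a bucket per client and refill over time, so legitimate users recover quickly while brute-force attempts are throttled.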

🥳 Community Contributions 🥳

We'd like to give a special thanks to @dudeperf3ct, who contributed the Huggingface
Model Deployer to this release. We'd also like to thank @moesio-f for adding a new
attribute to the Kaniko image builder, and @christianversloot for his contributions
to this release.

All changes:


0.55.5

06 Mar 16:01
8e13b42

This patch contains a number of bug fixes and security improvements.

We improved the isolation of artifact stores so that various artifacts cannot be stored or accessed outside of the configured artifact store scope. Such unsafe operations are no longer allowed. This may have an impact on existing codebases if you have used unsafe file operations in the past.

To illustrate such a side effect: suppose a remote S3 artifact store is configured for the path s3://some_bucket/some_sub_folder, and your code calls artifact_store.open("s3://some_bucket/some_other_folder/dummy.txt", "w"). This operation is considered unsafe because it accesses data outside the scope of the artifact store. If you really need this behavior to achieve your goals, consider switching to s3fs or a similar library for such cases.
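The scope check described above amounts to a path-prefix comparison on normalized path components. The following sketch illustrates the idea; `is_within_artifact_store` is a hypothetical helper for this example, not ZenML's actual implementation:

```python
from pathlib import PurePosixPath


def is_within_artifact_store(root: str, path: str) -> bool:
    """Return True if `path` lies inside the artifact store root.

    Illustrative sketch only; compares whole path components so that
    e.g. "some_sub_folder_evil" does not match the "some_sub_folder" root.
    """
    root_parts = PurePosixPath(root).parts
    path_parts = PurePosixPath(path).parts
    # In scope only if the root's components are a prefix of the path's.
    return path_parts[: len(root_parts)] == root_parts


root = "s3://some_bucket/some_sub_folder"
in_scope = is_within_artifact_store(root, "s3://some_bucket/some_sub_folder/data.csv")
out_of_scope = is_within_artifact_store(root, "s3://some_bucket/some_other_folder/dummy.txt")
```

Comparing components rather than raw string prefixes is the important design choice: a naive `path.startswith(root)` check would wrongly accept sibling folders whose names merely begin with the root's name.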

Also with this release, the server global configuration is no longer stored on the server file system to prevent exposure of sensitive information.

User entities are now uniquely constrained to prevent the creation of duplicate users under certain race conditions.
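A database-level unique constraint is the standard way to enforce this kind of guarantee: even if two concurrent requests both pass an application-level existence check, the second insert fails at the database. The following SQLite sketch illustrates the mechanism (it is not ZenML's actual schema):

```python
import sqlite3

# Illustrative schema with a unique constraint on the user name.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE)"
)
conn.execute("INSERT INTO users (name) VALUES ('alice')")

try:
    # A racing second insert of the same name now fails at the database
    # level instead of silently creating a duplicate user.
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    duplicate_created = True
except sqlite3.IntegrityError:
    duplicate_created = False

user_count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```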

What's Changed

Full Changelog: 0.55.4...0.55.5

0.55.4

29 Feb 16:49
a24ccb1

This release brings a host of enhancements and fixes across the board, including
significant improvements to our services logging and status, the integration of
model saving to the registry via CLI methods, and more robust handling of
parallel pipelines and database entities. We've also made strides in optimizing
MLflow interactions, enhancing our documentation, and ensuring our CI processes
are more robust.

Additionally, we've tackled several bug fixes and performance improvements,
making our platform even more reliable and user-friendly.

We'd like to give a special thanks to @christianversloot and @francoisserra for
their contributions.

What's Changed

Full Changelog: 0.55.3...0.55.4

0.55.3

20 Feb 09:52

This patch comes with a variety of bug fixes and documentation updates.

With this release you can now download files directly from artifact versions
that you get back from the client without the need to materialize them. If you
would like to bypass materialization entirely and just download the data or
files associated with a particular artifact version, you can use the
download_files method:

from zenml.client import Client

client = Client()
artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
artifact.download_files("path/to/save.zip")

What's Changed

Full Changelog: 0.55.2...0.55.3

0.55.2

06 Feb 17:18
72f9fb6

This patch comes with a variety of new features, bug-fixes, and documentation updates.

Some of the most important changes include:

  • The ability to add tags to outputs through the step context
  • Allowing the secret stores to utilize the implicit authentication method of AWS/GCP/Azure Service Connectors
  • Lazy loading client methods in a pipeline context
  • Updates on the Vertex orchestrator to switch to the native VertexAI scheduler
  • The new HyperAI integration featuring a new orchestrator and service connector
  • Bumping the mlflow version to 2.10.0

We'd like to give a special thanks to @christianversloot and @francoisserra for their contributions.

What's Changed

Full Changelog: 0.55.1...0.55.2

0.55.1

26 Jan 09:18
cdd1452

If you are actively using the Model Control Plane features, we suggest that you directly upgrade to 0.55.1, bypassing 0.55.0.

This is a patch release bringing backward compatibility for breaking changes introduced in 0.55.0, so that appropriate migration actions can be performed at the desired pace. Please refer to the 0.55.0 release notes for specific information on breaking changes and how to update your code to align with the new way of doing things. We have also updated our documentation to serve you better and introduced PipelineNamespace models in our API.

This release also adds database recovery in case an upgrade fails to migrate the database to a newer version of ZenML.

What's Changed

New Contributors

Full Changelog: 0.55.0...0.55.1

0.55.0

23 Jan 12:04
734b205

This release comes with a range of new features, bug fixes and documentation updates. The most notable changes are the ability to lazily load Artifacts (and their Metadata) and the Model (and its Metadata) inside pipeline code using the pipeline context object, and the ability to link Artifacts to Model Versions implicitly via the save_artifact function.

Additionally, we've updated the documentation to include a new starter guide on how to manage artifacts, and a new production guide that walks you through how to configure your pipelines to run in production.

Breaking Change

The ModelVersion concept was renamed to Model going forward, which affects code bases using the Model Control Plane feature. This change is not backward compatible.

Pipeline decorator

@pipeline(model_version=ModelVersion(...)) -> @pipeline(model=Model(...))

Old syntax:

from zenml import pipeline, ModelVersion

@pipeline(model_version=ModelVersion(name="model_name",version="v42"))
def p():
  ...

New syntax:

from zenml import pipeline, Model

@pipeline(model=Model(name="model_name",version="v42"))
def p():
  ...

Step decorator

@step(model_version=ModelVersion(...)) -> @step(model=Model(...))

Old syntax:

from zenml import step, ModelVersion

@step(model_version=ModelVersion(name="model_name",version="v42"))
def s():
  ...

New syntax:

from zenml import step, Model

@step(model=Model(name="model_name",version="v42"))
def s():
  ...

Acquiring model configuration from pipeline/step context

Old syntax:

from zenml import pipeline, step, ModelVersion, get_step_context, get_pipeline_context

@pipeline(model_version=ModelVersion(name="model_name",version="v42"))
def p():
  model_version = get_pipeline_context().model_version
  ...

@step(model_version=ModelVersion(name="model_name",version="v42"))
def s():
  model_version = get_step_context().model_version
  ...

New syntax:

from zenml import pipeline, step, Model, get_step_context, get_pipeline_context

@pipeline(model=Model(name="model_name",version="v42"))
def p():
  model = get_pipeline_context().model
  ...

@step(model=Model(name="model_name",version="v42"))
def s():
  model = get_step_context().model
  ...

Usage of model configuration inside pipeline YAML config file

Old syntax:

model_version:
  name: model_name
  version: v42
  ...

New syntax:

model:
  name: model_name
  version: v42
  ...

ModelVersion.metadata -> Model.run_metadata

Old syntax:

from zenml import ModelVersion

def s():
  model_version = ModelVersion(name="model_name",version="production")
  some_metadata = model_version.metadata["some_metadata"].value
  ... 

New syntax:

from zenml import Model

def s():
  model = Model(name="model_name",version="production")
  some_metadata = model.run_metadata["some_metadata"].value
  ... 

Those using the older syntax are requested to update their code accordingly.

Full set of changes are highlighted here: #2267

What's Changed

New Contributors

Full Changelog: 0.54.1...0.55.0

0.43.1

23 Jan 14:28

Backports some important fixes that have been introduced in more recent versions
of ZenML to the 0.43.x release line.

Full Changelog: 0.43.0...0.43.1

0.42.2

23 Jan 14:28

Backports some important fixes that have been introduced in more recent versions
of ZenML to the 0.42.x release line.

Full Changelog: 0.42.1...0.42.2

0.53.1

16 Jan 04:26

Important

This release has been updated (16th January, 2024)

A bug was introduced in the helm chart starting from version 0.50.0. All releases from that version have been updated with the fix. More details: #2234

This minor release contains a hotfix for a bug introduced in 0.53.0 where the
secrets manager flavors were not properly removed from the database.

What's Changed

Full Changelog: 0.53.0...0.53.1