.. tags:: Dask, Integration, DistributedComputing, Data, Advanced
Flyte can execute dask jobs natively on a Kubernetes cluster, managing the virtual dask cluster's lifecycle. To do so, it leverages the open-source Dask Kubernetes Operator and can be enabled without signing up for any service. This is equivalent to running an ephemeral dask cluster, which gets created for the specific Flyte task and torn down after completion.
In Flyte/K8s, the cost is amortized because pods are faster to create than a machine, but the penalty of downloading Docker images may affect performance. Also, remember that starting a pod is not as fast as running a process.
Flytekit makes it possible to write dask code natively as a task, and the dask cluster will be automatically configured using the decorated ``Dask()`` config. The examples in this section provide a hands-on tutorial for writing dask Flyte tasks.
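As a minimal, illustrative sketch (the ``dask_mean`` task and its body are assumptions for this example, not from this page): a dask task is a regular Python task configured with ``task_config=Dask()``, and a ``distributed.Client()`` inside the task is expected to pick up the scheduler address of the ephemeral cluster injected into the job runner pod:

.. code-block:: python

    import dask.array as da
    from distributed import Client
    from flytekit import task
    from flytekitplugins.dask import Dask

    @task(task_config=Dask())
    def dask_mean(size: int) -> float:
        # Hypothetical example: Client() is expected to connect to the
        # ephemeral cluster via the scheduler address injected by the
        # dask-kubernetes operator into the job runner pod.
        client = Client()
        # A toy distributed computation: the mean of a random array.
        array = da.random.random(size)
        return float(array.mean().compute())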
The plugin has been tested against the 2022.12.0 version of the ``dask-kubernetes-operator``.
Managing Python dependencies is hard. Flyte makes it easy to version and manage dependencies using containers. The K8s dask plugin brings all the benefits of containerization to dask without needing to manage special dask clusters.
Pros:
- Extremely easy to get started; you get complete isolation between workloads
- Every job runs in isolation and has its own virtual cluster - no more nightmarish dependency management!
- Flyte manages everything for you!
Cons:
- Short-running, bursty jobs are not a great fit because of the container overhead
- No interactive dask capabilities are available with Flyte K8s dask, which is better suited to running ad hoc and scheduled jobs
Flyte dask uses the Dask Kubernetes Operator and a custom-built Flyte Dask plugin. This is a backend plugin which has to be enabled in your deployment; you can follow the steps mentioned in the :ref:`flyte:deployment-plugin-setup-k8s` section.
Install ``flytekitplugins-dask`` using pip in your environment:

.. code-block:: bash

    pip install flytekitplugins-dask
Ensure you have enough resources on your K8s cluster. Based on the resources required for your dask job (across the job runner, scheduler and workers), you may have to tweak resource quotas for the namespace.
It is advised to set limits, as these will set the ``--nthreads`` and ``--memory-limit`` arguments for the workers, as recommended by dask best practices.
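For illustration, a hedged sketch of that mapping (the specific numbers are assumptions; the ``DaskCluster`` usage follows the resource examples below): CPU limits feed ``--nthreads`` and memory limits feed ``--memory-limit`` on each worker.

.. code-block:: python

    from flytekit import Resources, task
    from flytekitplugins.dask import Dask, DaskCluster

    @task(
        task_config=Dask(
            cluster=DaskCluster(
                # Assumed values for illustration: these limits would be
                # expected to translate to roughly ``--nthreads 4`` and
                # ``--memory-limit 10Gi`` on each worker.
                limits=Resources(cpu="4", mem="10Gi"),
            ),
        ),
    )
    def my_dask_task():
        ...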
When specifying resources, the following precedence is followed for all components of the dask job (job runner pod, scheduler pod and worker pods):

- If no resources are specified, the platform resources are used.
- When ``task`` resources are used, those will be applied to all components of the dask job:

  .. code-block:: python

      from flytekit import Resources, task
      from flytekitplugins.dask import Dask

      @task(
          task_config=Dask(),
          limits=Resources(cpu="1", mem="10Gi"),  # Will be applied to all components
      )
      def my_dask_task():
          ...
- When resources are specified for the single components, they take the highest precedence:

  .. code-block:: python

      from flytekit import Resources, task
      from flytekitplugins.dask import Dask, DaskCluster, JobPodSpec

      @task(
          task_config=Dask(
              job_pod_spec=JobPodSpec(
                  limits=Resources(cpu="1", mem="2Gi"),  # Will be applied to the job pod
              ),
              cluster=DaskCluster(
                  limits=Resources(cpu="4", mem="10Gi"),  # Will be applied to the scheduler and worker pods
              ),
          ),
      )
      def my_dask_task():
          ...
By default, all components of the deployed dask job (job runner pod, scheduler pod and worker pods) will use the image that was used while registering (this image should have ``dask[distributed]`` installed in its Python environment). This helps keep the Python environments of all cluster components in sync.
However, it is possible to specify different images for the components. This allows for use cases such as using different images between tasks of the same workflow. While it is possible to use different images for the different components of the dask job, it is not advised, as this can quickly lead to Python environments getting out of sync.
.. code-block:: python

    from flytekit import task
    from flytekitplugins.dask import Dask, DaskCluster, JobPodSpec

    @task(
        task_config=Dask(
            job_pod_spec=JobPodSpec(
                image="my_image:0.1.0",  # Will be used by the job pod
            ),
            cluster=DaskCluster(
                image="my_image:0.1.0",  # Will be used by the scheduler and worker pods
            ),
        ),
    )
    def my_dask_task():
        ...
Environment variables set in the ``@task`` decorator will be passed on to all dask job components (job runner pod, scheduler pod and worker pods):
.. code-block:: python

    from flytekit import task
    from flytekitplugins.dask import Dask

    @task(
        task_config=Dask(),
        env={"FOO": "BAR"},  # Will be applied to all components
    )
    def my_dask_task():
        ...
Labels and annotations set in a ``LaunchPlan`` will be passed on to all dask job components (job runner pod, scheduler pod and worker pods):
.. code-block:: python

    from flytekit import task, workflow, Labels, Annotations
    from flytekitplugins.dask import Dask

    @task(task_config=Dask())
    def my_dask_task():
        ...

    @workflow
    def my_dask_workflow():
        my_dask_task()

    # Labels and annotations will be passed on to all dask cluster components
    my_launch_plan = my_dask_workflow.create_launch_plan(
        labels=Labels({"myexecutionlabel": "bar"}),
        annotations=Annotations({"region": "SEA"}),
    )