From 7d33fd85ef2950b5ccb539744421cff4e932e68d Mon Sep 17 00:00:00 2001 From: Dongjie Shi Date: Thu, 14 Oct 2021 10:06:22 +0800 Subject: [PATCH] Migrate hyperzoo (#4958) * add hyperzoo for k8s support (#2140) * add hyperzoo for k8s support * format * format * format * format * run examples on k8s readme (#2163) * k8s readme * fix jdk download issue (#2219) * add doc for submit jupyter notebook and cluster serving to k8s (#2221) * add hyperzoo doc * add hyperzoo doc * add hyperzoo doc * add hyperzoo doc * fix jdk download issue (#2223) * bump to 0.9s (#2227) * update jdk download url (#2259) * update some previous docs (#2284) * K8docsupdate (#2306) * Update README.md * Update s3 related links in readme and documents (#2489) * Update s3 related links in readme and documents * Update s3 related links in readme and documents * Update s3 related links in readme and documents * Update s3 related links in readme and documents * Update s3 related links in readme and documents * Update s3 related links in readme and documents * update * update * modify line length limit * update * Update mxnet-mkl version in hyper-zoo dockerfile (#2720) Co-authored-by: gaoping * update bigdl version (#2743) * update bigdl version * hyperzoo dockerfile add cluster-serving (#2731) * hyperzoo dockerfile add cluster-serving * update * update * update * update jdk url * update jdk url * update Co-authored-by: gaoping * Support init_spark_on_k8s (#2813) * initial * fix * code refactor * bug fix * update docker * style * add conda to docker image (#2894) * add conda to docker image * Update Dockerfile * Update Dockerfile Co-authored-by: glorysdj * Fix code blocks indents in .md files (#2978) * Fix code blocks indents in .md files Previously a lot of the code blocks in markdown files were horribly indented with bad white spaces in the beginning of lines. Users can't just select, copy, paste, and run (in the case of python). 
I have fixed all these, so there is no longer any code block with bad white space at the beginning of the lines. It would be nice if you could try to make sure in future commits that all code blocks are properly indented inside and have the right amount of white space in the beginning! * Fix small style issue * Fix indents * Fix indent and add \ for multiline commands Change indent from 3 spaces to 4, and add "\" for multiline bash commands Co-authored-by: Yifan Zhu * enable bigdl 0.12 (#3101) * switch to bigdl 0.12 * Hyperzoo example ref (#3143) * specify pip version to fix oserror 0 of proxy (#3165) * Bigdl0.12.1 (#3155) * bigdl 0.12.1 * bump 0.10.0-Snapshot (#3237) * update runtime image name (#3250) * update jdk download url (#3316) * update jdk8 url (#3411) Co-authored-by: ardaci * update hyperzoo docker image (#3429) * update hyperzoo image (#3457) * fix jdk in az docker (#3478) * fix jdk in az docker * fix jdk for hyperzoo * fix jdk in jenkins docker * fix jdk in cluster serving docker * fix jdk * fix readme * update python dep to fit cnvrg (#3486) * update ray version doc (#3568) * fix deploy hyperzoo issue (#3574) Co-authored-by: gaoping * add spark fix and net-tools and status check (#3742) * intsall netstat and add check status * add spark fix for graphene * bigdl 0.12.2 (#3780) * bump to 0.11-S and fix version issues except ipynb * add multi-stage build Dockerfile (#3916) * add multi-stage build Dockerfile * multi-stage build dockerfile * multi-stage build dockerfile * Rename Dockerfile.multi to Dockerfile * delete Dockerfile.multi * remove comments, add TINI_VERSION to common arg, remove Dockerfile.multi * multi-stage add tf_slim Co-authored-by: shaojie * update hyperzoo doc and k8s doc (#3959) * update userguide of k8s * update k8s guide * update hyperzoo doc * Update k8s.md add note * Update k8s.md add note * Update k8s.md update notes * fix 4087 issue (#4097) Co-authored-by: shaojie * fixed 4086 and 4083 issues (#4098) Co-authored-by: shaojie * 
Reduce image size (#4132) * Reduce Dockerfile size 1. del redis stage 2. del flink stage 3. del conda & exclude some python packages 4. add copies layer stage * update numpy version to 1.18.1 Co-authored-by: zzti-bsj * update hyperzoo image (#4250) Co-authored-by: Adria777 * bigdl 0.13 (#4210) * bigdl 0.13 * update * print exception * pyspark2.4.6 * update release PyPI script * update * flip snapshot-0.12.0 and spark2.4.6 (#4254) * s-0.12.0 master * Update __init__.py * Update python.md * fix docker issues due to version update (#4280) * fix docker issues * fix docker issues * update Dockerfile to support spark 3.1.2 && 2.4.6 (#4436) Co-authored-by: shaojie * update hyperzoo, add lib for tf2 (#4614) * delete tf 1.15.0 (#4719) Co-authored-by: Le-Zheng <30695225+Le-Zheng@users.noreply.github.com> Co-authored-by: pinggao18 <44043817+pinggao18@users.noreply.github.com> Co-authored-by: pinggao187 <44044110+pinggao187@users.noreply.github.com> Co-authored-by: gaoping Co-authored-by: Kai Huang Co-authored-by: GavinGu07 <55721214+GavinGu07@users.noreply.github.com> Co-authored-by: Yifan Zhu Co-authored-by: Yifan Zhu Co-authored-by: Song Jiaming Co-authored-by: ardaci Co-authored-by: Yang Wang Co-authored-by: zzti-bsj <2779090360@qq.com> Co-authored-by: shaojie Co-authored-by: Lingqi Su <33695124+Adria777@users.noreply.github.com> Co-authored-by: Adria777 Co-authored-by: shaojie --- docker/hyperzoo/Dockerfile | 176 +++ docker/hyperzoo/README.md | 404 +++++ docker/hyperzoo/download-analytics-zoo.sh | 32 + .../download-cluster-serving-all-zip.sh | 38 + docker/hyperzoo/freeze_checkpoint.py | 73 + ...cation_and_object_detection_quick_start.py | 64 + docker/hyperzoo/perf/cat1.jpeg | Bin 0 -> 5579 bytes .../perf/cluster-serving-enqueue-test | 15 + docker/hyperzoo/perf/offline-benchmark | 24 + docker/hyperzoo/quick_start.py | 50 + .../recommendation_ncf_quick_start.py | 17 + .../hyperzoo/resources/test_image/cat1.jpeg | Bin 0 -> 5579 bytes 
.../hyperzoo/resources/test_image/dog1.jpeg | Bin 0 -> 7098 bytes .../hyperzoo/resources/test_image/fish1.jpeg | Bin 0 -> 3444 bytes docker/hyperzoo/start-notebook-k8s.sh | 92 ++ docker/hyperzoo/start-notebook-spark.sh | 80 + docker/hyperzoo/submit-examples-on-k8s.md | 1296 +++++++++++++++++ 17 files changed, 2361 insertions(+) create mode 100644 docker/hyperzoo/Dockerfile create mode 100644 docker/hyperzoo/README.md create mode 100644 docker/hyperzoo/download-analytics-zoo.sh create mode 100644 docker/hyperzoo/download-cluster-serving-all-zip.sh create mode 100644 docker/hyperzoo/freeze_checkpoint.py create mode 100644 docker/hyperzoo/image_classification_and_object_detection_quick_start.py create mode 100644 docker/hyperzoo/perf/cat1.jpeg create mode 100755 docker/hyperzoo/perf/cluster-serving-enqueue-test create mode 100755 docker/hyperzoo/perf/offline-benchmark create mode 100644 docker/hyperzoo/quick_start.py create mode 100644 docker/hyperzoo/recommendation_ncf_quick_start.py create mode 100644 docker/hyperzoo/resources/test_image/cat1.jpeg create mode 100644 docker/hyperzoo/resources/test_image/dog1.jpeg create mode 100644 docker/hyperzoo/resources/test_image/fish1.jpeg create mode 100644 docker/hyperzoo/start-notebook-k8s.sh create mode 100644 docker/hyperzoo/start-notebook-spark.sh create mode 100644 docker/hyperzoo/submit-examples-on-k8s.md diff --git a/docker/hyperzoo/Dockerfile b/docker/hyperzoo/Dockerfile new file mode 100644 index 00000000000..5d7ae877bf6 --- /dev/null +++ b/docker/hyperzoo/Dockerfile @@ -0,0 +1,176 @@ +ARG SPARK_VERSION=2.4.6 +ARG SPARK_HOME=/opt/spark +ARG JDK_VERSION=8u192 +ARG JDK_URL=your_jdk_url +ARG BIGDL_VERSION=0.13.0 +ARG ANALYTICS_ZOO_VERSION=0.12.0-SNAPSHOT +ARG TINI_VERSION=v0.18.0 + +# stage.1 jdk & spark +FROM ubuntu:18.04 as spark +ARG SPARK_VERSION +ARG JDK_VERSION +ARG JDK_URL +ARG SPARK_HOME +ENV TINI_VERSION v0.18.0 +ENV SPARK_VERSION ${SPARK_VERSION} +ENV SPARK_HOME ${SPARK_HOME} +RUN apt-get update --fix-missing 
&& \ + apt-get install -y apt-utils vim curl nano wget unzip maven git && \ +# java + wget $JDK_URL && \ + gunzip jdk-$JDK_VERSION-linux-x64.tar.gz && \ + tar -xf jdk-$JDK_VERSION-linux-x64.tar -C /opt && \ + rm jdk-$JDK_VERSION-linux-x64.tar && \ + mv /opt/jdk* /opt/jdk$JDK_VERSION && \ + ln -s /opt/jdk$JDK_VERSION /opt/jdk && \ +# spark + wget https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz && \ + tar -zxvf spark-${SPARK_VERSION}-bin-hadoop2.7.tgz && \ + mv spark-${SPARK_VERSION}-bin-hadoop2.7 /opt/spark && \ + rm spark-${SPARK_VERSION}-bin-hadoop2.7.tgz && \ + cp /opt/spark/kubernetes/dockerfiles/spark/entrypoint.sh /opt + +RUN ln -fs /bin/bash /bin/sh +RUN if [ $SPARK_VERSION = "3.1.2" ]; then \ + rm $SPARK_HOME/jars/okhttp-*.jar && \ + wget -P $SPARK_HOME/jars https://repo1.maven.org/maven2/com/squareup/okhttp3/okhttp/3.8.0/okhttp-3.8.0.jar; \ + elif [ $SPARK_VERSION = "2.4.6" ]; then \ + rm $SPARK_HOME/jars/kubernetes-client-*.jar && \ + wget -P $SPARK_HOME/jars https://repo1.maven.org/maven2/io/fabric8/kubernetes-client/4.4.2/kubernetes-client-4.4.2.jar; \ + fi + +ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /sbin/tini + +# stage.2 analytics-zoo +FROM ubuntu:18.04 as analytics-zoo +ARG SPARK_VERSION +ARG BIGDL_VERSION +ARG ANALYTICS_ZOO_VERSION + +ENV SPARK_VERSION ${SPARK_VERSION} +ENV BIGDL_VERSION ${BIGDL_VERSION} +ENV ANALYTICS_ZOO_VERSION ${ANALYTICS_ZOO_VERSION} +ENV ANALYTICS_ZOO_HOME /opt/analytics-zoo-${ANALYTICS_ZOO_VERSION} + +RUN apt-get update --fix-missing && \ + apt-get install -y apt-utils vim curl nano wget unzip maven git +ADD ./download-analytics-zoo.sh /opt + +RUN chmod a+x /opt/download-analytics-zoo.sh && \ + mkdir -p /opt/analytics-zoo-examples/python +RUN /opt/download-analytics-zoo.sh && \ + rm analytics-zoo-bigdl*.zip && \ + unzip $ANALYTICS_ZOO_HOME/lib/*.zip 'zoo/examples/*' -d /opt/analytics-zoo-examples/python && \ + mv 
/opt/analytics-zoo-examples/python/zoo/examples/* /opt/analytics-zoo-examples/python && \ + rm -rf /opt/analytics-zoo-examples/python/zoo/examples + +# stage.3 copies layer +FROM ubuntu:18.04 as copies-layer +ARG ANALYTICS_ZOO_VERSION + +COPY --from=analytics-zoo /opt/analytics-zoo-${ANALYTICS_ZOO_VERSION} /opt/analytics-zoo-${ANALYTICS_ZOO_VERSION} +COPY --from=analytics-zoo /opt/analytics-zoo-examples/python /opt/analytics-zoo-examples/python +COPY --from=spark /opt/jdk /opt/jdk +COPY --from=spark /opt/spark /opt/spark +COPY --from=spark /opt/spark/kubernetes/dockerfiles/spark/entrypoint.sh /opt + + +# stage.4 +FROM ubuntu:18.04 +MAINTAINER The Analytics-Zoo Authors https://github.com/intel-analytics/analytics-zoo +ARG ANALYTICS_ZOO_VERSION +ARG BIGDL_VERSION +ARG SPARK_VERSION +ARG SPARK_HOME +ARG TINI_VERSION + +ENV ANALYTICS_ZOO_VERSION ${ANALYTICS_ZOO_VERSION} +ENV SPARK_HOME ${SPARK_HOME} +ENV SPARK_VERSION ${SPARK_VERSION} +ENV ANALYTICS_ZOO_HOME /opt/analytics-zoo-${ANALYTICS_ZOO_VERSION} +ENV FLINK_HOME /opt/flink-${FLINK_VERSION} +ENV OMP_NUM_THREADS 4 +ENV NOTEBOOK_PORT 12345 +ENV NOTEBOOK_TOKEN 1234qwer +ENV RUNTIME_SPARK_MASTER local[4] +ENV RUNTIME_K8S_SERVICE_ACCOUNT spark +ENV RUNTIME_K8S_SPARK_IMAGE intelanalytics/hyper-zoo:${ANALYTICS_ZOO_VERSION}-${SPARK_VERSION} +ENV RUNTIME_DRIVER_HOST localhost +ENV RUNTIME_DRIVER_PORT 54321 +ENV RUNTIME_EXECUTOR_CORES 4 +ENV RUNTIME_EXECUTOR_MEMORY 20g +ENV RUNTIME_EXECUTOR_INSTANCES 1 +ENV RUNTIME_TOTAL_EXECUTOR_CORES 4 +ENV RUNTIME_DRIVER_CORES 4 +ENV RUNTIME_DRIVER_MEMORY 10g +ENV RUNTIME_PERSISTENT_VOLUME_CLAIM myvolumeclaim +ENV SPARK_HOME /opt/spark +ENV HADOOP_CONF_DIR /opt/hadoop-conf +ENV BIGDL_VERSION ${BIGDL_VERSION} +ENV BIGDL_CLASSPATH ${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-jar-with-dependencies.jar +ENV JAVA_HOME /opt/jdk +ENV REDIS_HOME /opt/redis-5.0.5 +ENV CS_HOME /opt/work/cluster-serving +ENV PYTHONPATH 
${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-python-api.zip:${SPARK_HOME}/python/lib/pyspark.zip:${SPARK_HOME}/python/lib/py4j-*.zip:${CS_HOME}/serving-python.zip:/opt/models/research/slim +ENV PATH ${ANALYTICS_ZOO_HOME}/bin/cluster-serving:${JAVA_HOME}/bin:/root/miniconda3/bin:${PATH} +ENV TINI_VERSION ${TINI_VERSION} +ENV LC_ALL C.UTF-8 +ENV LANG C.UTF-8 + + +COPY --from=copies-layer /opt /opt +COPY --from=spark /sbin/tini /sbin/tini +ADD ./start-notebook-spark.sh /opt +ADD ./start-notebook-k8s.sh /opt + +RUN mkdir -p /opt/analytics-zoo-examples/python && \ + mkdir -p /opt/analytics-zoo-examples/scala && \ + apt-get update --fix-missing && \ + apt-get install -y apt-utils vim curl nano wget unzip maven git && \ + apt-get install -y gcc g++ make && \ + apt-get install -y libsm6 libxext6 libxrender-dev && \ + rm /bin/sh && \ + ln -sv /bin/bash /bin/sh && \ + echo "auth required pam_wheel.so use_uid" >> /etc/pam.d/su && \ + chgrp root /etc/passwd && chmod ug+rw /etc/passwd && \ +# python + apt-get install -y python3-minimal && \ + apt-get install -y build-essential python3 python3-setuptools python3-dev python3-pip && \ + pip3 install --no-cache-dir --upgrade pip && \ + pip install --no-cache-dir --upgrade setuptools && \ + pip install --no-cache-dir numpy==1.18.1 scipy && \ + pip install --no-cache-dir pandas==1.0.3 && \ + pip install --no-cache-dir scikit-learn matplotlib seaborn jupyter jupyterlab requests h5py && \ + ln -s /usr/bin/python3 /usr/bin/python && \ + #Fix tornado await process + pip uninstall -y -q tornado && \ + pip install --no-cache-dir tornado && \ + python3 -m ipykernel.kernelspec && \ + pip install --no-cache-dir tensorboard && \ + pip install --no-cache-dir jep && \ + pip install --no-cache-dir cloudpickle && \ + pip install --no-cache-dir opencv-python && \ + pip install --no-cache-dir pyyaml && \ + pip install --no-cache-dir redis && \ + pip install --no-cache-dir 
ray[tune]==1.2.0 && \ + pip install --no-cache-dir Pillow==6.1 && \ + pip install --no-cache-dir psutil aiohttp && \ + pip install --no-cache-dir py4j && \ + pip install --no-cache-dir cmake==3.16.3 && \ + pip install --no-cache-dir torch==1.7.1 torchvision==0.8.2 && \ + pip install --no-cache-dir horovod==0.19.2 && \ +#tf2 + pip install --no-cache-dir pyarrow && \ + pip install opencv-python==4.2.0.34 && \ + pip install aioredis==1.1.0 && \ + pip install tensorflow==2.4.0 && \ +# chmod + chmod a+x /opt/start-notebook-spark.sh && \ + chmod a+x /opt/start-notebook-k8s.sh && \ + chmod +x /sbin/tini && \ + cp /sbin/tini /usr/bin/tini + +WORKDIR /opt/spark/work-dir + +ENTRYPOINT [ "/opt/entrypoint.sh" ] diff --git a/docker/hyperzoo/README.md b/docker/hyperzoo/README.md new file mode 100644 index 00000000000..85a71efda67 --- /dev/null +++ b/docker/hyperzoo/README.md @@ -0,0 +1,404 @@ +The Analytics Zoo hyperzoo image is built to easily run applications on a Kubernetes cluster. This page introduces the pre-installed packages and the usage of the image. + +- [Launch pre-built hyperzoo image](#launch-pre-built-hyperzoo-image) +- [Run Analytics Zoo examples on k8s](#Run-analytics-zoo-examples-on-k8s) +- [Run Analytics Zoo Jupyter Notebooks on remote Spark cluster or k8s](#Run-Analytics-Zoo-Jupyter-Notebooks-on-remote-Spark-cluster-or-k8s) +- [Launch Analytics Zoo cluster serving](#Launch-Analytics-Zoo-cluster-serving) + +## Launch pre-built hyperzoo image + +#### Prerequisites + +1. A runnable Docker environment has been set up. +2. A running Kubernetes cluster is prepared. Also make sure `kubectl` has permission to create, list, and delete pods. + +#### Launch pre-built hyperzoo k8s image + +1.
Pull an Analytics Zoo hyperzoo image from [dockerhub](https://hub.docker.com/r/intelanalytics/hyper-zoo/tags): + +```bash +sudo docker pull intelanalytics/hyper-zoo:latest +``` + +- Speed up pulling the image by adding mirrors + +To speed up pulling the image from dockerhub in China, add a registry mirror. For Linux OS (CentOS, Ubuntu etc), if the Docker version is higher than 1.12, configure the Docker daemon. Edit `/etc/docker/daemon.json` and add the registry-mirrors key and value: + +```bash +{ + "registry-mirrors": ["https://<your-mirror-host>"] +} +``` + +For example, add the ustc mirror in China: + +```bash +{ + "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"] +} +``` + +Flush the changes and restart Docker: + +```bash +sudo systemctl daemon-reload +sudo systemctl restart docker +``` + +If your Docker version is between 1.8 and 1.11, find the Docker configuration file, whose location depends on the operating system. Edit it and add `DOCKER_OPTS="--registry-mirror=https://<your-mirror-host>"`. Restart Docker with `sudo service docker restart`. + +If you would like to speed up pulling this image on MacOS or Windows, find the Docker settings and configure the registry-mirrors section by specifying the mirror host. Restart Docker. + +Then pull the image. It will be faster.
+ +```bash +sudo docker pull intelanalytics/hyper-zoo:latest +``` + +2. K8s configuration + +Get the k8s master URL to use as the Spark master: + +```bash +kubectl cluster-info +``` + +After running this command, it shows something like "Kubernetes master is running at https://127.0.0.1:12345", which means: + +```bash +master="k8s://https://127.0.0.1:12345" +``` + +The namespace defaults to `default`, or can be set via spark.kubernetes.namespace. + +RBAC: + +```bash +kubectl create serviceaccount spark +kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default +``` + +View the k8s configuration file: + +``` +.kube/config +``` + +or + +```bash +kubectl config view --flatten --minify > kubeconfig +``` + +The k8s data can be stored in NFS or Ceph; take NFS as an example. + +On the NFS server, run: + +```bash +yum install nfs-utils +systemctl enable rpcbind +systemctl enable nfs +systemctl start rpcbind +firewall-cmd --zone=public --permanent --add-service={rpc-bind,mountd,nfs} +firewall-cmd --reload +mkdir /disk1/nfsdata +chmod 755 /disk1/nfsdata +nano /etc/exports "/disk1/nfsdata *(rw,sync,no_root_squash,no_all_squash)" +systemctl restart nfs +``` + +On the NFS client, run: + +```bash +yum install -y nfs-utils && systemctl start rpcbind && showmount -e +``` + +k8s conf: + +```bash +git clone https://github.com/kubernetes-incubator/external-storage.git +cd /XXX/external-storage/nfs-client +nano deploy/deployment.yaml +nano deploy/rbac.yaml +kubectl create -f deploy/rbac.yaml +kubectl create -f deploy/deployment.yaml +kubectl create -f deploy/class.yaml +``` + +Test: + +```bash +kubectl create -f deploy/test-claim.yaml +kubectl create -f deploy/test-pod.yaml +kubectl get pvc +kubectl delete -f deploy/test-pod.yaml +kubectl delete -f deploy/test-claim.yaml +``` + +If the test succeeds, run: + +```bash +kubectl create -f deploy/nfs-volume-claim.yaml +``` + +3. Launch a k8s client container: + +Please note the two different containers: the **client container** is for users to
submit zoo jobs, since it contains all the required env and libs except hadoop/k8s configs; the executor container does not need to be created manually, as it is scheduled by k8s at runtime. + +```bash +sudo docker run -itd --net=host \ + -v /etc/kubernetes:/etc/kubernetes \ + -v /root/.kube:/root/.kube \ + intelanalytics/hyper-zoo:latest bash +``` + +Note: to launch the client container, `-v /etc/kubernetes:/etc/kubernetes` and `-v /root/.kube:/root/.kube` are required to specify the paths of the kube config and installation. + +To specify more arguments, use: + +```bash +sudo docker run -itd --net=host \ + -v /etc/kubernetes:/etc/kubernetes \ + -v /root/.kube:/root/.kube \ + -e NOTEBOOK_PORT=12345 \ + -e NOTEBOOK_TOKEN="your-token" \ + -e http_proxy=http://your-proxy-host:your-proxy-port \ + -e https_proxy=https://your-proxy-host:your-proxy-port \ + -e RUNTIME_SPARK_MASTER=k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \ + -e RUNTIME_K8S_SERVICE_ACCOUNT=account \ + -e RUNTIME_K8S_SPARK_IMAGE=intelanalytics/hyper-zoo:latest \ + -e RUNTIME_PERSISTENT_VOLUME_CLAIM=myvolumeclaim \ + -e RUNTIME_DRIVER_HOST=x.x.x.x \ + -e RUNTIME_DRIVER_PORT=54321 \ + -e RUNTIME_EXECUTOR_INSTANCES=1 \ + -e RUNTIME_EXECUTOR_CORES=4 \ + -e RUNTIME_EXECUTOR_MEMORY=20g \ + -e RUNTIME_TOTAL_EXECUTOR_CORES=4 \ + -e RUNTIME_DRIVER_CORES=4 \ + -e RUNTIME_DRIVER_MEMORY=10g \ + intelanalytics/hyper-zoo:latest bash +``` + +- NOTEBOOK_PORT value 12345 is a user-specified port number. +- NOTEBOOK_TOKEN value "your-token" is a user-specified string. +- http_proxy is to specify the http proxy. +- https_proxy is to specify the https proxy. +- RUNTIME_SPARK_MASTER is to specify the spark master, which should be `k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>` or `spark://<spark-master-host>:<spark-master-port>`. +- RUNTIME_K8S_SERVICE_ACCOUNT is the service account for the driver pod. Please refer to k8s [RBAC](https://spark.apache.org/docs/latest/running-on-kubernetes.html#rbac). +- RUNTIME_K8S_SPARK_IMAGE is the k8s image. +- RUNTIME_PERSISTENT_VOLUME_CLAIM is to specify the volume mount.
We are supposed to use volume mount to store or receive data. Get ready with [Kubernetes Volumes](https://spark.apache.org/docs/latest/running-on-kubernetes.html#volume-mounts). +- RUNTIME_DRIVER_HOST is to specify the driver host (only required when submitting jobs in Kubernetes client mode). +- RUNTIME_DRIVER_PORT is to specify the driver port number (only required when submitting jobs in Kubernetes client mode). +- Other environment variables are for spark configuration settings. The default values in this image are listed above. Replace the values as you need. + +Once the container is created, launch the container by: + +```bash +sudo docker exec -it <containerID> bash +``` + +You should then see a prompt like: + +``` +root@[hostname]:/opt/spark/work-dir# +``` + +`/opt/spark/work-dir` is the spark work path. + +Note: The `/opt` directory contains: + +- download-analytics-zoo.sh is used for downloading Analytics-Zoo distributions. +- start-notebook-spark.sh is used for starting the jupyter notebook on a standard spark cluster. +- start-notebook-k8s.sh is used for starting the jupyter notebook on a k8s cluster. +- analytics-zoo-x.x-SNAPSHOT is `ANALYTICS_ZOO_HOME`, the home of the Analytics Zoo distribution. +- the analytics-zoo-examples directory contains downloaded python example code. +- jdk is the jdk home. +- spark is the spark home. +- redis is the redis home. + +## Run Analytics Zoo examples on k8s + +#### Launch an Analytics Zoo python example on k8s + +Here is a sample for submitting the python [anomalydetection](https://github.com/intel-analytics/analytics-zoo/tree/master/pyzoo/zoo/examples/anomalydetection) example in cluster mode.
+ +```bash +${SPARK_HOME}/bin/spark-submit \ + --master ${RUNTIME_SPARK_MASTER} \ + --deploy-mode cluster \ + --conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \ + --name analytics-zoo \ + --conf spark.kubernetes.container.image=${RUNTIME_K8S_SPARK_IMAGE} \ + --conf spark.executor.instances=${RUNTIME_EXECUTOR_INSTANCES} \ + --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.options.claimName=${RUNTIME_PERSISTENT_VOLUME_CLAIM} \ + --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.mount.path=/zoo \ + --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.options.claimName=${RUNTIME_PERSISTENT_VOLUME_CLAIM} \ + --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.mount.path=/zoo \ + --conf spark.kubernetes.driver.label.<your-label>=true \ + --conf spark.kubernetes.executor.label.<your-label>=true \ + --executor-cores ${RUNTIME_EXECUTOR_CORES} \ + --executor-memory ${RUNTIME_EXECUTOR_MEMORY} \ + --total-executor-cores ${RUNTIME_TOTAL_EXECUTOR_CORES} \ + --driver-cores ${RUNTIME_DRIVER_CORES} \ + --driver-memory ${RUNTIME_DRIVER_MEMORY} \ + --properties-file ${ANALYTICS_ZOO_HOME}/conf/spark-analytics-zoo.conf \ + --py-files ${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-python-api.zip,/opt/analytics-zoo-examples/python/anomalydetection/anomaly_detection.py \ + --conf spark.driver.extraJavaOptions=-Dderby.stream.error.file=/tmp \ + --conf spark.sql.catalogImplementation='in-memory' \ + --conf spark.driver.extraClassPath=${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-jar-with-dependencies.jar \ + --conf
spark.executor.extraClassPath=${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-jar-with-dependencies.jar \ + file:///opt/analytics-zoo-examples/python/anomalydetection/anomaly_detection.py \ + --input_dir /zoo/data/nyc_taxi.csv +``` + +Options: + +- --master: the spark master, must be a URL with the format `k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>`. +- --deploy-mode: submit the application in cluster mode. +- --name: the Spark application name. +- --conf: required to specify the k8s service account, the container image to use for the Spark application, driver volume name and path, pod labels, spark driver and executor configuration, etc. + Check the argument settings in your environment and refer to the [spark configuration page](https://spark.apache.org/docs/latest/configuration.html) and [spark on k8s configuration page](https://spark.apache.org/docs/latest/running-on-kubernetes.html#configuration) for more details. +- --properties-file: the customized conf properties. +- --py-files: the extra Python packages needed. +- file://: local file path of the python example file in the client container. +- --input_dir: input data path of the anomaly detection example. The data path is the mounted filesystem of the host. See [Kubernetes Volumes](https://spark.apache.org/docs/latest/running-on-kubernetes.html#using-kubernetes-volumes) for more details. + +See more [python examples](submit-examples-on-k8s.md) running on k8s.
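The RUNTIME_DRIVER_HOST/RUNTIME_DRIVER_PORT variables described earlier only matter in client mode. A minimal sketch of the client-mode deltas relative to the cluster-mode command above, assuming the client container's address is routable from the executor pods (`...` stands for all the unchanged flags shown in the cluster-mode example):

```shell
# Sketch only: client-mode submission deltas (not a full command).
# Assumes RUNTIME_DRIVER_HOST is an address executors can reach.
${SPARK_HOME}/bin/spark-submit \
  --master ${RUNTIME_SPARK_MASTER} \
  --deploy-mode client \
  --conf spark.driver.host=${RUNTIME_DRIVER_HOST} \
  --conf spark.driver.port=${RUNTIME_DRIVER_PORT} \
  ... \
  file:///opt/analytics-zoo-examples/python/anomalydetection/anomaly_detection.py \
  --input_dir /zoo/data/nyc_taxi.csv
```

In client mode the driver runs inside the client container itself, so no driver pod is created; only executor pods are scheduled on the cluster.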
+ +#### Launch an Analytics Zoo scala example on k8s + +Here is a sample for submitting the scala [anomalydetection](https://github.com/intel-analytics/analytics-zoo/tree/master/zoo/src/main/scala/com/intel/analytics/zoo/examples/anomalydetection) example in cluster mode. + +```bash +${SPARK_HOME}/bin/spark-submit \ + --master ${RUNTIME_SPARK_MASTER} \ + --deploy-mode cluster \ + --conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \ + --name analytics-zoo \ + --conf spark.kubernetes.container.image=${RUNTIME_K8S_SPARK_IMAGE} \ + --conf spark.executor.instances=${RUNTIME_EXECUTOR_INSTANCES} \ + --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.options.claimName=${RUNTIME_PERSISTENT_VOLUME_CLAIM} \ + --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.mount.path=/zoo \ + --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.options.claimName=${RUNTIME_PERSISTENT_VOLUME_CLAIM} \ + --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.mount.path=/zoo \ + --conf spark.kubernetes.driver.label.<your-label>=true \ + --conf spark.kubernetes.executor.label.<your-label>=true \ + --executor-cores ${RUNTIME_EXECUTOR_CORES} \ + --executor-memory ${RUNTIME_EXECUTOR_MEMORY} \ + --total-executor-cores ${RUNTIME_TOTAL_EXECUTOR_CORES} \ + --driver-cores ${RUNTIME_DRIVER_CORES} \ + --driver-memory ${RUNTIME_DRIVER_MEMORY} \ + --properties-file ${ANALYTICS_ZOO_HOME}/conf/spark-analytics-zoo.conf \ + --py-files ${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-python-api.zip \ + --conf spark.driver.extraJavaOptions=-Dderby.stream.error.file=/tmp \ + --conf spark.sql.catalogImplementation='in-memory' \ + --conf
spark.driver.extraClassPath=${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-jar-with-dependencies.jar \ + --conf spark.executor.extraClassPath=${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-jar-with-dependencies.jar \ + --class com.intel.analytics.zoo.examples.anomalydetection.AnomalyDetection \ + ${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-python-api.zip \ + --inputDir /zoo/data +``` + +Options: + +- --master: the spark master, must be a URL with the format `k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>`. +- --deploy-mode: submit the application in cluster mode. +- --name: the Spark application name. +- --conf: required to specify the k8s service account, the container image to use for the Spark application, driver volume name and path, pod labels, spark driver and executor configuration, etc. + Check the argument settings in your environment and refer to the [spark configuration page](https://spark.apache.org/docs/latest/configuration.html) and [spark on k8s configuration page](https://spark.apache.org/docs/latest/running-on-kubernetes.html#configuration) for more details. +- --properties-file: the customized conf properties. +- --py-files: the extra Python packages needed. +- --class: the scala example class name. +- --inputDir: input data path of the anomaly detection example. The data path is the mounted filesystem of the host. See [Kubernetes Volumes](https://spark.apache.org/docs/latest/running-on-kubernetes.html#using-kubernetes-volumes) for more details. + +See more [scala examples](submit-examples-on-k8s.md) running on k8s.
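After a cluster-mode submission, the application's pods can be located by the label key you passed via `spark.kubernetes.driver.label.` / `spark.kubernetes.executor.label.` in the commands above. A sketch, assuming a hypothetical label key `my-zoo-app`:

```shell
# List the pods created for the application, assuming you submitted with
# e.g. --conf spark.kubernetes.driver.label.my-zoo-app=true (placeholder label).
kubectl get pods -l my-zoo-app=true

# Alternatively, filter on the fixed application name used in the examples.
kubectl get pods | grep analytics-zoo
```

Labeling both the driver and executor pods this way lets a single selector find everything the application created, which is also handy for bulk cleanup later.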
+ +#### Access logs to check result and clear pods + +When the application is running, you can stream logs from the driver pod: + +```bash +$ kubectl logs <driver-pod-name> +``` + +To check the pod status or get basic information about the pod, use: + +```bash +$ kubectl describe pod <pod-name> +``` + +You can check other pods in a similar way. + +After the application finishes, delete the driver pod: + +```bash +$ kubectl delete pod <driver-pod-name> +``` + +Or clean up the entire spark application by pod label: + +```bash +$ kubectl delete pod -l <pod-label> +``` + +## Run Analytics Zoo Jupyter Notebooks on remote Spark cluster or k8s + +When you start a Docker container with RUNTIME_SPARK_MASTER=`k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>` or RUNTIME_SPARK_MASTER=`spark://<spark-master-host>:<spark-master-port>`, the container will submit jobs to the k8s cluster or the spark cluster, respectively. + +You may also need to specify NOTEBOOK_PORT=`<your-port>` and NOTEBOOK_TOKEN=`<your-token>` to start the Jupyter Notebook on the specified port, bound to 0.0.0.0. + +To start the Jupyter notebooks on a remote spark cluster, use RUNTIME_SPARK_MASTER=`spark://<spark-master-host>:<spark-master-port>`, attach to the client container with `docker exec -it <containerID> bash`, then run the shell script `/opt/start-notebook-spark.sh`. This starts a Jupyter notebook instance in the local container, and each tutorial in it will be submitted to the specified spark cluster. You can access the notebook at `http://<notebook-host>:<notebook-port>` in a browser; you also need to enter the token `<your-token>` to browse and run the Analytics Zoo tutorials. Each tutorial runs its driver part in the local container and its executor part on the spark cluster.
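The attach-and-start flow described above can be sketched as follows (the container ID, host, and port are placeholders for your own deployment):

```shell
# Find the running client container and attach to it.
sudo docker ps                        # note the hyper-zoo container ID
sudo docker exec -it <containerID> bash

# Inside the container: start Jupyter; each tutorial is submitted to the
# standalone Spark cluster given by RUNTIME_SPARK_MASTER.
/opt/start-notebook-spark.sh

# Then open http://<notebook-host>:<NOTEBOOK_PORT> in a browser and
# enter the NOTEBOOK_TOKEN value when prompted.
```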
+ +To start the Jupyter notebooks on a Kubernetes cluster, use RUNTIME_SPARK_MASTER=`k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>`, attach to the client container with `docker exec -it <containerID> bash`, then run the shell script `/opt/start-notebook-k8s.sh`. This starts a Jupyter notebook instance in the local container, and each tutorial in it will be submitted to the specified kubernetes cluster. You can access the notebook at `http://<notebook-host>:<notebook-port>` in a browser; you also need to enter the token `<your-token>` to browse and run the Analytics Zoo tutorials. Each tutorial runs its driver part in the local container and its executor part in dynamically allocated spark executor pods on the k8s cluster. + +## Launch Analytics Zoo cluster serving + +To run Analytics Zoo cluster serving in the hyper-zoo client container and submit the streaming job to the K8S cluster, specify RUNTIME_SPARK_MASTER=`k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>`; you may also need to mount a volume from host to container to load model and data files. + +You can leverage an existing Redis instance/cluster, or start one in the client container: +```bash +${REDIS_HOME}/src/redis-server ${REDIS_HOME}/redis.conf > ${REDIS_HOME}/redis.log & +``` +You can check the running logs of redis: +```bash +cat ${REDIS_HOME}/redis.log +``` + +Before starting the cluster serving job, please also modify config.yaml to configure the correct model path, redis host url, etc.
+```bash
+nano /opt/cluster-serving/config.yaml
+```
+
+After that, you can start the cluster-serving job and submit the streaming job to the K8S cluster:
+```bash
+${SPARK_HOME}/bin/spark-submit \
+    --master ${RUNTIME_SPARK_MASTER} \
+    --deploy-mode cluster \
+    --conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
+    --name analytics-zoo \
+    --conf spark.kubernetes.container.image=${RUNTIME_K8S_SPARK_IMAGE} \
+    --conf spark.executor.instances=${RUNTIME_EXECUTOR_INSTANCES} \
+    --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.options.claimName=${RUNTIME_PERSISTENT_VOLUME_CLAIM} \
+    --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.mount.path=/zoo \
+    --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.options.claimName=${RUNTIME_PERSISTENT_VOLUME_CLAIM} \
+    --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.${RUNTIME_PERSISTENT_VOLUME_CLAIM}.mount.path=/zoo \
+    --conf spark.kubernetes.driver.label.<your-label>=true \
+    --conf spark.kubernetes.executor.label.<your-label>=true \
+    --executor-cores ${RUNTIME_EXECUTOR_CORES} \
+    --executor-memory ${RUNTIME_EXECUTOR_MEMORY} \
+    --total-executor-cores ${RUNTIME_TOTAL_EXECUTOR_CORES} \
+    --driver-cores ${RUNTIME_DRIVER_CORES} \
+    --driver-memory ${RUNTIME_DRIVER_MEMORY} \
+    --properties-file ${ANALYTICS_ZOO_HOME}/conf/spark-analytics-zoo.conf \
+    --py-files ${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-python-api.zip,/opt/analytics-zoo-examples/python/anomalydetection/anomaly_detection.py \
+    --conf spark.driver.extraJavaOptions=-Dderby.stream.error.file=/tmp \
+    --conf spark.sql.catalogImplementation='in-memory' \
+    --conf spark.driver.extraClassPath=${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-jar-with-dependencies.jar:/opt/cluster-serving/spark-redis-2.4.0-jar-with-dependencies.jar \
+    --conf spark.executor.extraClassPath=${ANALYTICS_ZOO_HOME}/lib/analytics-zoo-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ANALYTICS_ZOO_VERSION}-jar-with-dependencies.jar:/opt/cluster-serving/spark-redis-2.4.0-jar-with-dependencies.jar \
+    --conf "spark.executor.extraJavaOptions=-Dbigdl.engineType=mklblas" \
+    --conf "spark.driver.extraJavaOptions=-Dbigdl.engineType=mklblas" \
+    --class com.intel.analytics.zoo.serving.ClusterServing \
+    local:/opt/analytics-zoo-0.8.0-SNAPSHOT/lib/analytics-zoo-bigdl_0.10.0-spark_2.4.3-0.8.0-SNAPSHOT-jar-with-dependencies.jar
+```
\ No newline at end of file
diff --git a/docker/hyperzoo/download-analytics-zoo.sh b/docker/hyperzoo/download-analytics-zoo.sh
new file mode 100644
index 00000000000..31fec501df3
--- /dev/null
+++ b/docker/hyperzoo/download-analytics-zoo.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+
+#
+# Copyright 2016 The Analytics-Zoo Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+echo $ANALYTICS_ZOO_VERSION
+echo $BIGDL_VERSION
+echo $SPARK_VERSION
+SPARK_MAJOR_VERSION=${SPARK_VERSION%%.[0-9]}
+echo $SPARK_MAJOR_VERSION
+
+if [[ $ANALYTICS_ZOO_VERSION == *"SNAPSHOT"* ]]; then
+    NIGHTLY_VERSION=$(echo $(echo `wget -qO - https://oss.sonatype.org/content/groups/public/com/intel/analytics/zoo/analytics-zoo-bigdl_$BIGDL_VERSION-spark_$SPARK_VERSION/$ANALYTICS_ZOO_VERSION/maven-metadata.xml | sed -n '/<value>[0-9]*\.[0-9]*\.[0-9]*-[0-9][0-9]*\.[0-9][0-9]*-[0-9][0-9]*.*value>/p' | head -n1 | awk -F'>' '{print $2}' | tr […]
[remaining lines of download-analytics-zoo.sh corrupted in this patch; omitted]
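The `${SPARK_VERSION%%.[0-9]}` expansion above strips the trailing `.<digit>` patch component to derive the Spark major version. A quick self-contained sketch (the version strings here are just examples):

```shell
#!/bin/bash
# `%%pattern` removes the longest suffix matching the glob pattern;
# here the pattern ".[0-9]" matches a dot followed by one digit.
SPARK_VERSION="2.4.3"
SPARK_MAJOR_VERSION=${SPARK_VERSION%%.[0-9]}
echo "$SPARK_MAJOR_VERSION"   # prints 2.4
```

Note that because the pattern only matches a single digit, a version like `2.4.10` would not be stripped by this expression.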
[GIT binary patch data omitted]
diff --git a/docker/hyperzoo/resources/test_image/fish1.jpeg b/docker/hyperzoo/resources/test_image/fish1.jpeg
new file mode 100644
index 0000000000000000000000000000000000000000..daef2b58d5746cf36cf9b25c7fdc721eac8ed928
GIT binary patch
literal 3444
[binary JPEG data omitted]