Data and AI Assets Catalog and Execution Engine
Allows upload, registration, execution, and deployment of:
- AI pipelines and pipeline components
- Models
- Datasets
- Notebooks
For more details about the project, see the announcement blog post.
Additionally, it provides:
- Automated generation of sample pipeline code to execute registered models, datasets, and notebooks
- A pipelines engine powered by Kubeflow Pipelines on Tekton, the core of Watson AI Pipelines
- A components registry for Kubeflow Pipelines
- Dataset management with Datashim
- Preregistered datasets from the Data Asset Exchange (DAX) and models from the Model Asset Exchange (MAX)
- A serving engine powered by KFServing
- Model metadata schemas
To get a minimal MLX instance up and running with the asset catalog only, we created a Quickstart Guide using Docker Compose.
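A minimal sketch of the Docker Compose route, assuming the compose configuration lives in a `quickstart` directory of the MLX repository (the repository URL and directory name are assumptions; the Quickstart Guide is authoritative):

```shell
# Clone the MLX repository (repository path is an assumption; see the Quickstart Guide)
git clone https://github.com/machine-learning-exchange/mlx.git
cd mlx/quickstart

# Bring up the catalog-only stack in the background
docker compose up -d

# When finished, tear the stack down again
docker compose down
```

This route runs only the asset catalog; pipeline execution requires one of the Kubernetes-based deployments below.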
For a slightly more resource-hungry local deployment that allows pipeline execution, we created the MLX with Kubernetes in Docker (KIND) deployment option.
For a full deployment, we use the Kubeflow kfctl tooling.
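The full deployment follows the standard kfctl flow of applying a KfDef manifest; a sketch, assuming `kfctl` is already on your `PATH` (the manifest URI below is a placeholder, not a real location; take the actual URI from the MLX deployment docs):

```shell
# Placeholder: the MLX KfDef manifest URI comes from the deployment docs
export CONFIG_URI=<mlx-kfdef-manifest-uri>

# kfctl generates and applies all resources from the deployment directory
mkdir mlx-deployment && cd mlx-deployment
kfctl apply -V -f ${CONFIG_URI}
```

`kfctl apply -V -f <config>` is the standard Kubeflow invocation; `-V` enables verbose output so failures during resource creation are easier to diagnose.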
By default, the MLX UI is available at http://<cluster_node_ip>:30380/mlx/
If you deployed on a Kubernetes cluster, run the following command and look for the EXTERNAL-IP column to find the public IP of a node.
kubectl get node -o wide
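If you prefer not to read the table output, the external IP can also be extracted with a JSONPath query; a sketch, assuming the first node in the list exposes NodePort 30380 (any reachable node works):

```shell
# Pull the first node's ExternalIP address and build the MLX UI URL from it
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
echo "http://${NODE_IP}:30380/mlx/"
```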
If you deployed using OpenShift, you can use the IstioIngressGateway Route. You can find it in the OpenShift Console or using the CLI.
oc get route -n istio-system
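To script the lookup, the route's hostname can be pulled out directly with `-o jsonpath`; a sketch, assuming the route is named `istio-ingressgateway` (confirm the actual name in the plain `oc get route` listing):

```shell
# Extract the ingress gateway route host and build the MLX UI URL (route name is an assumption)
HOST=$(oc get route istio-ingressgateway -n istio-system -o jsonpath='{.spec.host}')
echo "http://${HOST}/mlx/"
```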
Import data and AI assets using MLX's catalog importer.
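As a rough sketch of what a single-asset import can look like, assuming the MLX API exposes Kubeflow Pipelines-style upload endpoints under `/apis/v1alpha1/<asset-type>/upload` (the endpoint path, form field name, and file name below are all assumptions; the catalog importer documentation is authoritative):

```shell
# Upload a pipeline archive to the MLX API (endpoint path and form field are assumptions)
curl -X POST \
  -F "uploadfile=@my-pipeline.tar.gz" \
  "http://<cluster_node_ip>:30380/apis/v1alpha1/pipelines/upload?name=my-pipeline"
```

The catalog importer automates this kind of upload in bulk across pipelines, components, models, datasets, and notebooks.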
MLX Troubleshooting Instructions
- Slack: @lfaifoundation/ml-exchange
- Mailing lists:
- MLX-Announce for top-level milestone messages and announcements
- MLX-TSC for top-level governance discussions and decisions
- MLX-Technical-Discuss for technical discussions and questions