A curated list of awesome open source and commercial platforms for serving models in production 🚀
Updated Apr 20, 2022
Serving large ML models independently and asynchronously, using a message queue and KV storage to communicate with other services [EXPERIMENT] (see the queue-and-KV-store sketch after this list)
Collection of OSS models packaged into serving containers
Miscellaneous code and writings for MLOps
Integrating Aporia ML model monitoring into a Bodywork serving pipeline.
Big ML project with infrastructure (MLflow, MinIO, Grafana), backend (FastAPI, CatBoost), and frontend (React, MapLibre); see the FastAPI serving sketch after this list
Energy consumption of ML inference with Runtime Engines
Applied Machine Learning Projects
🌐 Language identification for Scandinavian languages
Resources for serving models in production
Heterogeneous-system ML pipeline scheduling framework with Triton Inference Server as the backend (a Triton client sketch follows this list)
Example solution to the MLOps Case Study covering both online and batch processing.
Data extraction for identifying architectural design decisions for achieving green ML serving
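The queue-and-KV-store entry above describes a common pattern for decoupling heavy model inference from request-handling services. The sketch below assumes Redis plays both roles (a list as the message queue, plain keys as the result store); the queue name, key prefix, and feature layout are illustrative, not taken from that project:

```python
"""Sketch of asynchronous model serving: a producer enqueues requests on a
Redis list, a worker runs the (large) model and writes results to Redis keys,
so callers never block on inference."""
import json
import uuid

import redis  # assumes the redis-py client is available

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

QUEUE = "inference:requests"         # hypothetical queue name
RESULT_PREFIX = "inference:result:"  # hypothetical result-key prefix


def submit(features: list[float]) -> str:
    """Producer side: enqueue a request and return an ID to poll later."""
    request_id = str(uuid.uuid4())
    r.lpush(QUEUE, json.dumps({"id": request_id, "features": features}))
    return request_id


def worker_loop(model) -> None:
    """Consumer side: block on the queue, run the model, store the result."""
    while True:
        _, raw = r.brpop(QUEUE)  # blocks until a request arrives
        request = json.loads(raw)
        prediction = model.predict([request["features"]])[0]
        r.set(
            RESULT_PREFIX + request["id"],
            json.dumps({"prediction": float(prediction)}),
            ex=3600,  # results expire after an hour
        )
```

A caller later reads `inference:result:<id>` (or polls until the key appears) instead of waiting on the worker directly.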
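For the project above that serves a CatBoost model behind FastAPI, a minimal serving endpoint looks roughly like this; the model path and feature schema are assumptions, not the project's actual code:

```python
"""Minimal FastAPI endpoint serving a trained CatBoost model."""
from catboost import CatBoostRegressor
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the trained model once at startup; "model.cbm" is an assumed path.
model = CatBoostRegressor()
model.load_model("model.cbm")


class Features(BaseModel):
    values: list[float]  # illustrative flat feature vector


@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])
    return {"prediction": float(prediction[0])}
```

Run it with `uvicorn main:app` and POST a JSON body such as `{"values": [0.1, 2.3, 4.5]}` to `/predict`.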
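The Triton-backed scheduling framework above ultimately forwards inference requests to Triton Inference Server; a bare-bones HTTP client call looks like the sketch below, where the model name, tensor names, and shape are illustrative:

```python
"""Sketch of a client request to Triton Inference Server over HTTP."""
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 4).astype(np.float32)  # dummy input batch

# "input"/"output" and the model name are assumptions; the real names come
# from the model's Triton config.pbtxt.
infer_input = httpclient.InferInput("input", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

response = client.infer(model_name="example_model", inputs=[infer_input])
print(response.as_numpy("output"))
```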