This repository is a fork of the original SkyPilot, maintained by Trainy to support running jobs on Konduktor, our managed Kubernetes cluster platform-as-a-service (GitHub and documentation). You can see some of our contributions to the mainline project here. If there are features in this fork that you feel make sense to contribute back upstream, please let us know and we will be happy to open a pull request. We plan to keep this fork under the same license as the original project (Apache 2.0), as we have greatly benefited from the open nature of the project and believe that sharing our work reduces redundant work streams for maintainers, contributors, and users alike.
🔥 News 🔥
- [Oct 2024] 🎉 SkyPilot crossed 1M+ downloads 🎉: Thank you to our community! Twitter/X
- [Sep 2024] Point, Launch and Serve Llama 3.2 on Kubernetes or Any Cloud: example
- [Sep 2024] Run and deploy Pixtral, the first open-source multimodal model from Mistral AI.
- [Jun 2024] Reproduce GPT with llm.c on any cloud: guide
- [Apr 2024] Serve Qwen-110B on your infra: example
- [Apr 2024] Using Ollama to deploy quantized LLMs on CPUs and GPUs: example
- [Feb 2024] Deploying and scaling Gemma with SkyServe: example
- [Feb 2024] Serving Code Llama 70B with vLLM and SkyServe: example
- [Dec 2023] Mixtral 8x7B, a high-quality sparse mixture-of-experts model, was released by Mistral AI! Deploy via SkyPilot on any cloud: example
- [Nov 2023] Using Axolotl to finetune Mistral 7B on the cloud (on-demand and spot): example
LLM Finetuning Cookbooks: Finetuning Llama 2 / Llama 3.1 in your own cloud environment, privately: Llama 2 example and blog; Llama 3.1 example and blog
Archived
- [Jul 2024] Finetune and serve Llama 3.1 on your infra
- [Apr 2024] Serve and finetune Llama 3 on any cloud or Kubernetes: example
- [Mar 2024] Serve and deploy Databricks DBRX on your infra: example
- [Feb 2024] Speed up your LLM deployments with SGLang for 5x throughput on SkyServe: example
- [Dec 2023] Using LoRAX to serve 1000s of finetuned LLMs on a single instance in the cloud: example
- [Sep 2023] Mistral 7B, a high-quality open LLM, was released! Deploy via SkyPilot on any cloud: Mistral docs
- [Sep 2023] Case study: Covariant transformed AI development on the cloud using SkyPilot, delivering models 4x faster cost-effectively: read the case study
- [Jul 2023] Self-Hosted Llama-2 Chatbot on Any Cloud: example
- [Jun 2023] Serving LLM 24x Faster On the Cloud with vLLM and SkyPilot: example, blog post
- [Apr 2023] SkyPilot YAMLs for finetuning & serving the Vicuna LLM with a single command!
SkyPilot is a framework for running AI and batch workloads on any infra, offering unified execution, high cost savings, and high GPU availability.
SkyPilot abstracts away infra burdens:
- Launch dev clusters, jobs, and serving on any infra
- Easy job management: queue, run, and auto-recover many jobs
SkyPilot supports multiple clusters, clouds, and hardware (the Sky):
- Bring your reserved GPUs, Kubernetes clusters, or 12+ clouds
- Flexible provisioning of GPUs, TPUs, CPUs, with auto-retry
SkyPilot cuts your cloud costs & maximizes GPU availability:
- Autostop: automatic cleanup of idle resources
- Managed Spot: 3-6x cost savings using spot instances, with preemption auto-recovery
- Optimizer: 2x cost savings by auto-picking the cheapest & most available infra
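As a sketch of how the spot savings above are requested (the `use_spot` field and the managed-jobs workflow follow the upstream SkyPilot docs; the accelerator value is illustrative):

```yaml
resources:
  accelerators: A100:1
  use_spot: true   # request spot instances; managed jobs auto-recover on preemption
```

Launched as a managed job (e.g., with `sky jobs launch task.yaml`), the task is automatically restarted on a fresh spot instance if the current one is preempted.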
SkyPilot supports your existing GPU, TPU, and CPU workloads, with no code changes.
Install with pip:
```shell
# Choose your clouds:
pip install -U "skypilot[kubernetes,aws,gcp,azure,oci,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp]"
```
To get the latest features and fixes, use the nightly build or install from source:
```shell
# Choose your clouds:
pip install "skypilot-nightly[kubernetes,aws,gcp,azure,oci,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp]"
```
Currently supported infra (Kubernetes; AWS, GCP, Azure, OCI, Lambda Cloud, Fluidstack, RunPod, Cudo, Paperspace, Cloudflare, Samsung, IBM, VMware vSphere):
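After installing, you can check which clouds your credentials enable (both commands below are part of the standard SkyPilot CLI; their output depends on your local cloud setup):

```shell
# Verify which clouds/credentials are enabled:
sky check

# List GPUs SkyPilot can provision, with on-demand prices:
sky show-gpus
```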
You can find our documentation here.
A SkyPilot task specifies: resource requirements, data to be synced, setup commands, and the task commands.
Once written in this unified interface (YAML or Python API), the task can be launched on any available cloud. This avoids vendor lock-in, and allows easily moving jobs to a different provider.
Paste the following into a file `my_task.yaml`:
```yaml
resources:
  accelerators: A100:8  # 8x NVIDIA A100 GPU

num_nodes: 1  # Number of VMs to launch

# Working directory (optional) containing the project codebase.
# Its contents are synced to ~/sky_workdir/ on the cluster.
workdir: ~/torch_examples

# Commands to be run before executing the job.
# Typical use: pip install -r requirements.txt, git clone, etc.
setup: |
  pip install "torch<2.2" torchvision --index-url https://download.pytorch.org/whl/cu121

# Commands to run as a job.
# Typical use: launch the main program.
run: |
  cd mnist
  python main.py --epochs 1
```
Prepare the workdir by cloning:

```shell
git clone https://github.com/pytorch/examples.git ~/torch_examples
```
Launch with `sky launch` (note: access to GPU instances is needed for this example):

```shell
sky launch my_task.yaml
```
SkyPilot then performs the heavy-lifting for you, including:
- Find the lowest-priced VM instance type across different clouds
- Provision the VM, with auto-failover if the cloud returns capacity errors
- Sync the local `workdir` to the VM
- Run the task's `setup` commands to prepare the VM for running the task
- Run the task's `run` commands
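Once launched, the cluster persists until you stop or tear it down; a sketch of the typical lifecycle with the standard `sky` CLI (the cluster name `mycluster`, set via `-c`, is illustrative):

```shell
sky launch -c mycluster my_task.yaml   # provision the cluster and run the task
sky exec mycluster my_task.yaml        # re-run the task on the same cluster
sky queue mycluster                    # inspect the cluster's job queue
sky autostop -i 10 mycluster           # autostop after 10 idle minutes
sky down mycluster                     # terminate the cluster
```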
Refer to Quickstart to get started with SkyPilot.
To learn more, see our documentation, blog, and community integrations.
Runnable examples:
- LLMs on SkyPilot
- Llama 3.2: lightweight and vision models
- Pixtral
- Llama 3.1 finetuning and serving
- GPT-2 via `llm.c`
- Llama 3
- Qwen
- Databricks DBRX
- Gemma
- Mixtral 8x7B; Mistral 7B (from official Mistral team)
- Code Llama
- vLLM: Serving LLM 24x Faster On the Cloud (from official vLLM team)
- SGLang: Fast and Expressive LLM Serving On the Cloud (from official SGLang team)
- Vicuna chatbots: Training & Serving (from official Vicuna team)
- Train your own Vicuna on Llama-2
- Self-Hosted Llama-2 Chatbot
- Ollama: Quantized LLMs on CPUs
- LoRAX
- QLoRA
- LLaMA-LoRA-Tuner
- Tabby: Self-hosted AI coding assistant
- LocalGPT
- Falcon
- Add yours here & see more in `llm/`!
- Framework examples: PyTorch DDP, DeepSpeed, JAX/Flax on TPU, Stable Diffusion, Detectron2, Distributed TensorFlow, Ray Train, NeMo, programmatic grid search, Docker, Cog, Unsloth, Ollama, llm.c, Airflow and many more (`examples/`).
Case Studies and Integrations: Community Spotlights
Follow updates:
Read the research:
- SkyPilot paper and talk (NSDI 2023)
- Sky Computing whitepaper
- Sky Computing vision paper (HotOS 2021)
- Policy for Managed Spot Jobs (NSDI 2024)
We are excited to hear your feedback!
- For issues and feature requests, please open a GitHub issue.
- For questions, please use GitHub Discussions.
For general discussions, join us on the SkyPilot Slack.
We welcome all contributions to the project! See CONTRIBUTING for how to get involved.