diff --git a/content/en/_index.html b/content/en/_index.html
index 0cc2184faa..518dd553df 100644
--- a/content/en/_index.html
+++ b/content/en/_index.html
@@ -124,7 +124,7 @@
Model Training
- Kubeflow Training Operator is a unified interface for model training on Kubernetes.
+ Kubeflow Training Operator is a unified interface for model training and fine-tuning on Kubernetes.
It runs scalable and distributed training jobs for popular frameworks including PyTorch, TensorFlow, MPI, MXNet, PaddlePaddle, and XGBoost.
diff --git a/content/en/docs/components/training/explanation/_index.md b/content/en/docs/components/training/explanation/_index.md
new file mode 100644
index 0000000000..bc2e4865e1
--- /dev/null
+++ b/content/en/docs/components/training/explanation/_index.md
@@ -0,0 +1,5 @@
++++
+title = "Explanation"
+description = "Explanation for Training Operator Features"
+weight = 60
++++
diff --git a/content/en/docs/components/training/explanation/fine-tuning.md b/content/en/docs/components/training/explanation/fine-tuning.md
new file mode 100644
index 0000000000..4e565f1368
--- /dev/null
+++ b/content/en/docs/components/training/explanation/fine-tuning.md
@@ -0,0 +1,63 @@
++++
+title = "LLM Fine-Tuning with Training Operator"
+description = "Why Training Operator needs fine-tuning API"
+weight = 10
++++
+
+{{% alert title="Warning" color="warning" %}}
+This feature is in **alpha** stage, and the Kubeflow community is looking for your feedback. Please
+share your experience using the [#kubeflow-training-operator Slack channel](https://kubeflow.slack.com/archives/C985VJN9F)
+or [Kubeflow Training Operator GitHub](https://github.com/kubeflow/training-operator/issues/new).
+{{% /alert %}}
+
+This page explains how the [Training Operator fine-tuning API](/docs/components/training/user-guides/fine-tuning)
+fits into the Kubeflow ecosystem.
+
+In the rapidly evolving landscape of machine learning (ML) and artificial intelligence (AI),
+the ability to fine-tune pre-trained models represents a significant leap towards achieving custom
+solutions with less effort and time. Fine-tuning allows practitioners to adapt large language models
+(LLMs) like BERT or GPT to their specific needs by training these models on custom datasets.
+This process preserves the model's architecture and builds on its learned parameters while
+adapting the model to particular applications. Whether you're working in natural language
+processing (NLP), image classification, or another ML domain, fine-tuning can drastically improve
+the performance and applicability of pre-existing models on new datasets and problems.
+
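+For concreteness, here is a minimal single-node fine-tuning sketch using the HuggingFace
+`transformers` library (the model and dataset names are illustrative); the sections below explain
+how Training Operator scales this pattern across a Kubernetes cluster:
+
+```python
+from datasets import load_dataset
+from transformers import (
+    AutoModelForSequenceClassification,
+    AutoTokenizer,
+    Trainer,
+    TrainingArguments,
+)
+
+# Start from pre-trained BERT weights rather than training from scratch.
+tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
+model = AutoModelForSequenceClassification.from_pretrained(
+    "bert-base-cased", num_labels=5
+)
+
+# Tokenize a small slice of a custom dataset.
+dataset = load_dataset("yelp_review_full", split="train[:1000]")
+dataset = dataset.map(
+    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
+    batched=True,
+)
+
+# Fine-tune: the architecture stays fixed, while the learned weights adapt.
+Trainer(
+    model=model,
+    args=TrainingArguments(output_dir="bert-fine-tuned", num_train_epochs=1),
+    train_dataset=dataset,
+).train()
+```
+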
+## Why the Training Operator Fine-Tuning API Matters
+
+The introduction of the fine-tuning API in the Training Operator Python SDK is a game-changer for ML practitioners
+operating within the Kubernetes ecosystem. Historically, Training Operator has streamlined the
+orchestration of ML workloads on Kubernetes, making distributed training more accessible. However,
+fine-tuning tasks often require extensive manual intervention, including the configuration of
+training environments and the distribution of data across nodes. The fine-tuning API aims to simplify
+this process, offering an easy-to-use Python interface that abstracts away the complexity involved
+in setting up and executing fine-tuning tasks on distributed systems.
+
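+For illustration, here is a sketch of what submitting a distributed fine-tuning job through the
+Python SDK looks like. The parameter names follow the
+[fine-tuning user guide](/docs/components/training/user-guides/fine-tuning); treat the exact
+values as placeholders:
+
+```python
+import transformers
+from peft import LoraConfig
+from kubeflow.training import TrainingClient
+from kubeflow.storage_initializer.hugging_face import (
+    HuggingFaceDatasetParams,
+    HuggingFaceModelParams,
+    HuggingFaceTrainerParams,
+)
+
+TrainingClient().train(
+    name="fine-tune-bert",
+    # Pre-trained model to pull from the HuggingFace Hub.
+    model_provider_parameters=HuggingFaceModelParams(
+        model_uri="hf://google-bert/bert-base-cased",
+        transformer_type=transformers.AutoModelForSequenceClassification,
+    ),
+    # Custom dataset to fine-tune on.
+    dataset_provider_parameters=HuggingFaceDatasetParams(
+        repo_id="yelp_review_full",
+        split="train[:3000]",
+    ),
+    # HuggingFace Trainer arguments plus an optional LoRA config.
+    trainer_parameters=HuggingFaceTrainerParams(
+        training_parameters=transformers.TrainingArguments(
+            output_dir="test_trainer",
+            num_train_epochs=1,
+        ),
+        lora_config=LoraConfig(r=8, lora_alpha=8, lora_dropout=0.1),
+    ),
+    # Distribute the job across two PyTorchJob workers.
+    num_workers=2,
+    num_procs_per_worker=1,
+    resources_per_worker={"gpu": 1, "cpu": 4, "memory": "10G"},
+)
+```
+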
+## The Rationale Behind Kubeflow's Fine-Tuning API
+
+Implementing the fine-tuning API within the Training Operator is a logical step in enhancing the platform's
+capabilities. By providing this API, Training Operator not only simplifies the user experience for
+ML practitioners but also leverages its existing infrastructure for distributed training.
+This approach aligns with Kubeflow's mission to democratize distributed ML training, making it more
+accessible and less cumbersome for users. The API facilitates a seamless transition from model
+development to deployment, supporting the fine-tuning of LLMs on custom datasets without the need
+for extensive manual setup or specialized knowledge of Kubernetes internals.
+
+## Roles and Interests
+
+Different user personas can benefit from this feature:
+
+- **MLOps Engineers:** Can leverage this API to automate and streamline the setup and execution of
+ fine-tuning tasks, reducing operational overhead.
+
+- **Data Scientists:** Can focus more on model experimentation and less on the logistical aspects of
+ distributed training, speeding up the iteration cycle.
+
+- **Business Owners:** Can expect quicker turnaround times for tailored ML solutions, enabling faster
+ response to market needs or operational challenges.
+
+- **Platform Engineers:** Can utilize this API to better operationalize the ML toolkit, ensuring
+ scalability and efficiency in managing ML workflows.
+
+## Next Steps
+
+- Understand [the architecture behind `train` API](/docs/components/training/reference/fine-tuning).
diff --git a/content/en/docs/components/training/images/fine-tune-llm-api.drawio.svg b/content/en/docs/components/training/images/fine-tune-llm-api.drawio.svg
new file mode 100644
index 0000000000..0aeed6e430
--- /dev/null
+++ b/content/en/docs/components/training/images/fine-tune-llm-api.drawio.svg
@@ -0,0 +1,4 @@
+
+
+
+