[CI] Ensure documentation build is checked in CI (vllm-project#2842)
simon-mo authored Feb 13, 2024
1 parent a4211a4 commit f964493
Showing 5 changed files with 14 additions and 1 deletion.
7 changes: 7 additions & 0 deletions .buildkite/test-pipeline.yaml
@@ -49,3 +49,10 @@ steps:
   commands:
   - pip install aiohttp
   - bash run-benchmarks.sh
+
+- label: Documentation Build
+  working_dir: "/vllm-workspace/docs"
+  no_gpu: True
+  commands:
+  - pip install -r requirements-docs.txt
+  - SPHINXOPTS=\"-W\" make html
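The `SPHINXOPTS=\"-W\"` flag makes Sphinx treat warnings as errors, so a broken cross-reference or malformed directive fails the CI step instead of scrolling by in the build log. As a rough Python analogue of that warnings-as-errors policy (this uses Python's `warnings` machinery for illustration only; Sphinx implements `-W` internally):

```python
import warnings

# Rough analogue of Sphinx's -W flag: promote every warning to a hard error
# so it cannot pass unnoticed. (Illustration only; not Sphinx code.)
warnings.simplefilter("error")

try:
    warnings.warn("stale cross-reference")  # normally just a log line
    build_ok = True
except UserWarning:
    build_ok = False  # under "error", the warning aborts the "build"

print(build_ok)
```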
4 changes: 3 additions & 1 deletion .buildkite/test-template.j2
@@ -35,13 +35,15 @@ steps:
 - image: "{{ docker_image }}"
   command: ["bash"]
   args:
-  - "-c"
+  - '-c'
   - "'cd {{ (step.working_dir or default_working_dir) | safe }} && {{ step.command or (step.commands | join(' && ')) | safe }}'"
+  {% if not step.no_gpu %}
   resources:
     requests:
       nvidia.com/gpu: "{{ step.num_gpus or default_num_gpu }}"
     limits:
       nvidia.com/gpu: "{{ step.num_gpus or default_num_gpu }}"
+  {% endif %}
   env:
   - name: HF_TOKEN
     valueFrom:
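The template change above does two things: it joins each step's `commands` into a single `bash -c` invocation, and it now emits the Kubernetes GPU `resources` block only when `no_gpu` is unset, so the new documentation step can run on a CPU-only agent. A small Python sketch of that logic (hypothetical helper name; the real rendering is done by Jinja2 in `.buildkite/test-template.j2`):

```python
def render_step(step, default_working_dir="/vllm-workspace/tests",
                default_num_gpu=1):
    """Sketch of what test-template.j2 produces for one pipeline step."""
    # Join commands exactly like {{ step.commands | join(' && ') }}.
    command = step.get("command") or " && ".join(step["commands"])
    workdir = step.get("working_dir") or default_working_dir
    spec = {
        "command": ["bash"],
        "args": ["-c", f"'cd {workdir} && {command}'"],
    }
    # Mirror {% if not step.no_gpu %}: skip the GPU request entirely.
    if not step.get("no_gpu"):
        gpus = str(step.get("num_gpus") or default_num_gpu)
        spec["resources"] = {
            "requests": {"nvidia.com/gpu": gpus},
            "limits": {"nvidia.com/gpu": gpus},
        }
    return spec

docs_step = {
    "label": "Documentation Build",
    "working_dir": "/vllm-workspace/docs",
    "no_gpu": True,
    "commands": ["pip install -r requirements-docs.txt",
                 'SPHINXOPTS=\\"-W\\" make html'],
}
spec = render_step(docs_step)
print("resources" in spec)  # prints False: the docs step requests no GPU
```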
2 changes: 2 additions & 0 deletions docs/source/conf.py
@@ -94,3 +94,5 @@ def add_line(self, line: str, source: str, *lineno: int) -> None:
 
 
 autodoc.ClassDocumenter = MockedClassDocumenter
+
+navigation_with_keys = False
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -89,6 +89,7 @@ Documentation
    :caption: Quantization
 
    quantization/auto_awq
+   quantization/fp8_e5m2_kv_cache
 
 .. toctree::
    :maxdepth: 2
1 change: 1 addition & 0 deletions docs/source/quantization/fp8_e5m2_kv_cache.rst
@@ -9,6 +9,7 @@ The FP8 data format retains 2~3 mantissa bits and can convert float/fp16/bfloat16
 Here is an example of how to enable this feature:
 
 .. code-block:: python
+    from vllm import LLM, SamplingParams
     # Sample prompts.
     prompts = [
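The page this hunk touches notes that the e5m2 format keeps only 2 mantissa bits. To make that precision loss concrete, here is a small pure-Python sketch (a simplified quantizer written for this note, not vLLM's kernel code) that rounds a float to the nearest e5m2 value:

```python
import math

def to_e5m2(x: float) -> float:
    """Round x to the nearest FP8 e5m2 value (1 sign, 5 exponent, 2 mantissa
    bits). Simplified sketch: normal range only, no inf/nan/subnormal
    handling -- for illustrating precision loss, not a reference codec."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))    # abs(x) = m * 2**e, with m in [0.5, 1)
    # Keep 3 significant bits of m (1 implicit + 2 explicit mantissa bits):
    # there are only 4 representable values per power-of-two interval.
    mant = round(m * 8) / 8
    return sign * math.ldexp(mant, e)

# 0.3 is not representable; the nearest e5m2 neighbors are 0.25 and 0.3125.
print(to_e5m2(0.3))  # prints 0.3125
```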
