[CI/Build][Misc] Update Pytest Marker for VLMs #5623

Merged Jun 18, 2024 (1 commit)
2 changes: 1 addition & 1 deletion .buildkite/run-cpu-test.sh
@@ -23,4 +23,4 @@ docker exec cpu-test-avx2 bash -c "python3 examples/offline_inference.py"
 docker exec cpu-test bash -c "cd tests;
 pip install pytest Pillow protobuf
 cd ../
-pytest -v -s tests/models -m \"not llava\" --ignore=tests/models/test_embedding.py --ignore=tests/models/test_registry.py"
+pytest -v -s tests/models -m \"not vlm\" --ignore=tests/models/test_embedding.py --ignore=tests/models/test_registry.py"
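The `-m \"not vlm\"` flag above relies on pytest's marker-expression selection: every test carrying the `vlm` marker is deselected, and everything else still runs. A minimal, self-contained sketch of that behavior (assuming `pytest` is installed; the file and marker names here are illustrative, not from the PR):

```python
# Demonstrates pytest's -m marker selection, mirroring the CI script's
# `-m "not vlm"` expression: the marked test is deselected, the plain one runs.
import pathlib
import subprocess
import sys
import tempfile
import textwrap

test_src = textwrap.dedent("""
    import pytest

    @pytest.mark.vlm
    def test_vision():
        assert True

    def test_text_only():
        assert True
""")

# Registering the marker (as pyproject.toml does in this PR) avoids
# PytestUnknownMarkWarning during collection.
ini_src = "[pytest]\nmarkers =\n    vlm: run tests for vision language models only\n"

with tempfile.TemporaryDirectory() as tmp:
    d = pathlib.Path(tmp)
    (d / "test_demo.py").write_text(test_src)
    (d / "pytest.ini").write_text(ini_src)
    # -m "not vlm" keeps test_text_only and deselects test_vision
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-m", "not vlm", "-q", str(d)],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
```

Running this reports one passed test and one deselected test, which is exactly how the CPU pipeline skips the vision-language suite while keeping the rest of `tests/models`.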
6 changes: 3 additions & 3 deletions .buildkite/test-pipeline.yaml
@@ -100,13 +100,13 @@ steps:
 - label: Models Test
   #mirror_hardwares: [amd]
   commands:
-  - pytest -v -s models -m \"not llava\"
+  - pytest -v -s models -m \"not vlm\"

-- label: Llava Test
+- label: Vision Language Models Test
   mirror_hardwares: [amd]
   commands:
   - bash ../.buildkite/download-images.sh
-  - pytest -v -s models -m llava
+  - pytest -v -s models -m vlm

 - label: Prefix Caching Test
   mirror_hardwares: [amd]
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -71,5 +71,5 @@ markers = [
     "skip_global_cleanup",
     "llm: run tests for vLLM API only",
     "openai: run tests for OpenAI API only",
-    "llava: run tests for LLaVA models only",
+    "vlm: run tests for vision language models only",
 ]
2 changes: 1 addition & 1 deletion tests/models/test_llava.py
@@ -7,7 +7,7 @@

 from ..conftest import IMAGE_FILES

-pytestmark = pytest.mark.llava
+pytestmark = pytest.mark.vlm

 # The image token is placed before "user" on purpose so that the test can pass
 HF_IMAGE_PROMPTS = [
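The `pytestmark` assignment changed in these test files is pytest's module-level marking hook: a marker (or list of markers) bound to that name applies to every test in the file, which is what lets a single `-m vlm` expression select all three vision-language test modules. A short illustrative sketch (the test body here is hypothetical, not from the PR):

```python
import pytest

# Module-scope marker: equivalent to decorating every test function in this
# file with @pytest.mark.vlm.
pytestmark = pytest.mark.vlm

# Several markers can be applied at once with a list, e.g.:
# pytestmark = [pytest.mark.vlm, pytest.mark.skip_global_cleanup]


def test_example():
    assert True
```

Because the marker lives in one place, renaming it (as this PR does from `llava` to `vlm`) is a one-line change per file rather than a per-test edit.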
2 changes: 1 addition & 1 deletion tests/models/test_llava_next.py
@@ -7,7 +7,7 @@

 from ..conftest import IMAGE_FILES

-pytestmark = pytest.mark.llava
+pytestmark = pytest.mark.vlm

 _PREFACE = (
     "A chat between a curious human and an artificial intelligence assistant. "
2 changes: 1 addition & 1 deletion tests/models/test_phi3v.py
@@ -8,7 +8,7 @@

 from ..conftest import IMAGE_FILES

-pytestmark = pytest.mark.llava
+pytestmark = pytest.mark.vlm

 # The image token is placed before "user" on purpose so that the test can pass
 HF_IMAGE_PROMPTS = [