Merge pull request #8495 from cvat-ai/release-2.20.0
Release v2.20.0
cvat-bot[bot] authored Oct 1, 2024
2 parents 4163cb2 + 44c6b97 commit 941f5c0
Showing 73 changed files with 4,967 additions and 1,474 deletions.
2 changes: 2 additions & 0 deletions .github/workflows/full.yml
@@ -165,6 +165,8 @@ jobs:
id: run_tests
run: |
pytest tests/python/
ONE_RUNNING_JOB_IN_QUEUE_PER_USER="true" pytest tests/python/rest_api/test_queues.py
CVAT_ALLOW_STATIC_CACHE="true" pytest -k "TestTaskData" tests/python
- name: Creating a log file from cvat containers
if: failure() && steps.run_tests.conclusion == 'failure'
3 changes: 2 additions & 1 deletion .github/workflows/main.yml
@@ -177,8 +177,9 @@ jobs:
COVERAGE_PROCESS_START: ".coveragerc"
run: |
pytest tests/python/ --cov --cov-report=json
for COVERAGE_FILE in `find -name "coverage*.json" -type f -printf "%f\n"`; do mv ${COVERAGE_FILE} "${COVERAGE_FILE%%.*}_0.json"; done
ONE_RUNNING_JOB_IN_QUEUE_PER_USER="true" pytest tests/python/rest_api/test_queues.py --cov --cov-report=json
CVAT_ALLOW_STATIC_CACHE="true" pytest -k "TestTaskData" tests/python --cov --cov-report=json
for COVERAGE_FILE in `find -name "coverage*.json" -type f -printf "%f\n"`; do mv ${COVERAGE_FILE} "${COVERAGE_FILE%%.*}_0.json"; done
- name: Uploading code coverage results as an artifact
uses: actions/upload-artifact@v4
6 changes: 6 additions & 0 deletions .github/workflows/schedule.yml
@@ -170,6 +170,12 @@ jobs:
pytest tests/python/
pytest tests/python/ --stop-services
ONE_RUNNING_JOB_IN_QUEUE_PER_USER="true" pytest tests/python/rest_api/test_queues.py
pytest tests/python/ --stop-services
CVAT_ALLOW_STATIC_CACHE="true" pytest tests/python
pytest tests/python/ --stop-services
- name: Unit tests
env:
HOST_COVERAGE_DATA_DIR: ${{ github.workspace }}
33 changes: 33 additions & 0 deletions CHANGELOG.md
@@ -16,6 +16,39 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

<!-- scriv-insert-here -->

<a id='changelog-2.20.0'></a>
## \[2.20.0\] - 2024-10-01

### Added

- A server setting to enable or disable storage of permanent media chunks on the server filesystem
(<https://github.com/cvat-ai/cvat/pull/8272>)
- \[Server API\] `GET /api/jobs/{id}/data/?type=chunk&index=x` parameter combination.
  The new `index` parameter allows retrieving job chunks using a 0-based index within each job,
  instead of the `number` parameter, which used task-wide chunk ids.
  (<https://github.com/cvat-ai/cvat/pull/8272>)
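
The new parameter can be exercised with a plain HTTP request. The sketch below is illustrative only: the host, job id, chunk index, and token are placeholders, and the `chunk_url`/`fetch_chunk` helpers are hypothetical names, not part of the CVAT SDK.

```python
# Sketch: fetch a job data chunk via the new 0-based `index` parameter
# (replacing the deprecated task-wide `number` parameter).
# Host, job id, and token below are placeholder values.
from urllib.parse import urlencode
from urllib.request import Request, urlopen


def chunk_url(base_url: str, job_id: int, index: int) -> str:
    # `index` is 0-based within the job, unlike the deprecated `number`
    # parameter, which referred to task-wide chunk ids.
    query = urlencode({"type": "chunk", "index": index})
    return f"{base_url}/api/jobs/{job_id}/data/?{query}"


def fetch_chunk(base_url: str, job_id: int, index: int, token: str) -> bytes:
    # Assumes token-based auth; requires a running CVAT server to actually work.
    req = Request(
        chunk_url(base_url, job_id, index),
        headers={"Authorization": f"Token {token}"},
    )
    with urlopen(req) as resp:  # network call
        return resp.read()
```

For example, `chunk_url("https://cvat.example.com", 42, 0)` yields the first chunk's URL for job 42.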

### Changed

- Job assignees no longer receive frames from adjacent jobs in chunks
(<https://github.com/cvat-ai/cvat/pull/8272>)

### Deprecated

- \[Server API\] `GET /api/jobs/{id}/data/?type=chunk&number=x` parameter combination
(<https://github.com/cvat-ai/cvat/pull/8272>)

### Removed

- Removed the non-functional `task_subsets` parameter from the project create
and update endpoints
(<https://github.com/cvat-ai/cvat/pull/8492>)

### Fixed

- Various memory leaks in video reading on the server
(<https://github.com/cvat-ai/cvat/pull/8272>)

<a id='changelog-2.19.1'></a>
## \[2.19.1\] - 2024-09-26

2 changes: 1 addition & 1 deletion cvat-cli/requirements/base.txt
@@ -1,3 +1,3 @@
cvat-sdk~=2.19.1
cvat-sdk~=2.20.0
Pillow>=10.3.0
setuptools>=70.0.0 # not directly required, pinned by Snyk to avoid a vulnerability
2 changes: 1 addition & 1 deletion cvat-cli/src/cvat_cli/version.py
@@ -1 +1 @@
VERSION = "2.19.1"
VERSION = "2.20.0"