Do we have a feature for interacting with server-side annotations (dump/load datasets to/from the server)? #2010
Comments
Hi @nmanovic, what do you think about this?
Each job has a status which can be changed.
It is something we are going to fix in the future. The first step on the way is the patch #2007. Even though it solves another issue, it will be an initial step towards supporting remote data sources.
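As an illustration of the per-job status mentioned above, here is a minimal sketch of updating it through the REST API. It assumes the v1 endpoint `PATCH /api/v1/jobs/{id}` accepts a `status` field; the host, credentials, job id, and status value below are placeholders, so verify everything against `/api/swagger` on your own server.

```python
# Hypothetical sketch: mark a CVAT job as "completed" via the REST API.
# The endpoint, field name, and allowed values are assumptions about the
# v1 API; confirm them against /api/swagger on your server.
import requests

CVAT_HOST = "http://localhost:8080"  # placeholder: your CVAT server
AUTH = ("annotator", "password")     # placeholder credentials (basic auth)
JOB_ID = 1                           # placeholder job id

resp = requests.patch(
    f"{CVAT_HOST}/api/v1/jobs/{JOB_ID}",
    auth=AUTH,
    json={"status": "completed"},    # e.g. "annotation", "validation", "completed"
)
resp.raise_for_status()
print(resp.json())
```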
When datasets and annotations are exported, they are cached for some time (10 hours by default - https://github.com/opencv/cvat/blob/develop/cvat/apps/dataset_manager/views.py#L39). You can mount the cache directories to a specific location on the host; the cache is placed in
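Building on the comment above, a rough sketch of how an export could be requested and downloaded by a script running on the storage server itself, so nothing has to pass through an annotator's PC. The endpoint `GET /api/v1/tasks/{id}/dataset`, the `format` and `action=download` query parameters, and the 202-while-preparing behaviour are assumptions about the v1 API of that period; the host, credentials, task id, and format name are placeholders.

```python
# Hypothetical sketch: trigger a dataset export and download the archive
# directly onto the machine running this script (e.g. the storage server).
# Endpoint names and query parameters are assumptions about CVAT's v1 API;
# confirm them against /api/swagger before relying on this.
import time
import requests

CVAT_HOST = "http://localhost:8080"    # placeholder
AUTH = ("admin", "password")           # placeholder credentials
TASK_ID = 42                           # placeholder task id
EXPORT_FORMAT = "CVAT for images 1.1"  # placeholder format name

url = f"{CVAT_HOST}/api/v1/tasks/{TASK_ID}/dataset"

# The first request(s) start the export; the server is assumed to answer 202
# while the archive is still being prepared (and then cached, as noted above).
while True:
    resp = requests.get(url, auth=AUTH, params={"format": EXPORT_FORMAT})
    if resp.status_code == 202:
        time.sleep(5)  # still preparing, poll again
        continue
    resp.raise_for_status()
    break

# Once prepared, request the file itself and stream it to disk on this machine.
resp = requests.get(
    url,
    auth=AUTH,
    params={"format": EXPORT_FORMAT, "action": "download"},
    stream=True,
)
resp.raise_for_status()
with open(f"task_{TASK_ID}_dataset.zip", "wb") as f:
    for chunk in resp.iter_content(chunk_size=1 << 20):
        f.write(chunk)
```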
Hi @zhiltsov-max, thanks for your solution, this is very helpful. 👍
Hi @nmanovic, thanks for your timely comment. The following attachment is my UI,
@nmanovic Thank you so much.
I will close the issue for now. Do not hesitate to reopen it if you still have questions.
Dear CVAT community, thanks for your continuous contributions to this open-source repo; it has excellently solved my previous use cases. I have a new question and am not sure whether this is the right place to ask it.
I'm currently trying to streamline the annotation workflow between the annotation team and the data science team using CVAT.
I've set up the server-side file sharing feature according to the installation guide.
The issue I'm facing now is loading annotations from the server and dumping annotation results/datasets to the server.
I know CVAT supports exchanging annotations with the local PC, which is good, but we have to spend extra time downloading the dataset (images + annotations) and then uploading it to the server every time. If the dataset is big, the time cost is huge.
So I'm wondering if this feature already exists; if not, would you consider adding it in the future?
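If this feature isn't available yet, a possible interim workaround is scripting the opposite direction as well: pushing an annotation file that already sits on the server into a task over the REST API. The sketch below assumes the v1 endpoint `PUT /api/v1/tasks/{id}/annotations` with a `format` query parameter and a multipart field named `annotation_file`; every path, credential, and id is a placeholder, so check `/api/swagger` before relying on it.

```python
# Hypothetical sketch: push an annotation file that already lives on the
# server into a task, so nothing is routed through a local PC.
# The endpoint, the "format" parameter, and the multipart field name
# ("annotation_file") are assumptions about CVAT's v1 API; verify them
# against /api/swagger on your instance before using.
import requests

CVAT_HOST = "http://localhost:8080"         # placeholder
AUTH = ("admin", "password")                # placeholder credentials
TASK_ID = 42                                # placeholder task id
UPLOAD_FORMAT = "CVAT 1.1"                  # placeholder format name
ANNOTATION_PATH = "/mnt/share/task_42.xml"  # placeholder path on the server

with open(ANNOTATION_PATH, "rb") as f:
    resp = requests.put(
        f"{CVAT_HOST}/api/v1/tasks/{TASK_ID}/annotations",
        auth=AUTH,
        params={"format": UPLOAD_FORMAT},
        files={"annotation_file": f},
    )

# The server may answer 202 while it parses the file; if you need to wait,
# repeat the request and stop when the response code changes.
resp.raise_for_status()
print("upload request returned:", resp.status_code)
```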
Additionally, it would be more convenient if each task had a status flag, such as Done, In progress, or Pending, so that data engineers or other annotators won't redo it. In this case, annotators could mark the status by clicking a button.
Thanks in advance for your help.