Allow renaming datasets & datasets with duplicate names (#8075)
* WIP: Adjust schema to allow duplicate dataset names & implement new URI dataset addressing scheme
* reimplement proper dataset name checking route (still leaving out the check for an already existing name)
* WIP: implement wk core backend routes to only use datasetId and no orgaId
  - Includes moving ObjectId to utils package
* WIP: finish using dataset id in wk core backend and dataset path in datastore & add legacy routes
  - Undo renaming DataSourceId to LegacyDataSourceId
* WIP: Fix backend compilation
* Fix backend compilation
* WIP: Adapt frontend to new API
* WIP: adapt frontend to new routes
* WIP: Adjust frontend to newest API
* first kinda working version
* Try updating schema and evolution
* fix evolution & add first version of reversion (needs to be tested)
* fix frontend tests
* format backend
* fix dataSets.csv
* fix e2e tests
* format backend
* fix frontend
* remove occurrences of displayName access / variables in the context of a dataset object
* fix version routes
* fix reserveUploadRoute
* rename orga_name in jobs to orga_id
* format code
* fix finishUploadRoute
* allow duplicate names when uploading a new dataset
* fix job list view
* fix some datastore requests
* further minor fixes
* make the add remote dataset route a POST request as it always creates a new dataset (even when the name is already taken)
* WIP: replace missed code parts where the dataset address was still wrong / not backwards compatible
* WIP: replace missed code parts where the dataset address was still wrong / not backwards compatible
* WIP: adapt annotation upload & task upload to use datasetId
* WIP: adjust backend part of task upload to use new dataset addressing
* Finish adapting task & annotation upload to new format
* Fix inserting dataset into database
* fix nml annotation upload
* format backend
* add hint about new parameter datasetId to csv / bulk task upload
* Move task API routes to a separate file in frontend
* add datasetName and datasetId to returned tasks
* add missing task API routes file (frontend)
* adapt frontend to new task return type
* remove unused imports
* fix frontend tests
* add datasetId to nml output and re-add datasetName to nml parsing for legacy support
* add dataset id to frontend nml serialization
* fix parsing dataset id from nml in backend
* fix nml backend tests
* fix typing
* remove logging statement
* fix frontend dataset cache by using the dataset id as the identifier
* send dataset path as datasource.id.name to frontend
* remove unused code
* fix previous merge with newest master
* fix evolution and reversion
* remove ObjectId from UploadedVolumeLayer, delete SkeletonTracingWithDatasetId, and make the nml parser return its own result case class
* use new Notion-like URLs (see the sketch below)
* rename datasetPath to datasetDirectoryName
* fix backend tests
* delete DatasetURLParser, rename package of ObjectId to objectid, update e2e snapshot tests
* small cleanup, fix dataset public writes, fix dataset table highlighting
* fix e2e tests
* make datastore dataset update route use HTTP PUT method
* make datastore dataset update route use HTTP PUT method
* rename datasetParsedId to datasetIdValidated
* bump schema version after merge
* remove explicit invalid dataset id message when parsing a datasetId from string
* remove overwriting orga name and overwriting dataset name from annotation upload path
* WIP: apply PR feedback
* remove unused method
* rely on datasetId in processing of task creation routes
* apply some more review feedback
* clean up unused implicits
* make link generation for convert_to_wkw and compute_mesh_file backwards compatible
* adjust unfinished uploads to display correct dataset name in upload view
* send datasource id to compose dataset route (not dataset id)
  - send datasource id in correct format to backend
  - fix dataset renaming in dataset settings
* WIP: apply review feedback
  - add legacy route for task creation routes
  - support jobs results link only via legacy routes
  - WIP: refactor nml parsing code
  - add dataset location to dataset settings advanced tab
  - add comment to OpenGraphService about parsing new URI schema
* Finish refactoring nml backend parsing
* fix nml typing
* fix nml upload
* apply frontend PR review feedback
* apply PR frontend feedback
* add new e2e test to check dataset disambiguation
* re-add backwards compatibility for legacy dataset links without organization id
* change screenshot test dataset id retrieval to be a test.before
* remove outdated comment - the backend always sends an id for compacted datasets
* temporarily disable upload test
* fix linting
* fix datasetId expected length
* replace failure fox with returnError fox, fix JSON error message
* fix upload test
  - make new fields of the reserveUpload request optional (for backward compatibility)
  - fix find data request from core backend
* remove debug logging from dataset upload test
* remove debug logs
* format backend
* apply various PR feedback
* fix frontend routing & include /view postfix in dataset link in dataset edit view
* add TODO comments
* send separate case class to datastore upon reserveUpload request to core backend
* format backend
* try setting screenshot CI to check this branch
* use auth token in screenshot tests
* rename path to directoryName in response of disambiguate route
* hopefully fix screenshot tests
* reset application.conf
* remove disambiguate link
* switch screenshots back to use master.webknossos.xyz
* add comment to explain handling of legacy URLs
  - revert nightly.yml fully back to master
* remove outdated TODO comment
* fix single dataset drag-and-drop (dnd) in dashboard
  - improve typing of dnd arguments
* format backend
* add comment explaining why the dataset name setting is not synced with datasource.id.name
* rename some local variables
* pass missing parameter to infer_with_model worker job
* try not showing the dataset as changed when it was migrated to the new renamable version
* format backend
* add changelog entry
* add info about datasets being renamable to the docs
* undo misc snapshot changes
* make task creation form use new version of task creation parameters
* remove dataSet field from task JSON object returned from server -> add required legacy adaptation
* also rename dataSet in publication routes to dataset - no legacy routes needed as this is not used by wklibs
* add changelog entry about dropping legacy routes
* bump API version
* remove old routes with API version lower than 5 & re-sort methods in legacy controller
* refresh snapshots
* fix legacy create task route - also remove unused injections and imports
* fix annotation info legacy route
* remove unused import
* fixes in code comments, fix changelog entry, add migration entry about dropped API versions

--------

Co-authored-by: Michael Büßemeyer <[email protected]>
Co-authored-by: Michael Büßemeyer <[email protected]>
Co-authored-by: Florian M <[email protected]>
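
The central technical change in this commit is the new dataset addressing: datasets get a stable datasetId and Notion-like URLs that carry both a human-readable name and the id, while legacy links that address a dataset by organization and name keep working through legacy routes. The snippet below is not code from the PR; it is a minimal TypeScript sketch of how such a URL segment could be disambiguated, assuming ids are 24-hex-character ObjectIds appended after the last dash. The function name `parseDatasetUrlSegment` and the `DatasetAddress` type are hypothetical.

```typescript
// Hypothetical sketch (not from the PR): disambiguate a Notion-like dataset
// URL segment. Assumes the new scheme appends a 24-hex-character id after the
// last "-", e.g. "l4_sample-66f2a1c3b4d5e6f7a8b9c0d1"; anything else is
// treated as a legacy link that still addresses the dataset by name.

type DatasetAddress =
  | { kind: "id"; datasetId: string; datasetName: string }
  | { kind: "legacy"; organizationId: string | null; datasetName: string };

// Assumption: ids are MongoDB-style ObjectIds, i.e. 24 hex characters.
const OBJECT_ID_LENGTH = 24;

export function parseDatasetUrlSegment(
  segment: string,
  organizationId: string | null = null,
): DatasetAddress {
  const lastDash = segment.lastIndexOf("-");
  // When there is no dash, the whole segment is the id candidate.
  const candidateId = segment.slice(lastDash + 1);
  const looksLikeId =
    candidateId.length === OBJECT_ID_LENGTH && /^[0-9a-f]+$/i.test(candidateId);

  if (looksLikeId) {
    // New scheme: the trailing id is authoritative; the name prefix is only
    // cosmetic and may change when the dataset is renamed.
    return {
      kind: "id",
      datasetId: candidateId,
      datasetName: lastDash >= 0 ? segment.slice(0, lastDash) : "",
    };
  }
  // Legacy scheme: the segment is the dataset name, which used to be unique
  // per organization, so old links keep resolving.
  return { kind: "legacy", organizationId, datasetName: segment };
}

// Usage examples (illustrative values):
// parseDatasetUrlSegment("l4_sample-66f2a1c3b4d5e6f7a8b9c0d1")
//   -> { kind: "id", datasetId: "66f2a1c3b4d5e6f7a8b9c0d1", datasetName: "l4_sample" }
// parseDatasetUrlSegment("l4_sample", "sample_organization")
//   -> { kind: "legacy", organizationId: "sample_organization", datasetName: "l4_sample" }
```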