Definition of Done / Acceptance Criteria
De-identified DICOM images will be downloaded from orthanc-anon and sent via FTPS to the DSH, under the project slug directory, using the pseudonymised id as the filename (e.g. {project-slug}/{study-pseudonymised-id}.zip)
Breakdown of steps for progress:
1. get the pseudonymised identifier for the image from the DICOM data (currently in the patient ID tag; a REST request to {dicom-server}/instances/{resource-id}/content/0010-0020 returns just the patient identifier, see below for details)
2. get the project name slug from the PIXL db using the pseudonymised identifier (in the pixl db: Image.hashed_identifier)
3. download a zip of the DICOM data (see details below)
4. upload the zip of DICOM data to {project-slug}/{study-pseudonymised-id}.zip on the DSH (see details and prototype repo below)
5. update the existing image row in the database, setting exported_at to the current time
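Pulled together, the first and last steps might look roughly like this in Python (all names here are illustrative sketches, not actual PIXL code):

```python
import requests

def get_pseudo_study_id(orthanc_url: str, resource_id: str, auth: tuple) -> str:
    """Step 1: read the pseudonymised identifier from the patient ID tag (0010,0020).

    Illustrative only -- the exact endpoint and auth details are in the sections below.
    """
    response = requests.get(
        f"{orthanc_url}/instances/{resource_id}/content/0010-0020", auth=auth
    )
    response.raise_for_status()
    return response.text.strip()

def dsh_upload_path(project_slug: str, pseudo_study_id: str) -> str:
    """Destination path on the DSH: {project-slug}/{study-pseudonymised-id}.zip"""
    return f"{project_slug}/{pseudo_study_id}.zip"
```

The archive download and the FTPS upload themselves are covered in the details sections below.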
Testing
In an ideal world we'd have a mock DSH in docker compose which we can FTP into. If we can set this up in a day then it's probably worth it. We have a prototype repo for setting this up, so this should be quick.
Current State
At the moment, DICOM images are pushed via the DICOMweb protocol to an Azure DICOM server. Sadly the DSH doesn't have a DICOM server that we can use, so we'll have to use FTPS for 100 days.
Documentation
No response
Dependencies
DSH account for uploading data only for the project UCLH-Foundry/the-rolling-skeleton#77
Details and Comments
Downloading dicom data
It looks like it's possible to do this using the resourceId that we currently use in the DICOMweb request in SendViaStow in orthanc-anon.
All instances of a study can be retrieved as a zip file as follows:
$ curl http://localhost:8042/studies/6b9e19d9-62094390-5f9ddb01-4a191ae7-9766b715/archive > Study.zip
So in python we could do something like this?
import requests

# Query orthanc-anon for a zip archive of the study
query = f"{orthanc_url}/studies/{resource_id}/archive"
response_study = requests.get(query, verify=False, auth=(orthanc_user, orthanc_password))
if response_study.status_code != 200:
    raise SpecificException(f"Could not download archive of resource '{resource_id}'")
# get the zip content
zip_content = response_study.content
Copying file
@tcouch uses this curl command to send data to the DSH
We can convert this to Python code that creates a directory on the FTP server if it doesn't exist. 🎉 Prototype repo that can be used as a basis for automated testing 🎉
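A minimal sketch of that upload using Python's stdlib ftplib (the host, credentials, and the mkdir-if-missing check are assumptions; the prototype repo is the reference for the real approach):

```python
import ftplib
from pathlib import Path

def upload_to_dsh(host: str, user: str, password: str,
                  project_slug: str, zip_path: Path) -> None:
    """Upload a study zip to {project-slug}/ over FTPS, creating the directory if needed.

    Hypothetical sketch: the real connection details come from the DSH account (#77).
    """
    ftp = ftplib.FTP_TLS(host)
    ftp.login(user, password)
    ftp.prot_p()  # encrypt the data channel as well as the control channel
    if project_slug not in ftp.nlst():  # create the project slug directory if missing
        ftp.mkd(project_slug)
    ftp.cwd(project_slug)
    with zip_path.open("rb") as handle:
        ftp.storbinary(f"STOR {zip_path.name}", handle)
    ftp.quit()
```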