To cover more use cases we want to extend the pre- and postprocessing specifications.
From what has briefly been discussed in the bioimageio meetings so far, and from offline discussions with @oeway and, on a separate occasion, with @constantinpape and @akreshuk, here is an (opinionated) overview:
Currently we support predefined processing steps for pre- and postprocessing.
Additional use cases can be separated into Python (pure, or with dependencies) or JavaScript steps, and more complex cases that call for containerization. Question 1: Do we want to support 'pure Python' processing steps without containerization, or do we adopt the containerized solution for all cases?
For custom processing steps we'll need a place in the zoo. This place should not compete with the hosted models, but since we need to document these custom steps anyway, they could also be made discoverable in a similar manner to our existing RDFs.
Question 2: Do we add a processing RDF, which could borrow heavily from the 'model RDF'?
From a specification point of view, Python/JavaScript-based processing steps and containers differ almost only in the source file; test I/O, description, etc. would all be equivalent.
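To make Question 2 concrete, here is a minimal sketch of what such a processing RDF could look like, borrowing fields from the model RDF. All field names and values here are illustrative assumptions, not an agreed-upon spec; note that only the `source` entry would differ between the Python, JavaScript, and container variants:

```yaml
# Hypothetical processing RDF (illustrative sketch, not an agreed spec).
format_version: 0.1.0
type: processing
name: percentile_normalization
description: Clip intensities at the 99.8th percentile and rescale to [0, 1].
authors:
  - name: Jane Doe
# The source would be the main field that differs between variants:
source: ./percentile_normalization.py:run  # pure python callable
# source: ./percentile_normalization.js                        # javascript
# source: docker://example.org/percentile-normalization:0.1.0  # container
dependencies: conda:./environment.yml
# Test I/O, analogous to the model RDF:
test_inputs: [./test_input.npy]
test_outputs: [./test_output.npy]
```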
Of course we will discuss this at the upcoming hackathon, but I thought preparing a bit would help us have a more informed discussion then.
I am in favor of the rdf.yaml providing support for Python-based pre- and post-processing steps, with an associated environment.yml. Downstream tooling can then take care of constructing such environments so that the processing runs in the correct environment.
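As a rough sketch of what that could look like inside a model's rdf.yaml: a custom step could be referenced next to the predefined ones, pointing to its own source file and conda environment. The `custom` step name, its `source`/`dependencies` kwargs, and the file names are assumptions for illustration, not part of the current spec; the `zero_mean_unit_variance` entry is a predefined step as supported today:

```yaml
# Hypothetical excerpt of a model rdf.yaml; the 'custom' step is illustrative.
inputs:
  - name: raw
    # ... axes, data_type, shape, etc. as in the current model RDF
    preprocessing:
      - name: zero_mean_unit_variance      # predefined step, supported today
        kwargs: {mode: per_sample, axes: xy}
      - name: custom                       # hypothetical custom step
        kwargs:
          source: ./my_processing.py:preprocess
          dependencies: conda:./processing_environment.yml
```

Here processing_environment.yml would be a standard conda environment file, which downstream tooling could resolve before executing the step, as described above.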