Architectures and nomenclature #42
On this note, since we now have many more models, the old dictionary-based initialization scheme is becoming restrictive. I have been thinking about using dependency injection with Lightning configs to initialize (and type-check!) each underlying model component.
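As a rough illustration of the config-driven pattern (the `class_path`/`init_args` spec style that LightningCLI/jsonargparse resolves; the resolver below is a minimal sketch, not the actual library code):

```python
import importlib


def instantiate(spec: dict):
    """Build an object from a {"class_path": ..., "init_args": {...}} spec,
    recursively instantiating any nested component specs."""
    module_name, _, class_name = spec["class_path"].rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    init_args = {
        key: instantiate(value)
        if isinstance(value, dict) and "class_path" in value
        else value
        for key, value in spec.get("init_args", {}).items()
    }
    return cls(**init_args)


# Demo with a stdlib class standing in for a model component.
spec = {
    "class_path": "fractions.Fraction",
    "init_args": {"numerator": 3, "denominator": 4},
}
print(instantiate(spec))  # -> 3/4
```

Because the spec names a concrete class, the constructor signature (and hence types) can be checked when the config is parsed, unlike a free-form dictionary.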
We also potentially need better names to distinguish between:
@ziw-liu @edyoshikun let's use the following. 2.5D UNet: 3D input -> 3D encoder -> 2D decoder -> 2D output.
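To make that convention concrete, here is a toy shape trace for the 2.5D case (block counts, downsampling factors, and the point where Z is collapsed are all illustrative, not the real networks'):

```python
def trace_25d(c, z, y, x, n_blocks=3):
    """Trace hypothetical (C, Z, Y, X) shapes through a 2.5D UNet:
    3D input -> 3D encoder -> projection collapsing Z -> 2D decoder -> 2D output."""
    shapes = [("input", (c, z, y, x))]
    # 3D encoder: each block doubles channels and halves Y, X; Z is preserved.
    for i in range(n_blocks):
        c, y, x = c * 2, y // 2, x // 2
        shapes.append((f"enc{i}", (c, z, y, x)))
    # Projection collapses the Z extent, handing 2D features to the decoder.
    z = 1
    shapes.append(("project", (c, z, y, x)))
    # 2D decoder: halves channels and doubles Y, X back up.
    for i in range(n_blocks):
        c, y, x = c // 2, y * 2, x * 2
        shapes.append((f"dec{i}", (c, y, x)))
    shapes.append(("output", (1, y, x)))
    return shapes


for name, shape in trace_25d(1, 5, 256, 256):
    print(f"{name:8s} {shape}")
```

The point of the naming is visible in the trace: tensors are 4D (C, Z, Y, X) through the encoder and 3D (C, Y, X) through the decoder.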
The model design and the config file both become modular with dependency injection! Thanks for pointing out this pattern. Please think through how sensible defaults for modules can be set, such that succinct calls to construct models (example below) still work.
If your thought experiment is successful, let's start using this pattern to write new models (3D LUNet) and to refactor recent models (2.5D UNeXt, 3D UNeXt).
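One way to keep succinct construction working is to give every injectable component a typed default, e.g. with dataclass-style fields (class and parameter names below are hypothetical placeholders, not viscy's API):

```python
from dataclasses import dataclass, field


@dataclass
class Encoder:
    depth: int = 4
    width: int = 32


@dataclass
class Decoder:
    depth: int = 4


@dataclass
class UNet25D:
    # Sensible component defaults mean a bare UNet25D() call still works,
    # while any component can be injected explicitly from a config.
    encoder: Encoder = field(default_factory=Encoder)
    decoder: Decoder = field(default_factory=Decoder)


model = UNet25D()                          # succinct call with defaults
custom = UNet25D(encoder=Encoder(depth=5)) # explicit injection for an experiment
```

The type annotations on the fields are what lets a config parser validate each injected component rather than accepting an arbitrary dictionary.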
Can we call these nD LUNeXt for consistency?
Now that the 2D and 2.5D UNets from our 2020 paper are implemented in PyTorch, we are exploring the space of architectures in two ways:
a) Varying input-output tensor dimensions.
b) Adopting SOTA convolutional layers, particularly inspired by ConvNeXt.
At this point, the 2.1D network combines both. It is useful to have distinct nomenclature and models to compare these two innovations.
I suggest: