Multiplexing registration overview #39
Comments
Very nice overview @jluethi! 😀 As you've stated, our lab has a clear preference for image-based registration. It seems like a much cleaner solution to have "one true" image coordinate system for the whole experiment and not have to worry about matching up objects (also, what happens if e.g. cell segmentation leads to slightly different results in different channels?). A couple more thoughts that came to mind (also related to discussions in #36):
@MaksHess Yes, I also prefer image-based registration workflows for their ease of use in downstream processing. Nicole can say more about what happens when segmentations don't agree, but I think it is mostly a question of which things one can match and at what difference one would start discarding measurements. Very cool to hear that you have refactored the itk-elastix workflow in abbott! Let's have the conversation about the details of approach 3 (registration per FOV, image-based) here: #40
Thanks for the concrete example with b-splines. I think it's generally too complex to really consider for our typical workflows, and for the zebrafish workflows it's basically impossible to use such on-the-fly approaches.
Yes, I'll open another issue to discuss correcting chromatic shifts (=> here: #46); let's discuss the details there. I do think that is more of a niche topic than general registration, though. The architecture should be able to handle it with workflows similar to image-based registration (i.e. shrinking ROIs to the consensus region while having 0s outside of it). How we actually compute those shifts is a debate we can then have in that separate issue :) Your suggestion in 3 may be a reasonable way to do it. Let's implement the general registration first, but the division into
Yes, we've also been thinking about this. See fractal-analytics-platform/fractal-client#67 and especially fractal-analytics-platform/fractal-server#14.
We will tackle the workflow flexibility after search-first & multiplexing; current planning has it in October. The foundations for it are already being built with the current transition to the client-server architecture though :)
Revisiting this overview now that we're starting to tackle registration. Looking at the progress with transformations in OME-NGFF (see ome/ngff#94) and at examples for multiplexed imaging there (e.g. see this user story): we should really keep an eye on those, and I'd see such OME-NGFF transformations as the optimal place to store complex transformations.

Level 1: We work with ROIs (our custom tables) and modify the ROIs per cycle. The ROI version is limited to translations only; expansion to complex transformations comes with level 3. This allows us to have "raw" ROIs and aligned ROIs. Loading data from aligned ROIs should return aligned images (if all that needed to be transformed was a translation). Requirement for this to work well: every cycle loads data based on its own ROI table (see the sketch after this list).

Level 2: Burn in the transformation for given cases (i.e. burn in a complex 3D b-spline alignment that's too expensive to calculate on the fly, see the comment by @MaksHess above).

Level 3: We add OME-NGFF transformations, and any ROI loading also applies the transformations (see ome/ngff#94). Then we wouldn't have to change the ROIs anymore. Likely, this will mean adopting some parts of the SpatialData spec, as they are progressing well with actually using transformations in OME-Zarrs, see ome/ome-zarr-py#229
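A minimal sketch of the level 1 idea, assuming a Fractal-style ROI table with positional columns (the column names, the units, and the per-cycle shift input are assumptions for illustration, not a fixed API):

```python
import pandas as pd

def align_roi_table(roi_table: pd.DataFrame, shift_y: float, shift_x: float) -> pd.DataFrame:
    """Derive an aligned ROI table for one cycle from the raw table.

    The raw table stays untouched; only the aligned copy is shifted.
    """
    aligned = roi_table.copy()
    # For a pure translation, only the ROI origins move; lengths are unchanged
    aligned["y_micrometer"] += shift_y
    aligned["x_micrometer"] += shift_x
    return aligned
```

Loading each cycle through its own aligned table then returns aligned images without ever touching the pixel data.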
Thinking about this again, it still seems like the way to go. There will be multiple ways to
I think chromatic transformations may also eventually become part of the OME-NGFF spec, see the user story by Marvin here: ome/ngff#84 (comment)
When we acquire multiplexing data, we will have multiple cycles that are acquired independently. Each cycle contains multiple channels. Due to inaccuracies in stage handling and potential movement of the biological sample between cycles, we need to register the cycles to each other.
There are different approaches to such registration. At a very high level, we distinguish image-based registration vs. segmentation-based registration.
In image-based registration, we have a channel (typically our nuclear channel, DAPI) that is acquired in each cycle. We calculate the optimal transformation parameters to register all cycles to the first cycle. Then we transform the images and can do our downstream processing as if all the images were acquired at the same time.
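As a concrete sketch of the shift-estimation step, a minimal example using scikit-image's phase cross-correlation on the shared DAPI channel (the `cycles` dict is an illustrative placeholder, not Fractal code):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def estimate_shifts(cycles):
    """Estimate (y, x) translations that map each cycle onto cycle 0.

    cycles: dict mapping cycle index -> 2D DAPI image (e.g. a MIP).
    """
    reference = cycles[0]
    shifts = {0: np.zeros(2)}
    for index, image in cycles.items():
        if index == 0:
            continue
        # Returns the translation vector, the registration error, and the
        # global phase difference; we only need the translation here
        shift, error, _ = phase_cross_correlation(reference, image)
        shifts[index] = shift
    return shifts
```

This only covers pure translations; affine or b-spline registration (e.g. via itk-elastix, as discussed in #36) would replace this step.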
In segmentation-based registration, we do not modify the images, but instead process the cycles separately. We run a segmentation for each cycle (=> we need both a nuclear & a membrane marker in each cycle) and make measurements per cycle. We then find matching objects based on a point-cloud registration of the segmented objects and can combine the measurements of matched objects across cycles. Theoretically, one could also apply such point-cloud matching back to the images, but we haven't done thorough tests of that yet.
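To make the matching step concrete, a minimal sketch that pairs objects by nearest centroid with a distance cutoff. This assumes the point clouds are already roughly aligned; a full point-cloud registration would additionally estimate a transform first, and this greedy version is not guaranteed to be one-to-one. The label images and the cutoff value are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import regionprops

def match_objects(labels_cycle0, labels_cycle1, max_distance=10.0):
    """Pair segmented objects across two cycles by nearest centroid."""
    props0 = regionprops(labels_cycle0)
    props1 = regionprops(labels_cycle1)
    centroids0 = np.array([p.centroid for p in props0])
    centroids1 = np.array([p.centroid for p in props1])
    tree = cKDTree(centroids1)
    distances, indices = tree.query(centroids0)
    # Keep only pairs closer than the cutoff; everything else is discarded,
    # which is where disagreeing segmentations drop out of the analysis
    return [
        (props0[i].label, props1[j].label)
        for i, (d, j) in enumerate(zip(distances, indices))
        if d <= max_distance
    ]
```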
What is required for Fractal implementation?
Segmentation-based registration doesn't need any special registration tasks, but it will require a downstream processing task that calculates the matching of objects. That could be done via a napari workflow. Thus, for registration, this just requires saving the metadata of which channel belongs to which cycle; the matching itself is then handled during data analysis.
For image-based registration, we will need to do some processing and adapt the ROIs. I made an overview here:
There are 2 ways we can do image-based registration, and they depend on which processing route we choose for stitching (see #11).
Registration (1): Per well. If we are able to stitch the wells, then we can perform the image-based registration on the whole well at once and find a transformation (see #36 for a discussion of registration methods) that works on the whole well to register the cycles. The easiest case is just finding 2D rigid shifts when aligning MIPs (that will probably be the first registration to be implemented), but one can also go with 3D affine transformations.
What do we need to handle? We will need to shrink some of the ROIs at the edge of the wells. All ROIs should contain image data for all channels, while the region outside of the ROIs may not have information for all channels.
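A sketch of that shrinking step, assuming pure 2D translations. The (dy, dx) shifts that align each cycle to cycle 0 come from the estimation step above; ROIs are simplified to (y0, y1, x0, x1) pixel boxes for illustration:

```python
def consensus_region(well_shape, shifts):
    """Region of the cycle-0 coordinate system covered by every cycle."""
    height, width = well_shape
    y_min, y_max, x_min, x_max = 0, height, 0, width
    for dy, dx in shifts:
        # After shifting by (dy, dx), a cycle covers
        # [dy, height + dy) x [dx, width + dx) in cycle-0 coordinates
        y_min = max(y_min, dy)
        y_max = min(y_max, height + dy)
        x_min = max(x_min, dx)
        x_max = min(x_max, width + dx)
    return y_min, y_max, x_min, x_max

def clip_roi(roi, consensus):
    """Shrink one ROI to the consensus region (edge ROIs get smaller)."""
    y0, y1, x0, x1 = roi
    cy0, cy1, cx0, cx1 = consensus
    return max(y0, cy0), min(y1, cy1), max(x0, cx0), min(x1, cx1)
```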
Registration (2): Per FOV. When we process per FOV (e.g. when we are worried about FOV boundaries or when we acquire in search-first mode), we also want to do the registration for each FOV (or maybe, down the road, registration per arbitrary ROI, as FOVs are just one type of ROI for us). Typically, those would be 3D registrations, like a 3D affine registration.
What do we need to handle? We will need to shrink each ROI to the lowest common denominator of shared area. In this case, it may also make sense to set all area outside of the ROIs to 0, because otherwise there could be conflicts (e.g. if two FOVs are shifted towards each other).
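For the zero-filling, applying the translation with a constant fill value already gives that behavior; a minimal sketch with scipy (the shift vector comes from whatever registration step precedes this, and order=0 would be preferable for label images):

```python
import numpy as np
from scipy import ndimage

def apply_shift_with_zero_fill(image, shift):
    """Translate a FOV and fill everything outside its data with 0."""
    # mode="constant" with cval=0 zeroes the region the image moved away
    # from, so shifted FOVs cannot leak stale data into their neighbors
    return ndimage.shift(image, shift, order=1, mode="constant", cval=0.0)
```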
Priorities for implementation
@gusqgm @nrepina @MaksHess What are your impressions of those registration workflows?