
New Stokes and Mueller module #110

Merged
merged 19 commits into alg-dev from direct-stokes-to-params on Mar 7, 2023

Conversation

talonchandler
Collaborator

This PR adds a completely rewritten version of all Stokes- and Mueller-related calculations in waveorder.

This rewrite was motivated by the following questions:

Highlighted improvements:

  • Removal of normalization steps: instead of a two-step reconstruction with a normalization followed by a reconstruction, this implementation goes straight from Stokes parameters to (retardance, orientation, transmittance, depolarization).
  • A natively ND implementation: the Stokes and Mueller indices go in the first/second axes, and the remaining axes are arbitrary. A convenience mmul (Mueller multiply) function that uses einsum is the key simplifying design choice (see the sketch after this list).
  • A Mueller-matrix-based reconstruction scheme that can handle larger background retardances.
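For illustration, here's a minimal sketch of the einsum-based Mueller multiply idea (a hypothetical stand-alone version written for this description; the actual stokes.mmul may differ in details):

import numpy as np

def mmul(matrix, data):
    # matrix: (i, j) or (i, j, ...spatial); data: (j, ...spatial) with the
    # Stokes/channel index in the first axis. Returns (i, ...spatial).
    # The trailing ellipsis lets einsum broadcast over any number of
    # arbitrary trailing axes (z, y, x, time, ...).
    return np.einsum("ij...,j...->i...", matrix, data)

# e.g. S = mmul(A_inv, czyx_data) applies a (4, 5) pseudo-inverse to
# 5-channel intensity data of shape (5, Z, Y, X), giving (4, Z, Y, X).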

What does the new API look like? Here's an example snippet from /hpc/projects/compmicro/projects/infected_cell_imaging/Image_preprocessing/Exp_2023_02_07_A549MemNucl_PolPhase3D/Background_correction_trial/bg-corr-with-mask.py

# Calculate A and A_inv
A = stokes.A_matrix(swing=0.1, scheme="5-State")
A_inv = np.linalg.pinv(A)

# Apply A_inv to background and sample data
S_bg = stokes.mmul(A_inv, cyx_bg_data)
S_sm = stokes.mmul(A_inv, czyx_data)

# Calculate background correction matrix from S_bg
M_inv = stokes.inv_AR_mueller_from_CPL_projection(*S_bg)

# Apply background correction to sample data
bg_corr_S_sm = stokes.mmul(M_inv, S_sm)

# Reconstruct parameters
ret, ori, tra, dop = stokes.inverse_s0123_CPL_after_ADR(*bg_corr_S_sm)

Limitations compared to the current waveorder_reconstructor implementation:

  • No GPU implementation. @ziw-liu maybe you have ideas for flipping a GPU switch for this whole module? The waveorder_reconstructor class's parallel np and cp implementations seem clunky.
  • Not yet optimized: instead of using differences and ratios to apply background corrections, this implementation uses Mueller matrices and their inverses. This implementation is not slower than the phase reconstructions, and I've added comments in the places where further optimization can improve performance.

I have not removed the existing implementation in the waveorder_reconstructor class. My current plan is to discuss the technical parts of this reimplementation and compare with the existing implementation here, then later I can complete the refactor by removing the Stokes parts of the waveorder_reconstructor class and updating the recOrder calls.

Note: this PR is to merge into alg-dev, so we have a bit more flexibility in the changes. Temporarily breaking changes/suggestions are okay while we iterate.

Specific feedback requests:

  • @ziw-liu your comments on a GPU switch and on documentation and testing practice are very welcome. I wasn't sure whether I should use type annotations and numpy-style docstrings, so I stayed with plain docstrings for now.
  • @mattersoflight I'm particularly interested in your thoughts on naming. For example, inverse_s0123_CPL_after_ADR doesn't roll off the tongue like the earlier Polarization_recon, but I think it's important to be very specific at this level. Later layers of abstraction can reintroduce more abstract names like Polarization_recon if we think they're helpful.

@mattersoflight
Member

mattersoflight commented Feb 27, 2023

@ziw-liu your comments on a GPU switch and on documentation and testing practice are very welcome. I wasn't sure whether I should use type annotations and numpy-style docstrings, so I stayed with plain docstrings for now.

Re: GPU implementation, the API between pytorch and numpy seems quite consistent, i.e., object.operation runs on the CPU if object is a numpy array and on the GPU if object is a torch tensor on the GPU (e.g. pinv). If we are going to replace the GPU library, I'd first consider pytorch. Can you do a census of the numpy methods you use and see if the same methods are available for torch tensors?
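As a rough illustration of such a census (not part of this PR; the list of calls is hypothetical), one could simply check which numpy names also resolve under torch:

import torch

# Hypothetical list of numpy calls used by the new stokes module.
numpy_calls = ["einsum", "linalg.pinv", "arccos", "arctan2", "sqrt", "stack", "moveaxis"]

for name in numpy_calls:
    obj = torch
    try:
        for part in name.split("."):
            obj = getattr(obj, part)
        print(f"torch.{name}: available")
    except AttributeError:
        print(f"torch.{name}: missing")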

@mattersoflight I'm particularly interested in your thoughts on naming. For example, inverse_s0123_CPL_after_ADR doesn't roll off the tongue like the earlier Polarization_recon, but I think it's important to be very specific at this level. Later layers of abstraction can reintroduce more abstract names like Polarization_recon if we think they're helpful.

If we find that all the assumptions in the forward model related to polarization transfer can be covered by two properties: a) instrument matrix, and b) sample model, we can use a generic name and specify the assumptions via arguments. I'll think more about this.

@mattersoflight
Member

fft is not relevant for this PR, but it will be for deconvolution operations later. This benchmark reports that pytorch's fft is almost as fast as cupy's: https://thomasaarholt.github.io/fftspeedtest/fftspeedtest.html. pytorch is accelerated on M1, but cupy requires an NVIDIA GPU.

@ziw-liu
Contributor

ziw-liu commented Feb 27, 2023

Can you do a census of the numpy methods you use and see if the same methods are available for torch tensors?

I wouldn't be too concerned about NumPy methods. However, the SciPy signal-processing API may have much lower coverage; we will have to check in detail.

@ziw-liu
Contributor

ziw-liu commented Feb 27, 2023

Using torch is an interesting idea, in that it is 'accelerated' for CPUs too, so in theory the same code can work for CPU and GPU. However, in addition to API coverage, lack of optimization and extra overhead are potential issues.

@ziw-liu
Contributor

ziw-liu commented Feb 27, 2023

I wasn't sure whether I should use type annotations and numpy-style docstrings, so I stayed with plain docstrings for now.

We don't currently have type checking infra set up, so type hints serve mainly two purposes:

  1. Help devs as well as users call the API in code elsewhere, because autocompletion and other in-editor static analysis work better.
  2. Help generate the docstring. I personally use tools that populate the type info in the docstring automatically from the annotations.

I like to write type hints because they help me code faster (e.g. I get syntax-highlighted and linted types that are copied over, so fewer typos in the docstring type field). But as long as the code is consistent in style and well documented, I think it's all fine.
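As a concrete (hypothetical) illustration of that style, a small helper with type hints and a numpy-style docstring could look like this (the function and its default wavelength are examples, not part of this PR):

import numpy as np
import numpy.typing as npt

def retardance_to_nm(retardance: npt.NDArray, wavelength_nm: float = 532.0) -> npt.NDArray:
    """Convert retardance from radians to nanometers.

    Parameters
    ----------
    retardance : npt.NDArray
        Retardance in radians, any shape.
    wavelength_nm : float, optional
        Illumination wavelength in nanometers, by default 532.0.

    Returns
    -------
    npt.NDArray
        Retardance in nanometers, same shape as the input.
    """
    return np.asarray(retardance) / (2 * np.pi) * wavelength_nm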

@Soorya19Pradeep

@talonchandler, the cell membrane signal in the new orientation image computed with the additional background correction definitely has more contrast and is more continuous compared to the earlier version with just the measured background correction. Thank you!

@mattersoflight
Member

@talonchandler

@Soorya19Pradeep asked about the consistency of this convention with the existing waveorder implementation.

Could you clarify which convention we are discussing here: the convention for what is called right vs. left circularly polarized light, the convention for the axes of orientation, or maybe both?

@mattersoflight
Member

mattersoflight commented Feb 28, 2023

@ziw-liu , @talonchandler

Using torch is an interesting idea, in that it is 'accelerated' for CPUs too, so in theory the same code can work for CPU and GPU.

My thought was to keep using numpy for CPU, and use torch instead of cupy for GPU. There can be code branches depending on whether you use CPU or GPU, but only when matrices (tensors) are instantiated; a sketch of this pattern follows below.

Let's focus on the model (which is making a lot of sense as I read it), the naming convention, and the numpy implementation in this PR, and start a separate issue to work on GPU acceleration. We should refactor the whole codebase (including deconvolution code) if we change the GPU backend.
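To make that concrete, here is a rough sketch of the pattern (assumptions only, not code in this PR): branch once when data is instantiated, and keep the downstream calls backend-agnostic.

import numpy as np
import torch

def instantiate(array, use_gpu: bool = False):
    # The only place we branch: put the data on the chosen backend/device.
    return torch.as_tensor(array, device="cuda") if use_gpu else np.asarray(array)

def pinv(matrix):
    # Same call shape for both backends; dispatch on the array type.
    if isinstance(matrix, torch.Tensor):
        return torch.linalg.pinv(matrix)
    return np.linalg.pinv(matrix)

# Usage sketch:
# A = instantiate(A_numpy, use_gpu=True)   # branch here, once
# A_inv = pinv(A)                          # shared downstream code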

@Soorya19Pradeep

@talonchandler

@Soorya19Pradeep asked about the consistency of this convention with the existing waveorder implementation.

Could you clarify which convention we are discussing here: the convention for what is called right vs. left circularly polarized light, the convention for the axes of orientation, or maybe both?

@mattersoflight, I am trying to understand how to read the orientation measurement of the cell membrane, and whether it makes physical sense. The value of the orientation changes with the new implementation and the further background correction, so I was curious.
This is the orientation information for a cell from the earlier version with measured background correction; colorbar range (-0.87, +0.87).
This is from the new version with just the measured background correction; range (+0.87, +2.5). I realized the information here is the same, just inverted and offset by 90 degrees.
After the extra background correction the range changes and more information is visible; range (0, +3.14).

@talonchandler
Collaborator Author

@mattersoflight

Could you clarify which convention we are discussing here: convention for what is called right vs left circularly polarized light, or convention for axes of orientation, or may be both?

Good question...I think we should discuss both.

I can think of two paths to take here:

  • Decouple the reconstruction from the orientation convention. We can give ourselves some flexibility in RCP vs. LCP, relative orientations of the camera wrt the polarizers, axis convention etc., then give the recOrder user (and the scripts) enough "knobs" to fix any deviation from convention. For example, we currently have one knob in recOrder that rotates by 90 degrees. To set the knobs, image a known sample (ideally a Kazansky target, but we can point our users to a more available alternative), twiddle the knobs until your colors match your expectations, then keep those knobs as default reconstruction parameters. This is effectively what we're doing now, and I think it's workable.

  • Couple the reconstruction and orientation convention. We can choose to be strict with our conventions: make the user specify RCP vs. LCP, the camera orientation, the axis convention, etc., then use those parameters as inputs to the reconstruction code. This will lead to the same results as above, but requires more diligence from the recOrder user. In practice, I expect that this approach will result in the same workflow as above---fiddle with these (physically motivated) knobs until you match your expectations of a known sample.

@talonchandler
Collaborator Author

Thanks for the comparison @Soorya19Pradeep. Very helpful.

I realized the information here is the same, just inverted and offset by 90 degrees.

These are the types of knobs that we might provide in the "decoupling" approach: one checkbox/function for "invert orientation" and one for "rotate by 90 degrees".
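A rough sketch of what those two knobs could do to a reconstructed orientation map (a hypothetical helper, not part of this PR; orientations are taken modulo pi):

import numpy as np

def apply_orientation_knobs(orientation, invert=False, rotate90=False):
    # orientation: reconstructed orientation in radians, any shape.
    ori = np.asarray(orientation)
    if invert:      # flip the sense of rotation (handedness/axis convention)
        ori = -ori
    if rotate90:    # offset by 90 degrees
        ori = ori + np.pi / 2
    return np.mod(ori, np.pi)   # wrap back into [0, pi)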

@mattersoflight
Member

mattersoflight commented Feb 28, 2023

After the extra background correction the range changes and more information is visible, range (0,+3.14).

Thanks, @Soorya19Pradeep, for the examples. It is great that you are taking a close look at FOVs.

Seeing the full dynamic range of orientation after correcting the background bias is promising! To be sure of the accuracy of the measurement, I suggest finding some patches where you see strong cortical actin bundles. If the background correction in this (arguably challenging) case is accurate, you'd see that the orientation is parallel to the actin bundle. Once you have reconstructed retardance and orientation, you can call waveorder.visual.plotVectorField to visualize the orientation.


1) A forward function group:

A = A_matrix(swing, scheme="5-State")
Member

@mattersoflight mattersoflight Mar 1, 2023

We should use a more descriptive name here: A_matrix could be called Instrument_matrix, Sys_matrix, or I2Stokes_matrix.

Collaborator Author

I just renamed A_matrix to I2S_matrix and added a new S2I_matrix function.

I'm open to more iteration on these names.

Comment on lines 12 to 13
s0, s1, s2, s3 = s0123_CPL_after_ADR(ret, ori, tra, dop)
s0, s1, s2 = s012_CPL_after_AR(ret, ori, dop)
Member

@mattersoflight mattersoflight Mar 1, 2023

These names can be more intuitive.
I suggest stokes_ADR(ret,ori,tra,dop, input='CPL') and stokes_AR(ret,ori,tra, input='CPL').

Reading the mnemonics for components in the order in which vectors and matrices multiply is more intuitive. The input can also be a Stokes vector when one wants to use the methods for another purpose, e.g., model non-ideal illumination.

Collaborator Author

I think adding input="CPL" is a big improvement!

How do these sound?
stokes_after_ADR(ret,ori,tra,dop, input='CPL')
stokes012_after_AR(ret,ori,tra, input='CPL')

I think it's important to make it clear that these are stokes vectors after an ADR/AR, and I think if we don't return all 4 stokes params then the function name should make that clear.

Reading the mnemonics for components in the order in which vectors and matrices multiply is more intuitive. The input can also be a Stokes vector when one wants to use the methods for another purpose, e.g., model non-ideal illumination.

I'm not sure I understand this. The matched inverse functions below take Stokes vectors as input...is this what you have in mind?

Comment on lines 17 to 18
ret, ori, tra, dop = inverse_s0123_CPL_after_ADR(s0, s1, s2, s3)
ret, ori, tra = inverse_s012_CPL_after_AR(s0, s1, s2)
Member

@mattersoflight mattersoflight Mar 1, 2023

I suggest:

ret, ori, tra, dop = inverse_ADR(s0,s1,s2,s3,input='CPL')
ret, ori, tra = inverse_AR(s0,s1,s2,input='CPL')

Collaborator Author

Similar to above...how do these sound:

ret, ori, tra, dop = inverse_stokes_after_ADR(s0, s1, s2, s3, input="CPL")
ret, ori, tra = inverse_stokes012_after_AR(s0, s1, s2, input="CPL")

inverse_ADR alone might make me think that I'm going to get a Mueller matrix.

Collaborator Author

Further down you suggest estimate_ADR, and I do like the word estimate here.

I'm now favoring

ret, ori, tra, dop = estimate_ADR_from_stokes(s0, s1, s2, s3, input="CPL")
ret, ori, tra = estimate_AR_from_stokes012(s0, s1, s2, input="CPL")


ret, ori, tra, dop = inverse_s0123_CPL_after_ADR(s0, s1, s2, s3)
ret, ori, tra = inverse_s012_CPL_after_AR(s0, s1, s2)
M = AR_mueller_from_CPL_projection(s0, s1, s2, s3)
Member

@mattersoflight mattersoflight Mar 1, 2023

I suggest:

M = mueller_from_projection(s0,s1,s2,s3, model='AR', direction='forward', input='CPL')

Collaborator Author

@talonchandler talonchandler Mar 3, 2023

Much better!

Thinking more, I also don't think projection is the right word here. I think:

M = mueller_from_stokes(s0, s1, s2, s3, model="AR", direction="forward", input="CPL")

or possibly mueller_from_measured_stokes is even clearer.


3) A convenience function group:

M = inv_AR_mueller_from_CPL_projection(s0, s1, s2, s3)
Member

I suggest:

M = mueller_from_projection(s0,s1,s2,s3, model='AR', direction='inverse', input='CPL')

Member

@mattersoflight mattersoflight left a comment

Overall, this PR is a great step toward unifying the polarization algebra in our codebase.

The module currently consists of many small methods. They can be unified by specifying some of the assumptions via parameters.

I've made specific suggestions above. With that, the script can look like this.

# Calculate A and A_inv
A = stokes.instrument_matrix(swing=0.1, scheme="5-State")
A_inv = np.linalg.pinv(A)

# Apply A_inv to background and sample data
S_bg = stokes.mmul(A_inv, cyx_bg_data)
S_sm = stokes.mmul(A_inv, czyx_data)

# Calculate background correction matrix from S_bg
M_inv = stokes.mueller_from_projection(*S_bg, model='AR', direction='inverse', input='CPL')

# Apply background correction to sample data
bg_corr_S_sm = stokes.mmul(M_inv, S_sm)

# Reconstruct parameters
ret, ori, tra, dop = stokes.inverse_ADR(*bg_corr_S_sm, input='CPL')

Additional thoughts:

I can test the example script once the code path is refactored.

@talonchandler
Collaborator Author

talonchandler commented Mar 7, 2023

I've just completed the renaming/refactoring. @mattersoflight this is ready for your re-review.

The latest API (at this level) looks like:

# Calculate I2S
I2S = stokes.I2S_matrix(swing, scheme="5-State")

# Calculate Stokes vectors
S_sm = stokes.mmul(I2S, czyx_data)  # Apply I2S to sample data
S_bg_meas = stokes.mmul(I2S, cyx_bg_data)  # Apply I2S to measured background

# Calculate background correction matrix from S_bg
M_bg_inv = stokes.mueller_from_stokes(*S_bg_meas)

# Apply measured background correction to sample data
bg_corr_S_sm = stokes.mmul(M_bg_inv, S_sm)

# Recover ADR parameters
ret, ori, tra, dop = stokes.estimate_ADR_from_stokes(*bg_corr_S_sm)

I've also spent some time characterizing the old (green profiles) vs. new algorithms (white profiles).

Soorya's cells - retardance on y axis - measured bkg correction only
At most 2% difference in retardance.

Kazansky target - retardance on y axis - measured bkg correction only
At most 1% difference in retardance.

Kazansky target - orientation on y axis - measured bkg correction only
At most 2% difference in non-background regions once the different orientation convention is accounted for. The main difference here comes from a difference in orientation conventions, which we'll handle with two user-facing switches as discussed above.

Timing
The current performance bottleneck is the pre-calculation of mueller_from_stokes from the background Stokes vectors, which can be further optimized (I expect factors of 2-4x). For now, here are a couple of benchmarks:

1 x 2048 x 2048:
old algorithm: 1.0 s
new algorithm: 15.4 s

8 x 2048 x 2048:
old algorithm: 17.8 s
new algorithm: 19.6 s
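For context, numbers like these can be produced with a minimal harness along these lines (a sketch; old_bg_correction and new_bg_correction are the helpers defined in the full script below):

import time

def benchmark(fn, *args, repeats=3):
    # Return the best wall-clock time of fn(*args) over a few runs.
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return min(times)

# t_old = benchmark(old_bg_correction, czyx_data, cyx_bg_data, md.Swing, md.Calibration_scheme)
# t_new = benchmark(new_bg_correction, czyx_data, cyx_bg_data, md.Swing, md.Calibration_scheme)
# print(f"old: {t_old:.1f} s, new: {t_new:.1f} s")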

Example comparison script (generates the Kaz target comparison above)

Full example script:
import numpy as np
from waveorder import stokes
from recOrder.io.utils import load_bg
from recOrder.io.metadata_reader import MetadataReader
from recOrder.compute.reconstructions import (
    initialize_reconstructor,
    reconstruct_qlipp_stokes,
    reconstruct_qlipp_birefringence,
)
from iohub.reader import imread
from iohub.ngff import open_ome_zarr
import napari

# Set paths
base_path = "/hpc/projects/compmicro/rawdata/hummingbird/Talon/2023_02_08_kaz/"
data_subpath = "kaz-raw_recOrderPluginSnap_0/kaz-raw_RawPolDataSnap.zarr"
bg_subpath = "BG"
cal_subpath = "calibration_metadata.txt"

# Read data
reader = imread(base_path + data_subpath)
T, C, Z, Y, X = reader.shape
czyx_data = reader.get_array(position=0)[0, ...]

# Read background data
cyx_bg_data = load_bg(base_path + bg_subpath, height=Y, width=X)

# Read calibration metadata
md = MetadataReader(base_path + cal_subpath)

def new_bg_correction(czyx_data, cyx_bg_data, swing, scheme):
    # Calculate I2S
    I2S = stokes.I2S_matrix(swing, scheme=scheme)

    # Calculate Stokes vectors
    S_sm = stokes.mmul(I2S, czyx_data)  # Apply I2S to sample data
    S_bg_meas = stokes.mmul(I2S, cyx_bg_data)  # Apply I2S to measured background

    # Calculate background correction matrix from S_bg
    M_bg_inv = stokes.mueller_from_stokes(*S_bg_meas)

    # Apply measured background correction to sample data
    bg_corr_S_sm = stokes.mmul(M_bg_inv, S_sm)

    # Recover ADR parameters
    ret, ori, tra, dop = stokes.estimate_ADR_from_stokes(*bg_corr_S_sm)

    ret = ret / (2 * np.pi) * 532

    return ret, ori, tra, dop

def old_bg_correction(czyx_data, cyx_bg_data, swing, scheme):
    reconstructor_args = {
        "image_dim": (Y, X),
        "n_slices": 1,  # number of slices in z-stack
        "wavelength_nm": 532,
        "swing": swing,
        "calibration_scheme": scheme,  # "4-State" or "5-State"
        "bg_correction": "global",
    }
    reconstructor = initialize_reconstructor(
        pipeline="birefringence", **reconstructor_args
    )
    # Reconstruct background Stokes
    bg_stokes = reconstruct_qlipp_stokes(cyx_bg_data, reconstructor)

    # Reconstruct sample Stokes w/ background correction
    sm_stokes = reconstruct_qlipp_stokes(czyx_data, reconstructor, bg_stokes)

    birefringence = reconstruct_qlipp_birefringence(sm_stokes, reconstructor)
    birefringence[0] = (
        birefringence[0] / (2 * np.pi) * reconstructor_args["wavelength_nm"]
    )
    return birefringence

oldADR = old_bg_correction(czyx_data, cyx_bg_data, md.Swing, md.Calibration_scheme)
newADR = new_bg_correction(czyx_data, cyx_bg_data, md.Swing, md.Calibration_scheme)


v = napari.Viewer()
v.add_image(oldADR[..., 890:1220, 790:1370], name="old")
v.add_image(np.stack(newADR)[..., 890:1220, 790:1370], name="new")
import pdb; pdb.set_trace()

@talonchandler
Collaborator Author

I neglected to commit one small change: stokes.mueller_from_stokes should be direction=inverse by default since this is the most common usage mode (and the usage mode I showed in the snippet from last night).

@mattersoflight
Member

Thanks for the offline discussion. Looks great to me!

@talonchandler talonchandler merged commit 9465db4 into alg-dev Mar 7, 2023
@talonchandler talonchandler deleted the direct-stokes-to-params branch March 7, 2023 23:25
talonchandler added a commit that referenced this pull request Mar 7, 2023
@talonchandler talonchandler mentioned this pull request Mar 7, 2023
mattersoflight pushed a commit that referenced this pull request Mar 9, 2023
* Add .git-blame-ignore-revs (#109)

* Add #110 pr doc
talonchandler added a commit that referenced this pull request Jul 25, 2023