Welcome to the microstruktur wiki!
Improvements to make:
- remove the spherical harmonics dependency on dipy; only depend on dipy for visualization.
- spherical mean model explanation should include astro-models.
- fitting does not raise an error if there is no b0.
- ODI and beta_fraction are optimization parameters; the flags to enable them should be included in the call to Watson/BinghamDistributedModel, i.e. when they are turned off the regular kappa/beta parameters are used.
- huge diameters give NaN signal attenuation without an error message.
- volume fractions cannot be fixed while using MIX.
- add a print_model_summary function that outputs the model composition and parameter optimization settings (see the sketch after this list).
- set custom parameter link and set custom replaced parameter: make a parameter link a list of 4 items, and an optimized parameter a list of 5, with the last item being the name suffix for the optimized parameter.
- the FOD optimizer currently cannot handle custom (replaced) parameters.
- still need to fix fixed parameters in Brute2Fine.
- add the Callaghan sphere model.
- add the Van Gelderen plane and capped cylinder models.
- move the parameter links into a separate Python file in utils.
- rewrite the docs for DD1.
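
The print_model_summary item above could look roughly like the sketch below. All attribute names used here (`models`, `parameter_names`, `parameter_ranges`, `parameter_fixed`) are assumptions about the multi-compartment model interface, not the existing API:

```python
def print_model_summary(mc_model):
    """Hypothetical summary printer for a multi-compartment model.

    Assumes the model exposes `.models` (its sub-models), `.parameter_names`,
    `.parameter_ranges` and an optional `.parameter_fixed` dict; these
    attribute names are assumptions, not the current interface.
    """
    print("Model composition:")
    for sub_model in mc_model.models:
        print("  - " + sub_model.__class__.__name__)

    print("Parameter optimization settings:")
    fixed_parameters = getattr(mc_model, 'parameter_fixed', {})
    for name in mc_model.parameter_names:
        if name in fixed_parameters:
            status = "fixed to {}".format(fixed_parameters[name])
        else:
            status = "optimized, range {}".format(mc_model.parameter_ranges[name])
        print("  {}: {}".format(name, status))
```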
Issues for later:
- the brute optimization of psi should take into account that the parameter is circular (see the sketch after this list).
- the mu parameter ranges should somehow be left unconstrained.
- Brute2Fine should estimate a proper grid for mu instead of a theta-phi grid in equal steps (see the sketch after this list).
- function "print_relevant_references" that can model-dependently print the references that are related with the current model composition.
- l0.5-norm for MIX as in Zhu, Xinghua, et al. "Model selection and estimation of multi-compartment models in diffusion MRI with a Rician noise model." IPMI 2013 (see the sketch after this list).
- find optimal parameters for Ns and NSpherePoints for arbitrary model setup.
- make Bingham and Watson both sh-order and sphere dependent (so fewer points are sampled at lower sh-orders).
- visualize the model using graph nodes: optimized parameters -> linked / preset parameters -> input for models -> combined signal. Dask uses http://www.graphviz.org/ for this.
- Implement sparse dictionary fitting using http://spams-devel.gforge.inria.fr/ as in AMICO.
- deep q-space learning Golkov et al. https://sci-hub.tw/10.1109/TMI.2016.2551324
- Analytic gradients in optimization.
- Bingham is currently normalized using the spherical mean instead of analytically. An implementation of the hypergeometric function of a matrix argument is required, see http://www-math.mit.edu/~plamen/files/hyper.pdf. This also matters for closed-form Watson/Bingham dispersed stick implementations, see Appendix A in https://sci-hub.io/10.1016/j.neuroimage.2012.01.056. A numerical sketch is given after this list.
- directly estimate Bingham / Watson in spherical harmonics: https://arxiv.org/pdf/1501.04395.pdf
- Implement Matrix-Variate Distribution models and DIAMOND (Scherrer).
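
Regarding the circular psi parameter in brute optimization: a minimal sketch of a period-aware grid and distance, assuming psi has period pi (the helper names are made up for illustration):

```python
import numpy as np

def psi_grid(n_steps):
    # endpoint=False avoids sampling 0 and pi twice, since they map to the same angle
    return np.linspace(0, np.pi, n_steps, endpoint=False)

def circular_distance(psi_a, psi_b, period=np.pi):
    # shortest distance between two angles on a circle with the given period
    d = np.abs(psi_a - psi_b) % period
    return np.minimum(d, period - d)
```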
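
For the mu grid in Brute2Fine, one option would be a near-uniform hemisphere sampling (mu is antipodally symmetric), for instance a Fibonacci lattice instead of equal theta-phi steps; a sketch under that assumption:

```python
import numpy as np

def fibonacci_hemisphere(n_points):
    """Near-uniform candidate grid for mu on the upper hemisphere."""
    golden_angle = np.pi * (3. - np.sqrt(5.))
    i = np.arange(n_points)
    z = (i + 0.5) / n_points              # z in (0, 1): upper hemisphere only
    phi = (golden_angle * i) % (2 * np.pi)
    theta = np.arccos(z)                  # (theta, phi) as used for mu
    return np.stack([theta, phi], axis=-1)
```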
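
The l0.5-norm for MIX (Zhu et al., IPMI 2013) would amount to adding a sparsity penalty on the volume fractions; a toy sketch of such a penalized objective, where `compartment_signals` and `weight` are placeholder names rather than the actual MIX implementation:

```python
import numpy as np

def penalized_objective(fractions, data, compartment_signals, weight=0.1):
    """Least-squares data term plus an l0.5 penalty that promotes sparse fractions."""
    prediction = compartment_signals.dot(fractions)   # forward signal, (n_measurements,)
    data_term = np.sum((data - prediction) ** 2)
    sparsity_term = np.sum(np.abs(fractions) ** 0.5)  # the "l0.5-norm"
    return data_term + weight * sparsity_term
```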
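
On the Bingham normalization: until the hypergeometric function of matrix argument is implemented, the normalization constant can be approximated numerically by averaging exp(x^T B x) over points on the sphere, which is essentially the spherical-mean normalization mentioned above; a sketch:

```python
import numpy as np

def bingham_normalization(B, n_samples=10000, seed=0):
    """Monte-Carlo estimate of c(B) = integral over the sphere of exp(x^T B x) dx.

    This mirrors the current spherical-mean normalization; the analytic route
    would go through the (confluent) hypergeometric function of matrix argument.
    """
    rng = np.random.RandomState(seed)
    x = rng.normal(size=(n_samples, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)        # uniform samples on the sphere
    integrand = np.exp(np.einsum('ni,ij,nj->n', x, B, x))
    return 4 * np.pi * integrand.mean()                  # surface area times the mean
```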