
support "sensitivity" and "timebase" style scaling in raw data browsers #10888

Closed

drammock opened this issue Jun 30, 2022 · 15 comments

@drammock
Member

drammock commented Jun 30, 2022

From the forum: https://mne.discourse.group/t/scaling-sensitivity-uv-mm/5079

The idea is that the raw data browser should have a way to take the display's DPI into account when setting the signal scaling (both the amplitude scaling and the duration of signal shown onscreen). This is apparently important in clinical practice, where certain waveform patterns are diagnostically useful and visual recognition of those patterns depends on knowing the scaling (e.g., "if I see such-and-such pattern when viewing at 7 mm/uV and 30 mm/sec, I know it was a seizure").

The linked forum issue has details about which scaling values are diagnostically useful.

@Grifunf

Grifunf commented Sep 7, 2023

+1
I recently started working on this project.
I will come back soon with a couple of questions/recommendations for the raw data browser.
Feel free to contact me for anything!

@drammock
Member Author

drammock commented Sep 7, 2023

@Grifunf I'm going to answer your forum question here; let's keep discussion about implementation details on GitHub from now on. Here's your question:

I have started working on this project as part of my diploma thesis. I aim to enhance both amplitude and time scaling settings, as well as the monitor sensitivity mentioned before.
My only concern for the time being is the values of the dropdowns, which you say are standardized. I can agree for the time-related ones, but what about the amplitude dropdowns? I saw the screenshots for EEG, but what about the other channel types?
My approach would be to reuse the existing inverted norm and populate the dropdowns with standard items around that norm. If not, I guess it would require some kind of dictionary with the commonly used values for every type, as well as a valid source for all this information.
Anyway, please feel free to contact me for any information needed.

MEG isn't used much for diagnostic purposes, so it's unlikely that there are "standard" sensitivity scalings out there for MEG channels. Moreover, it will start to get hectic if we add separate dropdowns for every channel type. Some ideas for how to do it:

  1. don't add a dropdown at all, but make it so that the scalings dict will accept string values formatted like "10 uV/mm" and then calculate the corresponding float value behind the scenes (taking the display DPI into account). This should generalize fairly well to all channel types (and let you control them independently). See the sketch below this list.
  2. Add a "sensitivity" dropdown whose text shows the EEG-relevant labels, then compute the conversion factor needed to change from the current scaling to the requested scaling, and apply that conversion factor to all channel types.
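
To make idea (1) concrete, here is a rough sketch; parse_sensitivity is a hypothetical helper, not an existing MNE function, and the unit table is only illustrative:

    import re

    _UNIT_FACTORS = {"V": 1.0, "mV": 1e-3, "uV": 1e-6, "fT": 1e-15}  # extend per channel type as needed

    def parse_sensitivity(text, dpi):
        """Convert a string like '10 uV/mm' into data units per on-screen pixel."""
        match = re.fullmatch(r"\s*([\d.]+)\s*(\w+)\s*/\s*mm\s*", text)
        if match is None:
            raise ValueError(f"Could not parse sensitivity string: {text!r}")
        value, unit = float(match.group(1)), match.group(2)
        pixels_per_mm = dpi / 25.4  # 1 inch = 25.4 mm
        return value * _UNIT_FACTORS[unit] / pixels_per_mm

    # e.g. on a 96-DPI display, parse_sensitivity("10 uV/mm", dpi=96) is about 2.6e-06 V per pixel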

@Grifunf

Grifunf commented Sep 8, 2023

Hello, and thank you for the prompt reply!

First of all, I'm fairly new to MNE and to encephalography, so any remarks or corrections are more than welcome.
I would like to share more details about the progress I've made so far and the direction I'm taking, and get your feedback on them.

For starters, I have distinguished 2 tasks:

  1. Make advanced scaling settings dialogs, one for amplitude and one for time scaling (instead of the dropdown lists seen until now).
  2. Integrate the monitor calibration, so that both of the above are measured in monitor distance ("10 μV/mm", "30 mm/sec").

I have started working on the Amplitude Settings Dialog. The initial idea was to be able to scale 1) all channels, 2) a specific channel type, and 3) a specific channel, either from the default items or via manual input (that is why I chose editable combo boxes over plain dropdown menus). Here is a screenshot of the idea:
[screenshot of the proposed Amplitude Settings Dialog]
As you can see, the example file I was working on has a lot of channel types (of course, all names, units, and values are placeholders and can be changed according to our needs). My initial thinking was to make the functionality support all those different types.

So here are my first 2 questions:

  1. Do only the EEG channels matter? I got confused by your comment on MEG. The STIM channel type is probably useless and can easily be removed, but in my example file there are more MEG than EEG channels. Also, I've read in the documentation that there are even more types, so it makes sense to be able to support them all.
  2. I get the idea of computing a scale factor from the user's choice on EEG and applying it elsewhere, but it sounds counterintuitive from the user's POV.

I am sorry for the long post; please take your time to respond. Thanks in advance.

@drammock
Member Author

drammock commented Sep 8, 2023

  1. Do only the EEG channels matter? I got confused by your comment on MEG.

My understanding is that "sensitivity" and "timebase" scaling are used for diagnostic applications of EEG (trying to detect epileptic events, sleep abnormalities, etc.), where it is critically important for the clinician to see the same "zoom level" every time they look at a recording. MEG is not normally used for disease diagnosis (only for research), so "sensitivity" and "timebase" are not normally used for it, and as far as I know there are no standardized sensitivity values in common use for MEG data.

The STIM channel type is probably useless and can easily be removed, but in my example file there are more MEG than EEG channels.

The number of individual channels doesn't matter; it's the number of channel types we need to worry about. I agree that scaling STIM channels is usually not necessary/helpful.

Also, I've read in the documentation that there are even more types, so it makes sense to be able to support them all.

I disagree, for the reasons stated above. Standardized sensitivity values are part of clinical diagnostic workflows, but are not widely used in research, and only some channel types are used diagnostically. So to me it doesn't make sense to choose sensitivity values for MAG, GRAD, FNIRS, etc. channel types when there aren't pre-established standard sensitivity values already in clinical use for those data types.

So this is why I think we should consider approaches that either (1) are flexible in what sensitivities are allowed (i.e., making the scalings dict accept strings like '10 uV/mm' for EEG or '10 fT/mm' for magnetometers), or (2) only support the standardized EEG values for sensitivity and scale all other channel types according to the conversion factor implied by the chosen EEG sensitivity.

@drammock
Member Author

drammock commented Sep 8, 2023

@Grifunf we discussed this issue today in our developers' meeting. Here is a recommended course of action from that discussion. Each of these could be a separate PR, though steps 1 and 2 might be fairly easy to do all at once in one PR.

  1. add UI elements that display the current scaling for each channel type (one per channel type in the viewed file)
  2. in the UI elements from (1), let users type in a new scaling to change it for a given channel type.
  3. Add more UI elements to display the current sensitivity for each channel type. These would be read-only for now, but at least users could use the +/- keys to get close to the sensitivity that they want.
  4. in the UI elements from (3), let users type in a new sensitivity to change it for a given channel type.
  5. Design a nicer UI for this feature rather than two columns (or rows) of text boxes. Maybe this means dropdowns or combo boxes, but for now let's focus on getting it to work; at the end we can worry about making it pretty / easy to use.

Notes:

  • in implementing both timebase and sensitivity, you will need to account for monitor DPI and window size / window resize events. Currently, we only control the number of channels / time samples displayed, so resizing the window will rescale the data traces. See the sketch below these notes.
  • ignore my earlier suggestion to change all channel types' scalings based on the chosen EEG sensitivity. I've been convinced that it's a bad idea.
  • support for strings like '10 uV/mm' in the scalings dict will probably happen more or less for free by the time we reach step 3 or 4.
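
For the DPI point, a minimal sketch assuming a Qt backend such as mne-qt-browser (mm_per_pixel is a hypothetical helper, and qtpy is just one way to get at the screen object):

    from qtpy.QtWidgets import QApplication

    def mm_per_pixel():
        """Physical height of one pixel in millimetres on the primary screen."""
        dpi = QApplication.primaryScreen().physicalDotsPerInchY()
        return 25.4 / dpi  # 1 inch = 25.4 mm

    # Any mm-based sensitivity or timebase would need to be recomputed whenever
    # the window is resized or moved to a screen with a different DPI.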

@Grifunf

Grifunf commented Sep 13, 2023

You have been really helpful, thanks again. For starters I will work with your guidelines.
I would only like to bring up a couple of issues of my own:

  1. Time scaling feels like something that shouldn't be different across channels/types. It would probably be a design nightmare too; I think it has no realistic use case, and it could be a nuisance for the user. Please correct me if I'm wrong.
  2. Using only the scalings dict is a good idea; one question, however, is how this will interact with the existing global scale_factor and the Zoom In/Out buttons, but I guess they can work together. Example: if the user chooses a 10 uV sensitivity and then doubles via the buttons, it should go to 5 uV and scale accordingly, even if it has a different overall factor from the rest of the channels.
  3. I will probably exclude the STIM channels from this.
  4. On this note, although this should probably be another issue, I am using the ch_types_ordered and the inv_norm snippets for the same purposes as the ScaleBars. They also exist in other places in the file. Could we possibly add those to the mne object so they are accessible from elsewhere and to avoid repetition?

@cbrnr
Contributor

cbrnr commented Sep 13, 2023

I haven't followed this discussion very closely, but I still wanted to add my thoughts. I see two issues with the current plan:

  1. It looks like this feature will be implemented only for the Qt-based browser. Up until now, we've tried to keep the two backends from diverging, so I think it would be good to involve more people in the decision if we want to change that.
  2. The main motivation for implementing new scalings seems to be clinical applications. This can be dangerous, because we certainly don't want medical doctors to base their diagnoses on our viewer. Legally, we already disclaim all warranties via our license, and doctors are not allowed to use it in their clinical practice anyway, but still. Should we include a warning, at least in the docs?

That said, I'm fine with adding sensitivity-based scalings, but I'd prefer to have them for both backends.

@drammock
Member Author

  1. It looks like this feature will be implemented only for the Qt-based browser. Up until now, we've tried to keep the two backends from diverging, so I think it would be good to involve more people in the decision if we want to change that.

There is no reason not to add it to the MPL browser too. In general I'm happy if contributors do one or the other and add an issue to remind us to restore parity later. Asking to do both at the same time can be a deterrent, and the Qt browser is in a separate repo, so separate PRs are necessary anyway.

  1. The main motivation for implementing new scalings seems to be clinical applications.

"clinical" can mean diagnostic practice, or it can mean research on diseased/disordered populations. So one use case is "clinicians (who are used to certain ways of viewing data) doing research (not diagnosis)".

Legally, we already disclaim all warranties via our license, and doctors are not allowed to use it in their clinical practice anyway, but still. Should we include a warning, at least in the docs?

I've seen explicit disclaimers in the licenses of other software (like 3D Slicer) saying things like "we neither recommend nor endorse using this for diagnosis". Recent conversations I've had have convinced me that we don't need that, because (at least in the US) the main constraints (1) apply only if you're selling the medical device (software is a "device" by FDA definitions) and (2) are mostly about what claims you can make when marketing the device. As long as we say things like "useful for clinical applications" and don't say "you can use this to diagnose epilepsy", then we're fine.

@drammock
Member Author

  1. Time scaling feels like something that shouldn't be different across channels/types. It would probably be a design nightmare too; I think it has no realistic use case, and it could be a nuisance for the user. Please correct me if I'm wrong.

agreed, time scaling should be one setting for all channel types.

  1. Using only the scalings dict is a good idea; one question, however, is how this will interact with the existing global scale_factor and the Zoom In/Out buttons, but I guess they can work together. Example: if the user chooses a 10 uV sensitivity and then doubles via the buttons, it should go to 5 uV and scale accordingly, even if it has a different overall factor from the rest of the channels.

We need to think about whether it would make sense to change the behavior of the +/- buttons (or disable them completely?) when using predefined sensitivity settings.
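
If we do keep them enabled, your example would compose roughly like this (a minimal sketch with placeholder names, not existing browser attributes):

    base_sensitivity_uv_per_mm = 10.0  # chosen in the (future) sensitivity UI
    zoom_factor = 2.0                  # after pressing "+" once (traces doubled in size)

    effective_sensitivity = base_sensitivity_uv_per_mm / zoom_factor  # -> 5.0 uV/mm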

  1. I will probably exclude the STIM channels from this.

STIM channels will be affected by timebase, and they already can be affected by the scalings dict. But yeah I don't think we need any sensitivity dropdown to scale stim channels.

  1. On this note, although this should probably be another issue, I am using the ch_types_ordered and the inv_norm snippets for the same purposes as the ScaleBars. They also exist in other places in the file. Could we possibly add those to the mne object so they are accessible from elsewhere and to avoid repetition?

If those variables are needed then sure, you can add them there. Just know that fig.mne is currently undocumented and not very well organized; things might move around (or be made private) at some point. That shouldn't deter you though; end-users shouldn't need access to those variables, and it's OK for module-internal code to access private variables.

@cbrnr
Contributor

cbrnr commented Sep 20, 2023

FWIW, I just noticed that EDFbrowser lets users override the detected DPI like this:

[screenshot of EDFbrowser's DPI override setting]

Maybe something to consider adding to our browsers as well?
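
A minimal sketch of what that could look like on our side, assuming a Qt backend (the user_override parameter is made up for illustration):

    from qtpy.QtWidgets import QApplication

    def effective_dpi(user_override=None):
        """Return the user-supplied DPI if given, otherwise the detected physical DPI."""
        if user_override is not None:
            return float(user_override)
        return QApplication.primaryScreen().physicalDotsPerInchY()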

@Grifunf

Grifunf commented Oct 5, 2023

Hello again @drammock.
I'm sorry for having to ask here, but I'm kinda lost on how to modify specific channel types in place from within the pg_figure module.
The dicts you mentioned seem to only affect the command-line function at browser startup, and the ScaleBars. I'm also wondering if apply_function is of any relevance.
Sorry if my question is answered elsewhere or doesn't make sense; I would appreciate any kind of advice.

@drammock
Member Author

I'm kinda lost on how to modify specific channel types in place from within the pg_figure module.

It's hard to know for sure what you're asking here. It would be much easier to answer if you could open a work-in-progress pull request on the mne_qt_browser repository, and point to specific line(s) of code where you're trying and failing to do something.

The dicts you mentioned seem to only affect the command-line function at browser startup, and the ScaleBars.

Do you mean the scalings dict? You are right, modifying that won't have any effect once the plot is created.

I'm also wondering if apply_function is of any relevance.

I don't think so; we don't want to modify the underlying data, we just want to change how much of it is displayed (timebase) and how zoomed in it is vertically (sensitivity).

The best advice I can give is to work on the steps I outlined previously, one at a time, in order, and to open a work-in-progress draft pull request as soon as possible. It is much, much easier for us to help you by looking at and commenting directly on the code than by discussing problems abstractly here.

@Grifunf

Grifunf commented Nov 1, 2023

Hello again! So I have opened the first draft pull request, and I hope I will be able to complete it soon. I have a couple of questions for the time being:

  1. I have implemented a scaling dict, separate from the scale factor, so that different channel types can be scaled independently. The only place I have trouble implementing it is the clipping part. There, a conditional is used:
    array[array * scale > threshold] = np.nan
    I need to index into the array in order to use the correct per-channel scale factor, for example (for each channel index i):
    arr[i, arr[i] * scale[i] > threshold] = np.nan
    I tried some for-loop and enumerate iterations, but they are incredibly slow. Do you have any advice on how to tackle this? (A rough sketch of the kind of vectorized approach I'm after is below this list.)
  2. I began looking into monitor calibration. I found the pixel ratio in our module; this isn't what I am looking for, but I was wondering whether we already have access to the monitor DPI or something relevant.
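
Here is the sketch mentioned in (1), using NumPy broadcasting (array names are placeholders, not existing browser attributes):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_normal((5, 1000))        # (n_channels, n_times)
    scale = np.array([1.0, 2.0, 0.5, 1.5, 3.0])  # one scale factor per channel
    threshold = 2.0

    # scale[:, np.newaxis] has shape (5, 1), so it broadcasts across the time
    # axis and the whole comparison happens without a Python loop.
    data[data * scale[:, np.newaxis] > threshold] = np.nan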

Thanks in advance!

@drammock
Member Author

drammock commented Nov 1, 2023

I'm actually in the middle of reviewing that right this moment. Stand by for feedback on the open PR.

@larsoner
Member

larsoner commented Sep 6, 2024

I'm going to close this since I think this has been tackled in #278 and related PRs, and I think we probably will not add this to the matplotlib browser. For further enhancements let's discuss over in the mne-qt-browser repo!

@larsoner larsoner closed this as completed Sep 6, 2024