User group, tcspc offset and markers #46

Open
wants to merge 7 commits into base: 0.5.dev

Conversation

@harripd commented Aug 19, 2024

Summary of changes

This PR introduces a small number of additions to the format, while also clarifying a number of points without changing the format itself.

Additions to the format

  1. Markers: limited support for markers is now included with the addition of the /photon_data/measurement_specs/markersX groups (X being a positive integer). These denote IDs in the /photon_data/detectors array that are assigned not to photons but to arbitrary "event" markers. The meaning of markers is not defined anywhere in the spec, but should be specified by the file creator inside the user-defined /user/ group (see point 3). As no additional support for markers is specified, non-custom software should ignore entries designated as markers. (Note: future versions may provide for defining markers for certain types of FLIM etc., but these belong to future versions, not 0.5.) A minimal layout sketch of these additions follows this list.
  2. The /setup/detectors/tcspc_offset array has been added to specify any offset that should be applied to different detectors as a result of differing distances between detectors.
  3. /user/ group: this is not a change per se, but still a major clarification. The /user/ group is now an official part of the documentation, and is specifically designated as being open for the file creator (the user) to define as he/she wishes, although there are some suggested ways of using it:
    • Use the /user/[vendor] group to store raw metadata loaded from the header of the original file; this ensures preservation of data
    • Use /user/experimental_settings to store various experimentally determined values like FRET correction factors, etc., as well as an explanation of any marker IDs in the data
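A minimal, non-normative sketch (Python with h5py) of how a writer might lay out the three additions above; the dtypes, example values, and the marker_2_meaning dataset name are illustrative assumptions, not part of the spec:

```python
import h5py
import numpy as np

with h5py.File("example_photon_hdf5.h5", "w") as f:
    # Photon stream; here IDs 0 and 1 are photon detectors, ID 2 is a marker.
    ph = f.create_group("photon_data")
    ph["timestamps"] = np.array([10, 25, 40, 55], dtype=np.int64)
    ph["detectors"] = np.array([0, 1, 2, 0], dtype=np.uint8)

    # Addition 1: declare which IDs are markers rather than photons.
    specs = ph.create_group("measurement_specs")
    specs["markers1"] = np.array([2], dtype=np.uint8)

    # Addition 2: per-detector TCSPC offset (units assumed here to be seconds).
    det = f.create_group("setup/detectors")
    det["id"] = np.array([0, 1], dtype=np.uint8)
    det["tcspc_offset"] = np.array([0.0, 250e-12])

    # Point 3: free-form /user group, e.g. documenting what marker ID 2 means.
    user = f.create_group("user/experimental_settings")
    user["marker_2_meaning"] = "line-scan advance"
```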

Clarifications

  1. Reworded the text on measurement_type, adding emphasis that "generic" is now the preferred type and that, while we may still add new types, there will need to be a very compelling reason to do so.
  2. Reworded some of the text (without changing definitions) on ALEX-related fields under /photon_data/measurement_specs to hopefully make the spec clearer and more human-readable.
  3. Improved the documentation syntax (hopefully nothing missed):
    • Made explicit that certain fields should have the same number of values as others (e.g. timestamps and detectors) or match the value of another field (e.g. num_pixel and setup/detectors/id)
    • More consistent naming of fields:
      • if a referenced field is a group, end with a /
      • root level groups will always start with a /
      • non-group fields will not end with a /
      • field names are always set in monospace when being referenced (as opposed to being defined).
  4. Renamed "Detector IDs" to "Record IDs" to reflect inclusion of marker records
    • Terminology renamed throughout document (hopefully all instances caught)
      • Before, terminology was split between "detectors", "detector IDs" and "pixel IDs"; now it should be more consistent and precise
    • New terminology:
      • "record ID" is inclusive of both real photons and markers
      • "detector ID" is a real photon, arising from a physical detector
      • "marker ID" is any other sort of record, like a sync photon or marker for FLIM measurements, currently v0.5 has no official way to interpret these, and so will be ignored
    • A subsection on Marker IDs was added for further clarification

@smXplorer (Contributor) commented Aug 19, 2024

About the interpretation of "detector ID": not all counts from a detector are real photons, therefore it might be preferable to talk about detector pulses (or counts).

Regarding the /user/ group: the open-endedness of the specification seems like an invitation to utter confusion. In a very extreme case, I can imagine someone dumping the raw data file as a byte array "for the record". This is of course unlikely to happen, but you get the drift.
I would suggest holding off on making this a part of 0.5.

Consider replacing "he/she" by "she/they/he".

@harripd (Author) commented Aug 20, 2024

> About the interpretation of "detector ID": not all counts from a detector are real photons, therefore it might be preferable to talk about detector pulses (or counts).

How about rewording this section to:

"Simply put a detector ID corresponds to an event that results from a pulse (real photon or background/afterpulse) at one of the detectors (pixels)."

This way we don't assert that it is a real photon, but we also make clear that even if it is not, the instrument is "interpreting" it as a photon, as opposed to markers, which are most definitely not photons.

> Regarding the /user/ group: the open-endedness of the specification seems like an invitation to utter confusion. In a very extreme case, I can imagine someone dumping the raw data file as a byte array "for the record". This is of course unlikely to happen, but you get the drift.
> I would suggest holding off on making this a part of 0.5.

If you look at the phconvert notebooks that Antonio published, they actually use the /user group to store the header metadata.
Further, again using phconvert as a guide, the assert_valid_photon_hdf5 function specifically ignores the /user group. So making /user a part of 0.5 is really just putting it in the documentation, when the practical implementation already does this.

Would additional text discouraging "data dumping" be helpful? Or additional emphasis that the core interpretation of data should not require the /user group? It should be more of a semi-structured place to store metadata that currently has no official place in the spec.

@smXplorer (Contributor)

Agreed with the wording, although I have reservations about introducing markers in 0.5.

My bad, I did not dig into phconvert and missed Antonio's usage of an undocumented feature (I couldn't find any mention of that group in the docs or in the multispot photon-hdf5 files we released with the Methods paper for that matter).

There probably should be guidelines in such a "no rule" section, but my preliminary thinking was, if anything, to impose some kind of json or similar string-based structure.
I know that Matlab will read an hdf5 file and intelligently create the appropriate data structure internally, so a limitation to strings has no real bearing here, but other languages might have a harder time (python converts json to dictionaries easily).
My point is that since it was so embryonic in Antonio's implementation, it might be a good thing to mull it over a bit more, before regretting having opened this Pandora's box in the future. Remember, you can't undo anything in a backward-compatible file format...
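As a rough illustration of the string-based idea, here is a sketch only (nothing in the spec or in phconvert works this way; the field name and header contents are made up):

```python
import json
import h5py

raw_header = {"firmware_version": "3.1", "tac_range_ns": 50.0}  # made-up example

with h5py.File("example.h5", "a") as f:
    user = f.require_group("user")
    # One string dataset holding the whole vendor header as JSON.
    user["vendor_header_json"] = json.dumps(raw_header)

with h5py.File("example.h5", "r") as f:
    # Any language that can read an HDF5 string and parse JSON can recover it.
    raw = f["user/vendor_header_json"][()]
    header = json.loads(raw.decode() if isinstance(raw, bytes) else raw)
```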

If I understand what you are saying, Antonio just dumped the header of whatever file he was converting into this /user/ group (presumably as a string?).
But what would be the use of this metadata without the raw data?
There might be a few bits of information to glean from it (firmware version, TAC or TDC settings, etc.), but I would argue they will have no practical use for data analysis and are therefore not needed.
The point is that photon-hdf5 data is processed (photons are dispatched into detector buckets, ordered), and having the info that was used to perform this processing will generally not allow one to get back to the raw file.
If people want the raw file, let them get the raw file.
Photon-hdf5's purpose is not to allow getting back to it.

@harripd (Author) commented Aug 20, 2024

The way Antonio stored the metadata was not as a string, but rather in a series of separate arrays.

Basically, both Picoquant and B&H headers are dictionaries, and so Antonio just made field names the same as the dictionary keys, and values equivalent to the values of the dictionary (so basically /user/picoquant or /user/becker_hickl would have a bunch of arrays, most of them single element, named after the keys in the header files).

The main purpose of storing this metadata is that these headers often contain information about the offsets, powers, etc. used for the detectors, lasers, and so on. None of this data is used by FRETBursts, but I think it is good to still store it as a reference for an intrepid person to investigate, just in case. For instance, some B&H cards (like the one I'm using) can specify the TAC window in ways that phconvert still doesn't quite parse correctly (I've been studying this, and still can't perfectly match what I know is the TAC window with the various fields in the B&H .set file), so sometimes I wind up with files where the nanotimes unit is wrong. Having the original metadata there is useful in case someone wants to check that the conversion process went correctly. It shouldn't be needed, but it is a good check.
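A minimal sketch of that per-key layout as I understand it (the header dict, its values, and the vendor group name here are illustrative, not taken from phconvert):

```python
import h5py
import numpy as np

# Made-up header; real B&H / PicoQuant headers have many more keys.
header = {"tac_range": 50.0, "tac_gain": 4, "sync_divider": 1}

with h5py.File("example.h5", "a") as f:
    grp = f.require_group("user/becker_hickl")
    for key, value in header.items():
        grp[key] = np.atleast_1d(value)  # mostly single-element arrays
```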

@smXplorer (Contributor)

I understand the points and I am not trying to be difficult, but just to instill the sense of caution that should be present in the mind of someone trying to develop a useful tool.

You have an excitation_input_powers parameter in the setup group that would address the laser power point.
The offset is a newly introduced parameter.
What else is really needed that should then be added to the format?

My take on the other problem you are describing is that if a photon-hdf5 file does not produce the results that the original creator is trying to achieve, the file is incorrect and should not be released by the original creator.
I would argue that one should carry out one's analyses with the photon-hdf5 file they are going to release, since their Jupyter Notebook (or Matlab script, or whatever code is shared for reproducibility purposes) will use that file.
Converting the file properly is internal voodoo that can be described if needed, but is way upstream.
I am not sure how providing the parameters you mention would help in any way in figuring out that the photon-hdf5 file is actually bogus, or how it would help in correcting it.
It might if it is indeed a simple rescaling, but if things are a bit more involved (nonlinear) then this won't help (I am only using binned std files from B&H, and have only dealt with "fake" ptu files from Leica, so I am just interpolating from this).

Note that I am not against dumping header files from vendors, but then my suggestion would be to call it just that, for instance a source_file_header field in the provenance group, not in a /user group open to all abuses :-).

Note that since there is no guarantee vendors will stick to one format (assuming it can easily be converted for a given version), my preference would be to dump the raw header as a string, leaving it up to the motivated user to do whatever forensic analysis they wish with it.

@marktsuchida left a comment

Here are my comments; sorry for taking my time.

None of the changes I commented on are dealbreakers for me, only suggestions for (what I hope is) improvement, so counterarguments are welcome.

docs/phdata.rst Outdated
Comment on lines 607 to 610
.. _record_ids:

Detector pixel IDs
^^^^^^^^^^^^^^^^^^
Record IDs
^^^^^^^^^^


The HDF5 field names already use "detector" for all of these IDs (the /photon_data/detectors array contains both photon and non-photon), so I wonder if it will be less confusing if the general name just remains "detector ID"; we could use "photon detector ID" and "non-photon detector ID" when only one kind is meant.

I understand it's a tradeoff between using terms that better describe the thing vs terms that better match the HDF5 fields.

(I'm also slightly uncomfortable with using "record" for this, because that word is often used for a single TTTR record (often 32 bits) -- a photon, marker(s), or counter overflow.)

Author


@smXplorer @talaurence
What do you think? I have no particular attachment to the term markers or records.

So the choices would be

  1. What the current pull request says, i.e. entries are called "records"
  2. Mark's suggestion: go back to the older terminology of detectors, but distinguish between "photon detector ID" and "non-photon detector ID"
  3. We call these "events", so we would have "event IDs" and can refer to "photon events" and "non-photon events"

Contributor


You are discussing adding "markers/non-detector" flags and are wondering how to shove them into an array that was explicitly designed to store information on which detector a timestamp is coming from.
No surprise finding the proper terminology is confusing. :-)
When encountering such a problem, try to think out of the box.
My radical suggestion is to postpone introducing this until after we have understood each other and weighed the pros and cons of adding this level of complexity (i.e. until 0.6).
But as an alternative to the above suggestion of trying to insert information of a different nature into an existing structure, I'll bring up the possibility of adding an optional array of "markers" to the photon_data group(s).
If there are no markers (current files), there is no array.
If there are markers (future files), there is an array. It will mostly be filled with '0' (no marker), and those entries that are non-'0' will be associated with a dummy "detector" entry (i.e. any value works as it will be ignored, but one could use the convention that 2^16-1 will be the equivalent of an integer NaN), since there is a priori no reason to have any specific detector associated with such a marker.
If I am not mistaken, HDF5 should be able to compress sparse "marker" arrays very efficiently.

So to answer your question: if there is no marker introduction in this version (0.5), then the question becomes moot.
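To make the alternative concrete, here is a toy sketch of how such a sparse markers array could be sliced (the array contents are invented for illustration; nothing here is part of the format):

```python
import numpy as np

# Parallel arrays as they might be stored in a hypothetical photon_data group.
timestamps = np.array([10, 25, 40, 55, 70])
detectors = np.array([0, 1, 0, 0, 1], dtype=np.uint8)  # dummy value at marker rows
markers = np.array([0, 0, 3, 0, 0], dtype=np.uint8)    # 0 = no marker, 3 = e.g. a line marker

photon_mask = markers == 0
photon_timestamps = timestamps[photon_mask]   # what a marker-aware reader keeps
photon_detectors = detectors[photon_mask]
marker_timestamps = timestamps[~photon_mask]  # the marker events themselves
# A marker-unaware reader simply processes all five entries, including the dummy row.
```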


I don't see a huge problem with "detector" not necessarily meaning "photon detector" (hence my suggestion above). It can be an (electrical pulse) edge detector or whatever. All we need is to explicitly define the term "detector" to be more general (as @harripd has already done under different terminology). IMHO "detector" is actually a pretty good term to mean "something that produces timestamps (records) for observed events that are otherwise indistinguishable".

My concern, to reiterate, was with letting the terminology diverge between the HDF5 names and the spec, which makes it harder to keep the definitions straight in one's head.

(Using code-font for HDF5 fields and quotes for spec terms) if the thing is called detector in the HDF5, you need to think in your head that detector can be either photon or non-photon anyway. I fear that adding to this situation a different name "record" for the same concept as detector, and especially the same name "detector (ID)" for a different concept (a detector that is not a "marker"), is going to increase the risk of people misinterpreting the spec.


I don't think using a separate markers array is going to win us much, I'm afraid. It is overall more complicated, it still requires the detectors array to contain a value that photon-only readers need to skip, and (at least as proposed) it imposes the irregular requirement that ID 0 cannot be a marker, which will require special-casing in all handling code (or bugs if not done right).


Regarding using "event" instead of "record" -- I think I have the same problem as with "record": both words suggest to me a single (photon or marker) event rather than a channel of input data.

("Event" and "record" would both be good words for talking about a single entry across the timestamps and detectors (and nanotimes) arrays, when not distinguishing between photons and non-photons. The term "record" could also be applied to other places where we have parallel arrays of the same length that act as the columns of a table (such as /setup/detectors): a record would denote a row in the conceptual table. That's clearly a concept in this spec but we don't have a word for it currently. I'd suggest saving these words for improving the readability of the spec in the future.)

Contributor


Regarding a separate array: being sparse and using, for instance, 0 as the "no marker" value, it makes it extremely easy and fast to slice the actual photon_data array (and its associated detectors array, if any).
The MAIN advantage I see in it is that it does not confuse a reader that is not aware of those additional markers (as I mentioned, since the "detector" of a "marker" is irrelevant, it can be anything: no need for a special value as I was musing about).
Yes, you will add "dummy" photons in your processed data, but it probably won't matter if there are very few of them.
This is kind of the essence of a backward compatible format.
You can ignore what you don't know about and be fine.
With a "detectors" array containing values that are not detector IDs, but pass as such (for an unsuspecting code), a software trying to interpret this will hallucinate detectors which will eventually contain very few time stamps.

Author


> I don't see a huge problem with "detector" not necessarily meaning "photon detector" (hence my suggestion above). It can be an (electrical pulse) edge detector or whatever. ... IMHO "detector" is actually a pretty good term to mean "something that produces timestamps (records) for observed events that are otherwise indistinguishable".

This is a really good point; the relevant fields are all called detector ... in the HDF5 file. I'll update to keep the term detector as the "universal" term, and we can then distinguish between photon and non-photon detectors.


> The MAIN advantage I see in it is that it does not confuse a reader that is not aware of those additional markers (as I mentioned, since the "detector" of a "marker" is irrelevant, it can be anything: no need for a special value as I was musing about).

I don't see a separate array as providing any real advantage, because you need to correlate the indexes of the markers array with the detectors and timestamps arrays. So either you introduce two arrays, a markers_id and a markers_timestamp (which will likely be very small), or you make markers the same size as detectors and you still have markers in the detectors array, defeating the whole point of keeping backwards compatibility.

I also checked over the FRETBursts code: it already looks specifically at the detectors_specs/spectral_ch1/2 fields to identify donor/acceptor photons, and detector IDs not in those fields will be ignored, if unintentionally. I agree we want to keep things backwards compatible, but if you don't have any markers, a v0.5 file will be readable by all current readers anyway.
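A toy sketch of that selection pattern (not FRETBursts' actual code; the array contents are invented) showing how undeclared IDs simply fall outside both masks:

```python
import numpy as np

detectors = np.array([0, 1, 7, 0, 1], dtype=np.uint8)  # 7 = undeclared (e.g. non-photon) ID
spectral_ch1 = np.array([0])  # donor detector IDs declared in measurement_specs
spectral_ch2 = np.array([1])  # acceptor detector IDs declared in measurement_specs

donor_mask = np.isin(detectors, spectral_ch1)
acceptor_mask = np.isin(detectors, spectral_ch2)
undeclared = ~(donor_mask | acceptor_mask)  # ID 7 lands here and is never interpreted
```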

In the end there is never full backwards compatibility; it winds up being a pipe dream. I think there is value in maintaining backwards compatibility, but a new feature will always create some level of a break in backwards compatibility.


Also, just as a datapoint, markers are not always vastly less frequent than photons in a FLIM dataset. In fact, pixel markers (if recorded) can be more frequent than photons depending on specimen and conditions. Not that I'm primarily worried about performance.

Regarding backward compatibility: readers that interpret the photon detector IDs based on the existing Photon-HDF5 fields (such as spectral_ch1, split_ch1) will (hopefully) not be bothered by extra (non-photon) detector IDs. Readers that only handle a single detector ID are also safe in principle. There may be edge cases (such as readers that don't check their assumptions), but I don't think the compatibility problem is that severe. Admittedly a reader that does not fully interpret the data but somehow still assumes photons (say, code that computes simple statistics) will not be able to distinguish between photon and non-photon channels.

docs/phdata.rst Outdated
Comment on lines 270 to 271
- **markers1**
- **markers2**


Bikeshedding the names.

  • Might it be more uniform to use the _ch1, _ch2 suffixes to indicate that these contain the same types of IDs as spectral_ch1, split_ch1, etc.? (Counterargument would be that "channel" is to be used only for detectors that observe something split from a common photon source. If so, perhaps this should be defined explicitly.)

  • Having thought about this a bit more, if the intent is to use this as a mechanism to segregate out all timestamps that are not photons (including things like a laser SYNC or raw (pre-CFD) photon falling/rising edges recorded by Swabian Time Tagger), it might be better to use a more direct name such as non_photon_ch1, non_photon_ch2, etc., because "marker" has a fairly specific connotation (at least to those familiar with PQ and BH data). We probably don't want to keep adding categories that a photon-only reader needs to skip.

Author


  • _ch1/_ch2: I have no strong thoughts one way or another. I've thought the same as you, that having _ch1/_ch2 serves as a way to set apart enumerations of IDs, but it also suggests they are photons.
  • I think I agree with your thoughts here; non_photon is a better name.

Author


I've been giving the naming some more thought, and the reality is there are at least two distinct categories of non-photon detectors: those that correspond to some form of detected event (i.e. a sync signal, a Swabian rising/falling edge tag, etc.) and those that are produced internally by the software (i.e. markers indicating the advance of a pixel or a line).

I'm looking ahead to 0.6, when we introduce a FLIM standard: how will we define which non_photon/marker channels correspond to which marker (pixel/line advance)? The /setup/scan/ group could contain a mapping, but it would be less intuitive for a person reading the file. Another strategy might be to say that ``detectors_specs/non_photonX`` means a detector with a purpose defined in the ``/user/`` group, and in 0.6 we will introduce specific fields ``detectors_specs/scan_chX`` and maybe ``detectors_specs/sync_chX`` to more concretely define the functions of specific categories of non-photon events.


Something like your latter strategy is what I was envisioning (same in spirit as #45). That is, we specify detector IDs for known/standardized roles; not the other way around.

This follows how the photon detector IDs are specified. It also allows for nonstandard detector IDs (photon or non-photon) whose role is only documented in /user (and probably not recommended for final publication of data, but useful in early stages of a project). General-purpose readers should not interpret such detector IDs.

(I would note that frame/line/pixel markers (or a subset of those) are usually recorded by hardware, although software could, say, generate pixel markers from line markers if it wants to.)

docs/phdata.rst Outdated
Comment on lines 282 to 287
All previous fields are arrays containing one or more :ref:`record IDs<record_ids>`
(Detector IDs for all ``spectral_chX``, ``polarization_ch1`` and ``split_chX``,
and :ref:`marker IDs <marker_ids>` for ``markersX``).
For example, a 2-color smFRET measurement without polarization or split channels
(2 detectors) will have only one value in ``spectral_ch1`` (donor) and one value in
``spectral_ch2`` (acceptor). A 2-color smFRET measurement with polarization


I don't think this part applies to the markers (non-photons), so perhaps this paragraph should come before the markers (and not mention markers).

(And perhaps a single field, marker_ids (or non_photon_ids) is sufficient, because we do not anticipate these forming an axis in an N-dimensional space in the way that the spectral/pol/split ones do.)

Comment on lines +635 to +637
If the acquisition software also records events such as sync signals or markers indicating
a change in position of a piezo or galvo scanner, these events should also be assigned a unique
pixel ID. These events are all considered markers. To distinguish between detector (i.e. real photons)


Might I suggest "a unique record ID" (or whatever we decide on if my other comment gets any traction) and not "pixel ID" here, because the use of "pixel" to mean a detector element in this raster-scan-related context is potentially confusing?

@harripd (Author) commented Oct 29, 2024

@smXplorer @marktsuchida
I've updated the text with these main changes:

  1. Changed markers to non-photon IDs, and clarified the language throughout
  2. Added a note that while supported, non-photon IDs will break backwards compatibility, and thus should only be used if absolutely necessary.

I encourage everyone to go over this carefully.

@marktsuchida

@harripd and @smXplorer,

Regarding the notes (and points raised in email) about backward compatibility: it might help to define what exactly we mean by backward compatibility (even if in an unofficial way to begin with).

There are broadly two possible policies:

  • We could say that readers must check the format version, and simply reject the file if it is of a newer version than the reader supports.

  • We could make it a policy to ensure that all new features are introduced in a way that old readers will "just work" as long as they ignore fields that they do not know of.

There could also be a mix of the two policies, such as readers rejecting files if the major version number is newer than known (I think this might be reasonable). Also, validators (as opposed to mere readers) will necessarily have to reject files of a newer version, or at least warn that they are not performing a full validation.

Without at least some stated policy, I suspect it will be hard to agree on what constitutes backward compatibility.


For the specific case of adding the non-photon IDs, I do not think that the addition is backward incompatible, even under the second interpretation above. Prior to the addition of non-photon IDs, data from each detector ID was already only interpretable if it was specified somewhere what the detectors are (such as with spectral_ch1), with the possible exception of when there is only one detector ID. So in the 0.4 format, it was valid to have non-photon channels in the file -- except that there was no way for a reader to interpret them.

With the addition of the non-photon designation, old (0.4, or earlier 0.5dev) readers should still be able to read files containing non-photon channels, as long as they ignore the newly added fields. I think @harripd made a similar point specifically for FRETBursts in an email.

I do not have any issue with discouraging the use of non-photon channels in general, though. It remains more a feature for internal intermediate datastores than for final ("publication-quality") sharing, at least until we standardize ways to attach semantics (such as related to raster scanning) to these channels.

@harripd (Author) commented Nov 10, 2024

@marktsuchida I agree with your main points. Most importantly, the term backwards compatible was not precisely defined, and we need a concrete definition.


Upon giving it some thought, I think what we often call "backwards compatibility" is actually "forwards compatibility", so let me define them below:

Backward compatibility means that you can read and interpret files of previous versions using newer versions of the software. This is fairly easy, and more closely aligns with Mark's first principle.

Forward compatibility means that you can read and interpret files of newer versions using older versions of the software. This is much more difficult. In fact, it is impossible in full: any new addition will add something that, unless it is entirely redundant (and then what is the point), previous versions of the software will not know about and thus be unable to handle. Therefore forward compatibility inevitably means that new features are implemented in a "minimally breaking" way; in other words, as long as a particular file written in a new version doesn't require a new feature, it will be readable by older versions of the software. This is basically Mark's second definition.


Ideally we strive for both, insisting on backward compatibility and implementing forward compatibility as much as possible. Below I outline some more specific principles for how we can achieve this inside of photon-HDF5:

  1. New fields should always be optional/conditionally mandatory (i.e. mandatory only when a new feature is used in the particular experiment) in minor version updates; major version updates may make a new field mandatory.
  2. The data type (options) of a field will not change from version to version.
  3. Fields will not be removed.
  4. Whether or not a field is required must be implemented in a way consistent with previous versions; again, this means:
    1. Any field introduced as mandatory will necessarily be mandatory in all future versions
    2. For conditionally mandatory fields, if a set of conditions requires a field to be mandatory in a previous version, it will also be mandatory under those conditions in future versions.
    3. In cases where new features (usually in another field) are added, the new implementation should keep the field mandatory in all cases where it would be mandatory when ignoring the new feature.
  5. Validators should be considered version-specific: they cannot validate photon-HDF5 files of versions newer than what they were designed for. However, we can implement a "permissive/strict" option (or other similar name) determining whether or not the validator checks for unknown fields, and whether to throw an error or simply warn.

I also like Mark's point about major and minor versions. I suggest we follow how many software packages (including Python and NumPy), as well as file formats, handle their versions: major versions (the number before the ".") can introduce changes that break backwards compatibility (although trying to minimize this), while minor versions should only add new features, i.e. they must be fully backwards compatible, and as forwards compatible as possible.
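As a sketch of what a reader-side version check could look like under this convention (where and how the version string is stored is an assumption here, not something this PR defines):

```python
import warnings

def check_format_version(file_version: str, supported: str = "0.5") -> None:
    """Reject newer major versions; only warn about newer minor versions."""
    file_major, file_minor = (int(x) for x in file_version.split(".")[:2])
    sup_major, sup_minor = (int(x) for x in supported.split(".")[:2])
    if file_major > sup_major:
        raise ValueError(f"Unsupported major version {file_version}; refusing to read.")
    if (file_major, file_minor) > (sup_major, sup_minor):
        warnings.warn(f"File version {file_version} is newer than supported {supported}; "
                      "unknown optional fields will be ignored.")
```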


Regarding non-photon IDs: as written, they fully follow the above principles, and discouraging their inclusion further makes clear that they are internal and intermediate, in preparation for when we standardize more specific types of non-photon IDs like markers and sync signals.

Regarding Mark's point that they do not break either forwards or backwards compatibility, it is a bit of a gray area.
Basically, according to the strict interpretation of v0.4, any detector ID not defined in spectral/polarization/split_chX was simply illegal.
But in the way most code was implemented, as Mark said, the meaning of that ID is simply unknown, and so readers should simply disregard it.
The latter is how most software was implemented.
The biggest issue I could see happening is if someone wrote a "lazy" reader that assumed, say, just spectral_ch1 and spectral_ch2 were present, and so made a simple boolean table by mask = detectors == spectral_ch1, in which case both spectral_ch2 and the undefined non-photon IDs become False.
But I would point out that this could better be considered a bug or an improper implementation of the v0.4 spec anyway.
