clearly defined anatomical orientation #208
Comments
@dyf thanks for raising this important issue! I think what you are describing could be expressed via the coordinate systems and transformations metadata. If you have imaging data that was acquired / stored in instrument coordinates, but can be transformed to anatomical coordinates via some transformation, you basically have two "coordinateSystems" : [
{
"name" : "instrument",
"axes": [
{"name": "z", "type": "space", "unit": "micrometer"},
{"name": "y", "type": "space", "unit": "micrometer"},
{"name": "x", "type": "space", "unit": "micrometer"}
]
},
{
"name" : "my_reference_brain",
"axes": [
{"name": "anterior-to-posterior", "type": "space", "unit": "micrometer"},
{"name": "inferior-to-superior", "type": "space", "unit": "micrometer"},
{"name": "left-to-right", "type": "space", "unit": "micrometer"}
]
}
], The transformation from one coordinate system to the other then makes explicit how instrument coordinates relate to anatomical coordinates. Would this work for you @dyf ? |
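For concreteness, the link between the two coordinate systems above would then be carried by a transformation entry, roughly like this (the field names follow my reading of the draft coordinate transformations proposal, and the permutation values are placeholders, so treat the details as illustrative):
"coordinateTransformations" : [
    {
        "type" : "affine",
        "affine" : [
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [1.0, 0.0, 0.0, 0.0]
        ],
        "input" : "instrument",
        "output" : "my_reference_brain"
    }
]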
Yup, the way @d-v-b described is how I had in mind to describe anatomical coordinates using the v0.5 spec - coming soon. |
Timely discussion as we just migrated the content of the wiki page to Read the Docs. |
@dyf wrote
Regarding In that vein, your point about |
Wow, amazed at the quick response, thanks all! How about a more domain-agnostic name? e.g. That said, I'm open to putting this into an extension (e.g. an anatomy or neuroanatomy extension). It's most important (to me) that downstream applications know what terms to expect and how they are defined, so either in the core spec or an easily discoverable extension would be fine. Pardon my ignorance - is there an extension mechanism already? |
In talking to the good folks of Get Your Brain Together (late one night), I proposed the ability to add a field, either as part of the existing coordinate system or perhaps via a dedicated subclass of existing systems, that is defined by an enum or ontology managed by a community outside of the NGFF process. The "name" fields in the above examples could almost be used for these purposes, but issues such as:
could benefit from having additional type information that the medical community can detect and "do the right thing" with. I think the question is whether this should be a specifically anatomical extension, or if there's a way to add this type information more generically. (Oops. Comments were added while I was writing. I think nonetheless that this holds. I don't think names alone are sufficient. Another possible option would be a prefixing mechanism that is community-specific, but this would likely become unwieldy.) |
@joshmoore I agree that names are insufficient for the reasons you say. Adding a new field that refers to an enum/ontology would be great. Different communities would be able to use different controlled terms by indicating what ontology they use. Validation may get tricky, but it's at least a start. I this |
I had a look at how the CF conventions handle this: https://cfconventions.org/Data/cf-conventions/cf-conventions-1.10/cf-conventions.html#standard-name. As I understand it, a physical quantity (like length) can be associated with a standard_name drawn from a controlled table.
I think the @joshmoore said:
Can you elaborate on these concerns? I don't really understand how namespace collision with another community would be a problem -- presumably all that matters is that community X can save and load their data. If community Y uses the same metadata names as X, why is that bad? Perhaps I'm missing something. And I didn't understand the second concern at all. |
I like adding a I can't speak for @joshmoore, but for me a primary goal is to remove undocumented assumptions so that tools can reliably interpret the orientation of images. If multiple subcommunities have different, disparately (or un-) documented conventions for how to interpret |
I was primarily referring to collisions either on the namespace prefix itself or on the key if there is no namespace prefix. What's bad is if community X and Y cannot tell if the data came from the other community.
I may have misunderstood @dyf's example, but if detection is based on the use of a unique value as the
In my opinion, |
I think this issue relates to #203, insofar as we are thinking about giving a "proper name" to a measurement / quantity (anatomical coordinates can be thought of as special lengths). Depending on how big the ontology is going to be, I wonder if we should consider requiring that it be versioned and stored inside the zarr metadata under the appropriate namespace (maybe under an |
It would be great to be able to use dicom headers for lossless conversion between ome-ngff and WSI or other dicom objects. A lot of the concepts mentioned in this thread already have well standardized representations in dicom and it would be a shame to devise something from scratch that duplicates the concepts in an incompatible way. I know a lot of people think dicom is complex and hard to use, but in my experience it's the data that's complex, so any standard will be complex and we might as well use the one we have. It may not be realistic to assume that everyone will use dicom, but the ability to losslessly transcode between formats seems like a very valuable goal. |
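For readers less familiar with DICOM, the patient-relative geometry concepts mentioned here are carried by attributes such as ImagePositionPatient, ImageOrientationPatient (direction cosines of image rows and columns in the LPS patient frame), and PixelSpacing. A rough sketch of how those values might look if transcoded into JSON-style metadata (the JSON framing and the numeric values are purely illustrative; only the attribute names come from DICOM):
{
    "ImagePositionPatient" : [-120.0, -110.5, 85.2],
    "ImageOrientationPatient" : [1.0, 0.0, 0.0, 0.0, 1.0, 0.0],
    "PixelSpacing" : [0.5, 0.5]
}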
Yes, we put axis units next to anatomical orientation in our metadata schema.
It's quite small. I would love to be able to include it here so that we can use it for validation. |
I wonder if
That might make things more self-describing for humans, but not for the programs that will need to understand the terms a priori. Further, if this is a general mechanism that we want to use for the interpretation of other fields in the future, I fear the burden of inlining will grow.
What do you mean by "include it here", @dyf? i.e. in this issue?
💯 for working towards interoperability, @pieper. Can you show a snippet of what you think that header information would look like?
Can you share an example of that, @dyf? I think working towards a collection of N similar JSON blurbs that encode the same information, which we can start referring to by name in this discussion, would be useful. |
Hmm.. these are a bit subjective, no? I feel like
I think the alternative to making a dataset self-describing is to use links, but links can break, either because the content at the URI has moved, or because internet connectivity is unreliable. I'd be a little uncomfortable requiring an internet connection for validating an ome-ngff with ontologies in it, at least if the ontology information is small enough to fit in JSON. Hopefully an example ontology document can clear up some of these issues. |
@d-v-b I want to include the terms that are in the issue (e.g. left-to-right, anterior-to-posterior). If you are looking for a relevant external source, see: https://openminds.ebrains.eu/v3/ --> controlledTerms -> anatomicalAxisOrientation. They describe each term in JSON-LD, e.g.: RAS, RAI. The difference is that I am asking for these 3-letter codes to be broken up so we can describe each axis clearly and independently (rather than assume that all arrays are 3-dimensional). We could very easily package these terms up into a JSON file and add them to the repository. |
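A minimal sketch of what such a packaged vocabulary file could look like, using only the terms already listed in this thread (the file layout and field names are hypothetical, not something that has been agreed on):
{
    "anatomicalOrientation" : {
        "version" : "0.1",
        "terms" : [
            "left-to-right", "right-to-left",
            "anterior-to-posterior", "posterior-to-anterior",
            "inferior-to-superior", "superior-to-inferior",
            "dorsal-to-ventral", "ventral-to-dorsal",
            "rostral-to-caudal", "caudal-to-rostral"
        ]
    }
}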
FYI, the DICOM coordinate systems and orientations may be relevant when something is patient-relative. The coordinate system is used when the origin of an image (its top left hand corner, TLHC) and the unit vectors defining the orientation of its rows and columns are to be described. DICOM is positive in the LPS (left-posterior-superior) directions. For 2D images (such as a mammogram) the row and column directions of an image are defined in a patient-relative sense categorically. These are described at:
Note that quadrupeds as well as bipeds are accounted for (theoretically). See also this (outdated) explanation: There are also (US) volume-relative and slide-relative coordinate systems: |
Do folks have an opinion on the format and location of the controlled vocabulary? I am planning to open a draft PR for this in the next week or so. |
It would be nice if the file format could store standard coded entry data as specified in the DICOM standard (and used in all clinical imaging applications). You can find all the intricate details in the DICOM standard, but in practice it is quite simple - an entry is specified by these 3 strings:
CodeValue (unique identifier for a term)
CodingSchemeDesignator (identifier for the authority that issued this code)
CodeMeaning (human-readable code description) |
+1 to @lassoan's suggestion of adopting the DICOM coded entry triplet. |
Did all the talk of DICOM derail this conversation? I would like to encode some microCT data that is currently in nrrd format and I'm curious if anyone has settled on a convention for storing the 'space-directions' and related metadata. From what I can see the ITK implementation only stores spacing and origin. If there is no existing convention I'll be glad to make up my own. |
I took the liberty to:
in #253 |
@thewtex thanks for working on that PR 👍 I do have a comment that I'm not sure whether to put here or there, but I'll start here and if people agree it's worth addressing we could try to adapt the PR. My issue is that the PR as currently written assumes that the imaging axes will always be along one of these defined anatomical directions (e.g. right-to-left or rostral-to-caudal). While this is often the case, it's also likely that the imaging will be rotated with respect to these axes, or even sheared for some scan types. This is where the concepts in nrrd of 'space' and 'space directions' are useful. What you have already documented would be really good for the definitions of the 'space' values. These situations come up frequently in medical imaging, for example MRs acquired on a tilted plane, or CTs with a shear due to gantry tilt. But they can also arise if microscopy images need to be spatially correlated with macroscopic anatomy. I'm afraid I find it hard to follow the discussion in this PR related to transforms, but my fear is that if we don't have a clear way of describing common image acquisition geometries then the logic around transforms will end up with extra complexities. |
As I understand, the ngff file will define multiple coordinate systems and transforms between them. ITK will specify "image" and "physical" coordinate systems and an affine transform between them. Each axis of the "physical" coordinate system will have an anatomical orientation.
Yes, I agree that's how it should work, but when I read the current PR text it doesn't come across that way. To me it says that if, for example, you had a coronal acquisition you would define that using the anatomical orientation labels on the array axes themselves. Maybe we can have worked out examples for common imaging scenarios so that it's clearer how these anatomical labels and transforms should be used together. To me, it's good for the information about anatomical mapping to be included in the transform, while the low level pixel container only talks about memory layout. I.e. at the zarr level you are only talking about rows, columns, slices, blocks, etc. But then the transform introduces the idea that you are mapping from these indices into a particular physical space (such as "LPS" or "RAS"). To say it another way, the concept of "inferior-to-superior" should exist within the transform to physical space, not as a label assigned to the z axis of a data array. |
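As a sketch of the kind of worked example being asked for, the array axes could stay purely index-like while the anatomical semantics live on the output coordinate system of an affine transform (the field names and the "array" axis type follow my reading of the draft coordinate systems / transformations proposal; the affine values, names, and shear are placeholders, not an agreed convention):
"coordinateSystems" : [
    {
        "name" : "array",
        "axes" : [
            {"name" : "k", "type" : "array"},
            {"name" : "j", "type" : "array"},
            {"name" : "i", "type" : "array"}
        ]
    },
    {
        "name" : "physical-LPS",
        "axes" : [
            {"name" : "right-to-left", "type" : "space", "unit" : "millimeter"},
            {"name" : "anterior-to-posterior", "type" : "space", "unit" : "millimeter"},
            {"name" : "inferior-to-superior", "type" : "space", "unit" : "millimeter"}
        ]
    }
],
"coordinateTransformations" : [
    {
        "type" : "affine",
        "affine" : [
            [0.0, 0.0, 0.5, -120.0],
            [0.1, 0.5, 0.0, -110.5],
            [2.0, 0.0, 0.0, 85.2]
        ],
        "input" : "array",
        "output" : "physical-LPS"
    }
]
In this sketch the off-diagonal term is what would capture something like a gantry-tilted CT, while the physical axes still carry an unambiguous anatomical meaning.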
Okay, sounds like the right direction. I'll take another look when all this stuff is merged. |
"Regarding I think that there are two issues here, one that is completely general and one that is domain specific. The general one is that the current proposal leaves it up to interpretation which axes correspond to what directions in space, and how those axes are oriented. There are some conventions in how data is laid out with respect to space, but there are many variations. 1. 'matrix' like order, with up/down followed by left/right and increasing values in "up/down" reflecting going downward on a 2d page, and increasing values of "left/right" indicating rightward. When dealing with Z, some people follow on the convention and put the third axis (toward/away) in front of the others, some people place it last. Many follow the right hand rule, but others don't. The other is 'axes' style where 'left/right' is followed by 'up/down', and then 'in/out' comes last. Most would follow the convention that higher values represent right, up and in. This specific one relates to domain specific schemas about what the interpretation of these physical orientations are, and the reasonable desire for communities to have shared interpretations of those physical orientations. I think allowing the spec to let users specify more restrictive schemas (in the form of a pointer to a json_schema) would allow for readers to be able to identify data that could be trivially and reliably cross compared. |
I agree, and fortunately there's nothing in the spec today that prevents this -- as far as I know, you are generally allowed to add extra metadata fields to all parts of OME-NGFF metadata. I don't think this is very common (I've never seen it) but it's certainly possible. |
The anatomical orientation proposed here is not a " Anatomical orientation is critical for domains that need it, and it should not be blocked for "it is not important to me" reasons. If
This spec is intended to work with the Coordinate systems and transformations spec, which addresses the issues mentioned. If there are gaps, please provide a proper review of this spec or RFC-5 that clearly lays out where the issues are in the spec, with demonstrations in implementations and use cases. |
Exactly. It could be useful to define "profiles" to specify rules about which of the optional fields are required for what domain. For example, the |
Yes, we are working on one collectively at the Allen! It should be done in the next week or so. |
My concern is that the fundamental problem you are trying to solve for neuroimaging is in fact common to all imaging modalities. It would be great if your solution could be generalized (e.g., was not overly specific). Perhaps this concern looks like I'm saying "it is not important for me", but IMO that's not a charitable summary, and to be clear I'm not trying to block anything, just posting feedback that hopefully leads to a refined solution. |
Two wrongs don't make a right! I think special-casing the HCS layout was a mistake, and a much better solution would be to define a "collection of images spec" that could work for HCS and any other imaging dataset that needs to express structured collections. |
The
The RFC is not specific to neuroimaging, and it addresses both bipeds and quadrupeds.
Constructive feedback is welcome.
I could not disagree more. The HCS (High Content Screening) layout is a valuable standard that addresses a real and pressing issue in the scientific community. It provides a concrete, well-defined solution for handling structured collections of images in high-throughput imaging experiments, which are increasingly common across many research domains. By standardizing how these datasets are stored and accessed, HCS not only ensures reproducibility but also facilitates interoperability between different tools and platforms used by researchers. To suggest that special-casing the HCS layout was a mistake overlooks the practical benefits that it has already delivered. While the idea of a more general "collection of images spec" might sound appealing, it remains a vague and hypothetical alternative that has not been implemented, tested, or proven to work in real-world scenarios. Obstructing a concrete, widely-adopted solution in favor of an abstract possibility would only hinder progress and slow down the development of tools that scientists need today. In practice, real-world problems require real-world solutions, and HCS fills that need effectively. If a more generalized solution is eventually developed, it can be integrated in the future, but dismissing HCS in its favor at this stage would be premature and counterproductive. |
For me the key question is whether bioimaging scientists who are not imaging bipeds and quadrupeds can use this spec to assign semantics to directions in their data. My read of the relevant RFC and the discussion here is that the answer to this question is "no", and it's up to specific communities to repeat the process here with their own key on the axes metadata.
I don't follow this logic. The alternative to special-casing the HCS layout is not "no HCS layout", it's "define a spec that allows users to define an HCS layout, but also non-HCS layouts". The practical benefits are greater in the latter case, because both HCS users and non-HCS users can express their data models. Maybe the word "mistake" is causing friction here? Apologies if that comes across as harsh -- I'm using the term to denote "technical debt we could have avoided with a bit more planning". Personally I think technical debt / mistakes are OK, and even inevitable in software; we should just endeavor to fix those mistakes. And there is an ongoing effort to define an image collections spec to address this issue. |
Thanks everyone for the ongoing engagement. The introduction of (more) domain specific information is certainly a cause for care and applying previously learned lessons. I'm concerned, however, that this issue will grow without bound. In an attempt to wrap it up, here are some thoughts on what's in-scope and out-of-scope from my point-of-view for @thewtex's RFC: In-scope
Out-of-scope
There were a few suggestions that I'd like to come back to, but elsewhere:
If any of the above commentators is interested in driving an RFC for one of those, please get in touch! Otherwise, I'll make sure they are included on the overall roadmap. Did I miss anything? Any final comments? Otherwise, I'd propose we close this issue, and charge ahead with RFC-4. |
I will make some comments here, since I do not know how to make comments on the RFC: I find rostrum a questionable term as well. Also, are there any fish experts here to comment on the anatomical orientations discussed? Much like insects, they are neither bipeds nor quadrupeds. |
@muratmaga good catch! I guess it was just a not-careful-enough rephrasing of the DICOM standard (https://dicom.nema.org/medical/dicom/current/output/chtml/part03/sect_C.7.6.2.html#sect_C.7.6.2.1.1). I would remove the interpretation part (don't try to match "front" and "back" colloquial terms) and add all the directions that the DICOM standard defines. @muratmaga, would these directions cover all the biology use cases that you are aware of?
The DICOM standard does exactly this for terms when it is not possible to choose a single terminology that is suitable for all use cases. Each term is specified with a triplet: CodeValue (unique identifier for a term), CodingSchemeDesignator (identifier for the authority that issued this code), CodeMeaning (human-readable code description). Authors of the DICOM standard decided that for anatomical orientation they don't need to rely on external terminologies and they specified all the terms inside the DICOM standard. I believe the same choice would work well in the NGFF standard, too. |
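To make the triplet concrete, one orientation term encoded this way might look roughly like the following JSON (the key placement and all values are placeholders for illustration; the actual CodeValue and CodingSchemeDesignator would come from whichever terminology the spec settles on):
{
    "anatomicalOrientation" : {
        "CodeValue" : "right-to-left",
        "CodingSchemeDesignator" : "99NGFF",
        "CodeMeaning" : "Oriented from the subject's right toward the subject's left"
    }
}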
Those look good for tetrapods, which I am mostly familiar with. Beyond that, all I can say is that multicellular life has a lot of weird body plans (think of seahorses or echinoderms; I am sure plants have their own anatomical planes), which may require new axis definitions eventually. But this set should cover most of what is frequently needed. |
The RFC 4 Anatomical Orientation values are expanded and the descriptions improved based on the discussion in ome#208 and the content in https://en.wikipedia.org/wiki/Anatomical_terms_of_location.
Thanks to all for the discussion. I emphatically agree with @joshmoore that identification of what is in-scope here and out-of-scope is important. While I personally find the ideas of profiles, extension systems, and a collections spec extremely exciting, they do not exist yet and should not block the clear definition of anatomical orientation. Regarding a prefix mechanism and specifying anatomical orientation in this way, I think further fleshing out is also required. Furthermore, there are trade-offs both in generality and specificity and in complexity and simplicity. Generality addresses more use cases, but it is more limited in the information it can convey. Specificity will convey more information, but it will be limited in the use cases it addresses. Nothing real and valuable will be simultaneously completely general and completely specific. And there are similar trade-offs in complexity and simplicity. Where to land should be use-case driven, and I do not think we are too far from hitting the sweet spot for anatomical orientation in RFC 4. In #267, I expanded the orientation values as suggested by @lassoan. This aligns well with DICOM and the categorization and overview in https://en.wikipedia.org/wiki/Anatomical_terms_of_location. I also improved the descriptions as suggested by @muratmaga. I think it is important to include the brief but expanded descriptions because we want this functionality to be accessible to more than expert anatomists. If folks know of additional anatomists that can provide reviews, their feedback would also strengthen the standard. This issue thread is meandering and difficult to follow, and I do agree that it should be closed. |
From what I can find, the values in RFC 4 also cover fish and insects. |
Thank you for the summary @thewtex. I fully agree. |
Thanks all! Merging the changes in #267 now and closing this issue. Further comments and updates are of course still welcome. |
The anatomical orientation of an array is a critical piece of metadata for downstream analysis, particularly for the increasingly common task of aligning acquired images to an atlas for anatomical quantification and standardized comparison to other data.
Currently the NGFF spec includes coordinate transformations, but the anatomical orientation of the sample once the transformation is applied is unspecified. As a result, tools simply make assumptions about orientation, which leads to wasted time and erroneous results. In systems with a fair amount of anatomical symmetry like the brain, it is impossible to retroactively inspect data to understand the orientation in which it was acquired. A place in the spec where we are explicit about anatomical orientation will allow acquisition and analysis tools to stop making assumptions.
I propose we add a field with a controlled vocabulary for anatomical orientation. Some prior art:
The ITK ecosystem uses 3-letter acronyms to describe anatomical orientation. For example RAS corresponds to right-anterior-superior. This works, however the acronyms are ambiguous. I personally continually have to look up if R is left-to-right or right-to-left.
Nifti's coordinate transforms are assumed to map data into RAS. This approach also works, however it relies on users and data generators being familiar with the Nifti spec and abiding by it.
The Brain Image Library asks for a more explicit definition of anatomical orientation. Submitters choose for each axis from a controlled vocabulary that resembles the following:
left-to-right
right-to-left
anterior-to-posterior
posterior-to-anterior
inferior-to-superior
superior-to-inferior
We have adopted this at the Allen Institute for Neural Dynamics in our data schema. We may consider adding dorsal-to-ventral, ventral-to-dorsal, rostral-to-caudal, and caudal-to-rostral to this vocabulary.
At the recent Get Your Brain Together hackathon hosted at the Allen Institute this was discussed at length.
Please consider adding an anatomicalOrientation field to axes metadata. Because this would be a controlled vocabulary, I recommend separating it from longName, which is uncontrolled (see #142). I am of course also open to this living elsewhere.
Should this have a default, I suggest it be RAS to be consistent with Nifti.
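A minimal sketch of what the proposed field could look like on axes metadata, reusing the controlled terms listed above (the key name anatomicalOrientation and its placement are the proposal under discussion, not part of any released spec):
"axes" : [
    {"name" : "z", "type" : "space", "unit" : "micrometer", "anatomicalOrientation" : "inferior-to-superior"},
    {"name" : "y", "type" : "space", "unit" : "micrometer", "anatomicalOrientation" : "posterior-to-anterior"},
    {"name" : "x", "type" : "space", "unit" : "micrometer", "anatomicalOrientation" : "left-to-right"}
]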