
How should tools process higher fidelity values than they can handle internally? #157

Open
c1rrus opened this issue Jul 6, 2022 · 17 comments


@c1rrus
Member

c1rrus commented Jul 6, 2022

Background

The discussions in #137 have raised an interesting question: What is the expected behaviour of tools that only support "traditional" 24bit sRGB colors, when they encounter color tokens whose values have wider gamuts or higher depths than the tool can handle internally?

I think we will encounter variations of the same question for other types too. For example, how should a tool that only understands pixel dimensions deal with values expressed in rem? Or, how should a tool that only supports setting a single font family when styling text deal with a token that provides an array of font values? I suspect this kind of question could arise for new types that get added in future versions of the spec too.

I therefore think it would be a good idea for our spec to define some generalised rules around what the expected behaviour should be for tools whenever they encounter tokens that have higher fidelity values than they are able to process or produce internally.

Requirements

I believe the overarching goal of our format is interoperability:

  • Any tool that can read tokens files must be able to successfully read any valid tokens file and interpret all relevant tokens as intended.
  • Any tool that can write tokens files must always write valid tokens files, so that they can be read by any other tool.

I intentionally say "relevant" tokens, as I believe it's perfectly acceptable for a tool to only operate on a subset of token types. For example, if we imagine a color palette generating tool like Leonardo added the ability to read tokens files, then I'd expect it to only surface color tokens to its users and just ignore any other kinds of tokens that might be in the file.

Therefore our spec needs to specify just enough for that to become possible. Any tool vendor should be able to read our spec and write code that can successfully read or write valid tokens files. Any human author should be able to read our spec and write or edit valid tokens files which will then work in any tool.

When we get down to the level of token values, I believe this means:

  • Any tool that can read tokens files MUST be able to accept any valid value of relevant tokens and map that value to whatever its internal representation is.

The question I'd like us to discuss in this issue is: What should tools do when their internal representation of token values has a lower fidelity than what is permitted in tokens files?

I don't believe "tool makers should improve their internal representation" is a viable option though. In my view, interoperability is worthless without widespread adoption. There are lots of existing tools out there that could benefit from being able to read/write tokens files (e.g. UI design tools like Figma, Xd, Sketch, etc.; DS documentation tools like zeroheight, InVision DSM, Supernova, etc.; Color palette generators like Leonardo, ColorBox, etc; and so on). There's a good chance they each have very different ways of representating values like colors, dimensions, fonts, etc. internally. It wouldn't be reasonable for our spec to necessitate them changing how their internals work and we can't assume, even if they wanted to do so, that it's quick or easy to achieve.

At the same time, I don't want our spec to become a lowest common denominator. That would reduce its usefulness to everyone. It might also lead to a proliferation of $extensions as a result of teams and tools working around limitations of the format. While I think having some $extensions in use in the wild is healthy and could highlight areas that future versions of the spec should focus on, having too many might lead to a situation where our standard format splinters into several incompatible de-facto standards, each supported by different subsets of tools. That would hurt interoperability and, IMHO, suck!

Use-cases

Very broadly, I think tools that do stuff with tokens files can be divided into 3 categories:

  • Tools that only write tokens files
  • Tools that only read tokens files
  • Tools that read and write tokens files

For the purpose of this issue, I think it's worth considering each case individually.

Write-only tools

If a tool internally only supports lower fidelity values than what can be expressed in the format, I don't see a problem. As long as every value those tools can produce can be accurately expressed in the DTCG format, I don't think it matters that there are other values that could be expressed in the format.

Furthermore, if our format mandates a particular syntax for the value, but the tool chooses to display or prompt for that value using an alternate syntax, that's not a problem. Converting between equivalent syntaxes is easy to implement, so I believe it's acceptable to expect tool makers to convert values where needed when writing them.

This is akin to expressing a temperature in °C or °F - 0°C and 32°F are the exact same temperature - they're just being expressed in different ways. Similarly, (if the sRGB color space is assumed) #ff7700, { red: 255, green: 119, blue: 0 } or { red: 1, green: 0.467, blue: 0 } are the exact same color, just expressed using different syntaxes. Converting between those is simple to do in code.
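For illustration, here is a minimal sketch of such syntax conversions in TypeScript (the helper names are hypothetical, not part of any spec or tool API, and assume 24-bit sRGB values):

```typescript
// Convert between equivalent sRGB syntaxes: hex string, 8-bit channels, normalized floats.
// Illustrative sketch only; names are hypothetical.

interface Rgb8 { red: number; green: number; blue: number }      // 0-255 integers
interface RgbFloat { red: number; green: number; blue: number }  // 0-1 floats

function hexToRgb8(hex: string): Rgb8 {
  const n = parseInt(hex.replace(/^#/, ""), 16);
  return { red: (n >> 16) & 0xff, green: (n >> 8) & 0xff, blue: n & 0xff };
}

function rgb8ToFloat(c: Rgb8): RgbFloat {
  return { red: c.red / 255, green: c.green / 255, blue: c.blue / 255 };
}

function floatToHex(c: RgbFloat): string {
  const to8 = (v: number) => Math.round(Math.min(Math.max(v, 0), 1) * 255);
  return "#" + [c.red, c.green, c.blue]
    .map((v) => to8(v).toString(16).padStart(2, "0"))
    .join("");
}

// "#ff7700" -> { red: 255, green: 119, blue: 0 } -> { red: 1, green: 0.4667, blue: 0 } -> "#ff7700"
```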

Color example

A UI design tool internally only supports "traditional" 24bit RGB colors in the sRGB color space. The user defines a color token in that tool - e.g. via a color picker, or by typing in RGB values - and then wants to export that to a .tokens file.

If our spec also supported other color spaces and/or color depths (note: the current 2nd editors' draft does not), that tool could still save out the exact color the user chose.

Dimension example

A modular scale generator only supports generating (viewport) pixel values. The user sets a base size and multiplier, and the tool generates a set of spacing values for them. The user wants to save out that spacing scale to a .tokens file.

The format supports px values, so those values can be accurately saved out. The fact that the format also supports rem values is irrelevant in this use-case.

Read-only tools

If a tool can only read tokens from a .tokens file, to then be used within that tool, but internally it only supports a lower fidelity than what can be expressed in the DTCG format, then the following situations may occur:

  • The tokens have values that map 1:1 to something the tool can represent internally
  • The tokens have values that exceed what the tool can represent internally

In the first case, there is no issue - the tool can just use the original value as is. In the second case, the tool should convert the original token value to the closest approximation that it can handle internally.

Theoretically the tool could reject the token too, but I think our spec should disallow that. If a file contains N number of relevant tokens, I think it's reasonable for all N tokens to be used by that tool. However, where the tool needs to do some kind of lossy conversion of the values, I think tools should be encouraged to notify the user. E.g. they might display a warning message or equivalent to indicate that approximations of some tokens values are being used.

Color example

A UI design tool internally only supports "traditional" 24bit RGB colors in the sRGB color space. The user loads a .tokens file that contains some color tokens whose values have been defined in a different color space and are out of gamut for sRGB.

In this case the tool should perform a lossy conversion of those colors to their nearest equivalents in the sRGB space that it supports. It's up to the tool maker to decide when that conversion takes place. It could happen as the file is loaded - all out of gamut colors are converted at that point and that's what the tool uses thereafter. Alternatively, if it makes sense for that tool's internal implementation, it could preserve the original value from the token file but convert it on the fly whenever that value is used or displayed in the tool.

Either way though, the tool should try to inform the user what has happened. For example, when the .tokens file is first loaded, it might display a message saying that tokens X, Y and Z had out-of-gamut values and they have been converted to their closest equivalents.
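As a naive sketch of such a conversion (per-channel clipping of linear sRGB values; real tools may well use more sophisticated, perceptually-aware gamut mapping, e.g. chroma reduction):

```typescript
// Naive gamut mapping: convert the color to linear sRGB and clip each channel to [0, 1].
// Illustrative only - assumes a conversion from the source space to linear sRGB already exists.

interface LinearRgb { r: number; g: number; b: number }

function isInSrgbGamut(c: LinearRgb, epsilon = 1e-6): boolean {
  return [c.r, c.g, c.b].every((v) => v >= -epsilon && v <= 1 + epsilon);
}

function clipToSrgbGamut(c: LinearRgb): LinearRgb {
  const clamp = (v: number) => Math.min(Math.max(v, 0), 1);
  return { r: clamp(c.r), g: clamp(c.g), b: clamp(c.b) };
}
```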

Dimension example

A UI design tool internally only supports (viewport) pixel values when setting dimensions (e.g. widths, heights, coordinates, border thicknesses, font sizes, etc.). The user loads a .tokens file that contains some dimension tokens whose values have been defined as rem values.

Since the tool lacks the concept of dimensions that are relative to an end-user's default font size settings, it needs to perform a lossy conversion of those rem values to appropriate, absolute pixel values. Since most web browsers' default font size is 16px, converting N rems to 16 * N px is likely to be an appropriate method to use. The token values are converted and thereafter the user only sees the corresponding px values in the tool. As with the color example, when that conversion happens is up to the tool maker.

Again, the tool should try to inform the user what has happened. For example, when the .tokens file is first loaded, it might display a message saying that tokens X, Y and Z used rem values and they have been converted to pixels by using an assumed default font size of 16px.
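A minimal sketch of that conversion, assuming a 16px root font size:

```typescript
// rem -> px conversion under an assumed root font size (16px by default).
// Lossy in the sense that the "relative to the user's settings" intent is discarded.
function remToPx(rem: number, assumedRootFontSizePx = 16): number {
  return rem * assumedRootFontSizePx;
}

// e.g. remToPx(1.5) === 24
```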

Read and write tools

This is a special case because such tools may be used to read tokens from a file, manipulate that set of tokens somehow and then write the result back out. The following edge cases therefore need to be considered:

Imagine a .tokens file contains design tokens A, B and C. These tokens have higher fidelity values than the tool can handle internally. Consider these use-cases:

  1. The tool loads that file, the user then adds a new token, D, via the tool and then saves out the full set of tokens (A, B, C and D).
  2. The tool loads that file, the user then deletes token C and saves out the remaining tokens (A and B).
  3. The tool loads that file, the user modifies the value of token A in the tool and then saves out the full set of tokens (A (with its new value), B and C).

Should the values of the tokens which the user has not touched (for example the tokens A, B and C in the first case) still have their original (high fidelity) values, or is it acceptable for them to have been replaced by their nearest lossy equivalents?

The latter is probably easier for tool vendors to handle. If they follow the rules I outlined in the "Read-only tools" section above, then they will have done a lossy conversion when importing the tokens' values into the tool's internal representation. When that is later saved out, the original high-fidelity value has been lost, so, as per the "Write-only tools" rules, those lossy values are saved out.

However, I think this is sub-optimal from the user's perspective. If they never edited a token in the tool, it feels wrong for some lossy conversion to have been applied to those tokens' values "behind the user's back". Furthermore, if we take the view that design tokens represent design decisions, one could argue that the tool is changing those decisions without the user's consent.

Btw, a related scenario is tools which only operate on certain token types. Imagine a .tokens file that contains design tokens X, Y and Z. X is of type color, Y is of type cubicBezier and Z is of type fontFamily. The user loads the token file into a tool for creating and editing animation timing functions. Only token Y is relevant to that tool, so it ignores tokens X and Z and never displays them to the user anywhere in its UI. Consider the same kinds of use cases as above - the user adds another cubicBezier token and saves it back to the tokens file, or the user edits the value of token Y and saves it back to the tokens file.

Should tokens X and Z still be present in the file? I'd argue yes. I think it would be confusing to users if those tokens just vanished when, from their perspective, all they were doing was using a specialised tool to tweak the cubicBezier tokens.

Therefore, I think tools that read and write tokens files need to have the following behaviour in addition to the read-only and write-only rules outlined in the previous sections:

  • When reading a .tokens file, the tool must keep a copy of all tokens (regardless of whether they are relevant or not to that tool) along with their original values (even if those are higher fidelity than what the tool can handle internally).
  • It can perform lossy conversions on relevant tokens as needed (as per the "Read-only tools" rules) and present only those converted values to users to use within the application. However, it should keep track of whether or not the token's value was modified by the user.
  • If the user deletes a token, the copy of the token's original value should also be discarded.
  • When writing a .tokens file, the tool must write out the full set of tokens. For each token:
    • If the value was modified, the new value set via the tool should be exported as per the "Write-only tools" rules
    • If the value was not modified, the original value should be exported
    • If it is a new token created in the tool, its value should be exported as per the "Write-only tools" rules

While this will add some complexity for tool makers, I believe this kind of functionality should be achievable without needing to drastically change the internals of the tool. The copies of unused tokens and original values could be kept "outside" of the tool's existing internals. The tool would just need to maintain some kind of mapping between its internal values and the corresponding "originals".
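As an illustrative sketch of that bookkeeping (names and types are hypothetical, not a prescribed implementation):

```typescript
// Keep the original (possibly higher-fidelity) token values alongside the tool's internal
// representation, and only export from the internal representation when the user edited the token.

type OriginalToken = { type: string; value: unknown };

class TokenStore<TInternal> {
  private originals = new Map<string, OriginalToken>();
  private internal = new Map<string, TInternal>();
  private modified = new Set<string>();

  load(name: string, original: OriginalToken, converted: TInternal): void {
    this.originals.set(name, original);
    this.internal.set(name, converted);
  }

  edit(name: string, newInternalValue: TInternal): void {
    this.internal.set(name, newInternalValue);
    this.modified.add(name); // from now on, export the tool's value
  }

  remove(name: string): void {
    this.originals.delete(name); // discard the original copy too
    this.internal.delete(name);
    this.modified.delete(name);
  }

  exportValue(name: string, toTokenValue: (v: TInternal) => unknown): unknown {
    // Unmodified tokens keep their original value; modified or newly created
    // tokens are exported from the tool's internal representation.
    if (!this.modified.has(name) && this.originals.has(name)) {
      return this.originals.get(name)!.value;
    }
    return toTokenValue(this.internal.get(name)!);
  }
}
```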

What do you all think?

@c1rrus c1rrus mentioned this issue Jul 6, 2022
@romainmenke
Contributor

romainmenke commented Jul 6, 2022

I generally agree with everything above and I think that only lowering fidelity of modified tokens is a good strategy.

For me this has a lot of overlap with "forward compatibility" and providing an "escape hatch" in case a tool needs to process something it wasn't designed for.

I think the principle behind this can be further abstracted and will overall improve the format.

  • explicit vs. implied information
  • destructuring values vs. micro syntaxes
  • raw vs. encoded data
  • no type overloading
  • ...

A hex color is composed of 3 or 4 numbers which have been encoded and then concatenated.
It also has an implied color space of sRGB.

Defining solid principles that make the points raised by @c1rrus easier to address, and revisiting past choices in the format with these in mind, will make the format better.


When V2 adds a fictional zz unit:

{
  "$value": "3zz"
}

vs.

{
  "$type": "length",
  "$value": {
    "number": 3,
    "unit": "zz" 
  }
}

This is exactly the same problem as hex vs wide gamut color spaces.

By avoiding micro syntaxes it is much easier for tools to determine how to handle a token that they weren't designed for.

  • they might understand $type: length and number: <some-number>
  • maybe they can downgrade, or provide a manual escape hatch?
  • a more clear error/warning message can be shown
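To illustrate (sketch only; the length shape and the zz unit are fictional, as in the example above):

```typescript
// Sketch of how a tool might handle a destructured value it only partially understands.

type LengthValue = { number: number; unit: string };

function lengthToPx(value: LengthValue): number | undefined {
  switch (value.unit) {
    case "px":
      return value.number;
    case "rem":
      return value.number * 16; // assumed root font size
    default:
      // Unknown unit: the tool still knows this is a length with a numeric part,
      // so it can warn precisely, offer a manual escape hatch, or skip gracefully.
      console.warn(`Unsupported unit "${value.unit}" - manual conversion needed`);
      return undefined;
  }
}
```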

@o-t-w

o-t-w commented Jul 7, 2022

Question from @kaelig :

Here's an example use-case, followed by a few questions about the possible flows:

Use-case

  1. import a token file
  2. edit a wide-gamut color
  3. save/re-export the token file

Questions

In the context of an sRGB-only tool...

  1. What should the editing flow/color picking UX be?
  2. When saving/exporting edited color tokens, what sort of prompts would you expect in relation to potential data-loss?
  3. What should the exported data look like (considering the user may have picked an sRGB color, overwriting a P3 color)?
  1. If the colors are downgraded to their closest approximation when imported, then color picking would work in Figma/XD/Sketch exactly as it does now.

  2/3. I know that the ideal scenario of design tokens is for designers and developers to equally be able to update/change/add tokens, including directly from a design tool, but for teams using modern color, it's not an advisable workflow yet. However, if a user does want to overwrite a P3 color with an sRGB color from a design tool, they should be free to do so. The tool should make it clear to the user on initial import that the colors were downgraded, with some kind of informational message displaying a list of the names of any affected tokens. I think that is sufficient and does not need to be reiterated on export.

@c1rrus I completely agree with everything you have written here, except:

Theoretically the tool could reject the token too, but I think our spec should disallow that.

Seeing as simply ignoring values the tool doesn't understand would make it easier for tools to implement, I think they should be given this option in order to aid adoption of the spec. People using modern color realize that it comes with some drawbacks for the time being.

@marcedwards

marcedwards commented Jul 7, 2022

Furthermore, if our format mandates a particular syntax for the value, but how the tool chooses to display or prompt for that value uses an alternate syntax, that's not a problem.

I completely agree.

I don't believe "tool makers should improve their internal representation" is a viable option though. In my view, interoperability is worthless without widespread adoption.

I agree, but it’s a bit tricky with colors. I think your suggestion for conversion and warnings is a good one (“tokens X, Y and Z had out of gamut value and they have been converted to their closest equivalents”).

For colors, there are many scenarios where color values could be altered in an unexpected or destructive way. Maybe a good way to approach the issue is to list all the possible permutations?

Tool has no color management

If the design tool has no color management, there’s really nothing the Design Tokens format can do to help. Raw values will likely be read in with no conversion. They’ll look as incorrect as other colors within the tool. On the positive side, the same color from a token file will likely match a color using the value on the canvas (they’ll both be displayed incorrectly).

Tools that support color management will likely also have this behaviour, if the document is set to be unmanaged.

Tool has low color depth

If the tool represents colors as 32bit ints, and the source values are floats, rounding will likely occur. Warning is a good strategy. Please note that some tools store individual colors at a higher depth than their actual canvas and renderer.

This may only be a minor issue, especially if the original colors were chosen as HEX/32bit int — the conversion from int to float and back to int should give the same value.
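A quick sketch of why that round trip is lossless for 8-bit channels:

```typescript
// Round-tripping an 8-bit channel through a float and back is lossless,
// because each of the 256 possible integer values maps to a distinct float.
const original = 0x77;                           // 119
const asFloat = original / 255;                  // 0.4666...
const roundTripped = Math.round(asFloat * 255);  // 119 again
console.assert(roundTripped === original);
```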

Document color space is smaller gamut

Even if the tool supports wide gamut colors, the current document may be set to sRGB. In this instance, out of gamut colors may be clipped, resulting in vibrant colors looking duller. For example, a very vivid Display P3 red token being used in a document set to sRGB would result in the red looking less vibrant.

Document color space is wider gamut

If the document color space is wider gamut than the token, conversion and some rounding is likely to occur. This isn’t really an issue when working with floats (the values will change, but the appearance should be maintained).


Another consideration is that the Design Tokens format proposal, CSS, iOS and other color representations have per-color profiles, but almost all design tools have per-document profiles, if they’re color managed at all. Even in scenarios where everything is behaving, a Design Tokens file with mixed color space colors will almost certainly need some kind of destructive conversion.

This is a very long-winded way of saying that I think color space and color depth conversions are likely, and in many cases, unavoidable. The actual format chosen as the representation in the Design Tokens file probably can’t change that.

It may sound like I’m being negative, but I’m not — if the format does a good job of describing colors and the space they’re in, that’s awesome! If there’s some accompanying policies and suggestions, that’s also great!

@DominikDeak

DominikDeak commented Jul 8, 2022

This doesn't have to be super complicated, but one thing is certain: tagging colour data with a colour space is an absolute must - especially if the goal is making a universal interchange format. Otherwise, colour data will be meaningless and subject to open interpretation. Here's my recommendation with respect to colour:

  • Colour Model: Always RGB channels, with an optional opacity channel.
  • Data Type: IEEE 754 floating point representation, support at least double. That will cover most bases in terms of precision, as double has a 53 bit mantissa.
  • Data Range: Colour and opacity channels should have a normalised range between [0, 1]. Theoretically, extended ranges are possible for colour (i.e. values outside the normalised range [0, 1], see extended sRGB as an example), but in the interest of simplicity, we should keep everything clamped to [0, 1]. If you need extended values, then use a colour space with a wider gamut, which leads me to the next point...
  • Colour Space: Finally, colours need to be tagged with a colour space. This is perhaps the most important metadata, because colour spaces make colours unambiguous, and they will appear consistently the same everywhere. Support for a limited subset of colour spaces should be enough; I propose support for sRGB, DisplayP3, and ACEScg. If colour space info is missing, then sRGB is implied, always. We don't have to embed an entire ICC colour profile, supplying the colour space as a simple enumeration value is enough (see the sketch after this list).
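For illustration only - this is a hypothetical shape, not syntax defined by the DTCG spec - such a value could look something like:

```typescript
// Hypothetical shape for a colour value following the recommendation above.

type ColourSpace = "srgb" | "display-p3" | "acescg";

interface ColourValue {
  colorSpace?: ColourSpace;  // omitted => sRGB implied
  red: number;               // 0-1, double precision
  green: number;             // 0-1
  blue: number;              // 0-1
  opacity?: number;          // 0-1, optional
}

const vividRed: ColourValue = { colorSpace: "display-p3", red: 1, green: 0.1, blue: 0.05 };
```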

@romainmenke
Contributor

@DominikDeak I think your comment might have been intended for #137 :)

@svgeesus

svgeesus commented Jul 8, 2022

This doesn't have to be super complicated, but one thing is certain: tagging colour data with a colour space is an absolute must - especially if the goal is making a universal interchange format. Otherwise, colour data will be meaningless and subject to open interpretation. Here's my recommendation with respect to colour:

Absolutely. That is a bare minimum requirement.

Support for a limited subset of colour spaces should be enough; I propose support for sRGB, DisplayP3, and ACEScg. If colour space info is missing, then sRGB is implied, always. We don't have to embed an entire ICC colour profile, supplying the colour space as a simple enumeration value is enough.

Agree too that a simple enumeration is sufficient, and more interoperable. Suggesting ACEScg for interchange is interesting, care to say a bit more about it? (I know what it is, and implemented it in color.js; I mean why that particular space if you are going for a very small enumeration of allowed spaces.)

@DominikDeak

My reasoning for picking ACEScg is that it is used by high-end production and rendering, and is specifically tailored for CGI effects and image compositing applications.

ACEScg has a colour gamut that pretty much covers almost every other gamut in existence. The benefit here is future support for display technologies that will exceed Display P3 capability. In fact, some OLED displays available today already do that, and the newer quantum dot OLEDs are expected to approach (or even match) Rec.2020 capability. Who knows what other marvels we’ll see 10 years down the track?

I think having the foresight for supporting widest available colour spaces (either ACEScg, or at least something equivalent) will be a benefit for future content creators. It’s all about establishing plenty of headroom early on in the standard, and not having to push revisions/amendments later (which I suspect might suffer from fragmented adoption).

@kaelig kaelig added Needs Feedback/Review Open for feedback from the community, and may require a review from editors. dtcg-format All issues related to the format specification. dtcg-color All issues related to the color module specification. labels Jul 13, 2022
@kevinmpowell
Contributor

I know that the ideal scenario of design tokens is for designers and developers to equally be able to update/change/add tokens, including directly from a design tool, but for teams using modern color, it's not an advisable workflow yet.

☝️ @o-t-w Can you elaborate on this statement? As a developer with almost no experience working with color spaces, I'm unclear on what the workflow involves. Does it involve a specific design tool? Or hand-editing the values to make sure they're represented accurately?

@kevinmpowell
Contributor

@c1rrus I agree with your outline and I think those concepts will hold, regardless of how broad or narrow our token types are. I believe our aim, regarding specific types, is to start with the most widely adoptable (if narrow) types we can, and broaden types as needed (color) over time in future revisions of the spec.

To help redirect some of the comments in this issue, I believe many of the color-specific topics would be better discussed on #137.

@romainmenke
Contributor

@kevinmpowell Can you clarify this :

I believe our aim, regarding specific types, is to start with the most widely adoptable (if narrow) types we can, and broaden types as needed (color) over time in future revisions of the spec.

Narrow vs. broad types is not the same as values with high/low fidelity.
Did you mean values?

@NateBaldwinDesign

@DominikDeak

ACEScg has a colour gamut that pretty much covers almost every other gamut in existence.

This comment, plus the desire to have a flexible system for future color spaces makes me wonder: should we not just support CIEXYZ in the spec? Reason being, XYZ is the lowest common denominator in color conversion formulas and actually is the mapping of the visible spectrum. If a token set has XYZ values for their color, conversions into almost any other color space are one or two conversions away.

Admittedly, this may not be useful in practice (XYZ colors can't be used out of the box), but it's worth surfacing the question to the group.
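For illustration, linear sRGB is a single matrix multiplication away from XYZ (a sketch; a full pipeline also has to handle transfer functions and white points):

```typescript
// Linear sRGB (D65) -> CIE XYZ, using the standard rounded 3x3 matrix.
function linearSrgbToXyz([r, g, b]: [number, number, number]): [number, number, number] {
  return [
    0.4124 * r + 0.3576 * g + 0.1805 * b,
    0.2126 * r + 0.7152 * g + 0.0722 * b,
    0.0193 * r + 0.1192 * g + 0.9505 * b,
  ];
}
```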

@kevinmpowell
Contributor

@romainmenke
narrow type example: Color - supports HEX representation of colors
broad type example: Color - supports HEX + multiple other representations of colors (including those in color spaces HEX can't support)

@romainmenke
Contributor

In that case I disagree with your statement :)

This is a new specification and in my opinion there are no good reasons to choose a type format that is obsolete.

But this issue originally focussed on high vs. low fidelity values, regardless of the type format :)

@kevinmpowell
Contributor

@romainmenke by what definition is HEX notation obsolete?

@romainmenke
Contributor

romainmenke commented Jul 25, 2022

@kevinmpowell That was not even up for debate (as far as I understood).
There is broad consensus that hex is a legacy format.

People with a web background might be more numerous and for them it appears as something new. But the web was/is actually lagging behind native here. Native contexts have had wide gamut colors for a lot longer.

The question mainly was if this specification should choose an obsolete format to aid adoption.

My opinion is that it should not.

@DominikDeak

@romainmenke by what definition is HEX notation obsolete?

Obsolete is probably not the term I would use, but I would consider hex representation problematic in terms of precision. Conventional hex notation, as used by web standards, only supports 8 bits per channel. This is fine for small-gamut colour spaces such as sRGB (which also uses non-linear transfer curves for representing values). However, 8 bits per channel is completely inadequate for wide gamut colour spaces (Display P3, and especially ACEScg with linear values), as this would lead to colour quantisation artefacts (colour banding).

Making hex notation available will create an unintended scenario where users unwittingly specify low-precision colours for wide gamut colour spaces. I think the goal here should be about minimising accidental quality loss and limiting data representation to IEEE double precision.
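To make the precision point concrete (a rough sketch):

```typescript
// With 8 bits per channel there are only 256 representable levels per channel,
// i.e. a step size of 1/255 ≈ 0.0039 in a normalized [0, 1] range. Wider gamuts
// spread those same 256 steps over a larger range of perceivable colours,
// which makes banding more likely.
const levels = 2 ** 8;             // 256
const stepSize = 1 / (levels - 1); // ≈ 0.00392
console.log(stepSize);
```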

@kevinmpowell kevinmpowell added this to the Next Draft Priority milestone Oct 3, 2022
@kevinmpowell kevinmpowell removed Needs Feedback/Review Open for feedback from the community, and may require a review from editors. dtcg-format All issues related to the format specification. labels Oct 3, 2022
@kevinmpowell kevinmpowell added Color Type Enhancements and removed dtcg-color All issues related to the color module specification. labels Oct 17, 2022
@danosek

danosek commented May 30, 2024

What is the expected behaviour of tools that only support "traditional" 24bit sRGB colors, when they encounter color tokens whose values have wider gamuts or higher depths than the tool can handle internally?

I think it is up to the tool to deal with it:

  • Crash (the worst scenario)
  • Warning
  • Prompt that colors have to be converted - guide the user: explain what is going on, what effects the conversion will have, and why
