How should tools process higher fidelity values than they can handle internally? #157
Comments
I generally agree with everything above and I think that only lowering the fidelity of modified tokens is a good strategy. For me this has a lot of overlap with "forward compatibility" and providing an "escape hatch" in case a tool needs to process something it wasn't designed for. I think the principle behind this can be further abstracted and will overall improve the format.
A hex color is composed of 3 or 4 numbers which have been encoded and then concatenated. Defining solid principles that make the points raised by @c1rrus easier to apply, and revisiting past choices in the format with those principles in mind, will make the format better. Imagine a future V2 adding a fictional unit:

```json
{
  "$value": "3zz"
}
```

vs.

```json
{
  "$type": "length",
  "$value": {
    "number": 3,
    "unit": "zz"
  }
}
```

This is exactly the same problem as hex vs. wide gamut color spaces. By avoiding micro syntaxes it is much easier for tools to determine how to handle a token that they weren't designed for.
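As a rough TypeScript sketch of that point (the `LengthValue` shape and the list of known units below are hypothetical, not from the spec), compare handling a structured value with handling a micro syntax:

```ts
// Hypothetical structured value shape, for illustration only.
type LengthValue = { number: number; unit: string };

// With a structured value, a tool that has never seen the "zz" unit can still
// separate the magnitude from the unit and decide what to do next
// (warn, fall back, skip) without any string parsing.
function describeLength(value: LengthValue): string {
  const knownUnits = new Set(["px", "rem"]);
  if (!knownUnits.has(value.unit)) {
    return `unsupported unit "${value.unit}" (magnitude ${value.number})`;
  }
  return `${value.number}${value.unit}`;
}

// With a micro syntax like "3zz", every tool needs its own parser (and its own
// rules for unknown suffixes) before it can even report a useful error.
function parseLengthMicroSyntax(raw: string): LengthValue | null {
  const match = /^(-?\d*\.?\d+)([a-z%]+)$/i.exec(raw);
  return match ? { number: Number(match[1]), unit: match[2] } : null;
}
```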
Question from @kaelig:
@c1rrus I completely agree with everything you have written here, except:
I completely agree.
I agree, but it’s a bit tricky with colors. I think your suggestion for conversion and warnings is a good one (“tokens X, Y and Z had out-of-gamut values and they have been converted to their closest equivalents”). For colors, there are many scenarios where color values could be altered in an unexpected or destructive way. Maybe a good way to approach the issue is to list all the possible permutations?

Tool has no color management
If the design tool has no color management, there’s really nothing the Design Tokens format can do to help. Raw values will likely be read in with no conversion. They’ll look as incorrect as other colors within the tool. On the positive side, the same color from a token file will likely match a color using the value on the canvas (they’ll both be displayed incorrectly). Tools that support color management will likely also have this behaviour if the document is set to be unmanaged.

Tool has low color depth
If the tool represents colors as 32-bit ints and the source values are floats, rounding will likely occur. Warning is a good strategy. Please note that some tools store individual colors at a higher depth than their actual canvas and renderer. This may only be a minor issue, especially if the original colors were chosen as HEX/32-bit int — the conversion from int to float and back to int should give the same value.

Document color space is smaller gamut
Even if the tool supports wide gamut colors, the current document may be set to sRGB. In this instance, out-of-gamut colors may be clipped, resulting in vibrant colors looking duller. For example, a very vivid Display P3 red token being used in a document set to sRGB would result in the red looking less vibrant.

Document color space is wider gamut
If the document color space is wider gamut than the token, conversion and some rounding are likely to occur. This isn’t really an issue when working with floats (the values will change, but the appearance should be maintained).

Another consideration is that the Design Tokens format proposal, CSS, iOS and other color representations have per-color profiles, but almost all design tools have per-document profiles, if they’re color managed at all. Even in scenarios where everything is behaving, a Design Tokens file with mixed color space colors will almost certainly need some kind of destructive conversion.

This is a very long-winded way of saying that I think color space and color depth conversions are likely, and in many cases, unavoidable. The actual format chosen as the representation in the Design Tokens file probably can’t change that. It may sound like I’m being negative, but I’m not — if the format does a good job of describing colors and the space they’re in, that’s awesome! If there are some accompanying policies and suggestions, that’s also great!
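To make the int/float round-trip point above concrete, here's a small TypeScript sketch (assuming 8-bit integer channels and 0–1 float channels, which is only one possible internal representation):

```ts
// Converting an 8-bit channel to a float and back is lossless, which is why
// tokens originally authored as hex/32-bit values survive float-based tools.
const channelToFloat = (c8: number): number => c8 / 255;
const channelToInt = (f: number): number => Math.round(f * 255);

// Every 8-bit value survives the round trip...
for (let c = 0; c <= 255; c++) {
  if (channelToInt(channelToFloat(c)) !== c) {
    throw new Error(`lossy round trip for ${c}`);
  }
}

// ...but a float that did not originate from an 8-bit value generally does not:
const original = 0.3333;
const roundTripped = channelToFloat(channelToInt(original)); // 85 / 255 ≈ 0.33333, not 0.3333
```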
This doesn't have to be super complicated, but one thing is certain: tagging colour data with a colour space is an absolute must, especially if the goal is making a universal interchange format. Otherwise, colour data will be meaningless and subject to open interpretation. Here's my recommendation with respect to colour:
@DominikDeak I think your comment might have been intended for #137 :)
Absolutely. That is a bare minimum requirement.
Agree too that a simple enumeration is sufficient, and more interoperable. Suggesting ACEScg for interchange is interesting, care to say a bit more about it? (I know what it is, and implemented it in color.js; I mean why that particular space if you are going for a very small enumeration of allowed spaces.)
My reasoning for picking ACEScg is that it is used by high-end production and rendering, and is specifically tailored for CGI effects and image compositing applications. ACEScg has a colour gamut that covers almost every other gamut in existence. The benefit here is future support for display technologies that will exceed Display P3 capability. In fact, some OLED displays available today already do that, and the newer quantum dot OLEDs are expected to approach (or even match) Rec.2020 capability. Who knows what other marvels we’ll see 10 years down the track? I think having the foresight to support the widest available colour spaces (either ACEScg, or at least something equivalent) will be a benefit for future content creators. It’s all about establishing plenty of headroom early on in the standard, and not having to push revisions/amendments later (which I suspect might suffer from fragmented adoption).
☝️ @o-t-w Can you elaborate on this statement? As a developer with almost no experience working with color spaces, I'm unclear on what the workflow involves. Does it involve a specific design tool? Or hand-editing the values to make sure they're represented accurately?
@c1rrus I agree with your outline and I think those concepts will hold, regardless of how broad or narrow our token types are. I believe our aim, regarding specific types, is to start with the most widely adoptable (if narrow) types we can, and broaden types as needed (color) over time in future revisions of the spec. To help redirect some of the comments in this issue, I believe many of the color-specific topics would be better discussed on #137.
@kevinmpowell Can you clarify this:
Narrow vs. broad types is not the same as values with high/low fidelity. |
This comment, plus the desire to have a flexible system for future color spaces, makes me wonder: should we not just support CIEXYZ in the spec? Reason being, XYZ is the lowest common denominator in color conversion formulas and is effectively a mapping of the visible spectrum. If a token set has XYZ values for its colors, conversions into almost any other color space are one or two conversions away. Admittedly this may not be useful in practice (XYZ colors can't be used out of the box), but it's worth surfacing the question to the group.
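To give a sense of what "one or two conversions away" means in practice, here is a minimal TypeScript sketch of XYZ (D65) to sRGB using the commonly published matrix; a real tool would use a colour library and make a deliberate choice about out-of-gamut values rather than simply clamping:

```ts
// CIE XYZ (D65) -> sRGB, as a rough sketch. Coefficients are the widely
// published sRGB matrix; out-of-gamut results are naively clamped here.
function xyzToSrgb([x, y, z]: [number, number, number]): [number, number, number] {
  // XYZ -> linear-light sRGB
  const rLin = 3.2406 * x - 1.5372 * y - 0.4986 * z;
  const gLin = -0.9689 * x + 1.8758 * y + 0.0415 * z;
  const bLin = 0.0557 * x - 0.204 * y + 1.057 * z;

  // clamp, then apply the sRGB transfer curve (linear -> gamma encoded)
  const clamp = (c: number) => Math.min(1, Math.max(0, c));
  const encode = (c: number) =>
    c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;

  return [encode(clamp(rLin)), encode(clamp(gLin)), encode(clamp(bLin))];
}
```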
@romainmenke
In that case I disagree with your statement :) This is a new specification and in my opinion there are no good reasons to choose a type format that is obsolete. But this issue originally focussed on high vs. low fidelity values, regardless of the type format :)
@romainmenke by what definition is HEX notation obsolete?
@kevinmpowell That was not even up for debate (as far as I understood). People with a web background might be more numerous, and to them it appears as something new. But the web was/is actually lagging behind native here. Native contexts have had wide gamut colors for a lot longer. The question mainly was whether this specification should choose an obsolete format to aid adoption. My opinion is that it should not.
Obsolete is probably not the term I would use, but I would consider hex representation problematic in terms of precision. Conventional hex notation, as used by web standards, only supports 8 bits per channel. This is fine for small gamut colour spaces such as sRGB (which also uses a non-linear transfer curve for representing values). However, 8 bits per channel is completely inadequate for wide gamut colour spaces (Display P3, and especially ACEScg with linear values), as this would lead to colour quantisation artefacts (colour banding). Making hex notation available will create an unintended scenario where users unwittingly specify low-precision colours for wide gamut colour spaces. I think the goal here should be minimising accidental quality loss and limiting data representation to IEEE double precision.
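A rough TypeScript sketch of the banding concern (the 0–0.05 linear gradient and the sample count below are arbitrary illustration values):

```ts
// Quantise a smooth, dark, linear-light gradient to 8 bits per channel and
// count how many distinct levels survive.
const quantise8bit = (linear: number): number => Math.round(linear * 255) / 255;

// 1000 evenly spaced linear values between 0 and 0.05 (a dark gradient)...
const samples = Array.from({ length: 1000 }, (_, i) => (i / 999) * 0.05);
const distinctLevels = new Set(samples.map(quantise8bit)).size;

// ...collapse to just 14 distinct 8-bit levels: visible banding. A float (or
// higher bit depth) representation keeps all 1000 values distinct.
console.log(distinctLevels); // 14
```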
I think it is up to a tool to deal with it
Background
The discussions in #137 have raised an interesting question: What is the expected behaviour of tools that only support "traditional" 24bit sRGB colors, when they encounter color tokens whose values have wider gamuts or higher depths than the tool can handle internally?
I think we will encounter variations of the same question for other types too. For example, how should a tool that only understands pixel dimensions deal with values expressed in `rem`? Or, how should a tool that only supports setting a single font family when styling text deal with a token that provides an array of font values? I suspect this kind of question could arise for new types that get added in future versions of the spec too.

I therefore think it would be a good idea for our spec to define some generalised rules around what the expected behaviour should be for tools whenever they encounter tokens that have higher fidelity values than they are able to process or produce internally.
Requirements
I believe the overarching goal of our format is interoperability:
I intentionally say "relevant" tokens, as I believe it's perfectly acceptable for a tool to only operate on a subset of token types. For example, if we imagine a color palette generating tool like Leonardo added the ability to read tokens files, then I'd expect it to only surface color tokens to its users and just ignore any other kinds of tokens that might be in the file.
Therefore our spec needs to specify just enough for that to become possible. Any tool vendor should be able to read our spec and write code that can successfully read or write valid tokens files. Any human author should be able to read our spec and write or edit valid tokens files which will then work in any tool.
When we get down to the level of token values, I believe this means:
The question I'd like us to discuss in this issue is: What should tools do when their internal representation of token values has a lower fidelity than what is permitted in tokens files?
I don't believe "tool makers should improve their internal representation" is a viable option though. In my view, interoperability is worthless without widespread adoption. There are lots of existing tools out there that could benefit from being able to read/write tokens files (e.g. UI design tools like Figma, Xd, Sketch, etc.; DS documentation tools like zeroheight, InVision DSM, Supernova, etc.; color palette generators like Leonardo, ColorBox, etc.; and so on). There's a good chance they each have very different ways of representing values like colors, dimensions, fonts, etc. internally. It wouldn't be reasonable for our spec to necessitate them changing how their internals work, and we can't assume, even if they wanted to do so, that it's quick or easy to achieve.
At the same time, I don't want our spec to become a lowest common denominator. That would reduce its usefulness to everyone. It might also lead to a proliferation of `$extensions` as a result of teams and tools working around limitations of the format. While I think having some `$extensions` being used in the wild is healthy and could highlight areas future versions of the spec should focus on, having too many might lead to a situation where our standard format splinters into several, incompatible de-facto standards, each supported by different subsets of tools. That would hurt interoperability and, IMHO, suck!

Use-cases
Very broadly, I think tools that do stuff with tokens files can be divided into 3 categories:

- Write-only tools, which only produce tokens files
- Read-only tools, which only consume tokens files
- Read and write tools, which do both
For the purpose of this issue, I think it's worth considering each case individually.
Write-only tools
If a tool internally only supports lower fidelity values than what can be expressed in the format, I don't see a problem. As long as every value those tools can produce can be accurately expressed in the DTCG format, I don't think it matters that there are other values that could be expressed in the format.
Furthermore, if our format mandates a particular syntax for the value, but how the tool chooses to display or prompt for that value uses an alternate syntax, that's not a problem. Converting between equivalent syntaxes is easy to implement and so I do believe it's acceptable to expect tool makers to convert values where needed when writing them.
This is akin to expressing a temperature in ºC or ºF - 0ºC and 32ºF are the exact same temperature - they're just being expressed in different ways. Similarly (if the sRGB color space is assumed), `#ff7f00`, `{ red: 255, green: 127, blue: 0 }` or `{ red: 1, green: 0.498, blue: 0 }` are the exact same color, just expressed using different syntaxes. Converting between those is simple to do in code.

Color example
A UI design tool internally only supports "traditional" 24bit RGB colors in the sRGB color space. The user defines a color token in that tool - e.g. via a color picker, or by typing in RGB values - and then wants to export that to a `.tokens` file.

If our spec also supported other color spaces and/or color depths (note: the current 2nd editors draft does not), that tool could still save out the exact color the user chose.
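As a sketch of that kind of export (the `Rgb8` shape is an assumed internal representation, and the output shapes are illustrative rather than the DTCG-specified syntax), the conversions involved are trivial:

```ts
// Assumed internal representation: 8-bit integer channels.
type Rgb8 = { red: number; green: number; blue: number };

// Emit hex notation.
function toHex({ red, green, blue }: Rgb8): string {
  const h = (c: number) => c.toString(16).padStart(2, "0");
  return `#${h(red)}${h(green)}${h(blue)}`;
}

// Emit 0-1 float channels instead.
function toUnitFloats({ red, green, blue }: Rgb8) {
  return { red: red / 255, green: green / 255, blue: blue / 255 };
}

// { red: 255, green: 127, blue: 0 } -> "#ff7f00" or { red: 1, green: 0.498..., blue: 0 }
```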
Dimension example
A modular scale generator only supports generating (viewport) pixel values. The user sets a base size and multiplier and the tool generates a set of spacing values for them. The user wants to save out that spacing scale to a `.tokens` file.

The format supports `px` values, so those values can be accurately saved out. The fact that the format also supports `rem` values is irrelevant in this use-case.

Read-only tools
If a tool can only read tokens from a `.tokens` file, to then be used within that tool, but internally it only supports a lower fidelity than what can be expressed in the DTCG format, then the following situations may occur:

- The token's value can be represented exactly by the tool's internal format.
- The token's value has a higher fidelity than the tool's internal format can represent.

In the first case, there is no issue - the tool can just use the original value as is. In the second case, the tool should convert the original token value to the closest approximation that it can handle internally.
Theoretically the tool could reject the token too, but I think our spec should disallow that. If a file contains N relevant tokens, I think it's reasonable for all N tokens to be used by that tool. However, where the tool needs to do some kind of lossy conversion of the values, I think tools should be encouraged to notify the user. E.g. they might display a warning message or equivalent to indicate that approximations of some tokens' values are being used.
Color example
A UI design tool internally only supports "traditional" 24bit RGB colors in the sRGB color space. The user loads a `.tokens` file that contains some color tokens whose values have been defined in a different color space and are out of gamut for sRGB.

In this case the tool should perform a lossy conversion of those colors to their nearest equivalents in the sRGB space that it supports. It's up to the tool maker to decide when that conversion takes place. It could happen as the file is loaded - all out-of-gamut colors are converted at that point and that's what the tool uses thereafter. Alternatively, if it makes sense for that tool's internal implementation, it could preserve the original value from the token file but convert it on the fly whenever that value is used or displayed in the tool.
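A minimal TypeScript sketch of that lossy conversion (per-channel clipping is just one crude strategy, and it assumes the value has already been converted into sRGB coordinates, where out-of-gamut shows up as channels outside 0–1):

```ts
type SrgbFloat = { red: number; green: number; blue: number }; // 0-1 per channel

// Clip out-of-range channels into sRGB and report whether anything changed,
// so the tool can warn the user about that token.
function clipToSrgb(color: SrgbFloat): { color: SrgbFloat; wasClipped: boolean } {
  const clamp = (c: number) => Math.min(1, Math.max(0, c));
  const clipped = { red: clamp(color.red), green: clamp(color.green), blue: clamp(color.blue) };
  const wasClipped =
    clipped.red !== color.red || clipped.green !== color.green || clipped.blue !== color.blue;
  return { color: clipped, wasClipped };
}

// A real tool would first convert from the token's declared color space
// (e.g. Display P3) into sRGB, then collect the names of clipped tokens
// into a single "tokens X, Y and Z were out of gamut" style warning.
```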
Either way though, the tool should try to inform the user what has happened. For example, when the `.tokens` file is first loaded, it might display a message saying that tokens X, Y and Z had out-of-gamut values and they have been converted to their closest equivalents.

Dimension example
A UI design tool internally only supports (viewport) pixel values when setting dimensions (e.g. widths, heights, coordinates, border thicknesses, font sizes, etc.). The user loads a `.tokens` file that contains some dimension tokens whose values have been defined as `rem` values.

Since the tool lacks the concept of dimensions that are relative to an end-user's default font size settings, it needs to perform a lossy conversion of those rem values to appropriate, absolute pixel values. Since most web browsers' default font size is 16px, converting N rem to 16 * N px is likely to be an appropriate method to use. The token values are converted and thereafter the user only sees the corresponding px values in the tool. As with the color example, when that conversion happens is up to the tool maker.
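A small TypeScript sketch of that fallback (the 16px base is an assumption about the target environment, not something the tokens file guarantees):

```ts
// Assumed default root font size, matching the common browser default.
const ASSUMED_ROOT_FONT_SIZE_PX = 16;

// Lossy conversion from a relative rem value to an absolute pixel value.
function remToPx(rem: number, rootPx: number = ASSUMED_ROOT_FONT_SIZE_PX): number {
  return rem * rootPx;
}

// e.g. a 1.5rem dimension token becomes 24px inside a px-only tool, and the
// tool should surface a warning naming the tokens it converted this way.
console.log(remToPx(1.5)); // 24
```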
Again, the tool should try to inform the user what has happened. For example, when the `.tokens` file is first loaded, it might display a message saying that tokens X, Y and Z used rem values and they have been converted to pixels using an assumed default font size of 16px.

Read and write tools
This is a special case because such tools may be used to read tokens from a file, manipulate that set of tokens somehow and then write the result back out. The following edge cases therefore need to be considered:
Imagine a `.tokens` file contains design tokens A, B and C. These tokens have higher fidelity values than the tool can handle internally. Consider these use-cases:

- The user adds a new token D in the tool and saves the result back to the tokens file.
- The user edits the value of one of the existing tokens (e.g. token A) in the tool and saves the result back to the tokens file.

Should the values of the tokens which the user has not touched (for example tokens A, B and C in the first case) still have their original (high fidelity) values, or is it acceptable for them to have been replaced by their nearest lossy equivalents?

The latter is probably easier for tool vendors to handle. If they follow the rules I outlined in the "Read-only tools" section above, then they will have done a lossy conversion when importing the token values into the tool's internal representation. When that is later saved out, the original high-fidelity value has been lost so, as per the "Write-only tools" rules, those lossy values are saved out.

However, I think this is sub-optimal from the user's perspective. If they never edited a token in the tool, it feels wrong for some lossy conversion to have been applied to those tokens' values "behind the user's back". Furthermore, if we take the view that design tokens represent design decisions, one could argue that the tool is changing those decisions without the user's consent.

Btw, a related scenario is tools which only operate on certain token types. Imagine a `.tokens` file that contains design tokens X, Y and Z. X is of type `color`, Y is of type `cubicBezier` and Z is of type `fontFamily`. The user loads the token file into a tool for creating and editing animation timing functions. Only token Y is relevant to that tool, so it ignores tokens X and Z and never displays them to the user anywhere in its UI. Consider the same kinds of use-cases as above - the user adds another `cubicBezier` token and saves it back to the tokens file, or the user edits the value of token Y and saves it back to the tokens file.

Should tokens X and Z still be present in the file? I'd argue yes. I think it would be confusing to users if those tokens just vanished when, from their perspective, all they were doing was using a specialised tool to tweak the `cubicBezier` tokens.

Therefore, I think tools that read and write tokens files need to have the following behaviour in addition to the read-only and write-only rules outlined in the previous sections:

- When reading a `.tokens` file, the tool must keep a copy of all tokens (regardless of whether they are relevant or not to that tool) along with their original values (even if those are higher fidelity than what the tool can handle internally).
- When writing a `.tokens` file, the tool must write out the full set of tokens. For each token, the original value should be written out unchanged unless the user has edited or created that token in the tool, in which case the tool's (possibly lower fidelity) value is written instead.

While this will add some complexity for tool makers, I believe this kind of functionality should be achievable without needing to drastically change the internals of the tool. The copies of unused tokens and original values could be kept "outside" of the tool's existing internals. The tool would just need to maintain some kind of mapping between its internal values and the corresponding "originals".
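As a rough TypeScript sketch of that bookkeeping (the token-path keys and value types here are hypothetical, and a real tool would tie this into its own data model):

```ts
type JsonValue = unknown;

interface PreservedToken {
  original: JsonValue;  // value exactly as read from the .tokens file
  edited: boolean;      // has the user touched this token in the tool?
  internal?: JsonValue; // the tool's (possibly lower fidelity) value, if edited
}

// Keyed by token path, kept outside the tool's normal internals.
const preserved = new Map<string, PreservedToken>();

function recordOnRead(path: string, original: JsonValue): void {
  preserved.set(path, { original, edited: false });
}

function recordEdit(path: string, internalValue: JsonValue): void {
  const entry = preserved.get(path);
  if (entry) {
    entry.edited = true;
    entry.internal = internalValue;
  }
}

// Untouched tokens keep their original, possibly higher fidelity, value;
// edited tokens are written from the tool's internal representation.
function valueToWrite(path: string): JsonValue | undefined {
  const entry = preserved.get(path);
  return entry?.edited ? entry.internal : entry?.original;
}
```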
What do you all think?