
Define order of operations during parsing of tokens files #123

Open
romainmenke opened this issue Apr 3, 2022 · 5 comments

@romainmenke
Contributor

romainmenke commented Apr 3, 2022

At the moment the specification doesn't define a specific order for certain operations that happen during parsing.

This leads to ambiguity when multiple tokens files are combined.

Example definition:

  1. parse token files in order of declaration (whatever declaration is)
  2. add all tokens to a shared token bag
    2.1. if a token id already exists, override it
  3. dereference all token values

This seems more powerful as it allows publishing a "system" of tokens that end users can manipulate with a few overrides.
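
A minimal sketch of that first strategy in TypeScript, assuming token files have already been flattened to dot-delimited path → value maps with single-level string references (all names here are hypothetical, not from the spec):

// Token files flattened to "dot.path" -> raw $value strings.
type TokenMap = Map<string, string>;

const REF = /^\{(.+)\}$/; // matches "{font.family.sans-serif}" style values

// Strategy 1: merge all files into one shared bag first (later files
// override earlier ones), then dereference against the combined bag.
function mergeThenResolve(files: TokenMap[]): TokenMap {
  const bag: TokenMap = new Map();
  for (const file of files) {
    for (const [path, value] of file) {
      bag.set(path, value); // step 2.1: existing ids are overridden
    }
  }
  const resolved: TokenMap = new Map();
  for (const [path, value] of bag) {
    const ref = REF.exec(value);
    if (ref) {
      const target = bag.get(ref[1]);
      if (target === undefined) throw new Error(`unresolved reference ${value}`);
      resolved.set(path, target); // chained aliases not handled, for brevity
    } else {
      resolved.set(path, value);
    }
  }
  return resolved;
}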

or

  1. parse token files in order of declaration (whatever declaration is)
  2. add tokens to a specific token bag
  3. dereference all token values in isolation
  4. add all tokens to a shared token bag
    4.1. if a token id already exists, override it

This might work more intuitively.
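
And a matching sketch of the second strategy, reusing TokenMap and REF from above. Here references are resolved per file before merging, so a reference pointing outside its own file is an error:

// Strategy 2: dereference each file in isolation, then merge the
// already-resolved values into the shared bag (later files win).
function resolveThenMerge(files: TokenMap[]): TokenMap {
  const bag: TokenMap = new Map();
  for (const file of files) {
    for (const [path, value] of file) {
      const ref = REF.exec(value);
      if (ref) {
        const target = file.get(ref[1]); // same-file lookup only
        if (target === undefined) {
          throw new Error(`unresolved reference ${value} in isolated file`);
        }
        bag.set(path, target);
      } else {
        bag.set(path, value);
      }
    }
  }
  return bag;
}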


Examples of ambiguity:

file a, loaded first

{
	"font": {
		"family": {
			"sans-serif": { "$value": "sans" },
			"base": { "$value": "{font.family.sans-serif}" }
		}
	}
}

file b, loaded second

{
	"font": {
		"family": {
			"sans-serif": { "$value": "Helvetica" }
		}
	}
}

What is the value of font.family.base?

  1. sans
  2. Helvetica

file a, loaded first

{
	"font": {
		"family": {
			"base": { "$value": "{font.family.sans-serif}" }
		}
	}
}

file b, loaded second

{
	"font": {
		"family": {
			"sans-serif": { "$value": "sans" },
			
		}
	}
}

Does this give an error because font.family.sans-serif is not yet defined?
Or does it lazily resolve?
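
Running the two sketches from earlier over these examples makes the ambiguity concrete (again assuming flattened paths):

const fileA: TokenMap = new Map([
  ["font.family.sans-serif", "sans"],
  ["font.family.base", "{font.family.sans-serif}"],
]);
const fileB: TokenMap = new Map([
  ["font.family.sans-serif", "Helvetica"],
]);

// First example: the two strategies disagree.
mergeThenResolve([fileA, fileB]).get("font.family.base"); // "Helvetica"
resolveThenMerge([fileA, fileB]).get("font.family.base"); // "sans"

// Second example: mergeThenResolve finds the target in the shared bag
// (yielding "sans"), while resolveThenMerge throws because file a alone
// can't resolve the reference.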

@c1rrus
Member

c1rrus commented Apr 6, 2022

You've raised a really interesting point that we haven't addressed at all.

In one of our earliest drafts (before it was even public) we were considering some kind of import mechanism, where one token file could import another. But we quickly realised there's quite a lot of complexity to resolve with an approach like that. For example, if token file A imports token file B, and a tool reads token file A...

  • does it see a combined set of all tokens from files A and B?
    • or does it only see tokens from file A (but perhaps tokens in A can reference tokens in file B)?
  • if there are tokens with the same name in files A and B is that:
    • an error?
    • or does the token from file A override the one from B
  • what about groups with the same name in files A and B - do they get merged? Or does one replace the other?
  • if tokens in one file can override tokens from another file, do their types need to be the same? If not, what are the consequences for other tokens that reference that token?

In the interest of keeping our version 1 spec simple, we decided to drop the idea for the time being. I think there was a hope/assumption that tools would solve this somehow.

But, as shown in your example, that does raise an interesting question when it comes to references. Is a token that references another token which does not exist in the same file valid? If you take the view that, since the spec says nothing about working with multiple token files, each token file must be self-contained, then I'd say that should not be valid. But overriding some tokens is desirable for use-cases like theming. And being able to split very large sets of tokens over several files is also desirable. So there probably should be an official way for a token in one file to reference a token in another.

My personal preference would be to revisit the import idea. That would put the onus on the spec to clearly define what the behaviour should be which will benefit interoperability between tools. I think it would also help make the order in which files are being included explicit.

To encourage more discussion, here's a rough proposal of how this could work...

file1.tokens.json (a self-contained tokens file, where all references must point to tokens in the same file):

{
  "token-a": {
    "$value": "#123456",
    "$type": "color"
  },
  
  "group-b": {
     "token-b-1": {
      "$value": "1.5rem",
      "$type": "dimension"
    }
  },
  
  "alias-token-c": {
    "$value": "{group-b.token-b-1}"
  }
}

file2.tokens.json (another self-contained tokens file, where all references must point to tokens in the same file):

{
  "token-a": {
    "$value": "#abcdef",
    "$type": "color"
  },
  
  "group-b": {
     "token-b-2": {
      "$value": "320ms",
      "$type": "duration"
    }
  },
  
  "alias-token-d": {
    "$value": "{group-b.token-b-2}"
  }
}

file3.tokens.json (which includes file1 & file2. Tokens in file3 are therefore allowed to reference tokens in file1 and file2):

{
  "$includes": [
    "./path/to/file2.tokens.json",
    "https://design-system.example.com/tokens/file3.tokens.json"
  ],
  
  "alias-token-c": {
    "$value": "{token-a}"
  }
}

The behaviour I would suggest when parsing file3 is:

  • Files listed in the $includes array are loaded, parsed and then deep merged into the current file
  • Where tokens have the same name, the order of precedence is: the including file first, followed by the files listed in the $includes array in reverse order.

So, in this example: tokens in file3 override tokens in file2, which in turn override tokens in file1.

Therefore, the end result is equivalent to a single file like this:

{
  // token-a in file2 overrides token-a in file1, so
  // the value is #abcdef
  "token-a": {
    "$value": "#abcdef",
    "$type": "color"
  },

  // Since group-b exists in both file1 and file2, a
  // merged version of those is added here:
  "group-b": {
     // this token comes from file1
     "token-b-1": {
      "$value": "1.5rem",
      "$type": "dimension"
    },

    // this token comes from file2
    "token-b-2": {
      "$value": "320ms",
      "$type": "duration"
    }
  },
  
  // alias-token-c in file3 overrides alias-token-c in file1
  // so it references token-a.
  // Therefore, its resolved value is #abcdef
  "alias-token-c": {
    "$value": "{token-a}"
  },
  
  // this token comes from file2
  "alias-token-d": {
    "$value": "{group-b.token-b-2}"
  }
}
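
The proposal implies a deep-merge step; here's a hedged sketch of one way it could behave (my reading, not part of the proposal), where plain objects merge key by key and everything else is replaced by the later file:

// Deep merge: plain objects (groups and token definitions) merge
// recursively; scalars and arrays are replaced by the later file.
function deepMerge(base: unknown, override: unknown): unknown {
  const isPlainObject = (v: unknown): v is Record<string, unknown> =>
    typeof v === "object" && v !== null && !Array.isArray(v);
  if (isPlainObject(base) && isPlainObject(override)) {
    const out: Record<string, unknown> = { ...base };
    for (const [key, value] of Object.entries(override)) {
      out[key] = key in out ? deepMerge(out[key], value) : value;
    }
    return out;
  }
  return override; // later file wins
}

// Precedence as described above: file1, then file2, then file3,
// so each later merge overrides the earlier ones.
// const merged = [file1, file2, file3].reduce(deepMerge);

Note that with this sketch a token definition also merges key by key; whether an overriding token should instead replace the previous definition wholesale (e.g. dropping an old $description) is exactly the kind of detail the spec would need to pin down.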

Thoughts?

@romainmenke
Contributor Author

$includes is definitely interesting as it allows a single tokens collection to be composed from multiple sources, which can originate from multiple tools.

I would however not include a network protocol as a way to include design token files.
This has obvious security concerns and doesn't solve anything that cannot be worked around :)


$includes as a feature does not eliminate the need to define the parsing and resolving steps in this specification.

My example above was also just to illustrate the need for a full definition of parsing and resolving.

This is something that other specifications also define for their syntaxes and helps to eliminate subtle interop issues.

@romainmenke
Contributor Author

> When a tool needs the actual value of a token it MUST resolve the reference - i.e. lookup the token being referenced and fetch its value. In the above example, the "alias name" token's value would resolve to 1234 because it references the token whose path is {group name.token name} which has the value 1234.
>
> Tools SHOULD preserve references and therefore only resolve them whenever the actual value needs to be retrieved. For instance, in a design tool, changes to the value of a token being referenced by aliases SHOULD be reflected wherever those aliases are being used.

Found this recently after re-reading the current draft.

I might be wrong but I think that the intention here is to define value invalidation, not the order or timing of dereferencing.

  • You can resolve references early while still having value invalidation.
  • You can resolve references late without having any value invalidation. (clearly not intended here)

If this is the intention, I think it might be fine to do early de-referencing as long as all relevant values are invalidated and updated in case of a change (see the sketch at the end of this comment).

Is this correct?


My concern with late de-referencing is that it is undefined how this works when there are multiple token files.
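
To illustrate, a rough sketch of early dereferencing combined with invalidation, using a reverse-dependency index so that editing a token re-resolves its direct aliases (hypothetical API, not from the draft, one level of aliasing for brevity):

// Resolves eagerly on write, but re-resolves direct aliases of a
// token whenever that token changes.
class TokenStore {
  private raw = new Map<string, string>();
  private resolved = new Map<string, string>();
  private dependents = new Map<string, Set<string>>(); // target -> aliases

  set(path: string, value: string): void {
    this.raw.set(path, value);
    const ref = /^\{(.+)\}$/.exec(value);
    if (ref) {
      const deps = this.dependents.get(ref[1]) ?? new Set<string>();
      deps.add(path);
      this.dependents.set(ref[1], deps);
    }
    this.refresh(path);
    // invalidate: every alias pointing at this path is re-resolved
    for (const alias of this.dependents.get(path) ?? []) this.refresh(alias);
  }

  get(path: string): string | undefined {
    return this.resolved.get(path);
  }

  private refresh(path: string): void {
    const value = this.raw.get(path);
    if (value === undefined) return;
    const ref = /^\{(.+)\}$/.exec(value);
    this.resolved.set(path, ref ? this.resolved.get(ref[1]) ?? value : value);
  }
}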

@kevinmpowell
Contributor

Related to #166

@kevinmpowell kevinmpowell removed this from the Next Draft Priority milestone Oct 17, 2022
@romainmenke
Contributor Author

Another possible way to process multiple files:

  1. parse token files in order of declaration (whatever declaration is)
  2. add tokens to a specific token bag
  3. dereference all token values in isolation
  4. add all tokens to a shared token bag
    4.1. if a token id already exists, override it
    4.2. dereference remaining aliases
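
A sketch of this hybrid, reusing TokenMap and REF from the earlier sketches: references are resolved per file where possible, unresolved ones survive the merge verbatim, and step 4.2 resolves them against the shared bag:

// Strategy 3: per-file resolution, merge, then a second resolution
// pass over whatever references remain.
function resolveMergeResolve(files: TokenMap[]): TokenMap {
  const bag: TokenMap = new Map();
  for (const file of files) {
    for (const [path, value] of file) {
      const ref = REF.exec(value);
      const local = ref ? file.get(ref[1]) : undefined;
      bag.set(path, local ?? value); // keep "{...}" if not resolvable locally
    }
  }
  for (const [path, value] of bag) {
    const ref = REF.exec(value);
    if (ref) {
      const target = bag.get(ref[1]);
      if (target === undefined) throw new Error(`unresolved reference ${value}`);
      bag.set(path, target); // step 4.2
    }
  }
  return bag;
}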
