Splitting tokens into separate files #27
I think this is a great idea, and I would expect the spec to support it at some point! But splitting up the tokens file is more than an extension; its implementation could make it incompatible with future versions of the spec if they implement it differently. So in that light, I’m going to close this and suggest that you put your proposal to the spec authors in either discussion, so that this library doesn’t shoot off into that uncanny valley of only implementing part of the spec.

I’d be happy to reopen this if, say, the spec reaches an impasse, or nothing happens, etc. But until there’s a very clear “this will never happen in the spec, yet many people feel it’s critical to implement,” I’d like to avoid making that fork now.
Thanks @drwpow, that makes sense. design-tokens/community-group#123 seems to have stalled, but I'll hold out hope before I make some homebrew solution to combine files before running them through Cobalt 😄
One possible workaround that I’ve seen happen for OpenAPI specs is to make a separate one-off “bundler” that flattens multiple tokens.json schemas into one, and that happens before Cobalt scans it. This can be very crude and rudimentary, and can basically just be a spaghetti script that spits out a single tokens.json file that’s ignored from Git. You could also shortcut it because you know exactly what you do/don’t need.

From my work in bundlers, I do know that when dealing with conflicts, overrides, and outlining what are valid and invalid subschemas (e.g. can a single hex color be a subschema?), you delve into the realm of differences of opinion quite quickly. And there are so many things people want to do that you don’t even think should be possible, let alone a good idea 😄.

But anyways, all that to say: solving a single example of a multi-file schema is actually pretty easy. But solving that problem holistically for most users, in a standardized way, is complex, and is more a community-alignment problem than a technical challenge.
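A crude version of that one-off bundler really can fit in a few lines. The sketch below is purely hypothetical (it is not anything Cobalt ships): it shallow-merges token schemas in memory, with later files simply winning on duplicate top-level groups. In a real script you would `fs.readFileSync` each partial file and write the combined object out as `tokens.json`.

```javascript
// Hypothetical one-off bundler sketch (not part of Cobalt): flatten several
// tokens.json schemas into one. In practice these objects would be read from
// disk with fs.readFileSync and the result written to a git-ignored tokens.json.

const colorTokens = {
  color: { blue: { $type: 'color', $value: '#0000ff' } },
};
const typeTokens = {
  font: { body: { $type: 'fontFamily', $value: ['Helvetica', 'sans-serif'] } },
};

// Shallow merge: Object.assign keeps the last occurrence of any duplicated
// top-level group, so file order decides conflicts (crude, but predictable).
function shallowMerge(...schemas) {
  return Object.assign({}, ...schemas);
}

const bundled = shallowMerge(colorTokens, typeTokens);
console.log(JSON.stringify(bundled, null, 2));
```

Because this is last-wins and top-level only, two files that both define a `color` group would silently clobber each other — exactly the conflicts/overrides rabbit hole described above.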
So I’ve been tinkering with Tokens Studio for Figma support (#30) and I think that feature isn’t really valuable without this one, since Figma/Tokens Studio don’t have animation tokens such as

While I still want to wait on the W3C spec to handle true imports, I think we could allow simple flattening of multiple token files into one in a safe, future-compatible way. In other words, just support:

```js
export default {
  tokens: ['_one.json', '_two.json', '_three.json']
}
```

And simply combine them all into one tokens file. It would throw an error on any token name conflicts. There wouldn’t be any importing or external aliasing; you’d alias as if it was all one big file (kinda like the early, early days of Grunt/Gulp). That way, if/when external aliases are supported, this won’t conflict, and the two could work together.
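The “throw an error on any token name conflicts” behavior could be sketched as a recursive merge that walks nested groups and errors the moment two files define the same token path. The function names and error format below are illustrative assumptions, not Cobalt’s actual implementation:

```javascript
// Hypothetical sketch of the proposed flattening: deep-merge token files and
// throw on any token name conflict, rather than silently overriding.
function mergeTokenFiles(files) {
  const result = {};
  for (const file of files) mergeInto(result, file, []);
  return result;
}

function mergeInto(target, source, path) {
  for (const [key, value] of Object.entries(source)) {
    const childPath = [...path, key];
    if (!(key in target)) {
      target[key] = value; // new group or token: copy it over
    } else if (isGroup(target[key]) && isGroup(value)) {
      mergeInto(target[key], value, childPath); // both are groups: recurse
    } else {
      // Either side is a token (has $value), so two files named the same thing.
      throw new Error(`Token name conflict: ${childPath.join('.')}`);
    }
  }
}

function isGroup(node) {
  // A node with a $value is a token leaf; anything else object-like is a group.
  return node !== null && typeof node === 'object' && !('$value' in node);
}
```

With this policy, `[{ color: { blue: … } }, { color: { red: … } }]` merges cleanly (the `color` groups combine), while two files both defining `color.blue` fail loudly instead of one silently winning.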
This makes sense to me, and is basically a more robust solution than what I came up with. Mine just smashes together everything that matches
@mike-engel just shipped support for this in the latest version; you can just pass an array in. With the caveat, again, that tokens will get overridden (no warning for now, but if we want to warn or fail, we can).
Awesome, thanks @drwpow. Going to remove our custom solution in favor of this, and will let you know if we run into issues 😄 |
Actually @drwpow, what would you think about a

where I could write a custom plugin like I did for Tailwind, but this seems useful enough that it might be worth an official plugin?
Oh that makes sense. I’d probably call it

In a sense,

Seems like an extra step, I know, but I’m taking all this from the Redoc CLI for REST OpenAPI schemas, which I can’t say enough good things about. I’ve been following this exact process for OpenAPI schemas (sources of truth are split up, but the CLI bundles them, and other tools import the bundle) and it’s been great.
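One way to wire up that bundle-then-build order is a pair of npm scripts. In this sketch, `bundle-tokens.mjs` is a made-up name for a local bundler script, and `co build` is assumed to be Cobalt’s CLI build command — check your installed version before copying:

```json
{
  "scripts": {
    "tokens:bundle": "node bundle-tokens.mjs",
    "tokens:build": "npm run tokens:bundle && co build"
  }
}
```

Other tools then import the single bundled `tokens.json`, mirroring the Redoc-style workflow described above.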
Shipped a
Works wonderfully, thanks for the quick turnaround @drwpow!
As the number of tokens increases, it would be helpful to be able to split them up into separate files for maintainability. A rough proposal might look like this:
Which, for parsing, would compile into
Alternatives could be
Happy to hear your thoughts!