Data API V2: Custom differ #3315
Comments
Calling out a design limitation from the current setup / a topic that we've previously talked about here too: the V1 tool (…). In addition, the GitHub API uses pagination when the changeset associated with a pull request contains over 3000 file changes, which the V1 tool isn't accounting for (meaning that we're not outputting all the changes today - example) - and whilst we can update the tool to handle pagination, switching to a local setup removes this limitation. As such, rather than relying on the contents of local files, presumably it'd be better to query the Data API to get the data from both versions.
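To make that a bit more concrete, here's a rough sketch of what querying two locally-running copies of the Data API (one serving the existing definitions, one serving the regenerated ones) could look like - the endpoint path, ports and the flat key-level comparison are all illustrative assumptions rather than the Data API's actual schema:

```go
package main

// Sketch of the "query both versions" idea: run two copies of the Data API
// locally, fetch the same document from each and compare the results.
// The endpoint path and the map[string]any comparison are assumptions for
// illustration only.

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func fetchDefinitions(baseURL string) (map[string]any, error) {
	// Hypothetical endpoint - the real Data API exposes richer, nested endpoints.
	resp, err := http.Get(baseURL + "/v1/resource-manager/services")
	if err != nil {
		return nil, fmt.Errorf("querying %q: %w", baseURL, err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("reading response from %q: %w", baseURL, err)
	}

	var out map[string]any
	if err := json.Unmarshal(body, &out); err != nil {
		return nil, fmt.Errorf("parsing response from %q: %w", baseURL, err)
	}
	return out, nil
}

func main() {
	// Assumed local ports for the two Data API instances (base and updated).
	initial, err := fetchDefinitions("http://localhost:8080")
	if err != nil {
		panic(err)
	}
	updated, err := fetchDefinitions("http://localhost:8081")
	if err != nil {
		panic(err)
	}

	// Naive key-level diff - a real differ would walk the full object model.
	for key := range updated {
		if _, ok := initial[key]; !ok {
			fmt.Printf("added: %s\n", key)
		}
	}
	for key := range initial {
		if _, ok := updated[key]; !ok {
			fmt.Printf("removed: %s\n", key)
		}
	}
}
```

A real differ would obviously walk the full service / API version / resource object model rather than top-level keys, but the shape is the same: two live endpoints in, one set of changes out.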
This approach would mean that we're able to run the diff locally - but it would also allow us to output more semantic types of changes in the future (e.g. highlighting when a new Discriminator implementation gets added, or flagging when the value for a constant changes [e.g. casing]) - in addition to what we're doing today (around the Resource ID segments). It also means that when this is run in GitHub in automation and an output file for the diff is specified, we can have that posted as a comment on the pull request (referencing how …). In order to do that, I suspect we'd want to make a couple of changes to the Data API: …
WDYT?
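As an aside, to illustrate the "more semantic types of changes" point above - a rough sketch of how the differ's output could be modelled and rendered as markdown for the pull request comment. All of the type and field names below are made up for illustration; they're not the Data API's actual object model:

```go
package main

// Sketch of modelling semantic changes (rather than raw line diffs) and
// rendering them as markdown suitable for posting as a PR comment.
// Type and field names are illustrative assumptions only.

import (
	"fmt"
	"strings"
)

type ChangeType string

const (
	ResourceIdSegmentChanged         ChangeType = "resource-id-segment-changed"
	DiscriminatorImplementationAdded ChangeType = "discriminator-implementation-added"
	ConstantValueChanged             ChangeType = "constant-value-changed"
)

type Change struct {
	Type        ChangeType
	Service     string
	APIVersion  string
	Description string
}

// renderMarkdown produces a summary grouped by service, which automation
// could write to the output file and post as a PR comment.
func renderMarkdown(changes []Change) string {
	grouped := map[string][]Change{}
	for _, c := range changes {
		grouped[c.Service] = append(grouped[c.Service], c)
	}

	var sb strings.Builder
	sb.WriteString("## Summary of Data API Changes\n\n")
	for service, items := range grouped {
		sb.WriteString(fmt.Sprintf("### %s\n\n", service))
		for _, c := range items {
			sb.WriteString(fmt.Sprintf("- (%s / %s) %s\n", c.APIVersion, c.Type, c.Description))
		}
		sb.WriteString("\n")
	}
	return sb.String()
}

func main() {
	example := []Change{
		{ConstantValueChanged, "Compute", "2023-03-01", "value for constant `SkuName` changed casing from `standard` to `Standard`"},
		{DiscriminatorImplementationAdded, "Compute", "2023-03-01", "new implementation added for discriminator `Settings`"},
	}
	fmt.Print(renderMarkdown(example))
}
```

Grouping by service keeps the comment readable even when a regeneration touches a large number of services.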
The auto-generated PRs that regenerate the Data API definitions often contain many changes that are time-consuming to review and validate. We need an automated way to summarise the changes and present them in a user-friendly manner, to expedite the review process and to reduce the chance of errors or problems being overlooked and creeping their way into the provider.
Common scenarios that need to be covered by this differ are:
- … (extract-tf-resource-ids job)