
feat: structured output #676

Merged

Conversation

@EverlastingBugstopper (Contributor) commented Jul 15, 2021

This fixes #285 by adding a global --json flag that structures the output of Rover. (It should only be merged after #677.)

I think we should cut a beta or a release candidate from this branch so that we can gather some feedback on the structure of the output before we commit to never breaking the API.

Example Output

All errors currently look very similar: they have only a string description and an error code, like so (unless there are build errors, which are special-cased and outlined below):

{
    "json_version": "1.beta",
    "data": {
        "success": false
    },
    "error": {
        "message": "Could not find subgraph \"invalid_subgraph\".",
        "code": "E009"
    }
}
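
To make the shape concrete, here's how a script might consume it (a hypothetical sketch, not part of this PR; the subcommand and graph ref are placeholders):

```python
# Hypothetical consumer of Rover's --json output: run a command, parse stdout,
# and branch on the documented top-level shape. Assumes `rover` is on PATH.
import json
import subprocess

result = subprocess.run(
    ["rover", "subgraph", "fetch", "my-graph@current", "--name", "products", "--json"],
    capture_output=True,
    text=True,
)
payload = json.loads(result.stdout)

if payload["error"] is not None:
    # Match on the stable error code, never on the message text.
    print(f"failed with {payload['error']['code']}: {payload['error']['message']}")
else:
    print("success:", payload["data"]["success"])
```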

The rest of the commands are outlined below with sample data:

rover supergraph fetch

{
    "json_version": "1.beta",
    "data": {
        "sdl": {
            "contents": "sdl contents",
        },
        "success": true
    },
    "error": null
}

rover supergraph fetch (edge case where there has never been a successful build)

{
    "json_version": "1.beta",
    "data": {
        "success": false
    },
    "error": {
        "message": "No supergraph SDL exists for \"name@current\" because its subgraphs failed to build.",
        "details": {
            "build_errors": [
                {
                    "message": "[Accounts] -> Things went really wrong",
                    "code": "AN_ERROR_CODE",
                    "type": "composition",
                },
                {
                    "message": "[Films] -> Something else also went wrong",
                    "code": null,
                    "type": "composition"
                }
            ]
        },
        "code": "E027"
    }
}

rover supergraph compose

{
    "json_version": "1.beta",
    "data": {
        "core_schema": "core schema contents",
        "success": true
    },
    "error": null
}

rover subgraph list

{
    "json_version": "1.beta",
    "data": {
        "subgraphs": [
            {
                "name": "subgraph one",
                "url": "http://localhost:4001",
                "updated_at": {
                    "local": "<local timestamp>",
                    "utc": "<utc timestamp>"
                }
            },
            {
                "name": "subgraph two",
                "url": null,
                "updated_at": {
                    "local": null,
                    "utc": null
                }
            }
        ],
        "success": true
    },
    "error": null
}
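
As a consumer-side illustration (hypothetical, not from this PR), iterating that subgraphs array is straightforward once the payload has been parsed as in the sketch above:

```python
# Sketch: list each subgraph's routing URL and last-updated time, using only
# the fields shown in the sample. `payload` comes from parsing
# `rover subgraph list <GRAPH_REF> --json` output.
for subgraph in payload["data"]["subgraphs"]:
    url = subgraph["url"] or "<no routing URL>"
    updated = subgraph["updated_at"]["utc"] or "<never>"
    print(f"{subgraph['name']}: {url} (last updated {updated})")
```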


rover graph check and rover subgraph check (no operation check failures)

{
    "json_version": "1.beta",
    "data": {
        "target_url": "https://studio.apollographql.com/graph/my-graph/composition/big-hash?variant=current",
        "operation_check_count": 10,
        "changes": [
            {
                "code": "SOMETHING_HAPPENED",
                "description": "beeg yoshi",
                "severity": "PASS"
            },
            {
                "code": "WOW",
                "description": "that was so cool",
                "severity": "PASS"
            }
        ],
        "failure_count": 0,
        "success": true
    },
    "error": null
}

rover graph check and rover subgraph check (with some operation check failures)

{
    "json_version": "1.beta",
    "data": {
        "target_url": "https://studio.apollographql.com/graph/my-graph/composition/big-hash?variant=current",
        "operation_check_count": 10,
        "changes": [
            {
                "code": "SOMETHING_HAPPENED",
                "description": "beeg yoshi",
                "severity": "FAIL"
            },
            {
                "code": "WOW",
                "description": "that was so cool",
                "severity": "FAIL"
            }
        ],
        "failure_count": 2,
        "success": false
    },
    "error": {
        "message": "This operation check has encountered 2 schema changes that would break operations from existing client traffic.",
        "code": "E030"
    }
}
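
For example, a CI job could gate on this shape (a sketch that assumes payload was parsed from rover subgraph check ... --json as in the earlier sketch; E030 is the operation-check error code from the sample above):

```python
# Sketch: fail the build when an operation check reports breaking changes.
import sys

if payload["error"] is not None and payload["error"]["code"] == "E030":
    for change in payload["data"]["changes"]:
        if change["severity"] == "FAIL":
            print(f"breaking change {change['code']}: {change['description']}")
    print(f"{payload['data']['failure_count']} of "
          f"{payload['data']['operation_check_count']} operation checks failed")
    sys.exit(1)
```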

rover subgraph check (with build errors)

{
    "json_version": "1.beta",
    "data": {
        "supergraph_was_updated": false,
        "success": true,
    },
    "error": {
        "message": "Encountered 2 build errors while trying to build subgraph \"subgraph\" into supergraph \"name@current\".",
        "code": "E029",
        "details": {
            "build_errors": [
                {
                    "message": "[Accounts] -> Things went really wrong",
                    "code": "AN_ERROR_CODE",
                    "type": "composition"
                },
                {
                    "message": "[Films] -> Something else also went wrong",
                    "code": null,
                    "type": "composition"
                }
            ]
        }
    }
}
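
A script that wants the individual build errors can reach into error.details.build_errors (a sketch; field paths follow the samples above, and note that code may be null):

```python
# Sketch: surface build errors from the shape used by E027/E029 above.
# `payload` is a parsed --json payload from one of the earlier sketches.
error = payload["error"]
if error is not None and error.get("details"):
    for build_error in error["details"].get("build_errors", []):
        code = build_error["code"] or "UNKNOWN"
        print(f"[{build_error['type']}] {code}: {build_error['message']}")
```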

rover graph publish

{
    "json_version": "1.beta",
    "data": {
        "api_schema_hash": "123456",
        "field_changes": {
            "additions": 2,
            "removals": 1,
            "edits": 0
        },
        "type_changes": {
            "additions": 4,
            "removals": 0,
            "edits": 7
        },
        "success": true
    },
    "error": null
}

rover subgraph publish (no build errors)

{
    "json_version": "1.beta",
    "data": {
        "api_schema_hash": "123456",
        "supergraph_was_updated": true,
        "subgraph_was_created": true,
        "success": true
    },
    "error": null
}

rover subgraph publish (with build errors)

{
    "json_version": "1.beta",
    "data": {
        "api_schema_hash": null,
        "subgraph_was_created": false,
        "supergraph_was_updated": false,
        "success": true
    },
    "error": {
        "message": "Encountered 2 build errors while trying to build subgraph \"subgraph\" into supergraph \"name@current\".",
        "code": "E029",
        "details": {
            "build_errors": [
                {
                    "message": "[Accounts] -> Things went really wrong",
                    "code": "AN_ERROR_CODE",
                    "type": "composition",
                },
                {
                    "message": "[Films] -> Something else also went wrong",
                    "code": null,
                    "type": "composition"
                }
            ]
        }
    }
}

IMPORTANT NOTE

If you run rover subgraph delete ${graph_ref} --name ${subgraph} --json without the --confirm flag, it will still do a dry run first and confirm with the user before continuing. That dry-run response will not return build errors in JSON format, since it's an interactive command, but the subsequent response (or the only response, if --confirm was passed) will be the actual JSON.

That is, the prompt below will still appear, and build errors will be printed as normal strings, when --json is passed and --confirm is not. I considered adding another --dry-run flag here, but thought it was probably out of scope for this change.

Checking for composition errors resulting from deleting subgraph products from averys-federated-graph@current using credentials from the default profile.
WARN: At the time of checking, there would be no composition errors resulting from the deletion of this subgraph.
WARN: This is only a prediction. If the graph changes before confirming, there could be composition errors.
Would you like to continue [y/n]
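
For CI, where that prompt can't be answered, passing --confirm skips the dry run entirely and emits the JSON shown below (a hypothetical sketch; the graph ref and subgraph name are placeholders):

```python
# Sketch: non-interactive subgraph delete for CI by passing --confirm.
import json
import subprocess

result = subprocess.run(
    ["rover", "subgraph", "delete", "my-graph@current",
     "--name", "products", "--confirm", "--json"],
    capture_output=True,
    text=True,
)
payload = json.loads(result.stdout)
print("supergraph updated:", payload["data"]["supergraph_was_updated"])
```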

rover subgraph delete (no build errors)

{
    "json_version": "1.beta",
    "data": {
        "supergraph_was_updated": true,
        "success": true
    },
    "error": null
}

rover subgraph delete (with build errors)

{
    "json_version": "1.beta",
    "data": {
        "supergraph_was_updated": false,
        "success": true,
    },
    "error": {
        "message": "Encountered 2 build errors while trying to build subgraph \"subgraph\" into supergraph \"name@current\".",
        "code": "E029",
        "details": {
            "build_errors": [
                {
                    "message": "[Accounts] -> Things went really wrong",
                    "code": "AN_ERROR_CODE",
                    "type": "composition"
                },
                {
                    "message": "[Films] -> Something else also went wrong",
                    "code": null,
                    "type": "composition"
                }
            ]
        }
    }
}

rover config list

{
    "json_version": "1.beta",
    "data": {
        "profiles": [
            "default",
            "staging"
        ],
        "success": true
    },
    "error": null
}

rover subgraph introspect and rover graph introspect

{
    "json_version": "1.beta",
    "data": {
      "introspection_response": "schema {\n  query: Root\n}\ntype Root {\n  allFilms(after: String, first: Int, before: String, last: Int): FilmsConnection\n  film(id: ID, filmID: ID): Film\n  allPeople(after: String, first: Int, before: String, last: Int): PeopleConnection\n.... and the rest of the introspection would be here but it's so long...",
      "success": true
    },
    "error": null
}

rover explain E001

{
    "json_version": "1.beta",
    "data": {
      "explanation_markdown": "**E001**\n\nThis error occurs when the expected JSON response from a GraphQL endpoint can't be deserialized.\n\nThis is most likely caused by an invalid endpoint or headers, causing the server to return something that is not JSON (like an HTML error page).\n\nTry running the command again with `--log trace` to see what the GraphQL endpoint is responding with.\n\nIf this error occurs on a command interacting with the Apollo Registry, please [open an issue](https://github.com/apollographql/rover/issues/new?body=Error%20E001%0A%0ADescribe%20your%20issue%20or%20question%20here&labels=triage) and let us know!\n\n",
      "success": true
    },
    "error": null
}

rover docs list

{
    "json_version": "1.beta",
    "data": {
      "shortlinks": [
        {
          "slug": "api-keys",
          "description": "Understanding Apollo's API Keys"
        },
        {
          "slug": "contributing",
          "description": "Contributing to Rover"
        },
        {
          "slug": "docs",
          "description": "Rover's Documentation Homepage"
        },
        {
          "slug": "migration",
          "description": "Migrate from the Apollo CLI to Rover"
        },
        {
          "slug": "start",
          "description": "Getting Started with Rover"
        }
      ],
      "success": true
    },
    "error": null
}

Any command that does not have any structured output will return the following JSON if there are no errors:

{
    "json_version": "1.beta",
    "data": {
      "success": true
    },
    "error": null
}

The following commands do not have any structured output: rover info, rover config auth, rover config clear, rover config delete.

The following commands also do not have any structured output, but probably should: rover config whoami, rover docs open, rover install, and rover update check. These need some refactoring in order to satisfy the requirements for structured output, but for now I think the default success structure will be OK. Any additional keys will not break existing scripts (ones that aren't, like, checking for exact string matches instead of properly parsing the JSON).

@EverlastingBugstopper EverlastingBugstopper changed the base branch from main to avery/refactor-subgraph-check July 15, 2021 19:27
@EverlastingBugstopper EverlastingBugstopper added this to the July 20 milestone Jul 15, 2021
@EverlastingBugstopper EverlastingBugstopper force-pushed the avery/structured-output branch 2 times, most recently from e96872e to 2c91188 July 15, 2021 20:53
@EverlastingBugstopper EverlastingBugstopper changed the base branch from avery/refactor-subgraph-check to avery/add-graphql-linter July 15, 2021 20:53
Base automatically changed from avery/add-graphql-linter to avery/refactor-subgraph-check July 19, 2021 16:12
@abernix abernix requested a review from queerviolet July 20, 2021 15:18
@jsegaran (Contributor) commented:

We updated instances of "composition errors" in the UI to "build errors". I think we would want the JSON here to also use "build errors"?

@jstjoe commented Jul 21, 2021

Agree with @jsegaran on making space for 'build' errors.
But that leaves the question of how granular we want to get. I'm happy with just categorizing composition errors as build errors and using that level of granularity, but down the road we may want to introduce another level to bubble up a boolean for different categories of build errors.

For rover subgraph publish I think we should provide more detail and make a call around what success means.
This has been something that has confused customers for a while. Specifically in the case of subgraph publish but maybe also subgraph delete, there are two outcomes that I want to know the status of: 1) publishing the subgraph to the registry, 2) a) successful build and b) publish of the supergraph.

2 is going to get more complex with some of the changes we're introducing to managed federation. In some cases, a customer's supergraph changes won't be published until the changes are actually rolled out to gateways and they report back a successful rollout across the fleet. So this will probably become async.

But even short of that, I think I want a clear success boolean for the immediate goal of publishing the subgraph since that does NOT depend on build being successful NOR the built supergraph schema being published.

This example, provided above, highlights the issue:

{ "data": { "schema_hash": "123456", "subgraph_was_created": true, "composition_errors": [ { "message": "[Accounts] -> Things went really wrong", "code": "AN_ERROR_CODE" }, { "message": "[Films] -> Something else also went wrong", "code": null } ], "success": false }, "error": null }

In this case the response says "success": false but the command was subgraph publish and the subgraph itself is in fact successfully published.
I see there's "subgraph_was_created": true and maybe that works, but I have a nitpick about the key. In most cases it's not created it's updated so maybe this can just be "subgraph_published":true? But ultimately I feel from a developer experience perspective that the name of the command and the success boolean should be aligned. If I run subgraph publish and the subgraph is indeed published, that's success. But often the user's goal is actually to get a new supergraph published. So maybe success needs to be an object indicating which steps in the process were successful?

Example:
"success": { "subgraph_publish": true, "supergraph_build": false, "supergraph_publish": false }

I don't know how well that would work with CI/CD platforms though?

@jstjoe commented Jul 21, 2021

What this boils down to is "how do we define success?" and I think the answer to that question will vary by customer, by graph, and even by variant. In some cases my goal is simply to publish my subgraph and I don't care whether the supergraph is successfully built or launched and published. But in others success hinges on the supergraph build succeeding, or on the supergraph being published.

Separately, I'd also like to get back some of the Studio metadata for the publish (e.g. the build_id and the launch_id) and maybe even links I can follow to view the change in Studio or retrieve the updated supergraph schema. That's not necessary for structured output to ship but I'd consider it a nice-to-have.

@abernix (Member) commented Jul 21, 2021

Few fly-by thoughts:

  1. To provide us another surface area to evolve the format, should we include a version: "1" in each output, and have an explicit suggestion to implementors in our documentation that they should check that the version matches the one they're implementing? Alternatively, we just have to not make breaking changes, or count on doing that in a major version, which we don't really have the liberty of at the moment since this is a CLI tool. (We could consider something like what Meteor did eventually and leave a .rover_version in a project directory, I suppose, and springboard to older versions as necessary? It worked great. Ask me about it sometime!)
  2. Should the flag be --output=json to allow for other formats in the future? e.g., --output=table (thinking like kubectl here a bit).
  3. Maybe I've been working with GraphQL too long which has data and errors (note plural!), but I was for a moment thinking that:
    • This perhaps implies GraphQL (perhaps wrongly on my part) and error versus errors seems fraught with opportunities to conflate them.
    • Do we want error to actually be errors and be an array (always?). Would we have multiple errors? (e.g., coded errors?)
    • In some ways, however, it'd actually be nice if the output was GraphQL!

@EverlastingBugstopper (Contributor, Author) commented Jul 21, 2021


@jsegaran

We updated instances of "composition errors" in the UI to "build errors". I think we would want the JSON here to also use "build errors"?

Makes sense to me!


@jstjoe

Agree with @jsegaran on making space for 'build' errors.
But that leaves the question of how granular we want to get. I'm happy with just categorizing composition errors as build errors and using that level of granularity, but down the road we may want to introduce another level to bubble up a boolean for different categories of build errors.

How do we feel about adding a "type" field to those types of errors? I'm thinking it would look like this:

{
  "data": {
    "success": false
  },
  "error": {
    "message": "Encountered 3 build errors while trying to compose subgraph \"products\" into supergraph \"rover-supergraph-demo@test\".",
    "build_errors": [
      {
        "message": "[inventory] Product.id -> is marked as @external but is not used by a @requires, @key, or @provides directive.",
        "code": "EXTERNAL_UNUSED",
        "type": "composition"
      },
      {
        "message": "[products] Product -> A @key selects iddddd, but Product.iddddd could not be found",
        "code": "KEY_FIELDS_SELECT_INVALID_TYPE"
		"type": "composition"
      },
      {
        "message": "[inventory] Product -> extends from products but specifies an invalid @key directive. Valid @key directives are specified by the originating type. Available @key directives for this type are:\n\t@key(fields: \"iddddd\")\n\t@key(fields: \"sku package\")\n\t@key(fields: \"sku variation{id}\")",
        "code": "KEY_NOT_SPECIFIED",
		"type": "composition"
      }
    ],
    "code": "E029"
  }
}

@jstjoe

For rover subgraph publish I think we should provide more detail and make a call around what success means.
This has been something that has confused customers for a while. Specifically in the case of subgraph publish but maybe also subgraph delete, there are two outcomes that I want to know the status of: 1) publishing the subgraph to the registry, 2) a) successful build and b) publish of the supergraph.

This is something we've already decided for subgraph publish: today, when running the command, it will succeed and print warnings about composition errors.

But even short of that, I think I want a clear success boolean for the immediate goal of publishing the subgraph since that does NOT depend on build being successful NOR the built supergraph schema being published.

This example, provided above, highlights the issue:

{ "data": { "schema_hash": "123456", "subgraph_was_created": true, "composition_errors": [ { "message": "[Accounts] -> Things went really wrong", "code": "AN_ERROR_CODE" }, { "message": "[Films] -> Something else also went wrong", "code": null } ], "success": false }, "error": null }

Yeah... I actually just implemented this incorrectly! success should always be true here even if the supergraph wasn't updated, because if we get a successful response from the Studio API here, the subgraph has been published. Great catch!


@jstjoe

I see there's "subgraph_was_created": true and maybe that works, but I have a nitpick about the key. In most cases it's not created it's updated so maybe this can just be "subgraph_published":true? But ultimately I feel from a developer experience perspective that the name of the command and the success boolean should be aligned. If I run subgraph publish and the subgraph is indeed published, that's success. But often the user's goal is actually to get a new supergraph published.

subgraph_was_created is only true if this publish is for a very brand new subgraph, so mapping that to success isn't quite right here. You're right that it just needs to be success even if the supergraph wasn't updated!

So maybe success needs to be an object indicating which steps in the process were successful?

Example:
"success": { "subgraph_publish": true, "supergraph_build": false, "supergraph_publish": false }

As for making success an object, I'd rather keep that a simple boolean that maps directly to the exit codes so folks have an easy way of tracking the overall success of any command. We can have those other fields remain at the top level in data, I think. Also, I think subgraph_publish maps to success, supergraph_build maps to build_errors == null, and supergraph_publish maps to supergraph_was_updated (which today also maps to build_errors == null, but may eventually have to hook into some machinery w/ managed federation?)
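
A sketch of that mapping, deriving the three proposed booleans from fields that already exist in the output (the field paths follow the samples in this PR, with build errors under error.details; this is an illustration, not part of the change):

```python
# Sketch: derive jstjoe's three proposed booleans from a parsed
# `rover subgraph publish --json` payload without changing the output shape.
def publish_status(payload: dict) -> dict:
    error = payload["error"] or {}
    build_errors = (error.get("details") or {}).get("build_errors") or []
    return {
        "subgraph_publish": payload["data"]["success"],
        "supergraph_build": len(build_errors) == 0,
        "supergraph_publish": payload["data"]["supergraph_was_updated"],
    }
```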


Separately, I'd also like to get back some of the Studio metadata for the publish (e.g. the build_id and the launch_id) and maybe even links I can follow to view the change in Studio or retrieve the updated supergraph schema. That's not necessary for structured output to ship but I'd consider it a nice-to-have.

Would you mind making a new issue for this? We'll definitely need changes to our queries for this and I'd rather track that work separately. It's pretty easy to add new fields to the JSON output without breaking people, and much harder to take them away.


@abernix

Few fly-by thoughts:

To provide us another surface area to evolve the format, should we include a version: "1" in each output, and have an explicit suggestion to implementors in our documentation that they should check that the version matches the one they're implementing? Alternatively, we just have to not make breaking changes, or count on doing that in a major version, which we don't really have the liberty of at the moment since this is a CLI tool. (We could consider something like what Meteor did eventually and leave a .rover_version in a project directory, I suppose, and springboard to older versions as necessary? It worked great. Ask me about it sometime!)

Few thoughts here:

  1. I'd really rather not have a .rover_version if it can be helped at all
  2. I don't really want to break people at all! I think backwards compatibility is super important with CLIs and I'm hoping we never need to bump a version.
  3. We probably will want to make breaking changes eventually, but I think we can maybe talk about versioning separately

On the whole I don't think it hurts anything at all if we output a data.version, so I think I'll just go ahead and add that! Do we think we'd want to update it at all for adding new fields (non-breaking) or only update it if we need to make a breaking change?

Should the flag be --output=json to allow for other formats in the future? e.g., --output=table (thinking like kubectl here a bit).

I think --json is the more common option, and it's also what the CLI Guidelines (clig.dev) doc recommends, but I don't feel incredibly strongly either way. How likely do you think it is someone would want to work with an output type other than --json? Either way, I think we could add --output={type} down the line if we ever have a need for another output type. structopt gives us ways to require/forbid certain combinations of arguments.

Maybe I've been working with GraphQL too long which has data and errors (note plural!), but I was for a moment thinking that:

  • This perhaps implies GraphQL (perhaps wrongly on my part) and error versus errors seems fraught with opportunities to conflate them.

Interesting note! It's not really meant to be GraphQL output, but I can see how that could be confusing given this is a GraphQL tool!

I have it as error right now, which, at minimum, displays the error message and its code. If we have more structure for the errors we can have those nested within that top level error, but right now the only example of this is composition errors.

  • Do we want error to actually be errors and be an array (always?). Would we have multiple errors? (e.g., coded errors?)

The way Rover is structured, you can only ever return a single error type. This error type can be caused by any number of other errors, but all errors must be combined into a singular, top-level error type. This is why the top level is error rather than errors here; it matches the internal state.

However, that is an implementation detail! We could definitely serialize errors any way we like; however, I think that nesting them as we have makes things a bit more flexible and predictable. Scripts just have to check for null, not for null and an empty array.

  • In some ways, however, it'd actually be nice if the output was GraphQL!

Not gonna happen pal.

@jstjoe commented Jul 21, 2021

Thanks @EverlastingBugstopper!

How do we feel about adding a "type" field to those types of errors?

That looks great to me. What do you think @jsegaran?


Yeah... I actually just implemented this incorrectly! success should always be true here even if the supergraph wasn't updated, because if we get a successful response from the Studio API here, the subgraph has been published. Great catch!

Okay great, let me know if Studio is actually returning an error in this case though. I think I remember some funkiness here.


As for making success an object, I'd rather keep that a simple boolean that maps directly to the exit codes so folks have an easy way of tracking the overall success of any command. We can have those other fields remain at the top level in data, I think. Also, I think subgraph_publish maps to success, supergraph_build maps to build_errors == null, and supergraph_publish maps to supergraph_was_updated (which today also maps to build_errors == null, but may eventually have to hook into some machinery w/ managed federation?)

Yeah keeping it as a simple boolean makes sense. But I think I want booleans for some of the other statuses too. This stuff is going to get more complex with time, particularly when we start to introduce non-blocking composition errors/warnings. I know this has been on @prasek's mind. In that future we may have cases where there are errors/warnings but composition still succeeded. To prevent breaking changes and/or confusion, it may be good to introduce a boolean for composition success now, but that's a bit beyond my wheelhouse and I'll defer to @prasek.

supergraph_was_updated

Is the key dependent on whether I'm trying to create vs update? If so it feels a little strange to me that I'd need to know whether my publish was meant to create or update in order to properly parse the response. I think using updated for both cases is fine though.

@EverlastingBugstopper (Contributor, Author) commented:

Yeah keeping it as a simple boolean makes sense. But I think I want booleans for some of the other statuses too. This stuff is going to get more complex with time, particularly when we start to introduce non-blocking composition errors/warnings.

Currently we'll have one additional boolean, which is supergraph_was_updated, and this maps pretty much 100% to composition_success I think. It's easy to add more, and as the state gets a bit more complex we can do that, but afaik there isn't any info in Studio we can even query here.


Is the (supergraph_was_updated) key dependent on whether I'm trying to create vs update? If so it feels a little strange to me that I'd need to know whether my publish was meant to create or update in order to properly parse the response. I think using updated for both cases is fine though.

supergraph_was_updated maps purely to composition right now I think. subgraph_was_created is about whether this was a brand new subgraph that has never ever been part of composition before. Both of these fields are what's available on the Studio query and it's the information we convey today when running subgraph publish.

@prasek commented Jul 21, 2021

@EverlastingBugstopper @abernix @jstjoe

Should the flag be --output=json to allow for other formats in the future? e.g., --output=table (thinking like kubectl here a bit).

Perhaps I've typed kubectl ... -o yaml too many times, but yes please. 🙂

"success": { "subgraph_publish": true, "supergraph_build": false, "supergraph_publish": false }
Also I think subgraph_publish maps to success, supergraph_build maps to build_errors == null, and supergraph_publish maps to supergraph_was_updated (which today also maps to build_errors == null, but may have to eventually hook up into some machinery w/managed federation

Longer term we'll probably want to represent the states for a subgraph and its associated supergraph in a status of some kind, like:

{
  "data": {
    "status": {
        "subgraph": "published",
        "supergraph": "composition_error | built | published"
    }
  }
}

But perhaps it's best to keep things simple (a) until we have those defined in the public API and (b) have specific use cases where they're needed in the structured output, so we can deliver smaller incremental additions without breaking changes as the public API is finalized?

Along those lines I'd ask:

  1. what's the minimal structured output that can get the job done?
  2. what key use cases should we minimally solve for with the 1st release of structured output?
  3. is it important for structured output to have a rich domain model vs. a simpler presentation model?
    • if the key use case is to detect success/failure of the command and store the results/errors somewhere, then a simpler presentation model would likely suffice for a 1st release and provide a facade we can adapt to an evolving public API until it goes GA, while also providing a simpler, more generic programming model for consuming the structured output.

As for making success an object, I'd rather keep that a simple boolean that maps directly to the exit codes so folks have an easy way of tracking the overall success of any command.

The way Rover is structured, you can only ever return a single error type. This error type can be caused by any number of other errors, but all errors must be combined into a singular, top-level error type. This is why the top level is error rather than errors here; it matches the internal state.

If success maps directly to exit codes and we'll only have a single top-level error (over time), then I'd suggest:

  • .data.success -> .exit_code
  • .error.build_errors -> .error.details (to enable generic handling of different types of errors)
  • .error.build_errors[].type -> error.type (unless the error.details will differ in type)
{
  "exit_code": 1,
  "data": null,
  "error": {
    "code": "E029",
    "type": "composition",
    "message": "Encountered 3 build errors while trying to compose subgraph \"products\" into supergraph \"rover-supergraph-demo@test\".",
    "details": [
      {
        "message": "[inventory] Product.id -> is marked as @external but is not used by a @requires, @key, or @provides directive.",
        "code": "EXTERNAL_UNUSED",
      },
      {
        "message": "[products] Product -> A @key selects iddddd, but Product.iddddd could not be found",
        "code": "KEY_FIELDS_SELECT_INVALID_TYPE"
      },
      {
        "message": "[inventory] Product -> extends from products but specifies an invalid @key directive. Valid @key directives are specified by the originating type. Available @key directives for this type are:\n\t@key(fields: \"iddddd\")\n\t@key(fields: \"sku package\")\n\t@key(fields: \"sku variation{id}\")",
        "code": "KEY_NOT_SPECIFIED",
      }
    ]
  }
}

Circling back on (3): is it important for structured output to have a rich data/domain model vs. a simpler presentation model?

If we think a simpler presentation model will work for the initial release of rover structured output, perhaps we start with that and then once we have a documented public API with a stable domain model we can expose the domain model.

If we expose the domain model today, it's likely we'll be forced to introduce breaking changes as the public API gets finalized, so that argues for keeping the structured output in beta longer, unless we're good with breaking changes.

For example, a presentation model lets us encapsulate the domain model and provides a simpler consumption model:

{
  "exit_code": 0,
  "display": {
    "message": "Subgraph inventory published successfully to supergraph-router@dev",
    "details": [
      {
        "message": "Publishing SDL to supergraph-router:dev (subgraph: inventory) using credentials from the default profile."
      },
      {
        "message": "The 'inventory' subgraph for the 'supergraph-router' graph was updated"
      },
      {
        "message": "The gateway for the 'supergraph-router' graph was NOT updated with a new schema"
      }
    ]
  },
  "error": null
}

Then as the public API is released we could add a richer domain model:

{
  "exit_code": 0,
  "display": {
    "message": "Graph published successfully to monolith@dev",
    "details": [ ... ]
  },
  "data": {
    "api_schema_hash": "123456",
    "field_changes": {
      "additions": 2,
      "removals": 1,
      "edits": 0
    },
    "type_changes": {
      "additions": 4,
      "removals": 0,
      "edits": 7
    }
  },
  "error": null
}

Note that for federated graphs in particular we talk about two types of federated graph:

  • backend-focused graph - just exposes the domain model directly but doesn't encourage UX consistency across apps/devices/channels -- and results in duplicated domain logic in each app that often diverges across web/mobile/etc.
  • customer-focused graph - provides a presentation model abstraction of your backend services to simplify the app logic and provide a consistent UX across apps & client devices

@jstjoe commented Jul 21, 2021

I definitely like the simpler presentation model @prasek. With the goal of minimizing breaking changes, and limited bandwidth on gravity to take on a big domain modeling exercise for our future features (both in Studio's data model and to the composition model) I think this is a winning approach.

@lrlna (Member) left a comment:

So much good work in here! Really well done, Avery!

Review comments were left on crates/rover-client/src/operations/graph/publish/runner.rs, src/cli.rs, and crates/rover-client/src/shared/composition_error.rs (all outdated and resolved).
@prasek commented Jul 22, 2021

Agree 🙏. If we're comfortable exposing the domain model now and can maintain this interface for a while, so user scripts can stay stable across Rover minor versions even as the API changes, we could do a beta release as-is, or with something like --output=jsonv1beta, to get community feedback.

@EverlastingBugstopper (Contributor, Author) commented Jul 22, 2021


@prasek

If success maps directly to exit codes and we'll only have a single top-level error (over time), then would suggest:

.data.success -> .exit_code

This seems fine, but since we only exit with code 1 or 0 right now, I still think a boolean is best here. If in the future we add more exit codes, folks can just... check for the actual exit code. Even if the output is JSON we'll still exit with the same exit code.
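
A sketch of leaning on both signals (hypothetical; the subcommand and graph ref are placeholders):

```python
# Sketch: the process exit code is unchanged by --json, so a script can branch
# on it and treat data.success as a consistency check.
import json
import subprocess

result = subprocess.run(
    ["rover", "graph", "fetch", "my-graph@current", "--json"],
    capture_output=True,
    text=True,
)
payload = json.loads(result.stdout)
# Both signals should agree: exit code 0 <=> "success": true.
assert (result.returncode == 0) == payload["data"]["success"]
```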

.error.build_errors -> .error.details (to enable generic handling of different types of errors)

How about .error.details.build_errors? I imagine there are other types of errors that will have details as well, but they won't all have the same structure so I think it makes the most sense to have a different top level key so scripts can interact with a stable structure.

.error.build_errors[].type -> error.type (unless the error.details will differ in type)

Yeah the purpose for introducing the type at all was to differentiate between different types of build errors. Currently there will only ever be composition type build errors, but in the future we may have different types of composition errors, so I don't think we want to move the type to the top level.


re: simpler presentation model - I'm pretty strongly opposed to the display suggestion you've outlined.

If we have a top level display and just print the exact strings that we print to stdout, then... it kinda defeats the purpose of structured output, no? Strings are not structure, and we do not want to include string messages like this in our structured output because then we can almost guarantee people will parse strings and there's no real way to track breaking changes to strings. It also means we can't easily update the human readable output for commands without also breaking the structure. Furthermore, if that's the structure that people want, they should be able to just loop through the lines printed to stdout and interact with them directly, without adding an extra indirection layer to JSON.


Note that for federated graphs in particular we talk about two types of federated graph:

  • backend-focused graph - just exposes the domain model directly but doesn't encourage UX consistency across apps/devices/channels -- and results in duplicated domain logic in each app that often diverges across web/mobile/etc.
  • customer-focused graph - provides a presentation model abstraction of your backend services to simplify the app logic and provide a consistent UX across apps & client devices

I'm... not exactly sure how this is relevant here? Could you elaborate a bit on how this relates? Is it that the Studio API itself is currently a backend-focused graph and that affects our implementation here?


@prasek commented Jul 22, 2021

How about .error.details.build_errors? I imagine there are other types of errors that will have details as well, but they won't all have the same structure so I think it makes the most sense to have a different top level key so scripts can interact with a stable structure.

If we're not adding a top-level error.type then this totally makes sense. 👍

I'm... not exactly sure how this is relevant here? Could you elaborate a bit on how this relates? Is it that the Studio API itself is currently a backend-focused graph and that affects our implementation here?

Just by analogy

  • display would provide the equivalent of a customer-focused graph
  • data provides the equivalent of a backend-focused graph

If the goal of structured output is to get an early version of the public API for tighter integration, then the data model makes sense.

If the goal of structured output is to (a) determine basic pass/fail (b) capture errors in a simple/generic way (c) format the output in a slightly more formatted way vs. stdout, then display would enable that.

As mentioned above, the end-state could include both so consumers could pick the simplest programming model for their use cases.

This commit adds a global `--json` flag that structures the output of Rover like so:

**success:**

{
  "data": {
    "sdl": {
      "contents": "type Person {\n  id: ID!\n  name: String\n  appearedIn: [Film]\n  directed: [Film]\n}\n\ntype Film {\n  id: ID!\n  title: String\n  actors: [Person]\n  director: Person\n}\n\ntype Query {\n  person(id: ID!): Person\n  people: [Person]\n  film(id: ID!): Film!\n  films: [Film]\n}\n",
      "type": "graph"
    }
  },
  "error": null
}

**errors:**

{
  "data": null,
  "error": {
    "message": "Could not find subgraph \"products\".",
    "suggestion": "Try running this command with one of the following valid subgraphs: [people, films]",
    "code": "E009"
  }
}
@abernix (Member) commented Jul 23, 2021

  • display would provide the equivalent of a customer-focused graph

I'd much rather not provide strings in a more structured way as I think it would encourage folks to write bad scripts.

This is a real risk and a perfectly valid reason to give pause when contemplating including error strings in output. Particularly if they don't have error codes. In fact, I recall that prior to introducing error codes for every error message, the Node.js Foundation could not change any error message text because every change was considered a breaking change. Thankfully, they now have error codes for everything and they can instruct people not to parse error messages.

Parsing error messages is a Bad Idea. However, we're not going to stop people from doing it. I think the important thing to do here is to make sure that the user doesn't resort to parsing error messages because they have no other choice and that they feel like it was their bad choice when choosing to do it, not a limitation of the system they're extracting data from.

If they want an opinionated way to output things, good news! We already have that, and they should just call rover directly and not bother with structured output at all.

A supporting use-case here is that we've heard from customers who actually just want to re-structure the output we're providing in our CLIs to ship it from CI workflows into other tools — e.g., Slack / ChatOps / logging pipelines, etc. I think of this as: humans have familiarity with particular opinionated messaging that they see when running locally and they enjoy having some of that mirrored to automated workflows.

@abernix (Member) commented Jul 23, 2021

This seems fine, but since we only exit with code 1 or 0 right now, I still think a boolean is best here. If in the future we add more exit codes, folks can just... check for the actual exit code. Even if the output is JSON we'll still exit with the same exit code.

Agree.

@EverlastingBugstopper (Contributor, Author) commented Jul 23, 2021

@abernix

This is a real risk and a perfectly valid reason to give pause when contemplating including error strings in output. Particularly if they don't have error codes. In fact, I recall that prior to introducing error codes for every error message, the Node.js Foundation could not change any error message text because every change was considered a breaking change. Thankfully, they now have error codes for everything and they can instruct people not to parse error messages.

We already have both error codes and error messages in structured output, so we should be OK there - I think Phil was talking about a top level data.display[] that would be an array of all of the messages we log, and then having that be the structured API to start until we nail down the types.

I think it'd be fine to have both, but if we introduce just display and no structure, then I think we'd run into pretty much the same thing that happened to Node with people parsing those strings to get at the underlying structure (defeating the purpose of structured output).

A supporting use-case here is that we've heard from customers who actually just want to re-structure the output we're providing in our CLIs to ship it from CI workflows into other tools — e.g., Slack / ChatOps / logging pipelines, etc. I think of this as: humans have familiarity with particular opinionated messaging that they see when running locally and they enjoy having some of that mirrored to automated workflows.

Is this something that you think they'd need --output=json for if they just want regular logs? I think if that's the use case they want, they shouldn't use structured output at all. They should only be using structured output if they need access to, well, structured data!

I'm not 100% opposed to adding the top level display here but I don't think that it can be alone, I think it should be added in addition to the structure we've already outlined.

TODO:

  • Switch --json to --output=json
  • Fix the success output for subgraph publish
  • Add ChangeSummary::with_diff, FieldChanges::with_diff, and TypeChanges::with_diff
  • Move all composition errors in data to the top-level error
  • Create structured error output for operation checks
  • Switch error.composition_errors to error.details.build_errors
  • Add new error.details.build_errors[].type: composition
  • Add a top level json_version: 1 so that in the future we can make breaking changes if need be
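
As a sketch of how a consumer might use that version field (the "1.beta" value matches the samples above; the exact guard is an assumption, not something this PR prescribes):

```python
# Sketch: check json_version before touching the rest of the structure, so a
# future breaking change fails loudly instead of silently misparsing.
import json

EXPECTED_JSON_VERSION = "1.beta"

def parse_rover_output(raw: str) -> dict:
    payload = json.loads(raw)
    version = payload.get("json_version")
    if version != EXPECTED_JSON_VERSION:
        raise RuntimeError(f"unsupported rover --json version: {version}")
    return payload
```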

@EverlastingBugstopper (Contributor, Author) commented:

Making good progress! I'll finish up the next TODOs on Monday.

@prasek commented Jul 23, 2021

Capturing errors in a simple/generic way is definitely a goal here, and having error codes in structured output should help quite a bit with this! { "data": null, "error": { "message": "bad", "code": "E029" } }

Agree this is perfect.

We already have both error codes and error messages in structured output, so we should be OK there

Agree. Think we're good on the error codes.

I think Phil was talking about a top level data.display[] that would be an array of all of the messages we log, and then having that be the structured API to start until we nail down the types.

I think it'd be fine to have both, but if we introduce just display and no structure, then I think we'd run into pretty much the same thing that happened to Node with people parsing those strings to get at the underlying structure (defeating the purpose of structured output).

  • For error:
    • error code is likely sufficient to avoid parsing error strings in most cases, so I think we're good there.
  • For success display results:
    • would expect the vast majority of users to just slightly re-structure them and use each message as-is

A supporting use-case here is that we've heard from customers who actually just want to re-structure the output we're providing in our CLIs to ship it from CI workflows into other tools — e.g., Slack / ChatOps / logging pipelines, etc. I think of this as: humans have familiarity with particular opinionated messaging that they see when running locally and they enjoy having some of that mirrored to automated workflows.

Agree. Supporting a simple and consistent UX to do this basic use case seems like the most important thing.

Is this something that you think they'd need --output=json for if they just want regular logs? I think if that's the use case they want, they shouldn't use structured output at all. They should only be using structured output if they need access to, well, structured data!

All --json output is structured data; it's more a question of what structure we provide:

  • display - simple presentation model

    • supports basic re-structuring use case above: CI pipelines, Slack, ChatOps, Logging, custom CLI wrappers, etc.
    • simple and consistent UX both for:
      • authoring the integration: simple message and details model for both errors and display.
        • errors adds code and type, to help avoid parsing error strings.
        • pretty simple to get a basic integration working
      • consuming the message output of integrations
        • consistent messages across rover, CI pipeline output, and all integrations
        • google search on error/display messages might yield better results for improved troubleshooting
  • data - full/raw data model

    • essentially what you'd get by going to the public API directly
    • more fields/logic to get an integration working?
    • only needed for very deep/detailed integrations?
    • best to add this when the public API is GA?

I'm not 100% opposed to adding the top level display here but I don't think that it can be alone, I think it should be added in addition to the structure we've already outlined.

Agree we'll want both

  • display for simple integrations
  • data for deeper integrations that need the raw data model.

For the beta I suggest we have both data and display to get feedback and learn more about usage.

@abernix (Member) commented Jul 26, 2021

We already have both error codes and error messages in structured output, so we should be OK there

Agree. Think we're good on the error codes.

Yeah, sorry, didn't mean to imply that we weren't! We are good on this front!

but if we introduce just display and no structure, then I think we'd run into pretty much the same thing that happened to Node with people parsing those strings to get at the underlying structure (defeating the purpose of structured output).

I'm not 100% opposed to adding the top level display here but I don't think that it can be alone, I think it should be added in addition to the structure we've already outlined.

Agree on both quotes.

Is this something that you think they'd need --output=json for if they just want regular logs? I think if that's the use case they want, they shouldn't use structured output at all

I think providing structure for display elements is still better than necessitating someone parse out our own display contents.

Agree we'll want both

  • display for simple integrations
  • data for deeper integrations that need the raw data model.

I wouldn't over-index on which one is the simple/deeper approach. Many folks probably would never use the display (e.g., for Dashboards and Slack integrations), but both structures are nice to have.


It sounds like we want both. We could introduce display later and declare it out of scope for now, so long as we make room for it in the information architecture (IA).

@EverlastingBugstopper (Contributor, Author) commented:

Consider me convinced! I don't think it should be too much extra work to get in the display structure and will probably lead to some cleaner code regardless. I'll try to get that in and we can cut a beta this week.

I've also now mapped everything so that if error == null, then success == true, so the argument for removing data.success just got quite a bit stronger @prasek

@EverlastingBugstopper (Contributor, Author) commented:

After some investigation, I've uncovered some kinda gnarly design questions with the display proposal. I'm going to open another issue where we can discuss those changes.

@EverlastingBugstopper (Contributor, Author) commented:

All TODOs have been addressed and the top level comment has been updated to reflect the changes to the JSON structure since the initial PR was opened.

@EverlastingBugstopper EverlastingBugstopper merged commit 5138bcd into avery/refactor-subgraph-check Jul 26, 2021
@EverlastingBugstopper EverlastingBugstopper deleted the avery/structured-output branch July 26, 2021 19:49
EverlastingBugstopper added a commit that referenced this pull request Jul 26, 2021
* chore: refactor subgraph check

This commit does a lot of heavy lifting for the pending rebase.

1) Creates new input and output types in rover-client for subgraph check
2) Moves GitContext out of rover::utils to rover-client::utils
3) Creates error code E029 for composition errors
4) Styles cloud-composition errors like harmonizer

* chore: refactor subgraph fetch (#575)

* chore: refactor subgraph publish (#630)

* chore: refactor config whoami (#633)

* chore: refactor subgraph delete (#639)

* chore: refactor subgraph list (#640)

* chore: refactor subgraph introspect (#641)

* chore: refactor graph introspect (#643)

* chore: refactor release update checker (#646)

* chore: begin adding shared types and consolidate check operations (#652)

* chore: move GraphRef to rover-client (#664)

* chore: refactor the rest of rover-client (#675)

* chore: do not re-export queries

* chore: finish wiring OperationCheck error

* chore: adds graphql linter (#677)

* fix: graph_ref -> graphref

* feat: structured output (#676)