Using IntrospectAndCompose with High Availability Micro Services #1784
Hi @Borduhh 👋 re: In general, once a given subgraph (fleet) is available to serve an updated schema, it's published to the schema registry. See the Federation docs for details.
Expansion of issue #349 (comment)
We are trying to use Apollo Federation with AWS services (e.g., AppSync) and have the following constraints, which might apply to a lot of other companies.
IAM Support for Apollo Studio
We cannot use Apollo Studio because all of our services are created and authenticated using AWS IAM. It would be nice if we could give Apollo Studio an Access Key ID and Secret Access Key from an IAM role that would be used to authenticate all of our requests. Right now we do that manually like so:
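A minimal sketch of the kind of manual signing this involves, assuming requests are signed with AWS Signature Version 4. The key-derivation chain is AWS's documented algorithm, but the helper and function names here are illustrative, not from Apollo's or AWS's APIs:

```typescript
import { createHmac } from "node:crypto";

// Illustrative sketch only: derive a SigV4 signing key and sign a string-to-sign.
// In practice the resulting signature would be attached as an Authorization
// header on each outgoing request (e.g., in a RemoteGraphQLDataSource's
// willSendRequest hook), or delegated to an SDK/signing library.
function hmac(key: Buffer | string, data: string): Buffer {
  return createHmac("sha256", key).update(data).digest();
}

// Standard SigV4 signing-key derivation: date -> region -> service -> "aws4_request".
function signingKey(secret: string, date: string, region: string, service: string): Buffer {
  const kDate = hmac(`AWS4${secret}`, date);
  const kRegion = hmac(kDate, region);
  const kService = hmac(kRegion, service);
  return hmac(kService, "aws4_request");
}

// Produce the final hex signature over a prepared string-to-sign.
function signRequest(
  secret: string,
  date: string,
  region: string,
  service: string,
  stringToSign: string,
): string {
  return hmac(signingKey(secret, date, region, service), stringToSign).toString("hex");
}
```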
IntrospectAndCompose is all or nothing

Right now, the IntrospectAndCompose.initialize() method fails completely if even one service has a network timeout, which makes it almost impossible to use in production. Each service we add to our gateway increases the likelihood of a network error that cancels the entire process, inevitably causing downtime or CI/CD failures.

To solve this, it would be rather easy to have loadServicesFromRemoteEndpoint() process schema fetching on a per-service basis. This could be made very efficient by wrapping dataSource.process() with a retry counter and retrying 5xx errors. That way the user can choose how many times to retry before IntrospectAndCompose fails altogether and rolls back.

Right now we manually add retries around the entirety of IntrospectAndCompose, but as we add more services this becomes really inefficient (e.g., if we have 150 services and service 148 fails, we still need to re-fetch services 1 through 147 on the next attempt).
Central Caching Schema Files

This isn't something that necessarily needs to be done by Apollo, but it is something that is required for microservices. Our team currently uses S3 to cache a schema file, since in our case we can be relatively confident that it will not change without the services being redeployed. The first (and sometimes second) ECS container that comes online builds its own schema using IntrospectAndCompose and then stores the cached file with a unique per-deployment ID that other services can use, when they scale, to fetch the cached schema.
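A sketch of that cache-or-compose flow, with S3 and IntrospectAndCompose abstracted behind injected functions. The SchemaCache interface and all names here are illustrative, not Apollo's or AWS's APIs:

```typescript
// Hypothetical sketch of a per-deployment supergraph schema cache.
interface SchemaCache {
  load(key: string): Promise<string | null>;      // e.g. S3 GetObject
  store(key: string, sdl: string): Promise<void>; // e.g. S3 PutObject
}

async function getSupergraphSdl(
  deploymentId: string,
  cache: SchemaCache,
  compose: () => Promise<string>, // e.g. IntrospectAndCompose over all subgraphs
): Promise<string> {
  const key = `supergraph/${deploymentId}.graphql`;

  // Later containers in the same deployment reuse the first container's work.
  const cached = await cache.load(key);
  if (cached !== null) return cached;

  // Only the first container(s) pay the full composition cost.
  const sdl = await compose();
  await cache.store(key, sdl);
  return sdl;
}
```

Keying the cache on a per-deployment ID means a redeploy naturally invalidates the cached schema, matching the assumption that the schema only changes when services are redeployed.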