Xkeeper support #5828
base: main
Conversation
This is neat.
I thought about this a bit. My main concern: in order to get significant savings, this (Keep3r) has to be the default. If we do that, then we are expecting our relayer infra to handle production load on the tasks/chains for which we use it.
We have seen our relayer do 2-3 requests per minute; beyond that we might see reliability issues, especially when there is a burst of requests from users, or when our messaging layer hits the 2-hour mark and we fire off prove-and-process (PnP) for multiple chains at once.
Suggestion: we can selectively roll this out on mainnet and LH PnP, and then gradually figure out how well it scales.
// call keeperRelay.exec with the transaction data
// TODO: batch exec calls from cache that are keeper flagged
// TODO: fallthrough if keeper sending fails. likely in cache
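A rough sketch of how the two TODOs above could look, assuming a hypothetical cache/relay interface (CachedTask, getPendingTasks, keeperRelay.exec, and fallbackSend are illustrative names, not the actual Connext APIs):

```typescript
type CachedTask = { taskId: string; to: string; data: string; useKeeper: boolean };

interface KeeperRelay {
  // Assumed shape: submit a batch of calls in a single exec, return a tx hash.
  exec(calls: { to: string; data: string }[]): Promise<string>;
}

const flushKeeperTasks = async (
  cache: { getPendingTasks(): Promise<CachedTask[]> },
  keeperRelay: KeeperRelay,
  fallbackSend: (task: CachedTask) => Promise<string>,
): Promise<void> => {
  const pending = await cache.getPendingTasks();
  // Batch only the tasks flagged for the keeper path.
  const keeperTasks = pending.filter((t) => t.useKeeper);
  if (keeperTasks.length === 0) return;

  try {
    // Single exec call for the whole batch of keeper-flagged transactions.
    await keeperRelay.exec(keeperTasks.map(({ to, data }) => ({ to, data })));
  } catch (err) {
    // Fallthrough: if keeper sending fails, the tasks are still in the cache,
    // so retry them individually through the non-keeper path.
    for (const task of keeperTasks) {
      await fallbackSend(task);
    }
  }
};
```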
We could handle fallback at the caller level (sendWithRelayerWithBackup) so that it tries in this order:
1. Connext Relayer using the Keep3r network (set up a Keep3r API endpoint/flag on the relayer HTTP server)
2. Gelato
3. Connext Relayer as a hot wallet
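A minimal sketch of that ordering, assuming a generic Relayer interface; the send signature and relayer objects are illustrative, not the repo's actual types:

```typescript
type Relayer = {
  type: "Keep3r" | "Gelato" | "HotWallet";
  send(data: string): Promise<string>; // returns a task id / tx hash
};

const sendWithRelayerWithBackup = async (
  // Ordered by priority: Connext Relayer (Keep3r) -> Gelato -> Connext Relayer (hot wallet).
  relayers: Relayer[],
  data: string,
): Promise<{ taskId: string; relayerType: Relayer["type"] }> => {
  const errors: Error[] = [];
  for (const relayer of relayers) {
    try {
      const taskId = await relayer.send(data);
      return { taskId, relayerType: relayer.type };
    } catch (err) {
      // Record the failure and fall through to the next relayer in priority order.
      errors.push(err as Error);
    }
  }
  throw new Error(`All relayers failed: ${errors.map((e) => e.message).join("; ")}`);
};
```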
That's the idea.
We can prioritize the keeper for prove and process, which is where we will see the most gains.
We can add a cache-full error and cap the number of active tasks if needed, but I agree this is a risk.
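A small sketch of the cap + cache-full error idea, with an assumed cache interface and an arbitrary cap value:

```typescript
// Illustrative cap; the real value would be tuned to what the relayer infra can handle.
const MAX_ACTIVE_KEEPER_TASKS = 100;

class CacheFullError extends Error {
  constructor(active: number) {
    super(`Keeper task cache full: ${active} active tasks (cap ${MAX_ACTIVE_KEEPER_TASKS})`);
  }
}

const enqueueKeeperTask = async (
  cache: { countActiveTasks(): Promise<number>; pushTask(data: string): Promise<void> },
  data: string,
): Promise<void> => {
  const active = await cache.countActiveTasks();
  if (active >= MAX_ACTIVE_KEEPER_TASKS) {
    // Caller can catch this and fall back to the Gelato / hot-wallet path instead.
    throw new CacheFullError(active);
  }
  await cache.pushTask(data);
};
```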
Add keep3r relayer type
Description
If this implementation is okay, then the call for prove and process should be updated to attempt keeper first.
Associated test vault can be found here.
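An illustrative sketch of what attempting the keeper first for prove and process could look like; the RelayerType values and the sendProveAndProcess helper are assumptions for illustration, not the PR's actual code:

```typescript
enum RelayerType {
  Gelato = "Gelato",
  Connext = "Connext",
  Keep3r = "Keep3r",
}

const sendProveAndProcess = async (
  encodedData: string,
  send: (type: RelayerType, data: string) => Promise<string>,
): Promise<string> => {
  try {
    // Attempt the keeper relayer first, since prove and process is where the gas savings show up.
    return await send(RelayerType.Keep3r, encodedData);
  } catch {
    // Fall back to the existing relayer path if the keeper attempt fails.
    return await send(RelayerType.Gelato, encodedData);
  }
};
```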
Type of change
High-level change(s) description - from the user's perspective
Related Issue(s)
Fixes
Related pull request(s)