
Xkeeper support #5828

Draft · wants to merge 10 commits into base: main
Conversation

LayneHaber
Contributor

Description

  • Allows our relayer to operate as a keeper
  • Assumes the relayer is using the Keep3r Relay to execute the call data

If this implementation is acceptable, the prove-and-process call should be updated to attempt the keeper path first.

Associated test vault can be found here.
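As a rough illustration of the flow described above, here is a minimal TypeScript/ethers sketch of routing a call through an xkeeper-style Keep3r relay. The ABI fragment, addresses, and function names are assumptions for illustration only, not the interface actually wired up in this PR.

import { Contract, Wallet, providers } from "ethers";

// Minimal, assumed ABI for an xkeeper-style Keep3r relay; the real interface
// should come from the deployed relay contract, not from this sketch.
const KEEPER_RELAY_ABI = [
  "function exec(address vault, (address target, bytes data, uint256 value)[] execData)",
];

const submitViaKeeperRelay = async (
  relayAddress: string, // Keep3r relay contract address (assumed)
  vaultAddress: string, // automation vault funding our relayer (assumed)
  target: string, // contract the relayer would otherwise call directly
  data: string, // encoded calldata, e.g. for prove and process
  signer: Wallet,
): Promise<providers.TransactionReceipt> => {
  const relay = new Contract(relayAddress, KEEPER_RELAY_ABI, signer);
  // Route the call through the relay so gas is reimbursed by the vault
  // instead of being paid from the relayer's hot wallet.
  const tx = await relay.exec(vaultAddress, [{ target, data, value: 0 }]);
  return tx.wait();
};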

Type of change

  • Docs change / dependency upgrade
  • Configuration / tooling changes
  • Refactoring
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Requires changes in customer code

High-level change(s) description - from the user's perspective

Related Issue(s)

Fixes

Related pull request(s)

Collaborator

@preethamr preethamr left a comment


This is neat.

I thought about this a bit; my main concern is that, in order to get significant savings, this (Keep3r) has to be the default. If we do that, then we are expecting our relayer infra to be able to handle production load on the tasks/chains for which we use it.

We have seen our relayer handle 2-3 requests per minute; beyond that we might see reliability issues, especially when there is a burst of requests from users, or when our messaging layer hits the 2-hour mark and we fire off PnP for multiple chains at once.

I suggest we roll this out selectively on mainnet and LH PnP first, and then gradually figure out how well it scales.


// call keeperRelay.exec with the transaction data
// TODO: batch exec calls from cache that are keeper flagged
// TODO: fallthrough if keeper sending fails. likely in cache
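The TODOs above describe batching keeper-flagged work out of the cache. A minimal sketch of that idea follows; the CachedTask shape and the exec callback are hypothetical, since the real cache schema and relay call are not shown here.

// Hypothetical cache task shape; the real schema lives in the relayer cache.
type CachedTask = { target: string; data: string; keeper: boolean };

// Pull keeper-flagged tasks out of the cache and batch them into one exec
// call, per the TODO above; everything else stays on the existing send path.
const execKeeperBatch = async (
  tasks: CachedTask[],
  exec: (calls: { target: string; data: string; value: number }[]) => Promise<void>,
): Promise<CachedTask[]> => {
  const keeperTasks = tasks.filter((t) => t.keeper);
  if (keeperTasks.length > 0) {
    await exec(keeperTasks.map((t) => ({ target: t.target, data: t.data, value: 0 })));
  }
  // Hand the non-keeper tasks back for the normal relayer path.
  return tasks.filter((t) => !t.keeper);
};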
Collaborator


We could handle fallback at the caller level (sendWithRelayerWithBackup) so that it tries in this order:
1. Connext relayer using the Keep3r network (set up a Keep3r API endpoint/flag on the relayer HTTP server)
2. Gelato
3. Connext relayer as a hot wallet
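A minimal sketch of that ordering, assuming the three send functions are placeholders for the existing Keep3r, Gelato, and hot-wallet paths rather than real exports:

type Send = (chainId: number, target: string, data: string) => Promise<string>; // returns a task/tx id

const sendWithRelayerWithBackup = async (
  chainId: number,
  target: string,
  data: string,
  sendViaKeep3rRelayer: Send, // 1. Connext relayer through the Keep3r network
  sendViaGelato: Send, // 2. Gelato
  sendViaHotWallet: Send, // 3. Connext relayer as a plain hot wallet
  logger: { warn: (msg: string, err: unknown) => void },
): Promise<string> => {
  for (const send of [sendViaKeep3rRelayer, sendViaGelato, sendViaHotWallet]) {
    try {
      return await send(chainId, target, data);
    } catch (err) {
      logger.warn("send attempt failed, falling through to the next option", err);
    }
  }
  throw new Error("all relayer paths failed");
};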

Contributor Author


that's the idea

we can prioritize keeper for prove and process, which is where we will see the most gains

@LayneHaber
Contributor Author

> We have seen our relayer handle 2-3 requests per minute; beyond that we might see reliability issues, especially when there is a burst of requests from users, or when our messaging layer hits the 2-hour mark and we fire off PnP for multiple chains at once.

We can add a cache-full error and cap the number of active tasks if needed, but I agree this is a risk.
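One way that cap could look, as a sketch only; the error name, the limit, and the enqueue helper are made up here and would really come from the relayer config and cache implementation:

// Illustrative backpressure guard; not part of the existing cache API.
class CacheFullError extends Error {
  constructor(limit: number) {
    super(`keeper task cache is full (limit: ${limit})`);
  }
}

const MAX_ACTIVE_KEEPER_TASKS = 100; // assumed cap, would be config-driven

const enqueueKeeperTask = (active: Set<string>, taskId: string): void => {
  if (active.size >= MAX_ACTIVE_KEEPER_TASKS) {
    // Surface backpressure so the caller can fall back to Gelato or the
    // hot wallet instead of piling more work onto the keeper path.
    throw new CacheFullError(MAX_ACTIVE_KEEPER_TASKS);
  }
  active.add(taskId);
};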
