feat: allow forking a forked network #2349

Open

antazoey wants to merge 5 commits into main
Conversation

@antazoey (Member) commented Oct 27, 2024

What I did

Instead of a mainnet-fork-fork network, you just get another mainnet-fork process on a different port.

How I did it

Fixed the network name and used an auto host, so each new fork process gets its own port.
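As a rough illustration (not necessarily how Ape or Foundry actually pick the port), an "auto host" can be implemented by binding to port 0 and letting the OS hand back a free port:

import socket

def get_free_port() -> int:
    # Bind to port 0 so the OS assigns an unused TCP port (sketch only).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind(("127.0.0.1", 0))
        return sock.getsockname()[1]

# Hypothetical usage: each new fork process launches on its own free port.
print(f"http://127.0.0.1:{get_free_port()}")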

How to verify it

(ape) ➜ ape git:(feat/allow-forking-a-fork) ✗ ape console --network ethereum:sepolia

In [1]: with networks.fork("foundry") as provider:
   ...:     print(provider.uri)
   ...:     with networks.fork("foundry") as prov2:
   ...:         print(prov2.uri)
   ...: 
INFO:     Starting 'anvil' process.
INFO:     Connecting to existing Geth node at https://eth-sepolia.g.alchemy.com/[hidden].
http://127.0.0.1:8545
INFO:     Starting 'anvil' process.
http://127.0.0.1:52015
INFO:     Stopping 'anvil' process.
INFO:     Stopping 'anvil' process.

Checklist

  • All changes are completed
  • New test cases have been added
  • Documentation has been updated

@bitwise-constructs (Contributor) left a comment

Fork yeah

fubuloubu previously approved these changes Oct 27, 2024

@fubuloubu (Member) left a comment
Maintainer note: Side-steps but does not resolve #2348 (prevents the bug where the un-isolated network context tries to fork the fork network, but does not properly isolate fork contexts from each other)

@antazoey (Member, Author)

I think for this to really work (unrelated to #2348), it would have to spin up another fork process using auto-port or something; not sure how to make that feel clean in the UX yet.

@antazoey antazoey marked this pull request as draft October 27, 2024 17:24
@fubuloubu (Member)

I think for this to really work (unrelated to #2348), it would have to spin up another fork process using auto-port or something; not sure how to make that feel clean in the UX yet.

Yeah I think at the end of the day, in an async ape this code would need some deeper thought (and probably changes to the provider plugin interface)

@fubuloubu (Member)

Confirmed this is working

fubuloubu added a commit to ApeWorX/silverback that referenced this pull request Oct 28, 2024
@antazoey (Member, Author)

Confirmed this is working

I think you may run into issues on disconnect

@fubuloubu (Member) commented Oct 29, 2024

Confirmed this is working

I think you may run into issues on disconnect

Hmm, it is stalling weirdly

Check out this PR, ApeWorX/silverback#157

I got it working somewhat (poorly, but it works-ish):

silverback test -k backtest

@antazoey (Member, Author)

Hmm, it is stalling weirdly

One of them disconnecting is probably affecting the other; that is why I think we need this to work with auto-port somehow, so you can maintain all of your different forks more separately.

@fubuloubu (Member)

Hmm, it is stalling weirdly

One of them disconnecting is probably affecting the other; that is why I think we need this to work with auto-port somehow, so you can maintain all of your different forks more separately.

It's a good case study. I don't need this feature for now, but it got me much further on my path towards backtesting Silverback bots. The eventual plan is to multiprocess many forks across many blocks in parallel for speed. (I hope one day we get a forkable pure-Python provider so it doesn't require double the processes.)

It's also cool how we can see ape's query system becoming useful for this.

Anyways, a dog can dream.
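A rough sketch of that parallel-forks idea, assuming each worker connects upstream on its own and then forks; the provider choice ("alchemy"), block ranges, and the backtest body are placeholders, not Silverback's actual implementation:

from multiprocessing import Pool
from ape import networks

def backtest_range(block_range):
    start, stop = block_range
    # Each worker makes its own upstream connection ("alchemy" is just an example)
    # and then forks it, so the forks stay fully independent of each other.
    with networks.ethereum.sepolia.use_provider("alchemy"):
        with networks.fork("foundry") as provider:
            print(f"blocks {start}-{stop} running against {provider.uri}")
            # ... replay blocks / run the bot here (omitted) ...

if __name__ == "__main__":
    ranges = [(0, 100), (100, 200), (200, 300)]
    with Pool(processes=3) as pool:
        pool.map(backtest_range, ranges)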

@antazoey antazoey marked this pull request as ready for review October 31, 2024 17:57
@antazoey (Member, Author) commented Oct 31, 2024

@fubuloubu Wondering if this will fix it for you now and what you think

INFO:     Connecting to existing Geth node at https://eth-sepolia.g.alchemy.com/[hidden].
http://127.0.0.1:8545
INFO:     Starting 'anvil' process.
http://127.0.0.1:52015
INFO:     Stopping 'anvil' process.
INFO:     Stopping 'anvil' process.

You can see the second fork automatically gets a different port than the first one.

fubuloubu previously approved these changes Oct 31, 2024

@fubuloubu (Member) left a comment

Nice, this looks good. It doesn't require a plugin change?

@antazoey (Member, Author)

Nice, this looks good. It doesn't require a plugin change?

No, both Foundry and Hardhat should work this way, though I only tested Foundry because I am having issues with NPM right now.

@fubuloubu (Member)

Nice, this looks good. It doesn't require a plugin change?

No, both Foundry and Hardhat should work this way, though I only tested Foundry because I am having issues with NPM right now.

Sick

Tenderly was the only other fork provider, but it's not currently very maintained

I might try and fix that at some point, but not a blocker now

@antazoey (Member, Author)

Tenderly was the only other fork provider, but it's not currently very maintained

At worst, the host setting is just not handled and it will work like it did before the recent changes in this PR (won't do a fork-fork, but also won't really make a new network).

Is there a way to have multiple forks of the same network in Tenderly? Maybe we'd have to handle this in the API somewhere.

@fubuloubu (Member) commented Oct 31, 2024

Tenderly was the only other fork provider, but it's not currently very maintained

At worst, the host setting is just not handled and it will work like it did before the recent changes in this PR (won't do a fork-fork, but also won't really make a new network).

Is there a way to have multiple forks of the same network in Tenderly? Maybe we'd have to handle this in the API somewhere.

Anytime you started a new fork network context with the Tenderly provider, it would call the API to create the fork (returning a random UUID for it). When the network context exits, it would delete the fork by UUID.

While probably really expensive, this would work great for the backtesting feature because you can double the number of processes used in parallel to execute blocks in, and it is also way less heavy infra-wise on your local machine.
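For illustration only, the create-on-enter / delete-by-UUID lifecycle described above could be wrapped in a context manager along these lines; the base URL, endpoints, and payloads here are hypothetical, not Tenderly's actual API:

import requests
from contextlib import contextmanager

BASE_URL = "https://forks.example.com/api"  # hypothetical fork-provider API

@contextmanager
def remote_fork(network_id: str, api_key: str):
    # Create the fork on enter; the provider hands back a random UUID for it.
    headers = {"Authorization": f"Bearer {api_key}"}
    created = requests.post(
        f"{BASE_URL}/forks", json={"network_id": network_id}, headers=headers
    ).json()
    fork_id = created["uuid"]
    try:
        # Yield the RPC URI callers would point their provider at.
        yield f"{BASE_URL}/forks/{fork_id}/rpc"
    finally:
        # Delete the fork by UUID when the context exits.
        requests.delete(f"{BASE_URL}/forks/{fork_id}", headers=headers)

# Hypothetical usage: with remote_fork("sepolia", API_KEY) as uri, point a provider at `uri`.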

@antazoey (Member, Author)

Anytime you started a new fork network context with the Tenderly provider, it would call the API to create the fork (returning a random UUID for it). When the network context exits, it would delete the fork by UUID.

That may make testing multi-chain apps kinda difficult if you have to shift back and forth between networks. But that is how .fork() behaves anyway, so it would be fine here.

@fubuloubu (Member)

Anytime you started a new fork network context with the Tenderly provider, it would call the API to create the fork (returning a random UUID for it). When the network context exits, it would delete the fork by UUID.

That may make testing multi-chain apps kinda difficult if you have to shift back and forth between networks. But that is how .fork() behaves anyway, so it would be fine here.

Yeah, I think for the multichain use case, especially with async, there would have to be some additional functionality/state to allow it to work.

For this use case, I basically just need a ton of forks that are completely independent of each other
