Superfluid: Reduce epoch time #1083

Closed · Tracked by #1030
ValarDragon opened this issue Mar 13, 2022 · 9 comments

Comments

@ValarDragon (Member) commented Mar 13, 2022

Joe Abbey very helpfully got pprof results of the post-superfluid epoch: https://twitter.com/JoeAbbey/status/1503072747349499905?s=20&t=fLB87nwcpX7kFigVoR9O1A (We had all been surprised that it wasn't sub-one-minute.)

Turns out the answer was that we're doing too many calls to GetSyntheticLock! We call this function right here: https://github.com/osmosis-labs/osmosis/blob/main/x/incentives/keeper/distribute.go#L235

What's happening is that in the distribution code for superfluid delegations to Atom/Osmo, we do one distribute for every superfluid delegated-to validator. Right now we do one distribute per validator / superfluid denom pair. Each distribute iterates over all L lockups of sufficient length in the Atom/Osmo pool and checks whether each has a superfluid delegation to the respective validator. This causes L * (number of validators) calls here.
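To make the amplification concrete, here is a minimal, self-contained sketch of that access pattern. The types and names (`Lock`, `getSyntheticLock`) are placeholders, not the real x/lockup or x/incentives code; it only illustrates why the call count grows as L times the number of validators.

```go
package main

import "fmt"

// Placeholder lock type; NOT the actual x/lockup type.
type Lock struct {
	ID      uint64
	ValAddr string // validator this lock is superfluid-delegated to, if any
}

// getSyntheticLock stands in for the per-lock GetSyntheticLock lookup that
// the pprof profile showed dominating the epoch.
func getSyntheticLock(lock Lock, valAddr string) bool {
	return lock.ValAddr == valAddr
}

func main() {
	const numLocks = 1000     // L: all sufficiently long Atom/Osmo lockups
	const numValidators = 100 // one distribute per superfluid-delegated-to validator

	locks := make([]Lock, numLocks)
	calls := 0
	for v := 0; v < numValidators; v++ {
		valAddr := fmt.Sprintf("valoper%d", v)
		// Each per-validator distribute scans every lockup and checks whether
		// it has a superfluid delegation to that validator.
		for _, lock := range locks {
			calls++
			if getSyntheticLock(lock, valAddr) {
				// distribute rewards to this lock
			}
		}
	}
	fmt.Println("GetSyntheticLock-style calls:", calls) // L * numValidators = 100000
}
```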

We actually index lockups by synthetic denom as well. We previously reverted a change that used that index, due to some uncertainty around its correctness before the feature shipped; after the refactors we did to the module, I'm now pretty sure it's correct. This would reduce the number of data iterations to the ideal of # superfluid locks.

To fix this, we should get all locks keyed by the synthetic denom that are at least the unbonding-period duration long. There are a couple of ways to do this, but they basically all involve changing the calls here: https://github.com/osmosis-labs/osmosis/blob/main/x/incentives/keeper/distribute.go#L370-L384 . The underlying locks are keyed the same way as 'normal' locks, so you can do k.lk.GetLocksLongerThanDurationDenom(ctx, distrTo.Denom, distrTo.Duration) with the synthetic denom.
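As a rough illustration of what the synthetic-denom index buys us, here is a small self-contained sketch. The map stands in for the keeper's lookup by synthetic denom (GetLocksLongerThanDurationDenom called with the synthetic denom), and the denom format shown is illustrative only; iteration becomes proportional to the number of superfluid locks rather than to L * number of validators.

```go
package main

import "fmt"

func main() {
	// Hypothetical index from synthetic denom to lock IDs; in the real module
	// this lookup would be GetLocksLongerThanDurationDenom called with the
	// synthetic denom. The denom format here is illustrative only.
	locksBySyntheticDenom := map[string][]uint64{
		"gamm/pool/1/superbonding/valoperA": {1, 4},
		"gamm/pool/1/superbonding/valoperB": {2},
	}
	validators := []string{"valoperA", "valoperB"}

	iterations := 0
	for _, val := range validators {
		denom := "gamm/pool/1/superbonding/" + val
		for _, lockID := range locksBySyntheticDenom[denom] {
			_ = lockID // distribute rewards to this lock directly
			iterations++
		}
	}
	// Iterations now equal the number of superfluid locks (3 here),
	// independent of how many plain lockups exist in the pool.
	fmt.Println("iterations:", iterations)
}
```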

@ValarDragon (Member Author)

I've made a branch dev/v7-epoch-speedup, and am testing it on a full node on mainnet to see if it's still compatible.

I believe this patch should fix the immediate amplification overhead here.

cc @joeabbey

@ValarDragon (Member Author)

Forgot to update this yesterday: the first attempt in that branch was not state compatible. TBD why it wasn't!

@daniel-farina added the T:task ⚙️ (A task belongs to a story) label on Mar 17, 2022
@mattverse (Member) commented Mar 17, 2022

I'm surprised that this one-line fix is not state compatible. How did you find out that it wasn't?

@ValarDragon (Member Author)

@hleb-albau and I tried it on a node, and got a state hash error :/

@mattverse (Member) commented Mar 18, 2022

Tried changing the binary from v7.x to dev/v7-epoch-speedup on a testnet with a default genesis, and it seems to build blocks. Should I try it off mainnet?

@hleb-albau (Contributor)

It did work, but did not survive epoch blocks.

@mattverse (Member)

I see, does that mean the fix needs to reduce epoch time further?

@ValarDragon (Member Author)

It's not state compatible with the existing binary, meaning that a v7.0.4 node and a node running this patch will not produce the same app hash.

@ValarDragon (Member Author)

Closed by #2214

Repository owner moved this from Todo 🕒 to Done ✅ in Osmosis Chain Development Dec 8, 2022