
F1 Distribution #750

Closed
wants to merge 5 commits into from

Conversation

mattverse
Member

@mattverse mattverse commented Jan 11, 2022

Description

This PR implements F1 distribution, mainly in the incentives module and the lockup module. The implementation also fixes a bug in the current distribution logic where unbonding locks would still receive rewards.

In the process of implementing F1 distribution, this PR introduces three new state entries in gauge.proto:

  1. CurrentReward
  • CurrentReward has a unique key for each denom + lock duration combination. If the current epoch is later than the last epoch in which the reward was processed, we increment the period entry in CurrentReward, reset CurrentReward, and move its previous contents to HistoricalReward.
  2. HistoricalReward
  • HistoricalReward's main purpose is to track an accumulated reward ratio per denom of a gauge, represented as CumulativeRewardRatio. This is used to calculate how much of a specific gauge reward a user is eligible to receive (e.g. rewardRatio * amountOfTokens = amountToBeDistributed). Each unique denom + lock duration + period combination is stored under its own key.
  3. PeriodLockReward
  • PeriodLockReward stores the reward to be claimed for every individual lock. The state structure also includes a string -> uint64 map that tracks which period each denom + lock duration combination is in. PeriodLockReward references HistoricalReward to calculate the reward for each lock.

The main implementation of F1 distribution is in distribute.go and rewards.go (where getters and setters for the new state entries are implemented).
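The core of the F1 scheme described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the struct and function names are assumptions, and the real implementation would use sdk.Dec rather than float64.

```go
package main

import "fmt"

// HistoricalReward mirrors the accumulated reward ratio described above: for
// each denom + lock duration + period, the sum over epochs of
// (epoch reward / total amount locked). float64 keeps this sketch
// self-contained; names here are illustrative, not the actual proto fields.
type HistoricalReward struct {
	CumulativeRewardRatio float64
}

// PendingReward computes what a single lock can claim under the F1 scheme:
// its amount times the growth of the cumulative ratio since the period in
// which the lock last claimed (rewardRatio * amountOfTokens in the text).
func PendingReward(lockAmount float64, atLastClaim, now HistoricalReward) float64 {
	return lockAmount * (now.CumulativeRewardRatio - atLastClaim.CumulativeRewardRatio)
}

func main() {
	// Epoch 1: 100 tokens distributed over 800 locked -> ratio += 0.125
	// Epoch 2: 100 tokens distributed over 800 locked -> ratio += 0.125
	atLastClaim := HistoricalReward{CumulativeRewardRatio: 0}
	now := HistoricalReward{CumulativeRewardRatio: 0.25}

	// A lock of 400 tokens present for both epochs claims 400 * 0.25 = 100.
	fmt.Println(PendingReward(400, atLastClaim, now))
}
```

The point of the accumulator is that claiming only needs the ratio at two periods, not an iteration over every epoch since the last claim.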

Additional points to be reviewed:

  • Do we need all the HistoricalRewards in the state machine? Or do we want to keep only the most recent n HistoricalRewards?
  • Do we want all of these events happening in txs? Are we currently avoiding emitting events?

For contributor use:

  • Targeted PR against correct branch (see CONTRIBUTING.md)
  • Updated relevant documentation (docs/) or specification (x/<module>/spec/)
  • Added a relevant changelog entry to the Unreleased section in CHANGELOG.md
  • Re-reviewed Files changed in the GitHub PR explorer

app/app.go
];
}

message PeriodLockReward {
Member

I don't understand just via reading the proto files what this struct is for.

Is it the return value when getting rewards for a lock?

Member Author

Correct, we store how much reward each individual lock is eligible for.

Member

@ValarDragon ValarDragon Jan 30, 2022

If it's per lock, why is it repeated LastEligibleEpochByDurationAndDenom last_eligible_epochs = 3; instead of just a single entry for last_claimed_epoch?

(A lock can only have one lockup duration right?)

Member Author
The reason will be displayed to describe this comment to others. Learn more.

Same answer as in the comment below

}

message GenesisReward {
Member

Oh interesting, this exports undistributed rewards to genesis. The Cosmos SDK's staking module auto-distributes the rewards at genesis export.

What do you think makes the overall code simpler?

Member Author

Do you mean adding distribution logic at export genesis?

Member

Yeah. (I'm also fine not doing it, just a question of overall simplicity)

Member

@UnityChaos UnityChaos left a comment

So my reading of this is that we're copying over the pattern from staking/distribution, but with epochs instead of blocks. Is this intended to be the final result? Or is this an intermediate solution, and we intend to eventually do block-level updates of CurrentReward for each gauge?

I suppose a middle ground is always possible where we keep it epoch based but progressively shorten the epoch length?

Also, I'm assuming the reason to make this initially (or always) epoch based is performance, as epoch = block would be highly costly, even if we aren't iterating all locks and are only iterating the gauges.

x/incentives/README.md
proto/osmosis/incentives/gauge.proto
proto/osmosis/incentives/gauge.proto
@ValarDragon ValarDragon mentioned this pull request Jan 21, 2022
@codecov-commenter

Codecov Report

Merging #750 (a0f6297) into main (6e7d32b) will decrease coverage by 1.37%.
The diff coverage is 12.02%.

Impacted file tree graph

@@            Coverage Diff             @@
##             main     #750      +/-   ##
==========================================
- Coverage   19.96%   18.58%   -1.38%     
==========================================
  Files         189      190       +1     
  Lines       24542    27871    +3329     
==========================================
+ Hits         4899     5180     +281     
- Misses      18788    21779    +2991     
- Partials      855      912      +57     
Impacted Files Coverage Δ
x/incentives/client/cli/query.go 0.00% <0.00%> (ø)
x/incentives/client/cli/tx.go 0.00% <0.00%> (ø)
x/incentives/handler.go 0.00% <0.00%> (ø)
x/incentives/keeper/msg_server.go 0.00% <0.00%> (ø)
x/incentives/keeper/utils.go 100.00% <ø> (ø)
x/incentives/types/keys.go 0.00% <ø> (ø)
x/incentives/types/msgs.go 59.67% <0.00%> (-30.57%) ⬇️
x/incentives/types/query.pb.go 0.78% <ø> (-0.05%) ⬇️
x/incentives/types/query.pb.gw.go 0.00% <0.00%> (ø)
x/lockup/keeper/iterator.go 77.27% <0.00%> (-9.17%) ⬇️
... and 12 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@mattverse
Member Author

So there has been a major change: I've deleted the whole period state in the proto struct. It turned out that we do not need to track periods separately, because locks' distribution only happens per epoch. With the proto changes, the logic for F1 distribution also became much simpler!

Comment on lines +78 to +79
// LastEligibleEpochByDurationAndDenom is a struct to store the latest epoch for each denom + lock duration combination
message LastEligibleEpochByDurationAndDenom {
Member

I don't a priori understand the purpose of this struct. (Why isn't the latest epoch for every denom + lock duration combo simply the latest?)

Member

or is this instead for when a user is claiming rewards, what the 'start' epoch is?

If so, is this indexed by user-address, rather than lock ID?

Member Author

Ah, this can be rather confusing. Even though a period lock can only be locked with a single lock duration, why would there be an array of denom + lock duration entries?
Lock duration here does not mean the duration of the lock itself; we keep an array of lock durations to track, for each lock duration, the epoch at which this lock was last eligible for rewards.

@@ -718,6 +718,8 @@ func (k Keeper) unlock(ctx sdk.Context, lock types.PeriodLock) error {
return err
}

k.hooks.OnTokenUnlocked(ctx, owner, lock.ID, lock.Coins, lock.Duration, lock.EndTime)
Member

@ValarDragon ValarDragon Jan 30, 2022

Is there a reason behind moving the hook? (I don't really have an opinion, just curious)

Also perhaps worth us coming back and renaming the hook to AfterTokenUnlocked or BeforeTokenUnlocked; the On prefix has weird semantics.

Member Author

So there's logic added to the hooks that uses the lock reference to update PeriodLockReward before we delete the lock!
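A minimal sketch of why the hook has to fire before deletion (all names, types, and the reward formula here are placeholders, not the PR's code): once the lock is removed from state, the fields needed to settle its pending reward are gone.

```go
package main

import "fmt"

// PeriodLock is a stand-in for the real lockup type.
type PeriodLock struct {
	ID     uint64
	Amount int64
}

type keeper struct {
	locks   map[uint64]PeriodLock
	rewards map[uint64]int64 // settled reward per lock ID (PeriodLockReward stand-in)
}

// onTokenUnlocked settles the lock's pending reward using the still-existing
// lock reference; the formula is a placeholder.
func (k *keeper) onTokenUnlocked(lock PeriodLock) {
	k.rewards[lock.ID] += lock.Amount / 10
}

func (k *keeper) unlock(id uint64) {
	lock := k.locks[id]
	k.onTokenUnlocked(lock) // must run first: after delete, the lock state is gone
	delete(k.locks, id)
}

func main() {
	k := &keeper{
		locks:   map[uint64]PeriodLock{1: {ID: 1, Amount: 500}},
		rewards: map[uint64]int64{},
	}
	k.unlock(1)
	fmt.Println(k.rewards[1]) // settled before the lock was deleted
}
```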

Member Author

@mattverse mattverse Jan 31, 2022

For renaming the hooks, do you think it's better to do that in a separate PR or directly in this one?

@p0mvn
Member

p0mvn commented May 4, 2022

Hi team! What is the status of this?

@mattverse
Member Author

@p0mvn We have been delaying this since Jan, mainly because of timelines. We should discuss further about when we have the capacity to review this / push it forward.

@ValarDragon
Member

Going to close for now due to how stale this is. Let's revisit as we do more F1 work.

But realistically, I think what we should be doing is making a generalized accumulator system, and then switch out parts of the stack to use that as a back-end.

@ValarDragon ValarDragon closed this Jul 7, 2022