Improve Coretime pricing #4360
Issue was discussed in person at the retreat, but I'll post some of my thoughts for transparency, plus some extra new insights.

Pricing

First of all, I strongly agree with the overall direction of the changes. In short, I believe that renewals are the sole provider of price stability for those who care about it, and that on-demand and bulk auctions can be fully governed by a free-market model (very wide lead-in and low base price**). This approach gives the best of both worlds:
Bulk auctions could, in my opinion, even function without a base price (or minimum price**) at all, but the concept of a base/target price is near-necessary for adjusting the renewals. Assuming that changes to the renewals are capped at some amount, that should grant enough stability. So in summary, I agree that the above changes alleviate most price manipulation or volatility concerns in bulk auctions.

Renewals

Because the bulk auctions dictate the target price, I think the topic of renewals is inseparable from the price adaptor discussions. The main difficulty here is how exactly we calculate the target price to be used for renewals, and, if it differs significantly from the current renewal price, by how much we allow it to change so as not to compromise the price stability goal. In general I believe the renewal price should only go up over time, but the maximum price growth should be capped. The cap needs to be chosen such that it is still meaningful considering the inflation targets.

Putting the target price in the middle of the lead-in might reinforce some short-term hoarding behaviour from parachains. Unsure if it is harmful, but worth discussing. As an example: assume you are a team running a parachain and will pay…

Overall this seems like a healthy mechanism of locking in lower prices for renewals (considering renewal prices don't drop by themselves). So it seems that renewals allow you to benefit from price swings down but be mostly immune to price swings up. Is it possible to pull off this manoeuvre without wasting coretime? Based on my description above, it seems like there is a period when the team owns 2x more coretime than they need, to facilitate the transition period.

Capping the renewal price ticks also has the potential adverse consequence that, if prices shoot up significantly, old renewals are an extremely, extremely good deal. Technically it can be possible that the target price for coretime is…

Although consider an extreme case, where all bulk regions are getting renewed. Then none of them end up in the bulk auctions, so the price doesn't change at all. The way the target price is defined right now, it only adapts if cores end up being sold in bulk auctions. This can be problematic, especially if we start with the bulk auction prices configured to something super small. Then hoarding it all ensures the price doesn't even change in the future, which seems unintended. Are renewals taken into account for price adapting?

Notes

* Approaches with an extremely high lead-in and a potentially exponential decay function for the price could be explored, but a 100x lead-in should already be plenty in 99% of practical cases.

** There are some good arguments for a minimum price in bulk, mainly due to more stress on validators, but it does not seem to be a strict requirement, just a nice thing to have.
Not true with my PR anymore: renewals also influence the price. If there are only renewals, the minimum price will be set so that the last renewal is in the middle of the leadin next time.

Buying a second core to go down in your renewal price should work, but only with overlap - thus you still pay a premium fee to get that guaranteed renewal, and you are giving up the core you already had back to the free market. Seems fine. If we find at some point that it is not, we might want to restrict renewals to only one core per para, or at least make a second core more expensive, e.g. if you already have a guaranteed renewal, a second core costs you at least as much.

Calculation of the renewal price: for leases it is the sellout price; for normal renewals it is determined by the position on the price curve, just as for normal purchases, except that it is limited by a maximum increase relative to the last renewal. -> You can only benefit from lower market prices (not assuming second-core tricks) by competing with normal sales in the lead-in.
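A minimal sketch of that rule (hypothetical names and cap parameter, not the broker pallet's actual implementation): the renewal pays the current curve price like any purchase, but never more than a capped increase over the last renewal.

```rust
/// Sketch of the capped renewal rule described above (hypothetical,
/// not the broker pallet's actual code).
fn renewal_price(curve_price: u128, last_renewal_price: u128, max_increase_percent: u128) -> u128 {
    // Cap: the last renewal price plus the allowed percentage increase.
    let cap = last_renewal_price + last_renewal_price * max_increase_percent / 100;
    // Renewers benefit from market prices below the cap, but are never
    // charged more than the capped increase.
    curve_price.min(cap)
}
```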
I see... Well, then maybe my phrasing was not correct, but in fact the situation you are describing is virtually identical. If only renewals are present, then the price does not adapt. Then, no matter how many other people want to buy coretime or what the actual demand is, the price stays fixed forever. Seems wrong to me.
That is a fair cost, I guess. They can temporarily use it for elastic scaling. I assume this route will often be pursued by parachain teams looking to lock in lower prices, and that Lastic and others will implement ways to monitor the market for those opportunities.
What I was missing is this bit of logic here: https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/broker/src/dispatchable_impls.rs#L169 which calculates future renewal costs 👍

Actually, there's one element I wasn't sure about. Are renewals submitted during the lead-in phase, or is there a separate phase to submit those? AFAIK there is no separate phase, but if the renewal is submitted in the lead-in, it is simply price-capped even if submitted early (when the price was high). Is that correct?
There is the interlude before the lead-in. This is the only time when you get your guaranteed renewal and thus also a guaranteed price increase. If you wait for the leadin, you are competing against others. Notice how this line is missing for renewals.
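For orientation, a rough sketch of the sale timeline as described in this thread (illustrative names only, not the pallet's actual types):

```rust
/// Illustrative phases of a bulk coretime sale, as described above.
/// These names are for exposition, not the broker pallet's types.
enum SalePhase {
    /// Before the leadin: existing tenants can exercise their guaranteed,
    /// price-capped renewal without competition.
    Interlude,
    /// Price decays from the leadin maximum down to the base price;
    /// renewals submitted here compete with ordinary purchases.
    Leadin,
    /// After the leadin: remaining cores sell at the base price.
    Regular,
}
```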
This issue has been mentioned on Polkadot Forum. There might be relevant details there: https://forum.polkadot.network/t/initial-coretime-pricing/5187/31
This issue has been mentioned on Polkadot Forum. There might be relevant details there: https://forum.polkadot.network/t/agile-coretime-launch-status-report/8347/1
Fixes #4360

Also rename: AllowedRenewals -> PotentialRenewals, to avoid confusion of future readers. (An entry in `AllowedRenewals` is not enough to allow a renewal; the assignment also has to be complete, which is only checked afterwards.)

- [x] Does not work with renewals as is - fix.
- [x] More tests
- [x] PR docs

Edit 1: Relevant blog post: https://grillapp.net/12935/agile-coretime-pricing-explained-166522?ref=29715

---------

Co-authored-by: eskimor <[email protected]>
Co-authored-by: Dónal Murray <[email protected]>
Co-authored-by: command-bot <>
This issue has been mentioned on Polkadot Forum. There might be relevant details there: https://forum.polkadot.network/t/agile-coretime-launch-status-report/8347/2
Current Situation
At the moment the price controller of the broker pallet is configured around an optimal number of cores being sold and aims to adjust the base price to the optimal price.
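Roughly, such a controller behaves like the following caricature (my assumption of the mechanism, for illustration only; not the actual `AdaptPrice` implementation):

```rust
/// Caricature of a "sold vs. ideal" price controller, for illustration
/// only - not the broker pallet's actual `AdaptPrice` code.
fn adjust_base_price(base: f64, cores_sold: u32, ideal_cores_sold: u32) -> f64 {
    // Nudge the base price up when more cores than ideal were sold,
    // down when fewer, bounded so one sale cannot move it too far.
    let ratio = cores_sold as f64 / ideal_cores_sold.max(1) as f64;
    base * ratio.clamp(0.5, 2.0)
}
```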
I believe this approach is flawed for two reasons:
No stable base price
It operates under the assumption that "the" price can be found and is stable, as we can only adjust the base price over months to come (especially downwards). This is problematic, because:
Price Manipulation Amplification
This is the more important one. It can become very easy to manipulate prices for others with a relatively modest investment if the number of cores offered varies from one sale to the next. Let's assume the worst case: only a single core is offered in sale X, but 10 cores will be offered in sale X + 1. Now, if someone bought that one core at the highest possible price (begin of leadin), the next sale will have its base price at that price (multiplied by some factor of the `AdaptPrice` trait). Let's assume the lead-in factor was also ten: by only buying a single core, someone just managed to drive the price for at least 10 more cores to that 10x price. Assuming not all cores are sold in that X + 1 sale, the price will adjust slowly in the sale afterwards, but the price impact of that single sale will still be significant, even at X + 2. In other words, the current model amplifies market manipulation attacks, which has to be avoided.
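To make the numbers concrete, here is a toy calculation of the attack (just the amplification arithmetic, not the pallet's actual controller logic):

```rust
// Toy numbers for the attack described above; not the pallet's actual
// `AdaptPrice` logic, just the amplification arithmetic.
fn main() {
    let base = 1.0; // base price b in sale X (arbitrary units)
    let leadin_factor = 10.0;

    // Sale X: a single core, bought by the attacker at the leadin start.
    let attacker_cost = base * leadin_factor; // 10 * b

    // Old model: the next base price tracks that inflated sellout price.
    let next_base = attacker_cost; // 10 * b

    // Sale X + 1: ten cores are offered, all anchored at the new base.
    let honest_buyers_pay_at_least = next_base * 10.0; // 100 * b in total

    println!(
        "attacker spent {attacker_cost}, honest buyers now face at least \
         {honest_buyers_pay_at_least} in total"
    );
}
```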
Solution

I am proposing to adjust the model to one with a very wide price range that adapts quickly between sales.
In other words, we have:

- `b` … the base price (the minimum, reached at the end of the leadin)
- `t` … the target price, sitting in the middle of the leadin
- `u` … the upper price bound (the price at the start of the leadin)
Now with a total leadin factor of 100, we have:
With a base price of `b`, we will start the leadin at `100*b` and go down to `b`. Assuming `b` was set initially around the right order of magnitude, cores will sell at the right price even at the very first sale.

Now the base price will be adjusted for the next sale according to the following scheme:
`s` is the price at which the last core was sold, or the base price `b` if not all cores have been sold. If no core was offered, `s` will be undefined (`None`). The new base price `b_n` will be calculated such that the sell-out price of the former sale becomes the new target, i.e. `b_n = s / 10`, so that `t_n = 10 * b_n = s`. With this, the previous sell-out price will be the new "optimal/target" price, in the middle of the leadin range in the next sale. If no core was offered, obviously nothing is changed and `b_n = b`.
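Put as code, the adjustment scheme looks roughly like this (a sketch of the scheme as stated, not the actual pallet implementation):

```rust
/// Sketch of the proposed base-price adjustment; not the broker
/// pallet's actual code. `sellout_price` is `Some(s)` when at least
/// one core was offered (with `s = b` if not all cores sold) and
/// `None` when no core was offered at all.
fn next_base_price(base: u128, sellout_price: Option<u128>) -> u128 {
    match sellout_price {
        // Make the previous sell-out price the new target: t_n = 10 * b_n = s.
        Some(s) => s / 10,
        // No core offered: nothing changes.
        None => base,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn matches_the_examples_in_this_issue() {
        let b = 1_000;
        // Sold out at the leadin maximum (100 * b): new base is 10 * b.
        assert_eq!(next_base_price(b, Some(100 * b)), 10 * b);
        // Not all cores sold (s falls back to b): new base is b / 10.
        assert_eq!(next_base_price(b, Some(b)), b / 10);
        // No core offered: base stays unchanged.
        assert_eq!(next_base_price(b, None), b);
    }
}
```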
Conclusion
What have we gained from this? First, we have a huge range in which the price can be found, and it adjusts quickly if need be, without amplifying price manipulation.
Let's take our original example, where we sold out only one core at the maximum price, which would be `100*b`. Now the `b` of the next sale will be:

b_n = 10 * b

An attacker had to invest 10 times the target price, for at least one core, to drive the next base price to exactly the previous target price, which is 10x less than what the attacker paid. Hence there need to be at least 10 times more cores offered in the next sale for the attack to make any sense (break even), and even in that case people will just need to pay what was previously considered the target price, so nothing unreasonable.

Opposite example: cores are not getting sold (or not within the lead_in):
b_n = b/10

Now the new upper bound `u` is the previous target. So again, we stay reasonable with pricing even in edge cases.

Leadin curve
The lead_in curve should be adjusted so that the target price is reached in the middle of the curve, time-wise:
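One curve shape satisfying this (a sketch assuming a 100x leadin with the 10x target at the halfway point; not necessarily the shape that was ultimately implemented):

```rust
/// Piecewise-linear leadin factor: 100x at the start, 10x (the target)
/// exactly halfway through, 1x at the end. A sketch, not necessarily
/// the implemented curve. `when` is the elapsed fraction of the leadin,
/// in `0.0..=1.0`.
fn leadin_factor(when: f64) -> f64 {
    if when <= 0.5 {
        // First half: fall linearly from 100x to 10x.
        100.0 - 90.0 * (when / 0.5)
    } else {
        // Second half: fall linearly from 10x to 1x.
        10.0 - 9.0 * ((when - 0.5) / 0.5)
    }
}
```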