Feature Gate: round down compute-unit-price to the nearest 1_000 microlamports #31453
Comments
Hi @taozhu-chicago, I don't understand why this is a good solution:
Most people I know use 1-10,000 microlamports. 1,000,000+ sounds like a severe on-chain congestion case, or wallets/UIs just blindly setting a fixed value. Do you have more data? I think it's important to take a more differentiated look at users, by segment:
Most CLOB workloads depend on account state, e.g. if you order $1M of any asset on openbook you are going to match with more than one party, hence there's a dynamic component that is very hard to bound with a fixed number. Since there's no way for programs to gracefully terminate a transaction before it hits the compute limit, people are simply forced to over-estimate for the worst case. I don't see how this fee change improves the actual issue or gives developers the tools to actually estimate the CU usage of their transactions.
Hey @mschneider, good to have your input. The easy one first: this doesn't aim to fix that, but to encourage requesting an accurate, or at least more reasonable, CU limit. I'd guess it'll take multiple approaches and much work, but eventually the requested CU limit needs to be at a reasonable level.
You mean the priority fee should be charged at a different level? Also, for HFT in your example, don't you think 0.0001 SOL/week is too insignificant?
I think the right scale is somewhere in between! 1 microlamport in priority fee = 1e-5 SOL/week; probably 1e-2 is a good reference value.
Yeah, an in-between value does sound safe/comfortable, but replacing one fractional unit with another fractional unit doesn't sound like a final solution, tbh. I'd argue that 10 SOL/week has a better chance than 0.01 SOL/week of incentivizing devs to dig in and refine, even redesign, their transactions.
If the goal is to allow users to set accurate compute limits, I believe the best way would be to expose the CUs consumed in a sysvar. That way users can decide to stop consuming CUs before reaching the limit.
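As a rough illustration of that idea (hypothetical: no such sysvar or syscall existed when this was written; later SDK versions expose something similar as `sol_remaining_compute_units()`), an on-chain sketch might look like:

```rust
// Hypothetical sketch: stop optional work before the CU limit is hit.
// Assumes a syscall like `sol_remaining_compute_units()` is available to
// programs (it was not when this issue was filed).
use solana_program::compute_units::sol_remaining_compute_units;

struct Order; // placeholder for a real order type

fn place_single_order(_order: &Order) {
    // real order-placement logic would go here
}

// Assumed worst-case cost of placing one order.
const CU_RESERVE_PER_ORDER: u64 = 20_000;

fn place_orders(orders: &[Order]) -> usize {
    let mut placed = 0;
    for order in orders {
        // Skip the remaining orders gracefully instead of aborting the whole
        // transaction with a compute-budget-exceeded error.
        if sol_remaining_compute_units() < CU_RESERVE_PER_ORDER {
            break;
        }
        place_single_order(order);
        placed += 1;
    }
    placed
}
```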
By introducing way too large buckets you will prevent real price discovery. We saw the same with order and price lots on orderbooks; the ideal increment is usually 1 bps or 1e-4. The best example would be SOL/USD volume on Serum, which we fixed with the openbook launch.
Sample TX CU: 50,000
Hey, Jayant from Pyth here. I understand the desire to have people set CU limits accurately, but I don't think this proposal is going to solve that problem. As @mschneider points out, you have to set the CU limit to the maximum that the transaction could possibly consume. In Pyth's case, we have an expensive aggregation operation that needs to be performed once per slot. Most price update transactions use very few CUs, except for the one that happens to trigger aggregation. However, since we can't tell up front which transactions will trigger aggregation, we have no choice but to set the CU limit conservatively (as if every transaction triggers aggregation).
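A generic sketch of that pattern (illustrative only, not Pyth's actual code; the account layout and helper functions below are made up):

```rust
// Illustrative only (not Pyth's code): most calls take the cheap path, but
// whichever update happens to be the first in a new slot also pays for an
// expensive aggregation step, so every caller must budget CUs for it.
use solana_program::clock::Clock;
use solana_program::sysvar::Sysvar;

struct PriceAccount {
    last_aggregated_slot: u64,
    // ... price fields ...
}

fn update_price(acct: &mut PriceAccount, new_price: i64) {
    let clock = Clock::get().expect("clock sysvar");

    store_update(acct, new_price); // cheap: a few thousand CUs

    if clock.slot > acct.last_aggregated_slot {
        // Expensive: only the first update in a slot runs this, but the
        // sender cannot know in advance whether it will be that update.
        aggregate(acct);
        acct.last_aggregated_slot = clock.slot;
    }
}

fn store_update(_acct: &mut PriceAccount, _price: i64) {}
fn aggregate(_acct: &mut PriceAccount) {}
```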
Longer-term, would the functionality proposed in solana-foundation/solana-improvement-documents#16 help with that? The Pyth program could potentially rebate fees when aggregation is not triggered.
Hi guys, sorry to jump in like this. Maybe a quick intro before I comment: I do market-making/HFT on Solana, currently have the most volume on all exchanges on Solana, and most of the liquidity on all exchanges except one. I'd guess a good percentage of all TXs on Solana are directly/indirectly related to my system. It's a small ecosystem right now; a lot of the other MMs left, but I'm doing my best to keep supporting different protocols even when it's not profitable.

Based on my own usage, 1_000_000 increments is extremely high. You can see my TXs on openbook, for example; the priority fee is usually 50k-200k. IMO I'd increase to 1_000 increments (milli-lamports), but definitely not 1 full lamport, because then it won't be sustainable and I'll stop paying fees completely.

There's a good analogy for this on the orderbook, where the "tick size" (price increment) needs to be granular enough to allow market makers to quote efficiently (~0.5 bps of the price is good), but not so granular/small that it leads to abuse (you can bid 0.0001 bps higher than the previous best price to take the lead), which defeats the time-priority part of the orderbook.

I appreciate that this is a public/open debate; a big reason why I got into DeFi is the fact that there's no gatekeeping and everyone can take part. I hope my input was useful. Thank you
Hi @SpaceMonkeyForever, thanks for contributing!
Can you provide a few examples of your txs? I'm assuming by "priority fee" you mean a compute_unit_price of 50k-200k microlamports?
Yes, identifying the right increment is the key. However, keep in mind that one doesn't have to pay a prioritization fee for every transaction, only when congested;
Hey, thanks for replying! Yes, I understand it's only used when needed; 1_000_000 is still a large increment IMO. When doing HFT, this doesn't help much because I cannot wait for my TXs to start timing out before adding a fee. Market makers have to be quoting all the time, every second, or else you start losing money and stop providing liquidity (you can still trade as a taker, i.e. toxic flow, which will hurt the protocol even more than pulling out liquidity). IMHO I would rather keep it the same as it is now, or increase to 1_000 increments, or maybe 10_000 increments at most.
If you utilize the RPC (https://docs.solana.com/api/http#getrecentprioritizationfees) to determine when and how much prioritization fee might be needed before sending transactions, there is more room to further reduce overall fees. Would be great if you can explore possibilities of reducing the requested CU limits as well.
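For reference, a minimal Rust client sketch of that approach; the "take the max of recent fees" heuristic and the account list are placeholder assumptions, and fee values are in micro-lamports per CU:

```rust
// Sketch: query recent prioritization fees for the accounts a transaction
// will write-lock, then set compute-budget instructions accordingly.
use solana_client::rpc_client::RpcClient;
use solana_sdk::compute_budget::ComputeBudgetInstruction;
use solana_sdk::instruction::Instruction;
use solana_sdk::pubkey::Pubkey;

fn compute_budget_ixs(
    rpc: &RpcClient,
    writable_accounts: &[Pubkey],
    cu_limit: u32,
) -> Vec<Instruction> {
    // getRecentPrioritizationFees: per-slot fee (micro-lamports per CU) paid
    // by recently landed transactions that locked these accounts.
    let recent = rpc
        .get_recent_prioritization_fees(writable_accounts)
        .unwrap_or_default();

    // Illustrative heuristic: bid the max of recent fees.
    let cu_price = recent
        .iter()
        .map(|f| f.prioritization_fee)
        .max()
        .unwrap_or(0); // no contention observed: pay no priority fee

    vec![
        ComputeBudgetInstruction::set_compute_unit_limit(cu_limit),
        ComputeBudgetInstruction::set_compute_unit_price(cu_price),
    ]
}
```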
@taozhu-chicago Right, I know about the RPC endpoint; it's useful for UI apps for sure. I think you are focused on the general use case, while I'm talking about a market-maker use case. The best I can do is estimate what priority fee might be needed. Also, it seems your overall goal is to increase fees, which would kill all orderbook liquidity on the chain and leave only AMMs (what Ethereum has). Aside from killing all orderbook protocols like openbook, I could go into detail on why that's a very bad idea in general, but it's better to stay on topic, and we can all agree that we need orderbooks and HFT (not CEX-level, but still good) on Solana; it's what makes it better than all other chains. If you want to cut down on compute, IMHO there are 2 ideas I can think of:
+1 to exposing CU consumption if we want users to request tighter CU limits
Agreed. Simulations or careful design help in some use cases, but may not be enough for others. SIMD #49 also helps better utilize the requested CU limit. Any suggestions are welcome.
The overall goal is to continuously improve the network so it serves everyone better. To that end, the motivation of this proposal is to set the prioritization fee at a reasonably meaningful level, as described in SIMD #50. The ideal outcome of adjusting …
Open to suggestions ofc.
Charging the base fee based on CU limits is coming in SIMD 19. It's more relevant to address item 2 above as early as possible;
Only ~6% of non-vote transactions explicitly set CU limits right now. IMHO, this needs to increase to (close to) 100% before adjusting the default value.
Can you elaborate on why higher CU limits are problematic? That will make it easier to brainstorm suggestions!
I imagine you'd end up with less fee revenue overall. That sounds bad to me, but maybe I'm not getting it.
We already do this when setting a priority fee AFAIK to reduce costs; I definitely do. However, the limit needs to be high enough that the TX will not fail even in the worst-case scenario, as others already mentioned. So, as far as I can see, the two goal points are already covered. IMHO making the network better for everyone includes keeping liquidity providers on the chain, and especially orderbook exchanges, which are what makes Solana unique. Right now, most MMs have left the chain, which is one reason why I can have the majority of all orderbook liquidity on Solana.
Right, I didn't know you are making the fee dynamic; honestly, that too sounds bad for market-making on Solana if it ends up increasing the fee too much, especially since market-making involves executing multiple instructions in one TX. I meant that right now, increasing the compute limit (without adding a priority fee) beyond the default still costs the same default fee, which shouldn't be the case.
Makes sense. I've asked other people, cris being one, to add their opinions too. Hopefully we will get more of the perspective of exchange owners and traders, since those are the most active apps on Solana.
Are block CU limits based on CU used or CU requested? I was under the impression it's the former, but could be wrong.
If we're in a market regime where blockspace demand is so low that the price is low enough for people to spam funny logs or drastically over-request CUs, I don't think this is an issue. The market's response, if there is actually more demand than is being serviced, is to increase the price by the appropriate amount. If folks (users, bots, etc.) sending transactions aren't getting them included at a satisfactory rate, they need to increase the priority fee. Note: the UX with this type of priority fee will always be quite poor, since you will always be guessing how much priority fee you need to add. A 1559-like mechanism should make a lot more sense.
In a general sense, leaders do extra work for TXs with priority, and the prioritization fee compensates for that extra work. Less prioritization, less fee. As for CU limits, I think there is room for improvement. Randomly pick a tx and it may have used only half of its requested CUs. It is understood a TX has to cover the worst-case scenario, but we can work out better ideas such as SIMD 49.
Currently there is QoS adjustment logic (issue #31379) to pack blocks with actual used CUs. But the main reason this extra logic exists is that requested CUs are way off. In the case of a bankless leader, it will have to rely fully on requested CUs to make packing decisions.
Exactly what you said: you have to cover the worst-case scenario. Most of the time 150k is used, but TXs do hit over 250k sometimes, and you'll lose money when that happens. SIMD 49, as far as I understand, would make it so that I place orders up to the remaining CU, i.e. some orders at the end might not make it to the book. It's more of a trade-off than a solution IMO, because it does remove liquidity from the book, especially the backstop orders at the end, which are important for liquidations to work properly on margin protocols like Mango.
Without understanding what it takes to trade successfully, and with full respect, I'm wondering whether such a transaction could be broken up into smaller TXs, coordinated by signals generated off-chain? Akin to how a traditional trading/MM app builds order books locally from feeds, then sends orders based on signals generated from the local order books.
We don't see a ton of this today, but having lots of logic on-chain (so state can be read at execution time) creates a difference between CU requested (max CU used) and actual CU used. For example, I may have a smart contract that I ping with what I think the fair price is. The contract's logic decides whether it should cancel orders, place new orders, etc., depending on the input fair price plus the current state of the market, sometimes using a lot of CUs and sometimes not that many.
I'm not sure if I understand correctly. My point above was to comment on the proposal (SIMD 49) that would allow a TX to specify "process instructions up to compute limit N" rather than running out of compute; maybe I misunderstood it, though. What I was trying to say is that if the compute requirement suddenly jumps (since it's dynamic, I cannot know for sure that a TX will require only 150k and not 250k), and I use that new "process instructions up to" feature, then some orders will not be placed because their instructions will be omitted, especially the ones at the end with bigger sizes, which are needed for liquidations to work properly on e.g. Mango (liquidations usually require big orders because a big market order needs to be filled). About splitting orders over multiple TXs in general, though:
Right, this model is common and it creates large variance in requested CU limits. It would be great if the decision could be made off-chain: you'd have one small, static tx to frequently pull data from the chain to build a local cache; then ping it with the price locally and, based on the decision, send new/update/cancel orders, which are pretty much single-function transactions with constant requested CUs. Not sure if this is feasible in all cases, but maybe there are success stories out there?
Yes; the problem is you really care about the state at run time, not at tx-send time, especially in an HFT setting. Extra troubling when you don't even know when your tx is going to land.
You are correct that SIMD 49 allows transactions to adjust their execution path at runtime based on CUs consumed, leading to better CU utilization. It removes some of the need for over-requesting.
Number of transactions per second isn't a reliable indicator of congestion. If one transaction requests 300_000 CUs but can be broken into 2 transactions, each requesting 100_000 CUs, then we are better off.
The conversation is zooming in on reducing requested CU limits, which is good. But just to keep the big picture in perspective, this issue (and SIMD 50) is solely about changing prioritization fee units (which, combined with requested CU limits, determine the prioritization fee). For HFT and MM, once the PRAW fee (SIMD 16) is implemented, I imagine there would be even less need to prioritize their transactions. (BTW, it might be worth pointing out that the PRAW fee is also based on requested CUs.)
I don't like this proposal in its current form. It seems to have negative side effects, and I don't think it will accomplish all of its goals. If you make the prioritization fee increment 1 lamport/cu, I'd expect:
Looking at the list of problems that SIMD-50 aims to address:
Well, people could already use 1 lamport/cu prioritization. But no one does, because they can outbid enough people with much less. This could be because demand isn't large enough to drive prices higher. But yeah, increasing the prio market's tick size would likely help; something like 10k microlamports/cu should be enough without wiping out most current usage.
I don't get this at all. If there's no contention, why is someone without a priority fee at a disadvantage? Isn't the whole point of the priority fee that users can pay to get into blocks more quickly when there is contention?
I expect this proposal would not change that, because people who use a zero priority fee don't have an incentive to care about compute units. (Though other proposals are addressing that.)
Thanks for chiming in @ckamm. Ideally users should selectively/carefully utilize priority. For example, updating orders might not justify an additional prioritization fee under normal conditions, but a mass cancel would be worth paying a higher fee for, to increase the certainty of its timely execution. The main goal of this proposal is to get close to this ideal scenario: no tx needs to pay for priority when there is no contention (neither on the block nor on an account), but it should pay when the urgency rises enough to justify the additional cost. My argument is that if this "additional cost" is too insignificant, TXs that needn't pay will continue to set priority, say to lock an account, which in practice raises costs unnecessarily for anyone who wants to write to that account. Some data collected at #31755 shows only 3.6% of non-vote transactions set cu-price on MNB today; assuming these are from active users who care about both send-to-confirm time and overall cost, I figured this is the ideal group to start working out CU cost with, and perhaps, as a nice side effect, figuring out ways to request more accurate CU limits.
I want to understand this. Currently, when there is no contention, all TXs are processed anyway, so there shouldn't be an increase in cost for others. However, your point is that if you implement dynamic (CU-based) fees then this will change and it will become more costly for others. Is that correct?
I'd say decreasing the current granularity by 1000x, to milli-lamports (0.001), is the right choice in my experience so far.
An example would be, say one always sets … From this example, my point is: to improve local fee market integrity, users should be incentivized enough to use the prioritization fee close to its original intention. (Sorry for repeating this.)
Changes to the priority fee impact those 3.6% of priority-setting txs; implementing a CU-based fee will impact all transactions. I'm working on this proposal as a precursor to a dynamic base fee. (Just to clarify, changing the base fee will not impact prioritization, nor the local fee market.)
Well, the vote count so far:
About no one using getRecentPrioritizationFees: how about upgrading client libraries (including anchor), updating the Solana cookbook and other documentation (just checked, it doesn't mention this RPC call yet), announcing the feature regularly on social, and just waiting a few more weeks until people adopt it before drawing a conclusion? That seems more reasonable to me than releasing a feature gate that has a negative economic impact on nearly every stakeholder in the ecosystem.
Guess it's time to use it now 😄 Otherwise how would one know what prioritization fee should be |
Current base fee is 5000 lamps/signature (let's say lamps/tx for simplicity). Current min prio fee is 1 lamp/tx (assuming 1m CU requested). It seems reasonable to increase the min prio fee to a higher portion of the base fee, at least 1%. But many multiples of the base fee (the original proposal would bump this to 1m lamps, or 200x the base fee) seems way too large and likely to cause unintended effects with priority fees (like people choosing to never use them). Increasing by a factor of 1000 (min prio fee becomes 20% of the base fee) seems reasonable, as @SpaceMonkeyForever suggests. Is there a reason we need to change the units we denominate priority fees in, rather than just the minimum value? Does it make the implementation much more complicated?
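A quick check of the arithmetic above, assuming a transaction that requests 1,000,000 CUs and pays a single signature's base fee:

```rust
// Back-of-the-envelope check, assuming a 1,000,000-CU request and one signature.
fn main() {
    let base_fee_lamports: u64 = 5_000; // per signature
    let cu_requested: u64 = 1_000_000;

    // priority fee (lamports) = cu_price (micro-lamports/CU) * CUs / 1_000_000
    let prio_fee = |cu_price_micro: u64| cu_price_micro * cu_requested / 1_000_000;

    assert_eq!(prio_fee(1), 1);         // today's min tick: 1 lamport (0.02% of base fee)
    assert_eq!(prio_fee(1_000), 1_000); // 1_000 micro-lamport tick: 20% of base fee
    assert_eq!(prio_fee(1_000_000) / base_fee_lamports, 200); // original proposal: 200x base fee
}
```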
Did some manual simulation over 500 slots of the MNB ledger here; setting the incremental unit to 1_000 micro-lamports does look better than the proposed 1_000_000 micro-lamports. I updated this issue's description to adopt "1_000 micro-lamports" as the tick size for compute-unit price. Will update PR #31469 accordingly and ask for review/approval when ready.
Issue is no longer valid, nor is there a plan for further development; closing it.
Description
Problem
Data from mainnet-beta shows the `compute_unit_price` is too granular, in that:

- `compute_unit_price` scatters across the range [0, 200_000+] in a rather random fashion. Ideally users should set the value based on recent blocks' and/or accounts' min priority fee (available via RPC), and only when needed;
- Prioritization fees, calculated from `compute_unit_price` and `compute_unit_limit`, are too small; the side effect is that users request a lot more CUs than their transactions actually need.

Proposed solution
Regulate `compute_unit_price` by rounding it down to the nearest 1_000 micro-lamports. The effect is that users should set `compute_unit_price` in increments of 1_000. A transaction with a `compute_unit_price` of less than 1_000 will have no priority, nor be charged a priority fee.

Examples:
For a transaction with the default 200_000 CU `compute_unit_limit`: (example table of `compute_unit_price` values in micro-lamports not preserved here.)

The proposed change needs to be feature gated because it changes the actual priority fee charged to the payer account.
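A minimal sketch of the rounding rule described above (illustrative only; the actual implementation lives in PR #31469 and may be structured differently):

```rust
// Sketch of the proposed rounding: the effective compute-unit price is the
// user-supplied price rounded down to the nearest 1_000 micro-lamports.
const CU_PRICE_TICK: u64 = 1_000; // micro-lamports per CU

fn effective_cu_price(requested_micro_lamports: u64) -> u64 {
    (requested_micro_lamports / CU_PRICE_TICK) * CU_PRICE_TICK
}

fn priority_fee_lamports(cu_price_micro_lamports: u64, cu_limit: u64) -> u64 {
    // fee = effective price (micro-lamports/CU) * requested CUs, converted to lamports
    effective_cu_price(cu_price_micro_lamports) * cu_limit / 1_000_000
}

fn main() {
    // With the default 200_000 CU compute_unit_limit:
    assert_eq!(priority_fee_lamports(999, 200_000), 0);     // below one tick: no priority, no fee
    assert_eq!(priority_fee_lamports(1_500, 200_000), 200); // 1_500 rounds down to 1_000
    assert_eq!(priority_fee_lamports(10_000, 200_000), 2_000);
}
```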
Feature ID
6J6GS57v5q4CnPrLgDMsyZLFfpPEBPZ66h8efDEpesPk
Activation Method
Single Core Contributor
Minimum Beta Version
1.16.0
Minimum Stable Version
No response
Testnet Activation Epoch
No response
Devnet Activation Epoch
No response
Mainnet-Beta Activation Epoch
No response