No longer charge for transaction size #1763
Conversation
But then people can make huge txs while paying the same, delaying the TPS.
The larger the transaction, the lower the priority. Consensus nodes can prioritize small transactions.
If you give them lower priority, you can consume more memory because they will stay in the memory pool. I think users should pay for each byte; it's the only way to force optimization. Also, you could store content in the blockchain for free, because you can avoid the storages, just add
Hi all, charging for transaction size is aligned with removing Turing completeness from Verification. Please give us 1-2 days to discuss it further.
@erikzhang can you clarify how we can trim transactions? I know we can trim the witnesses, as we know (by the tx header) that they were once approved, and since they have no side effects on storage or during Invocation (these fields can no longer be read by a smart contract), it's safe to remove them. This part I get. But the InvocationScript itself, I believe, is a permanent attachment to the blockchain, right? We can only store past txs or past full storage state, in a perspective where storage tends to be bigger than the code itself (due to loops...). That said, the advantages of not charging by size are quite interesting as well, as they solve recursive calculation problems (where you put more information in the tx and the price goes higher every round...). I suppose users will still have to respect a maximum size per tx, right?
I think we could try to check for unreachable code in the invocation part of the witness (it would NEVER make sense there), and also in the EntryScript (this could perhaps happen, depending on the output of some invoked contract...). This is a possible workaround. Another one that I'm fully in favor of is to at least parse the Entry code: if it's malformed (crazy things after RET), just ignore the script (a user could still intentionally PUSH some large string after a somewhat untraceable RET, so I don't think we can prevent that one).
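The "reject junk after RET" idea can be sketched with a naive check. This is a hypothetical illustration, not neo code: 0x40 is assumed here as the RET opcode value, and a real checker would have to walk the instruction stream (accounting for operand lengths), because the same byte can legitimately appear inside PUSH data.

```python
# Hypothetical sketch of the "ignore malformed scripts" idea discussed above.
# Assumption: 0x40 stands in for the RET opcode. A real NeoVM checker must
# disassemble instruction by instruction instead of scanning raw bytes,
# since 0x40 can also occur inside PUSH operands.
RET = 0x40

def has_trailing_junk(script: bytes) -> bool:
    """Flag scripts that contain bytes after the first RET byte."""
    i = script.find(bytes([RET]))
    return i != -1 and i != len(script) - 1

# has_trailing_junk(bytes([0x11, RET]))        -> False (clean script)
# has_trailing_junk(bytes([0x11, RET, 0xAA]))  -> True  (junk after RET)
```

A node could downrate or refuse to relay transactions that fail such a check, as suggested above.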
There is a 100K limitation for transactions. 10000 transactions consume 1GB of memory. I think it is acceptable.
You can, but it is unreliable, because the transaction can be trimmed later.
neo/src/neo/Network/P2P/Payloads/Transaction.cs Lines 131 to 135 in 9d996e0
Another problem is that you can't modify it in a smart contract.
A transaction is a description of a state change. We store both transactions and states in the database because we need to execute these transactions to verify the states. If we create a checkpoint in the future, then all previous transactions can be deleted.
This is true, you can do it with a checkpoint. Regarding script parsing, do you think it's a bad idea @erikzhang? At least we avoid intentional junk... and honestly speaking, I think we should even reject all Entry scripts that don't comply with basic optimization rules, e.g., PUSH DROP PUSH DROP. If script generators don't follow some basic pre-optimization techniques, these txs could also be downrated by p2p nodes.
It's acceptable, but it's reasonable that if you use more network or other resources, you should pay more for it.
What are the benefits?
If some resources are close to unlimited, there is no need to charge for them, such as air.
By ensuring a minimal consistency in the intended operations, we make life easier for the tools in the ecosystem, since malformed scripts may break independent parsers or leave doubts about the real intention of the tx submitter. Of course, if new opcodes/syscalls are introduced over time, things would also change... but at least we would know for sure that, for every tx that exists in the blockchain, its Entry script made complete sense at some point in history (not just random bytes).
Q1: Then a lot of layer-2 projects may have problems, like a DEX, where one transaction contains a lot of
Q2: If we want to trim transactions in the future, how do we prevent the old txs from being packaged up again?
I can only see this disadvantage.
It won't be a problem. The final priority is equal to:
neo/src/neo/Ledger/PoolItem.cs Lines 36 to 46 in 54784e0
neo/src/neo/Network/P2P/Payloads/Transaction.cs Lines 273 to 276 in 54784e0
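For readers without the repository at hand, here is a simplified model of that ordering. It is an assumption-laden Python sketch, not the actual C# implementation referenced above: it assumes the pool sorts by fee per byte, with total network fee as the tie-breaker.

```python
# Simplified sketch of mempool priority, assuming ordering by fee-per-byte
# with total network fee as tie-breaker (see the C# files referenced above
# for the real logic).
from dataclasses import dataclass

@dataclass
class Tx:
    network_fee: int   # in GAS fractions
    size: int          # serialized size in bytes

    @property
    def fee_per_byte(self) -> int:
        # mirrors the idea of Transaction.FeePerByte: NetworkFee / Size
        return self.network_fee // self.size

def priority_key(tx: Tx):
    # higher fee-per-byte wins; total network fee breaks ties
    return (tx.fee_per_byte, tx.network_fee)

small = Tx(network_fee=1_000_000, size=250)
huge = Tx(network_fee=1_000_000, size=100_000)
pool = sorted([huge, small], key=priority_key, reverse=True)
# the 250-byte tx sorts first: same fee, 400x fewer bytes
```

Under this scheme a huge transaction has to pay proportionally more to keep the same priority as a small one.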
Thanks, it's good for me.
Finally, I hope to turn it into a configuration parameter of
It's quite a big and complicated topic that intersects both with blockchain economics (and I'll try not to touch this part as much as I can) and technical challenges, but I think transaction size still matters, and let me try to explain why.

Quick current overview

Just to quickly remind everyone, at the moment we have two types of fees: network fee and system fee. We don't have free transactions in Neo 3, and it's a feature because it easily solves the problem of network abuse: you can send as many transactions as you want as long as you pay for all of them.

System fee

The system fee is quite simple --- it's an execution limit for the script included. This script is executed by every full node on the network after the transaction containing it is successfully included in a block. So we're paying for computing power here, and it seems to be fair.

Network fee

The network fee is more complicated; at the moment it's computed based both on transaction size and on its witness-checking execution time. The witness is checked by every relaying node in the network; transaction size influences network transmission time and blockchain storage, so paying for both seems reasonable. At the same time it should be noted that networking and storage costs are paid during the whole blockchain lifetime, while witness checking technically could be done only by CNs, and regular nodes receiving transactions may not spend their CPU time rechecking them (although neo-go does that just because we love the "belts and suspenders" approach).

Potential problems

If we're to change anything in this scheme, we need to understand the reason for doing so. The only thing I see is #840, the pain of network fee calculation. There can also be various economic motives, but in general I'm assuming that for the economic part we want our fees to be proportional to the network resources being used for transaction processing.

Storage vs. networking vs. computing

As this PR only leaves the execution part of the network fee, it effectively considers networking and storage to be free. But to me it doesn't look so, quite the contrary: computing power in general is cheaper than networking or storage. And even though the processor speed growth rate is getting lower and lower, it is still higher than the storage density growth rate or the network speed growth rate. The network fee is spent mostly on signature verifications, and my laptop can easily do something close to 12K ECDSA verifications per second, so I'll assume 10000 ECDSA verifications per second to be a typical expected value; the time cost to verify a typical transaction would then be 0.0001 seconds. Getting it from storage and especially transmitting it over the internet could easily take (up to hundreds of) milliseconds due to the latencies involved, and 1 ms is already 10 times more than one signature verification cost. I can't agree that 10000 100K transactions occupying 1GB of memory is nothing to worry about. It still is a gigabyte of memory, on every relaying node. If I'm to host a node in some cloud, I could easily see this 1 GB difference in my bill. And again, checking 10000 signatures is 1 second, while transmitting 1GB over the internet easily takes minutes.

Dollar value comparison

But to really get the relative difference in computing/storage/networking, I think we can refer to some widely used storage/computing platforms; they usually are also good at counting money. I'm going to use Amazon as a reference. Obviously there are a lot of subtle details in their offerings, and there are a lot of other offerings on the net, but they're an easy pick for this comparison, and we mostly care about relative values here rather than absolute prices.

Computing

So if I'm to take Amazon Lambda prices (which are somewhat comparable to transaction executions), I can run some lambdas for just 0.0000002083 USD per 100 ms with 128M memory available. So for one ECDSA verification (0.0001 s) I'd pay about 2.083e-10 USD.

Storage

The S3 standard price is 0.023 USD per gigabyte or, if we're to use a calculator, 2.14e-11 USD per byte. Which means that even a typical 250-byte transaction would cost 5.36e-9 USD to store. 1K and it's 2.19e-8. 100K and it's 2.19e-6.

Networking

Taking EC2 as an example here, it has a 1 GB free allowance, but then it costs 0.09 USD per GB, which is 8.38e-11 USD per byte. A 250-byte transaction and it's 2.1e-08 USD. 1K for 8.59e-08 USD, 100K for 8.59e-06 USD.

Overall difference

We can see that a typical transaction's computing (signature check) cost is about 25 to 100 times lower than its storage or transmission cost if we're to use some popular cloud provider as a reference. That roughly corresponds to our initial timing calculations. Even if we're to delete old transactions, that doesn't change their transmission overhead, and even temporary storage is still a concern, because with the current policy we'll be storing these transactions for about a year. And not all nodes can cut the tail; the whole chain history has to be preserved somewhere, at least for archival purposes, and that's storage too.

Potential effects of this PR

Based on everything written above, I think this PR makes our fee system less balanced: treating a 250-byte transaction the same way as a 100K transaction is wrong, as the overhead of processing the 100K transaction is much bigger, even though the witness-checking GAS cost would be the same and even real CPU time won't differ a lot. This PR also doesn't completely solve #840; even though it simplifies network fee calculation considerably, non-standard contracts are still a problem, as you can't calculate their execution cost without executing. So I think we shouldn't do it. We can leave things as is, and our network fee will correlate with the real processing cost of a transaction.

Possible improvement

At the same time, #840 is still a problem, and we may want to have simpler rules for fee calculation. This would mean removing some components of the equation, and if we can't do that with size, we may probably do that with the computation involved, as we're much more capable in this area, and I also think that we have "nothing" defined for CPU processing.

The definition of nothing

After #1745 we have MaxVerificationGas that limits verification scripts, and it can be interpreted as the GAS cost we're OK to spend for doing nothing, because verification can fail and this computing effort would be lost. At the moment it's set to 0.5 GAS, and given that one ECDSA verification costs 0.01 GAS, that's 50 ECDSA verifications or 5 ms of CPU time. If we're to estimate networking overhead to be in the milliseconds range, the MaxVerificationGas cost is of the same order. If we're to express that in Amazon Lambda prices, that's roughly 1e-8 USD, which is comparable to the cost of storing and transmitting 250 bytes (obviously, for bigger transactions that could easily

Removing verification fee out of the loop

So if we're to accept that CPU time in general is much cheaper than storage and networking, and if we're to accept that spending (a maximum of) 0.5 GAS per transaction is fine, then we can remove verification time from the network fee calculation, letting the size fee cover for that. This will simplify fee calculation and almost solve #840. The only problematic thing left is invocation script size, but luckily it's very predictable for standard contracts and could even be calculated for non-standard contracts in many cases just based on the parameters metadata of a NEP-6 contract (if there are signatures there, we know their sizes). It will also make our fee system clearer: one pays for execution with the system fee and for storage/networking with the network fee; verification is assumed to be negligible compared to storage/networking.

We may of course want to adjust the size fee appropriately (to be representative of the relative processing overhead), but that's more of an economic problem (and it should be done at the same time as opcode price adjustments).
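The dollar comparison above can be reproduced with a few lines of arithmetic. All prices are the illustrative Amazon figures quoted in this thread (they drift over time), and the 10000-verifications-per-second throughput is an assumption; recomputing per-verification cost from 1/10000 s gives roughly 2e-10 USD.

```python
# Back-of-the-envelope version of the cost comparison above. All prices are
# the illustrative Amazon figures from this thread, not current quotes.
GIB = 2**30

lambda_usd_per_100ms = 2.083e-7        # 128 MB Lambda price tier
verify_seconds = 1 / 10_000            # assumed ECDSA verification rate
verify_usd = lambda_usd_per_100ms * verify_seconds / 0.1   # ~2.1e-10 USD

s3_usd_per_byte = 0.023 / GIB          # ~2.14e-11 USD per byte stored
ec2_usd_per_byte = 0.09 / GIB          # ~8.38e-11 USD per byte transmitted

for size in (250, 1024, 100 * 1024):   # typical, 1K and 100K transactions
    store_usd = size * s3_usd_per_byte
    send_usd = size * ec2_usd_per_byte
    print(f"{size:>6} B: store {store_usd:.2e}, send {send_usd:.2e}, "
          f"send/verify ratio {send_usd / verify_usd:.0f}x")
```

Even for a typical 250-byte transaction, transmission alone comes out around two orders of magnitude more expensive than one signature verification, which is the core of the argument above.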
What about removing |
Maybe he should pay this fee only if his tx is bigger than X?
I think charging for size still makes sense in general if we want our fee system to correlate with the resources being used.
Regardless of whether this change is applied, the cost of this 1GB of memory will not change, because users can increase the transaction size by increasing the fee. And these fees will not be paid to you, but to the consensus nodes.
Then he can send multiple small transactions.
True, but size fees motivate people to use shorter transactions, and even though there is always a potential for a spike in memory usage if we're to fill the whole memory pool with valid 100K transactions, it typically won't happen, exactly because one has to pay for them. Now if you don't charge for size, I may start sending 100K transactions just out of curiosity, to see how the network handles them, and this won't cost much. I may even intentionally add some garbage to my regular transactions to fill them up to 100K; nothing prevents me from doing that. So we'll have some additional resource usage without appropriate compensation.
Vote:
I vote for option 2, and merge
Therefore, I prefer to merge
I am in favor of completely abolishing
This will become almost like NEO 2.0. Perhaps size is not very relevant because of the space-time trade-off (which @igormcoelho recently emphasized). It is possible to spam the network with small transactions if the price is not effective (loops are powerful). @erikzhang, even with checkpoints, a couple of nodes should keep a complete historical copy, yes? Historically, that looks important. Anyway, it is a private option of those involved in the network. I prefer to remove both, and we charge things accordingly, even if the transaction is just
Nice summary @erikzhang, I think things are finally converging into a final and definitive strategy... I'll put my perspective. I've also supported Erik's proposal of completely removing verification access from invocations, to allow pruning of "useless" (past witness) data, and since we have that, I agree that verifications would naturally be cheaper (no real FUTURE costs, just current P2P management/routing costs). Since we have Turing-complete verifications, and verifications are supposed to be "cheap", I agree to completely remove
Finally, I agree with giving some flexible reward to consensus nodes, but I think we could also try to give some fractional rewards to the p2p nodes that routed the transaction, via a fair-play strategy... to not make it longer here, I opened another issue just for that: #1773
We have |
We should increase it at least to 512. The worst problem is if people send 16 MB transactions when the network is overloaded, with fee 0 (low priority): they will delay the network for free, because their txs will be discarded when they enter the pool. I think free size is dangerous.
What do you mean? The max size of transactions is 100K. |
Since we can trim the blocks and transactions in the future, there is no need to charge for the transaction size.