Transactions should commit to total CPU usage? (and friendlier default CPU usage billing) #1999
Deferred transactions will be billed for actual usage. Unfortunately, this means replay will have to have instruction-counting injection enabled, or we would need two different versions of wavm.
Partial work on this in commit: 18009d1
As a middle ground, users can set a max in the transaction and the block can commit to actual usage as part of the summary structure. The producers and validators will make sure the actual is never wrong, and we can calculate that for deferred and signed transactions alike. Additionally, this relieves the pressure on tools to be as accurate as possible, since the user's setting is just a limit to protect themselves.
Also, this issue talks a lot about the pros and cons of the proposed changes from a technical perspective, but it only barely touches on why this is a preferred change.
So far I gather that we expect users to want a maximum cap on their transaction usage?
We further expect users to trade some of their usage (due to the accuracy of estimates) for replay performance, aka syncing new nodes.
The value for a user is
- basic protection from CPU usage that is not readily apparent, like notifications to other accounts
The cost for a user is
- an increase in the amount of staked value needed to transact on chain (considering over-estimates as waste or tax)
The value for the chain infrastructure is
- faster replay
The cost for the chain infrastructure is
- the increased cost of constructing tools and UX, as they now need to simulate or query a simulation server to get better estimates
I'm not convinced we have the cost/benefit for the user in the right place for this change (and maybe for the concept of transaction-header commitment to resources in general).
We are instead going with the following design: the transaction header includes a field that sets an upper bound on the transaction's CPU usage. The payer will only pay for actual usage, however. If the actual usage is greater than the upper bound (assuming one is set), then the transaction fails. In order to replay without needing to run context-free actions (which may not even be possible to run, since the context-free data may have been pruned), the actual billed CPU usage must be included in the block summary (specifically in the transaction receipt).
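A minimal sketch of this adopted design from the producing node's side; the types and the `bill_transaction` helper are illustrative assumptions rather than actual nodeos code:

```cpp
#include <cstdint>
#include <optional>
#include <stdexcept>

struct transaction {
   std::optional<uint32_t> max_cpu_usage; // optional upper bound committed in the header
};

struct transaction_receipt {
   uint32_t cpu_usage = 0; // actual billed CPU usage, committed in the block summary
};

// measured_cpu_usage is what executing the transaction (signatures,
// context-free actions, and context-aware actions) actually cost.
transaction_receipt bill_transaction(const transaction& trx, uint32_t measured_cpu_usage) {
   if (trx.max_cpu_usage && measured_cpu_usage > *trx.max_cpu_usage)
      throw std::runtime_error("actual CPU usage exceeded the committed upper bound");
   // The payer is billed only for what was actually used; the receipt records
   // it so a replaying node can bill without re-running context-free actions.
   transaction_receipt receipt;
   receipt.cpu_usage = measured_cpu_usage;
   return receipt;
}
```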
PR #2084 takes care of most of the changes described in the comment above. The actual billed CPU usage is still not in the block summary, but that work is in progress along with #1972. Also, we will probably not be doing the default billing to the first authorizer of the transaction for RC1. Currently, all authorizers of the transaction are billed once (not double billed) for CPU and network bandwidth. The change to bill only the first authorizer and to provide a way to change the default billing will be left to RC2.
Currently, transaction headers include a field `context_free_kilo_cpu_usage` which holds the number of kilo CPU units to bill for processing signatures and context-free actions. The number of CPU units to bill for processing context-aware actions is currently not explicitly specified in the transaction and is instead calculated by running the context-aware actions with objective instruction counting.

This issue is to track the investigation and, if necessary, implementation of the following change in design:

The field `context_free_kilo_cpu_usage` would be renamed to just `kilo_cpu_usage` and would instead be the exact number of kilo CPU units to bill for the transaction. This value acts as the threshold that the objectively calculated CPU usage of the transaction should not exceed. The objectively calculated CPU usage of the transaction would still have preferential rates for context-free computations (computing context-free actions, signature verification, and authorization checks).

Furthermore, by default the CPU usage would only be billed to the actor of the first authorization of the first context-aware action of the transaction (rather than all authorizing actors of the transaction as it is today). The default account to bill could be overridden with a special context-free action, but that is a task left for another issue.
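A minimal sketch of the billing rules proposed above; the types and helper names (`default_cpu_payer`, `check_committed_usage`) are illustrative assumptions, not actual EOSIO code:

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

struct permission_level { std::string actor; std::string permission; };
struct action { std::string account; std::string name; std::vector<permission_level> authorization; };

struct transaction {
   uint32_t kilo_cpu_usage = 0;              // exact amount to bill, also the threshold
   std::vector<action> context_free_actions; // no authorizations
   std::vector<action> actions;              // context-aware actions
};

// Default payer: the actor of the first authorization of the first context-aware action.
std::string default_cpu_payer(const transaction& trx) {
   if (trx.actions.empty() || trx.actions.front().authorization.empty())
      throw std::runtime_error("no context-aware action with an authorization to bill");
   return trx.actions.front().authorization.front().actor;
}

// The objectively measured usage must not exceed what the transaction committed to.
void check_committed_usage(const transaction& trx, uint32_t measured_kilo_cpu_usage) {
   if (measured_kilo_cpu_usage > trx.kilo_cpu_usage)
      throw std::runtime_error("objective CPU usage exceeded the committed kilo_cpu_usage");
}
```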
Requiring transactions to commit an upper bound on context-free CPU usage allowed nodes to replay irreversible blocks while skipping over the computation of context-free actions and without needing data that may have been pruned (such as context-free data and signatures). The change described in this issue would allow further optimization of the replay performance of nodes by allowing even the overhead of objective instruction counting to be removed.
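A minimal sketch of the replay fast path this would enable under the proposal, using illustrative stub functions rather than the real replay code in nodeos:

```cpp
#include <cstdint>

// Stubs standing in for the real execution machinery.
void apply_context_aware_actions(bool with_instruction_counting) { (void)with_instruction_counting; }
void bill_cpu(uint32_t kilo_cpu_units) { (void)kilo_cpu_units; }

struct transaction_header { uint32_t kilo_cpu_usage = 0; }; // committed in the transaction

void replay_irreversible_transaction(const transaction_header& trx) {
   // Signature verification and context-free actions are skipped entirely: the
   // block is irreversible and their inputs (signatures, context-free data)
   // may already be pruned. Context-aware actions still run to update state,
   // but without objective instruction counting, because the amount to bill is
   // taken from the committed kilo_cpu_usage rather than re-measured.
   apply_context_aware_actions(/*with_instruction_counting=*/false);
   bill_cpu(trx.kilo_cpu_usage);
}
```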
In addition, committing to an upper bound on total CPU usage within the transaction gives assurances to the transaction signers (in particular the account which will be billed for CPU usage) that they will definitely not be billed more than that committed amount if the transaction is successfully executed. They do not have to worry that the contract code that the transaction's actions will be delivered to (which can sometimes be hard to determine prior to execution because of `require_notify`, `send_inline`, and `send_context_free_inline`) will eat up a lot more of their CPU bandwidth than they expected.

The downside of this approach is that the payer is billed for the full committed amount of CPU units even if their transaction ends up actually using less. Their client-side tools would also need to estimate this upper bound; while CPU usage could be calculated perfectly for signature verification and context-free actions (at least if you exclude those dispatched inline by context-aware actions), it can sometimes be difficult to estimate the CPU usage for the context-aware actions.
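A minimal sketch of how a client-side tool might choose the committed value under this proposal; the helper names, placeholder numbers, and the 25% margin are arbitrary assumptions:

```cpp
#include <cstdint>

// Exact for deterministic work the client can compute locally (signatures,
// context-free actions); estimated for the context-aware actions, e.g. from a
// dry run against a node. The returned numbers are placeholders.
uint32_t cpu_for_signatures_and_context_free() { return 12; }
uint32_t estimated_cpu_for_context_aware()     { return 40; }

uint32_t choose_committed_kilo_cpu_usage() {
   const uint32_t exact   = cpu_for_signatures_and_context_free();
   const uint32_t guessed = estimated_cpu_for_context_aware();
   // Pad only the uncertain part; under "bill the committed amount", every
   // unit of over-estimate is wasted CPU bandwidth, so the margin is a trade-off.
   return exact + guessed + guessed / 4; // arbitrary 25% safety margin
}
```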
Open question: How does this approach affect deferred transactions generated in contract code? Is it too much to demand that contract code estimate the upper bound of the CPU usage of the deferred transaction? Or would deferred transactions be a special case where the payer gets charged whatever the calculated CPU usage of the executed deferred transaction (whether it succeeded or failed) happens to end up being (up to the limit of what their CPU bandwidth resources allow)? And in that case, could a non-zero value of `kilo_cpu_usage` be treated as an optional upper bound to limit the damage a deferred transaction can do to the payer's CPU bandwidth resources? (It is important to note that deferred transactions generated in a contract do not need to commit to any particular CPU usage upper bound for billing in order to allow replaying without pruned data; this is because the data for any context-free computations is generated by the contract code, which must be executed by a validator.)