Implement Budget DSL in terms of Call and PageTable #739
Conversation
Force-pushed from e6be112 to 40ac352
Why do you implement …
@garious
@aeyakovenko, I'm just trying to get my head around how this should relate to @carllin's work to support multiple instructions in: #602
@garious The main change is that the …
Force-pushed from 11d808c to 3946569
- more fast ledger ideas
- porting transaction to call
- bank in terms of page_table
- page_table tests pass
- use PACKET_DATA_SIZE instead of 256
- timestamp contract works!
- transfer works!
- more bank tests pass
- filter bad last ids
- fix experimental benches
- last_id_not_found error
- fixed replicate test
- thin client test
- renable benches
- retry thin client requests
- fix purge test
- shrink the call a bit
- clippy
- tests clippy
- store the last signature of the caller using the page
- dynamic works i think
- have thin client implement version
- retry rebase tests
Force-pushed from 3946569 to f09f520
@aeyakovenko, close this PR since it's coming in as separate smaller PRs?
#953 is taking over this PR
PageTable is a data structure mapping PublicKey -> Page. Another way to think of the model is as a single-address-space OS with memory protection for processes.
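A minimal sketch of that mapping, assuming a Page carries a token balance and some contract-owned memory (field and type names here are illustrative, not the actual implementation):

```rust
use std::collections::HashMap;

// Illustrative key type; in practice this is the ed25519 public key.
type PublicKey = [u8; 32];

// One Page per PublicKey: the key's tokens plus the memory its contract owns,
// analogous to a process's protected memory in a single-address-space OS.
struct Page {
    balance: u64,
    memory: Vec<u8>,
}

// PageTable is then just the PublicKey -> Page mapping.
struct PageTable {
    pages: HashMap<PublicKey, Page>,
}

impl PageTable {
    fn page(&self, key: &PublicKey) -> Option<&Page> {
        self.pages.get(key)
    }
}
```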
Decisions to be made:
A version in the Page structure means that a spender has to know the current version, and only a single transaction that matches it will be executed.
If we use a version, it is nearly impossible for the spender to asynchronously post a bunch of transactions and only retry the failed ones. The spender would send transactions with versions 0, 1, 2, 3, 4.
If 2 is dropped or arrives out of order, 3 and 4 are likely to be dropped as well.
This is how I currently implemented Page, but I am having second thoughts, especially since asynchronous signals will not be able to predict what the version of the Page is.
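A minimal sketch of the version check under discussion, assuming each Call carries the Page version the spender built it against (names are illustrative, not the actual implementation):

```rust
struct Page {
    balance: u64,
    version: u64, // incremented on every successful spend
}

struct Call {
    version: u64, // the Page version the spender built this call against
    amount: u64,
}

// Execute only if the caller's version matches the Page's current version.
fn apply(page: &mut Page, call: &Call) -> Result<(), &'static str> {
    if call.version != page.version {
        // A call built against a stale version (e.g. version 3 arriving before
        // 2 has landed) is rejected here, which is the retry problem above.
        return Err("stale version");
    }
    if page.balance < call.amount {
        return Err("insufficient funds");
    }
    page.balance -= call.amount;
    page.version += 1;
    Ok(())
}
```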
PageTable locks memory per transaction, which means that processing the transactions within a blob is parallelizable.
But performance is better when we process multiple blobs of transactions at the same time. That would mean each blob locks memory for the whole blob and processes its transactions in series. Maybe we should have different queues for contracts that are expensive or cheap to serialize?
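A minimal sketch of the per-transaction locking idea, assuming each Call declares the keys (Pages) it touches so that calls touching disjoint Pages can run in parallel (names are illustrative, not the actual implementation):

```rust
use std::collections::HashSet;

type PublicKey = [u8; 32];

// Tracks which Pages are currently locked by in-flight transactions.
struct Locks {
    held: HashSet<PublicKey>,
}

impl Locks {
    // Try to lock every Page a call touches; fail if any is already held,
    // so calls touching disjoint Pages can be processed in parallel.
    fn acquire(&mut self, keys: &[PublicKey]) -> bool {
        if keys.iter().any(|k| self.held.contains(k)) {
            return false;
        }
        for k in keys {
            self.held.insert(*k);
        }
        true
    }

    // Release the Pages once the transaction has been applied.
    fn release(&mut self, keys: &[PublicKey]) {
        for k in keys {
            self.held.remove(k);
        }
    }
}
```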