Add proposal for transactions v2 and address map program #17103
I don't grok how this is similar to page tables. We're not trying to address real pubkeys with something that looks like a pubkey, right?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The idea is that each "index account" provides a virtual address space to the transaction which maps a `u16` index to a Solana account. It's not a perfect analogy though. Happy to accept a suggested revision to the wording here.
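To make the "virtual address space" idea above concrete, here is a minimal sketch of an index account resolving a `u16` index to a full account key. The `Pubkey` struct and `IndexAccount` layout are illustrative stand-ins, not the proposal's actual on-chain format:

```rust
// Hypothetical sketch: an index account stores an ordered list of
// pubkeys, and a transaction references an account by a 2-byte index
// instead of its full 32-byte key.
// `Pubkey` is a stand-in for solana_sdk::pubkey::Pubkey.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Pubkey([u8; 32]);

struct IndexAccount {
    entries: Vec<Pubkey>,
}

impl IndexAccount {
    /// Resolve a u16 index to the full account key, if present.
    fn resolve(&self, index: u16) -> Option<Pubkey> {
        self.entries.get(index as usize).copied()
    }
}

fn main() {
    let map = IndexAccount {
        entries: vec![Pubkey([1; 32]), Pubkey([2; 32])],
    };
    // The transaction carries the 2-byte index 1 instead of a
    // 32-byte key, saving 30 bytes per account reference.
    assert_eq!(map.resolve(1), Some(Pubkey([2; 32])));
    assert_eq!(map.resolve(7), None); // out-of-range index fails
}
```

The page-table analogy is loose in exactly the way discussed: the index is a compact handle into one account's key list, not a fake pubkey that aliases a real one.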
I thought we can adjust by 10k at a time with account resize?
Don't think that's a thing yet, but yeah that would be a nice way to avoid large pre-allocations
Not yet sadly. We can only create a new account with a CPI up to 10k. There's no resizing at all
Also, how about charging extra lamports each time an index entry is added? (Later I think we can reduce it...)
The idea is to prevent the creation of many dust index accounts. Otherwise, once-pre-allocated accounts will be populated rather quickly, especially if the original creator and the entry registrant are different parties.
Also, the index account data is specially sanctioned by the runtime, so charging extra lamports isn't so wrong, I guess?
I mean, we need to maintain another in-memory hashmap...
Definitely. I added a brief section about using a different cost model; details TBD. Open to suggestions.
@ryoqun do we need a hashmap? Lookup should be O(1); the index value computes the offset into the index account page, which is cached.
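A quick sketch of why no hashmap is needed: with fixed-size entries, the `u16` index maps directly to a byte offset in the index account's data. The 8-byte header size here is an assumption for illustration, not the proposal's actual layout:

```rust
// Sketch: O(1) lookup by computing a byte offset from the index.
// HEADER_SIZE is an assumed metadata prefix; entries are 32-byte keys.
const HEADER_SIZE: usize = 8;
const PUBKEY_SIZE: usize = 32;

/// Read the 32-byte key stored at `index` from raw account data.
fn key_bytes_at(data: &[u8], index: u16) -> Option<&[u8]> {
    let start = HEADER_SIZE + index as usize * PUBKEY_SIZE;
    data.get(start..start + PUBKEY_SIZE)
}

fn main() {
    // Account data with a header and two 32-byte entries.
    let mut data = vec![0u8; HEADER_SIZE + 2 * PUBKEY_SIZE];
    data[HEADER_SIZE + PUBKEY_SIZE] = 42; // first byte of entry 1
    assert_eq!(key_bytes_at(&data, 1).unwrap()[0], 42);
    assert!(key_bytes_at(&data, 2).is_none()); // past the end
}
```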
@aeyakovenko yeah, I think it's possible with special AccountsDB wiring. :)
Implement this on devnet/testnet first, without updates to the cost model to account for this, but don't activate on mainnet? Just trying to establish what would trigger "later" in the future.
Should be before devnet/testnet. I felt that the cost structure could be hashed out as part of the ongoing cost model work rather than within the context of this proposal. But it is important to figure out, so how about I kick off the discussion with a few rough ideas.
First, the main concern here is having many large indexes that all need to be cached in memory due to high usage. We need some way to cap memory usage and fairly price-in the increase of RAM requirements.
Ideas:
- Each newly initialized index should have a higher storage cost than a typical Solana account. This could be some multiple of the standard rent cost, operating similarly to rent exemption and allowing the account to stay in RAM.
- Charge fees on index accounts whenever they are used in transactions. Once the index's reserve is depleted, the index is deactivated and transactions using it fail until more funds are added.
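The second idea could look roughly like the following sketch: the index account carries a fee reserve that is debited on each use and deactivates the index when depleted. The field names and the per-use fee amount are illustrative assumptions, not part of the proposal:

```rust
// Sketch of a per-use fee reserve on an index account. The fee amount
// and deactivation rule are assumptions for illustration only.
struct IndexAccountMeta {
    fee_reserve_lamports: u64,
    active: bool,
}

const USE_FEE: u64 = 5; // assumed per-use fee, not from the proposal

impl IndexAccountMeta {
    /// Charge the per-use fee; deactivate once the reserve runs dry.
    fn charge_use(&mut self) -> Result<(), &'static str> {
        if !self.active {
            return Err("index deactivated: add funds to reactivate");
        }
        match self.fee_reserve_lamports.checked_sub(USE_FEE) {
            Some(rest) => {
                self.fee_reserve_lamports = rest;
                if rest < USE_FEE {
                    self.active = false; // can't cover the next use
                }
                Ok(())
            }
            None => {
                self.active = false;
                Err("index deactivated: reserve depleted")
            }
        }
    }
}

fn main() {
    let mut meta = IndexAccountMeta { fee_reserve_lamports: 12, active: true };
    assert!(meta.charge_use().is_ok());  // reserve drops to 7
    assert!(meta.charge_use().is_ok());  // reserve drops to 2, deactivates
    assert!(meta.charge_use().is_err()); // transactions would fail here
}
```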
Hey @taozhu-chicago want to chime in here? Context is above, would love to hear your thoughts
The indexes would currently be memory mapped like all other accounts by default? I feel like we need a more general model of evicting long-unused accounts from RAM, with probably a new instruction to warm an account back up again once it's moved off to cold storage. Frequently used programs and other large accounts would have the same issue really.
But yep, removing the cost model design from here makes sense to me. Just want to ensure we don't forget about it somehow!
Awesome, sounds good. Hoping this issue gets solved in a general way like that
One more for you: What if we change the `account_keys` format such that it is limited to 127 keys and, if the top bit is set, each of those keys points to an account containing a `Vec<Pubkey>`. And if so, concatenate all those `Vec<Pubkey>`s to form `account_keys`. That would allow everything else in the transaction to stay as it is today.
Nice idea! I described this as a special case of the current proposal here: https://github.com/solana-labs/solana/pull/17103/files#diff-a5f377d5958450d0b26fd0847562b36cb93fed8dba00fe153f3d00a61de20978R253