Provide a StorageVec datastructure #1682
From another discussion; it could also be useful to provide some kind of "reference" data structure (akin to …). So, the approach here would be: …
@xgreenx WDYT?
I didn't get how the "reference" should work. For me, it is similar to the …
Hmm, but …
You are right, it's very similar. I think what we had in mind was to give the contract authors some way of trying to encode (or decode) the type, so that the error can be handled in case the …
We shouldn't allow storing values bigger than the buffer to decode them =) I still didn't get how you want to use …. I think it is better to create a new type, …. In this case, we have only one problem: single elements may exceed the buffer, but that is unlikely.
Currently, we only provide a `Mapping`. However, storing things in a vector (array) on contract storage is also a thing our users need. Using the Rust `Vec` from the prelude has a fundamental issue: it exhibits packed layout. This makes it a footgun when used on contract storage, easily leading to various sorts of DoS vulnerabilities.

There once was a dedicated storage vec data structure. It would still use classical data structures from the `prelude`, but wrapped its data in its own `StorageEntry` struct and lazily read from or wrote to storage. The approach would be to re-work it to function with the new storage API. On a high level, this should work. It will take some amount of work to implement, though (I think it is more involved than just copying over the old code and making some minor changes).

Another thing: iterating over a `Vec` with 1000 elements, for example, will still cause 1000 storage reads to the contracts pallet, and this will be costly. So I thought about whether we should try to be smart and do some further caching by designing a Lazy data structure that can read and write data in chunks. But it implies some drawbacks as well, e.g. additional complexity, and it is not clear to me how we should determine an "optimal" chunk size.
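For illustration, here is a minimal sketch of what such a lazily loaded storage vec could look like today, built on the existing `Mapping` plus a length counter. Everything here (`StorageVecSketch`, the message names) is hypothetical and only assumes ink!'s current `Mapping` API, not whatever final shape a `StorageVec` would take:

```rust
// A minimal sketch of a lazily loaded storage vec on top of ink!'s
// existing `Mapping`. All names are illustrative, not a proposed API.
#[ink::contract]
mod storage_vec_sketch {
    use ink::storage::Mapping;

    #[ink(storage)]
    pub struct StorageVecSketch {
        /// Each element lives under its own storage key, so an access
        /// loads only the touched index instead of the whole vector.
        elems: Mapping<u32, u32>,
        /// Only the length lives in the contract's root storage cell.
        len: u32,
    }

    impl StorageVecSketch {
        #[ink(constructor)]
        pub fn new() -> Self {
            Self { elems: Mapping::default(), len: 0 }
        }

        /// Appends a value: O(1) storage writes regardless of length.
        #[ink(message)]
        pub fn push(&mut self, value: u32) {
            self.elems.insert(self.len, &value);
            self.len = self.len.checked_add(1).expect("length overflow");
        }

        /// Reads a single element without touching the other elements.
        #[ink(message)]
        pub fn get(&self, index: u32) -> Option<u32> {
            self.elems.get(index)
        }
    }
}
```

Because each element sits under its own storage key, `push` and `get` touch a constant number of cells, avoiding the packed-layout footgun; but iteration still costs one pallet read per element, which is exactly the 1000-reads problem above.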
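The chunked caching idea could then look roughly like the following, again only a sketch under the same assumptions; the chunk size of 32 is an arbitrary placeholder:

```rust
// Rough sketch of the chunked idea: each storage cell holds a fixed-
// size chunk of elements, so iterating N elements costs about
// N / CHUNK_SIZE storage reads instead of N. Names and the chunk size
// are assumptions for illustration only.
#[ink::contract]
mod chunked_vec_sketch {
    use ink::prelude::vec::Vec;
    use ink::storage::Mapping;

    /// Arbitrary chunk size; how to pick an "optimal" value is the
    /// open question raised above.
    const CHUNK_SIZE: u32 = 32;

    #[ink(storage)]
    pub struct ChunkedVecSketch {
        /// Chunk index -> up to CHUNK_SIZE elements per storage cell.
        chunks: Mapping<u32, Vec<u32>>,
        len: u32,
    }

    impl ChunkedVecSketch {
        #[ink(constructor)]
        pub fn new() -> Self {
            Self { chunks: Mapping::default(), len: 0 }
        }

        /// Appends a value, rewriting only the last chunk.
        #[ink(message)]
        pub fn push(&mut self, value: u32) {
            let key = self.len / CHUNK_SIZE;
            let mut chunk = self.chunks.get(key).unwrap_or_default();
            chunk.push(value);
            self.chunks.insert(key, &chunk);
            self.len += 1;
        }

        /// Reads one element; a single storage read loads its whole
        /// chunk, so a sequential iterator could cache the last chunk.
        #[ink(message)]
        pub fn get(&self, index: u32) -> Option<u32> {
            if index >= self.len {
                return None;
            }
            let chunk = self.chunks.get(index / CHUNK_SIZE)?;
            chunk.get((index % CHUNK_SIZE) as usize).copied()
        }
    }
}
```

This cuts sequential reads by roughly the chunk size, but it also ties into the buffer concern from the comments above: a storage value now grows with the whole chunk, so the chunk size bounds how close a single cell gets to the static decode buffer.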