Feat/objectarium compression #205
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main     #205      +/-   ##
==========================================
+ Coverage   96.30%   96.45%   +0.15%
==========================================
  Files          30       31       +1
  Lines        3952     4318     +366
==========================================
+ Hits         3806     4165     +359
- Misses        146      153       +7
Nice, that seems good! Thanks.
I'm just opening a discussion about bucket limits. I'm wondering about the max_object_size and max_total_size limit attributes, which relate to the size of the uncompressed object.
Is it relevant to enforce the total max size on the uncompressed object size? For a single object it may be useful, but for the total bucket size I would use the compressed size to check the limit (by renaming the limit attribute to max_compressed_total_size, adding a new limit attribute, or simply documenting this behavior on the attribute).
@bdeneux ah, yes, it's a good point, and I had wondered about it. I'm quite divided on the question. 🤔 From what I've seen of compressed volumes, the limits usually apply to the uncompressed data, because through the volume the files are seen at their actual size, and that's the approach I've taken. Also, limiting the total size by the compressed size requires compressing first, which is costly (in gas) if the resulting size unfortunately turns out to be larger than the limit. @amimart your thoughts?
For me both behaviors can be justified, but I have a preference for considering the size of the uncompressed object: even if it is compressed at the storage level, the compressed size doesn't reflect the actual size of the object at the interface level when storing or querying it.
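The approach favored above (limits enforced on the uncompressed size, checked before the gas-costly compression step) could be sketched roughly as follows; the struct and function names here are hypothetical, not the contract's actual API:

```rust
// Hypothetical sketch of the limit check discussed above: limits apply
// to the *uncompressed* size and are verified before compression, so no
// gas is spent compressing an object that would be rejected anyway.

struct BucketLimits {
    max_object_size: Option<u64>, // per-object limit, in uncompressed bytes
    max_total_size: Option<u64>,  // whole-bucket limit, in uncompressed bytes
}

struct BucketStat {
    total_size: u64, // sum of uncompressed sizes of stored objects
}

fn check_store(
    limits: &BucketLimits,
    stat: &BucketStat,
    object_len: u64, // uncompressed size of the incoming object
) -> Result<(), String> {
    if let Some(max) = limits.max_object_size {
        if object_len > max {
            return Err("object exceeds max_object_size".into());
        }
    }
    if let Some(max) = limits.max_total_size {
        if stat.total_size + object_len > max {
            return Err("bucket would exceed max_total_size".into());
        }
    }
    // Only at this point would the object be compressed (e.g. with
    // Snappy) and persisted.
    Ok(())
}

fn main() {
    let limits = BucketLimits {
        max_object_size: Some(100),
        max_total_size: Some(1000),
    };
    let stat = BucketStat { total_size: 950 };
    assert!(check_store(&limits, &stat, 50).is_ok());
    assert!(check_store(&limits, &stat, 60).is_err()); // total would reach 1010
    println!("limit checks passed");
}
```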
Looks good to me! 😉
Thanks for the review guys. 👍
🎉 This PR is included in version 2.0.0 🎉 The release is available on GitHub. Your semantic-release bot 📦🚀
This PR addresses #192, adding support for object compression, with Snappy as the primary supported algorithm.
The implementation introduces a smart contract interface design that lets callers express object compression and gives buckets control over the supported algorithms. This development paves the way for experimentation that will provide insights into the gas consumption associated with object storage in various contexts (cf. #191).
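As a rough sketch of what such an interface could look like (the type and field names below are illustrative assumptions, not the contract's actual API), the bucket configuration could whitelist the accepted algorithms, and each store request would be validated against that list before compression:

```rust
// Illustrative sketch only: names and shapes are assumptions, not the
// actual objectarium contract API.

/// Compression algorithms a bucket may accept. `Passthrough` stores the
/// object as-is; `Snappy` is the primary supported algorithm per the PR.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum CompressionAlgorithm {
    Passthrough,
    Snappy,
}

/// Per-bucket configuration: which algorithms callers may use.
struct BucketConfig {
    accepted_compression_algorithms: Vec<CompressionAlgorithm>,
}

impl BucketConfig {
    /// Reject a store request up front if the requested algorithm is
    /// not whitelisted for this bucket.
    fn check_algorithm(&self, algo: CompressionAlgorithm) -> Result<(), String> {
        if self.accepted_compression_algorithms.contains(&algo) {
            Ok(())
        } else {
            Err(format!("algorithm {:?} not accepted by this bucket", algo))
        }
    }
}

fn main() {
    let config = BucketConfig {
        accepted_compression_algorithms: vec![
            CompressionAlgorithm::Passthrough,
            CompressionAlgorithm::Snappy,
        ],
    };
    assert!(config.check_algorithm(CompressionAlgorithm::Snappy).is_ok());
    println!("algorithm accepted");
}
```

Validating the algorithm at the interface level keeps the policy per bucket, which matches the PR's goal of controlling supported algorithms in a given bucket.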