Clarify maximum block size #3104
Comments
Reasons for limitation:
The big DoS problem with huge leaves is that malicious nodes can serve bogus data for a long time before a node can detect the problem (imagine having to download 4 GB before you can check whether any of it is valid). This was very harmful for BitTorrent: when people started choosing huge piece sizes, attackers would routinely exploit it very cheaply, just serving bogus random data. Smaller chunks are very important here. The way I would approach what you describe (which is pretty cool) is that Canonical should both:
You raise a good point, though, that we need to address how to look at objects and know whether to pull them when they may be too big.
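A minimal sketch of that verification argument (illustrative only, standard-library Go, not the actual go-ipfs or bitswap code): because each block is addressed by its own hash, the requester can check every chunk the moment it arrives, so a malicious peer is detected after at most one block rather than after gigabytes of bogus data.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verifyChunk checks a received chunk against the digest the requester
// already expects (e.g. from the parent object's link). With small blocks,
// a peer serving garbage is caught after at most one block of wasted
// transfer instead of after a multi-gigabyte download.
func verifyChunk(chunk []byte, want [sha256.Size]byte) bool {
	got := sha256.Sum256(chunk)
	return bytes.Equal(got[:], want[:])
}

func main() {
	good := []byte("real chunk data")
	want := sha256.Sum256(good)

	fmt.Println(verifyChunk(good, want))                 // true: keep downloading
	fmt.Println(verifyChunk([]byte("bogus data"), want)) // false: drop this peer now
}
```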
Could this issue be moved to ipfs/notes, please?
Just one tangential issue: when trying to add a big block, ipfs stops in the middle of the "add", like this:
Maybe
Yes, agreed.
Is the blocksize limit enforced in the network layer, or just in the chunker?
@ethernomad Both. As seen here, the chunker limits blocks to 1 MB, but libp2p limits blocks to 2 MB. The second limit would be hit by directories containing large numbers of files, prior to directory sharding being implemented.
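To make the chunker-side limit concrete, here is a rough, self-contained sketch of a fixed-size splitter that refuses chunk sizes above 1 MiB (illustrative only; the real chunker lives in go-ipfs, and the 2 MB message cap is enforced separately by libp2p):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

const maxBlockSize = 1 << 20 // 1 MiB, the chunker-side limit discussed above

// splitFixed cuts r into chunks of chunkSize bytes; callers may not ask
// for chunks above maxBlockSize, mirroring the cap described in the thread.
func splitFixed(r io.Reader, chunkSize int) ([][]byte, error) {
	if chunkSize <= 0 || chunkSize > maxBlockSize {
		return nil, fmt.Errorf("chunk size %d exceeds the %d-byte block limit", chunkSize, maxBlockSize)
	}
	var chunks [][]byte
	buf := make([]byte, chunkSize)
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			chunks = append(chunks, append([]byte(nil), buf[:n]...))
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return chunks, nil
		}
		if err != nil {
			return nil, err
		}
	}
}

func main() {
	data := bytes.Repeat([]byte("x"), 700_000)
	chunks, _ := splitFixed(bytes.NewReader(data), 256*1024) // go-ipfs' default chunk size is 256 KiB
	fmt.Println(len(chunks))                                 // 3
}
```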
I am guessing that as network bandwidth, processor speeds, and RAM sizes increase, the optimum block size should also scale, since it becomes easier to do all the things mentioned in #3104 (comment). Further, the volume of data and the size of the individual files that people want to share with ipfs will also grow. Is there a way to increase or tune the block size in the future? If not, won't the maximum block size become a bottleneck?
No, the maximum block size won't become a bottleneck: IPFS is not a PoW blockchain, so there is no limit on how many blocks can be sent per second. As @jbenet said, a smaller block size allows for more deduplication and better security, but we are working on raising it anyway to match the characteristics of a few content-addressed ecosystems (git being one example); it has to be done without any security trade-off.
@alphaCTzo7G If you want to understand this problem better, take a look at this discussion on the IPFS Discourse.
Closing this issue for now; please move further discussion to a new ipfs/notes issue or to Discourse.
The current implementation specifies a fixed maximum block size of 1 MiB.
This is a pretty fundamental limitation; it should be explained better why this is the case (IPLD does not seem to be limited in the same way).
Motivation
When ipfs/specs#130 lands, it will be straightforward to automatically convert many hashes that are already used for integrity checks.
For example, Canonical could construct an "ubuntu releases" object (containing links to the historical live CD ISOs) and publish it on IPNS. It would just need to convert the existing hashes to the CIDv1 format, using the "raw data" codec.
This object would have a perfectly legitimate purpose (i.e. proving the integrity of the live CDs), but ipfs would not be able to handle its leaves.
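For illustration, converting one of those existing checksums to a CIDv1 with the raw codec might look like the following sketch (using the go-cid and go-multihash libraries; the digest below is a made-up placeholder, not a real Ubuntu checksum):

```go
package main

import (
	"encoding/hex"
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	// Placeholder SHA-256 digest standing in for a checksum Canonical
	// already publishes for an ISO (made up for this example).
	digestHex := "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
	digest, err := hex.DecodeString(digestHex)
	if err != nil {
		panic(err)
	}

	// Wrap the existing digest in a multihash, then in a CIDv1 with the
	// "raw" codec, so the ISO's bytes themselves become the addressed leaf.
	mhash, err := mh.Encode(digest, mh.SHA2_256)
	if err != nil {
		panic(err)
	}
	c := cid.NewCidV1(cid.Raw, mhash)
	fmt.Println(c) // a CIDv1 pointing at the raw ISO bytes
}
```

Producing such a CID is cheap; the problem raised here is that the leaf it points to (a multi-gigabyte ISO) far exceeds the maximum block size, so ipfs cannot actually fetch or verify it.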