
Releases: ipfs/kubo

v0.5.0

28 Apr 17:10 · 36789ea

We're excited to announce go-ipfs 0.5.0! This is by far the largest go-ipfs release with ~2500 commits, 98 contributors, and over 650 PRs across ipfs, libp2p, and multiformats.

Highlights

Content Routing

The primary focus of this release was on improving content routing. That is, advertising and finding content. To that end, this release heavily focuses on improving the DHT.

Improved DHT

The distributed hash table (DHT) is how IPFS nodes keep track of who has what data. The DHT implementation has been almost completely rewritten in this release. Providing, finding content, and resolving IPNS records are now all much faster. However, there are risks involved with this update due to the significant amount of changes that have gone into this feature.

The current DHT suffers from three core issues addressed in this release:

  • Most peers in the DHT cannot be dialed (e.g., due to firewalls and NATs). Much of the time spent on a DHT query is wasted trying to connect to peers that cannot be reached.
  • The DHT query logic doesn't properly terminate when it hits the end of the query and, instead, aggressively keeps on searching.
  • The routing tables are poorly maintained. This can cause search performance to slow down linearly with network size, instead of logarithmically as expected.

Reachability

We have addressed the problem of undialable nodes by having nodes wait to join the DHT as server nodes until they've confirmed that they are reachable from the public internet.
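If you already know your node's reachability, you can skip the automatic detection. A hedged example, assuming the Routing.Type option documented in docs/config.md (and that your build accepts the dhtserver value):

> # Never act as a DHT server, e.g., for nodes behind restrictive NATs:
> ipfs config Routing.Type dhtclient
> # Always act as a DHT server, e.g., for nodes with a known public address:
> ipfs config Routing.Type dhtserver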

To ensure that nodes which are not publicly reachable (e.g., behind VPNs or on offline LANs) can still coordinate and share data, go-ipfs 0.5 will run two DHTs: one for private networks and one for the public internet. Every node will participate in a LAN DHT and a public WAN DHT. See Dual DHT for more details.

Dual DHT

All IPFS nodes will now run two DHTs: one for the public internet WAN, and one for their local network LAN.

  1. When connected to the public internet, IPFS will use both DHTs for finding peers, content, and IPNS records. Nodes only publish provider and IPNS records to the WAN DHT to avoid flooding the local network.
  2. When not connected to the public internet, nodes publish provider and IPNS records to the LAN DHT.

The WAN DHT includes all peers with at least one public IP address. This release will only consider an IPv6 address public if it is in the public internet range 2000::/3.

This feature should not have any noticeable impact (performance or otherwise). Everything should continue to work in all the currently supported network configurations: VPNs, disconnected LANs, public internet, etc.

Query Logic

We've improved the DHT query logic to more closely follow Kademlia. This should significantly speed up:

  • Publishing IPNS & provider records.
  • Resolving IPNS addresses.

Previously, nodes would continue searching until they timed out or ran out of peers before stopping (putting or returning the data found). Now, nodes stop as soon as they find the closest peers.
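One rough way to see the difference yourself is to time an IPNS resolution before and after upgrading. A sketch, with <peer-id> as a placeholder for a name you know is published:

> time ipfs name resolve /ipns/<peer-id>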

Routing Tables

Finally, we've addressed the poorly maintained routing tables by:

  • Reducing the likelihood that the connection manager will kill connections to peers in the routing table.
  • Keeping peers in the routing table, even if we get disconnected from them.
  • Actively and frequently querying the DHT to keep our routing table full.
  • Prioritizing useful peers that respond to queries quickly.
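If you want to inspect the result, go-ipfs can print the contents of its routing tables. A hedged example, assuming your build includes the DHT stats command:

> ipfs stats dht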

Testing

The DHT rewrite was made possible by Testground, our new testing framework. Testground allows us to spin up multi-thousand node tests with simulated real-world network conditions. By combining Testground and some custom analysis tools, we were able to gain confidence that the new DHT implementation behaves correctly.

Provider Record Changes

When you add content to your IPFS node, you advertise this content to the network by announcing it in the DHT. We call this providing.

However, go-ipfs has multiple ways to address the same underlying bytes. Specifically, we address content by content ID (CID) and the same underlying bytes can be addressed using (a) two different versions of CIDs (CIDv0 and CIDv1) and (b) with different codecs depending on how we're interpreting the data.

Prior to go-ipfs 0.5.0, we used the content ID (CID) in the DHT when sending out provider records for content. Unfortunately, this meant that users trying to find data announced using one CID wouldn't find nodes providing the content under a different CID.

In go-ipfs 0.5.0, we're announcing data by multihash, not CID. This way, regardless of the CID version used by the peer adding the content, the peer trying to download the content should still be able to find it.
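You can see that both CID versions name the same bytes by converting one to the other; the underlying multihash is preserved. An illustrative example using the ipfs cid helper, with <cidv0> standing in for any v0 CID you have:

> ipfs cid format -v 1 -b base32 <cidv0>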

Warning: as the network upgrades, this could impact finding content added with CIDv1. Because go-ipfs 0.5.0 announces and searches for content using the bare multihash (equivalent to the v0 CID), go-ipfs 0.5.0 will be unable to find CIDv1 content published by nodes running earlier versions, and vice versa. As CIDv1 is not enabled by default, we believe this will have minimal impact. However, users are strongly encouraged to upgrade as soon as possible.

Content Transfer

A secondary focus in this release was improving content transfer, our data exchange protocols.

Refactored Bitswap

This release includes a major Bitswap refactor, running a new and backward compatible Bitswap protocol. We expect these changes to improve performance significantly.

With the refactored Bitswap, we expect:

  • Few to no duplicate blocks when fetching data from other nodes speaking the new protocol.
  • Better parallelism when fetching from multiple peers.

The new Bitswap won't magically make downloading content any faster until both seeds and leeches have updated. If you're one of the first to upgrade to 0.5.0 and try downloading from peers that haven't upgraded, you're unlikely to see much of a performance improvement.

Server-Side Graphsync Support (Experimental)

Graphsync is a new exchange protocol that operates at the IPLD graph layer instead of at the block layer like Bitswap.

For example, to download "/ipfs/QmFoo/index.html":

  • Bitswap would download QmFoo, look up "index.html" in the directory named by QmFoo, resolving it to a CID QmIndex. Finally, Bitswap would download QmIndex.
  • Graphsync would ask peers for "/ipfs/QmFoo/index.html". Specifically, it would ask for the child named "index.html" of the object named by "QmFoo".

This saves us round-trips in exchange for some extra protocol complexity. Moreover, this protocol allows specifying more powerful queries like "give me everything under QmFoo". This can be used to quickly download a large amount of data with few round-trips.

At the moment, go-ipfs cannot use this protocol to download content from other peers. However, if enabled, go-ipfs can serve content to other peers over this protocol. This may be useful for pinning services that wish to quickly replicate client data.

To enable, run:

> ipfs config --json Experimental.GraphsyncEnabled true

Datastores

Continuing with the theme of improving our core data handling subsystems, both of the datastores used in go-ipfs, Badger and flatfs, have received important updates in this release:

Badger

Badger has been in go-ipfs for over a year as an experimental feature, and we're promoting it to stable (but not default). For this release, we've switched from writing to disk synchronously to explicitly syncing where appropriate, significantly increasing write throughput.

The current and default datastore used by go-ipfs is FlatFS. FlatFS essentially stores blocks of data as individual files on your file system. However, there are lots of optimizations a specialized database can make that a standard file system cannot.

The benefit of Badger is that adding and fetching data is significantly faster than with the default datastore, FlatFS. In some tests, adding data to Badger is 32x faster than adding to FlatFS (in this release).

Enable Badger

In this release, we're marking the Badger datastore as stable. However, we're not yet enabling it by default. You can enable it at initialization by running:

> ipfs init --profile=badgerds
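Note that profiles only apply cleanly at initialization time. For an existing repo, the separate ipfs-ds-convert tool (github.com/ipfs/ipfs-ds-convert) can migrate data between datastores. A rough sketch, assuming you've already edited the Datastore.Spec section of your config to describe the Badger layout:

> ipfs-ds-convert convert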

Issues with Badger

While Badger is a great solution, there are some issues you should consider before enabling it.

Badger is complicated. FlatFS pushes all the complexity down into the filesystem itself. That means FlatFS is only likely to lose your data if your underlying filesystem gets corrupted, while there are more opportunities for Badger itself to become corrupted.

Badger can use a lot of memory. In this release, we've tuned Badger to use ~20MB of memory by default. However, it can still produce memory usage spikes as large as 1GiB when garbage collecting.

Finally, Badger isn't very aggressive when it comes to garbage collection, and we're still investigating ways to get it to more aggressively clean up after itself.

We suggest you use Badger if:

  • Performance is your main requirement.
  • You rarely delete anything.
  • You have some memory to spare.

Flatfs

In the flatfs datastore, we've fixed an issue where temporary files could be left behind in some cases. While this release will avoid leaving behind temporary files, you may want to remove any left behind by previous releases:

> rm ~/.ipfs/blocks/*...

v0.5.0-rc4 (Pre-release)

25 Apr 09:21 · 116999a

Since RC3:

  • Reduce duplicate blocks in bitswap by increasing some timeouts and fixing the use of sessions in the ipfs pin command.
  • Fix some bugs in the ipfs dht CLI commands.
  • Ensure bitswap cancels are sent when the request is aborted.
  • Optimize some bitswap hot-spots and reduce allocations.
  • Harden use of the libp2p identify protocol to ensure we never "forget" our peers' protocols. This is important in this release because we're using this information to determine whether or not a peer is a member of the DHT.
  • Fix some edge cases where we might not notice that a peer has transitioned to/from a DHT server/client.
  • Avoid forgetting our observed external addresses when no new connections have formed in the last 10 minutes. This has been a mild issue since 2016 but was exacerbated by this release as we now push address updates to our peers when our addresses change. Unfortunately, combined, this meant we'd tell our peers to forget our external addresses (but only if we hadn't formed a single new connection in the last 10 minutes).

Release v0.5.0-rc3 (Pre-release)

22 Apr 07:25 · 8cb67ab

Since RC2:

  • Many typo fixes.
  • Merged some improvements to the gateway directory listing template.
  • Performance tweaks in bitswap and the DHT.
  • More integration tests for the DHT.
  • Fixed redirects to the subdomain gateway for directory listings.
  • Merged some debugging code for QUIC.
  • Updated the WebUI to pull in some bug fixes.
  • Updated flatfs to fix some issues on Windows in the presence of antivirus software.
  • Updated go version to 1.13.10.
  • Avoid adding IPv6 peers to the WAN DHT if their only "public" IP addresses aren't in the public internet IPv6 range.

Release v0.5.0-rc2 (Pre-release)

15 Apr 07:09 · b6dfe07

Release issue: #7109

Changes between RC1 and RC2

Other than bug fixes, the following major changes were made between RC1 and RC2.

QUIC Upgrade

In RC1, we downgraded to a previous version of the (experimental) QUIC transport so we could build on go 1.13. In RC2, our QUIC transport was patched to support go 1.13 so we've upgraded back to the latest version.

NOTE: The latest version implements a different and incompatible draft (draft 27) of the QUIC protocol than the previous RC and go-ipfs 0.4.23. In practice, this shouldn't cause any issues as long as your node supports transports other than QUIC (also necessary to communicate with the vast majority of the network).

DHT "auto" mode

In this RC, the DHT will not enter "server" mode until your node determines that it is reachable from the public internet. This prevents unreachable nodes from polluting the DHT. Please read the "New DHT" section in the issue body for more info.

AutoNAT

IPFS has a protocol called AutoNAT for detecting whether or not a node is "reachable" from the public internet. In short:

  1. An AutoNAT client asks a node running an AutoNAT service if it can be reached at one of a set of guessed addresses.
  2. The AutoNAT service will attempt to "dialback" those addresses (with some restrictions, e.g., we won't dial back to a different IP address).
  3. If the AutoNAT service succeeds, it will report back the address it successfully dialed and the AutoNAT client will now know that it is reachable from the public internet.

In go-ipfs 0.5, all nodes act as AutoNAT clients to determine if they should switch into DHT server mode.

As of this RC, all nodes (except new nodes initialized with the "lowpower" config profile) will also run a rate-limited AutoNAT service by default. This should have minimal overhead but we may change the defaults in RC3 (e.g., rate limit further or only enable the AutoNAT service on DHT servers).

In addition to enabling the AutoNAT service by default, this RC changes the AutoNAT config options:

  1. It removes the Swarm.EnableAutoNATService option.
  2. It adds an AutoNAT config section (empty by default). This new section is documented in docs/config.md along with the rest of the config file.
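For example, if you'd rather not run the AutoNAT service at all, the new section exposes a service-mode switch. A hedged example; check docs/config.md for the exact option names in your version:

> ipfs config AutoNAT.ServiceMode disabled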

LAN/WAN DHT

As forewarned in the RC1 release notes, RC2 includes the split LAN/WAN DHT. All IPFS nodes will now run two DHTs: one for the public internet (WAN) and one for their local network (LAN).

  • When connected to the public internet, IPFS will use both DHTs for finding peers, content, and IPNS records, but will only publish records (provider and IPNS) to the WAN DHT to avoid flooding the local network.
  • When not connected to the public internet, IPFS will publish provider and IPNS records to the LAN DHT.

This feature should not have any noticeable (performance or otherwise) impact and go-ipfs should continue to work in all the currently supported network configurations: VPNs, disconnected LANs, public internet, etc.

In a future release, we hope to use this feature to limit the advertisement of private addresses to the local LAN.

Release v0.5.0-rc1 (Pre-release)

07 Apr 08:20 · 1a2c88b

This is the first RC for go-ipfs 0.5.0. See #7109 for details.

Release v0.4.23

30 Jan 06:55

Would sir/madam care for another patch release while they wait?

Yes that's right, the next feature release of go-ipfs (0.5.0) is, well, running a tiny bit behind schedule. In the meantime though, we have patches, and I'm not talking pirate eye patches, I'm talking bug fixes. We're hunting these bugs like they're Pokemon, and jeez, do we come across some rare and difficult-to-fix ones? You betcha.

Alright, enough funny business, what's the deal? Ok so, I don't want to alarm anyone but this release has some critical fixes and if you're using go-ipfs or know someone who is then you and your friends need to slide into your upgrade pants and give those IPFS nodes a good wipe down ASAP.

If you're a busy person and are feeling like you've read a little too much already, the TL;DR on the critical fixes is:

  1. We fixed a bug in the TLS transport that would (very rarely) cause disconnects during the handshake. You really should upgrade or you'll see this bug more and more when TLS is enabled by default in go-ipfs 0.5.0.
  2. We patched a commonly occurring bug in the websocket transport that was causing panics because of concurrent writes.

🔦 Highlights

🤝 Fixed Spontaneous TLS Disconnects

If this isn't reason enough to upgrade I don't know what is. Turns out, a TLS handshake may have been accidentally aborted for no good reason 😱. Don't panic just yet! It's a really rare race condition and in go-ipfs 0.4.x the TLS transport is experimental (SECIO is currently the default).

Phew, ok, that said, in go-ipfs 0.5.0, TLS will be the default so don't delay, upgrade today!

😱 Fixed Panics and Crashes

Panicking won't help, in life, and also in golang. Stay calm and breathe slowly. We patched a number of panics and crashes that were uncovered, including a panic due to concurrent writes that you probably saw quite a lot if you were using the websocket transport. High ten 🙌?

🔁 Fixed Recursive Resolving of dnsaddr Multiaddrs

dnsaddrs can be recursive! That means a given dnsaddr can resolve to another dnsaddr. Not indefinitely though, don't try to trick us with your circular addresses - you get 32 goes on the ride maximum.

We found this issue when rolling out a brand spanking new set of bootstrap nodes only to discover their new addresses were, well, what's the opposite of recursive? It's not cursive...non-recursive I guess. Basically they resolved one time and then not again. I know right - bad news bears 🐻!?

Ok, "bear" this in mind: you want to keep all your DNS TXT records below 512 bytes to avoid UDP fragmentation, otherwise you'll get a truncated reply and have to connect with TCP to get all the records. If you have lots of dnsaddr TXT records then it can be more efficient to use recursive resolving than to get a truncated reply and go through the famous 18-way SYN, SYN-ACK ACK, ACK-SYN, ACK-ACK (...etc, etc) TCP handshake, not to mention the fact that go-ipfs will not even try to fallback to TCP 😅.

Anyway, long story short. We fixed recursive dnsaddr resolving so we didn't have to deal with UDP fragmentation. You're welcome.
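If you're curious what these records look like, dnsaddr entries live in TXT records under the _dnsaddr subdomain, so you can inspect them with any DNS tool. For example, using the public libp2p bootstrap domain:

> dig +short TXT _dnsaddr.bootstrap.libp2p.io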

📻 Retuned Connection Manager

The Connection Manager has been tuned to better prioritise existing connections by not counting new connections in the "grace" period (30s) towards connection limits. New connections are like new friends. You can't hang out with everyone all the time, I mean, it just gets difficult to book a restaurant after a while.

You also wouldn't stop being friends with Jane just because you met Sarah once on the train. You and Jane have history, think of everything you've been through. Remember that time when Jane's dog, Dave, ran away? I know, it's a weird name for a dog, I mean who gives a human name to a dog anyway, but I guess that's one of the reasons you like Jane. Anyway, she lost her dog and you both looked all around town for it, you were about to give up but then you heard faint whimpering as you were walking back to the house. Dave had somehow managed to fall into the old abandoned well!

You see?! History! ...and, erh, what was I saying? Oh yeah, Connection Manager - new connections don't cause us to close useful, existing connections (like Jane). More specifically though, this change solves the problem of your peer receiving more inbound connections than the HighWater limit, causing it to disconnect from Jane, as well as all your other good friends (peers not in the grace period) in favor of connections that might not even work out. No-one wants to be friendless, and this fix avoids that awkward situation. Though, it does mean you'll keep more connections in total. Maybe consider reducing the HighWater setting in your config.
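If you'd like to trim your connection count back down, the relevant knobs live under Swarm.ConnMgr in the config. Illustrative values only, not recommendations:

> ipfs config --json Swarm.ConnMgr.HighWater 500
> ipfs config --json Swarm.ConnMgr.LowWater 300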

🍖 Reduced Relay Related DHT Spam

When AutoRelay was enabled, and your IPFS node was unreachable behind a NAT or something, go-ipfs would search the DHT for 3 relays with RelayHop enabled, connect to them and then advertise them as relays.

The problem is that many of the public relays had low connection limits and were overloaded. There's a lot of IPFS nodes in the network, and a lot of unreachable nodes trying their best to hop around via relays. So relay nodes were being DDoSed and they were constantly killing connections. Nodes trying to use the relays were on a continuous quest for better ones, which was causing 95% of the DHT traffic. Eek!

So, instead of spamming the DHT the whole time trying to find random, potentially poor relays, IPFS is now using a pre-defined set of autorelays. I mean, try to tell me that doesn't make sense.

🐾 Better Bitswap

Joe has the rare shiny collectable card you've been hunting for forever (since yesterday). You've spotted him, right over there on the other side of the playground. But now that you've found what you're looking for, you're so excited you forget what you were doing and start looking again.

This is exactly what Bitswap was like: due to a bug, it stopped trying to connect to providers once it had found enough of them. Specifically, if we found enough providers (100) or timed out the provider request, Bitswap would cancel any in-progress connection attempts to providers and walk away.

We're also now marking frequently used peers as "important" in the connection manager so those connections do not get dropped. This is like, erm, you and Joe being besties. Joe has all the good cards and is surprisingly willing to part with them. Ok, I'll admit, card trading is probably not a great analogy to bitswap 😛

🦄 And More!

  • Fixed build on go 1.13
  • New version of the WebUI to fix some issues with the peers map

❤️ Contributors

Contributor Commits Lines ± Files Changed
Steven Allen 52 +1866/-578 102
vyzo 12 +167/-90 22
whyrusleeping 5 +136/-52 7
Roman Proskuryakov 7 +94/-7 10
Jakub Sztandera 3 +58/-13 7
hucg 2 +31/-11 2
Raúl Kripalani 2 +7/-33 6
Marten Seemann 3 +27/-10 5
Marcin Rataj 2 +26/-0 5
b5 1 +2/-22 1
Hector Sanjuan 1 +11/-0 1
Yusef Napora 1 +4/-0 1

Would you like to contribute to the IPFS project and don't know how? Well, there are a few places you can get started:

⁉️ Do you have questions?

The best place to ask your questions about IPFS, how it works and what you can do with it is at discuss.ipfs.io. We are also available at the #ipfs channel on Freenode, which is also accessible through our Matrix bridge.

Release v0.4.22

14 Aug 07:04

We're releasing a PATCH release of go-ipfs based on 0.4.21 containing some critical fixes.

The IPFS network has scaled to the point where small changes can have a wide-reaching impact on the entire network. To keep this situation from escalating, we've put a hold on releasing new features until we can improve our release process (which we've trialed in this release) and testing procedures.

This release includes fixes for the following regressions:

  1. A major bitswap throughput regression introduced in 0.4.21 (ipfs/go-ipfs#6442).
  2. High bitswap CPU usage when connected to many (e.g. 10,000) peers. See ipfs/go-bitswap#154.
  3. The local network discovery service sometimes initializes before the networking module, causing it to announce the wrong addresses and sometimes complain about not being able to determine the IP address (ipfs/go-ipfs#6415).

It also includes fixes for:

  1. Pins not being persisted after ipfs block add --pin (ipfs/go-ipfs#6441).
  2. Panic due to concurrent map access when adding and listing pins at the same time (ipfs/go-ipfs#6419).
  3. Potential pin-set corruption given a concurrent ipfs repo gc and ipfs pin rm (ipfs/go-ipfs#6444).
  4. Build failure due to a deleted git tag in one of our dependencies (ipfs/go-ds-badger#64).

Thanks to:

Release v0.4.22-rc1 (Pre-release)

22 Jul 18:26

Track progress on #6506.

Release v0.4.21

30 May 23:22 · 8ca278f

We're happy to announce go-ipfs 0.4.21. This release has some critical bug fixes and a handful of new features so every user should upgrade.

Key bug fixes:

  • Too many open file descriptors/too many peers (#6237).
  • Adding multiple files at the same time doesn't work (#6254).
  • CPU utilization spikes and then holds at 100% (#5613).

Key features:

  • Experimental TLS1.3 support (to eventually replace secio).
  • OpenSSL support for SECIO handshakes (performance improvement).

IMPORTANT: This release fixes a bug in our security transport that could potentially drop data from the channel. Note: This issue affects neither the privacy nor the integrity of the data with respect to a third-party attacker. Only the peer sending us data could trigger this bug.

ALL USERS MUST UPGRADE. We intended to introduce a feature this release that, unfortunately, reliably triggered this bug. To avoid partitioning the network, we've decided to postpone this feature for a release or two.

Specifically, we're going to provide a minimum one month upgrade period. After that, we'll start testing the impact of deploying the proposed changes.

If you're running the mainline go-ipfs, please upgrade ASAP. If you're building a separate app or working on a forked go-ipfs, make sure to upgrade github.com/libp2p/go-libp2p-secio to at least v0.0.3.
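For forked or embedding projects built with go modules, the bump is a one-liner from your module root (a sketch; pick a newer patch release if one exists):

> go get github.com/libp2p/go-libp2p-secio@v0.0.3
> go mod tidy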

Contributors

First off, we'd like to give a shout-out to all contributors that participated in this release (including contributions to ipld, libp2p, and multiformats):

Contributor Commits Lines ± Files Changed
Steven Allen 220 +6078/-4211 520
Łukasz Magiera 53 +5039/-4557 274
vyzo 179 +2929/-1704 238
Raúl Kripalani 44 +757/-1895 134
hannahhoward 11 +755/-1005 49
Marten Seemann 16 +862/-203 44
keks 10 +359/-110 12
Jan Winkelmann 8 +368/-26 16
Jakub Sztandera 4 +361/-8 7
Adrian Lanzafame 1 +287/-18 5
Erik Ingenito 4 +247/-28 8
Reid 'arrdem' McKenzie 1 +220/-20 3
Yusef Napora 26 +98/-130 26
Michael Avila 3 +116/-59 8
Raghav Gulati 13 +145/-26 13
tg 1 +41/-33 1
Matt Joiner 6 +41/-30 7
Cole Brown 1 +37/-25 1
Dominic Della Valle 2 +12/-40 4
Overbool 1 +50/-0 2
Christopher Buesser 3 +29/-16 10
myself659 1 +38/-5 2
Alex Browne 3 +30/-8 3
jmank88 1 +27/-4 2
Vikram 1 +25/-1 2
MollyM 7 +17/-9 7
Marcin Rataj 1 +17/-1 1
requilence 1 +11/-4 1
Teran McKinney 1 +8/-2 1
Oli Evans 1 +5/-5 1
Masashi Salvador Mitsuzawa 1 +5/-1 1
chenminjian 1 +4/-0 1
Edgar Lee 1 +3/-1 1
Dirk McCormick 1 +2/-2 2
ia 1 +1/-1 1
Alan Shaw 1 +1/-1 1

Bug Fixes And Enhancements

This release includes quite a number of critical bug fixes and performance/reliability enhancements.

Error when adding multiple files

The last release broke the simple command ipfs add file1 file2. It turns out we simply lacked a test case for this. Both of these issues (the bug and the lack of a test case) have now been fixed.

SECIO

As noted above, we've fixed a bug that could cause data to be dropped from a SECIO connection on read. Specifically, this happens when:

  1. The capacity of the read buffer is greater than the length.
  2. The remote peer sent more than the length but less than the capacity in a single secio "frame".

In this case, we'd fill the read buffer to its capacity instead of its length.

Too many open files, too many peers, etc.

Go-ipfs automatically closes the least useful connections when it accumulates too many connections. Unfortunately, some relayed connections were blocking in Close(), halting the entire process.

Out of control CPU usage

Many users noted out of control CPU usage this release. This turned out to be a long-standing issue with how the DHT handled provider records (records recording which peers have what content):

  1. It wasn't removing provider records for content until the set of providers completely emptied.
  2. It was loading every provider record into memory whenever we updated the set of providers.

Combined, these two issues were thrashing the provider record cache, forcing the DHT to repeatedly load and discard provider records.

More Reliable Connection Management

Go-ipfs has a subsystem called the "connection manager" to close the least-useful connections when go-ipfs runs low on resources.

Unfortunately, other IPFS subsystems may learn about connections before the connection manager. Previously, if some IPFS subsystem tried to mark a connection as useful before the connection manager learned about it, the connection manager would discard this information. We believe this was causing #6271. It no longer does that.

Improved Bitswap Connection Management

Bitswap now uses the connection manager to mark all peers downloading blocks as important (while downloading). Previously, it only marked peers from which it was downloading blocks.

Reduced Memory Usage

The most noticeable memory reduction in this release comes from fixing connection closing. However, we've made a few additional improvements:

  • Bitswap's "work queue" no longer remembers every peer it has seen indefinitely.
  • The peerstore now interns protocol names.
  • The per-peer goroutine count has been reduced.
  • The DHT now wastes less memory on idle peers by pooling buffered writers and returning them to the pool when not actively using them.

Increased File Descriptor Limit

The default file descriptor limit has been raised to 8192 (from 2048). Unfortunately, go-ipfs behaves poorly when it runs out of file descriptors, and it uses a lot of them.

Luckily, most modern kernels can handle thousands of file descriptors without any difficulty.

Commands

This release brings no new commands but does introduce a few changes, bugfixes, and enhancements. This section is hardly complete but it lists the most noticeable changes.

Take note: this release also introduces a few breaking changes.

[DEPRECATION] The URLStore Command Deprecated

The experimental ipfs urlstore command is now deprecated. Please use ipfs add --nocopy URL instead.

[BREAKING] The DHT Command Base64 Encodes Values

When responding to an ipfs dht get command, the daemon now encodes the returned value using base64. The ipfs command will automatically decode this value before returning it to the user so this change should only affect those using the HTTP API directly.

Unfortunately, this change was necessary as DHT records are arbitrary binary blobs which can't be directly stored in JSON strings.
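If you're hitting the HTTP API directly, remember to decode the value yourself. A sketch, assuming a local daemon on the default API address and a hypothetical <key> placeholder:

> curl -s "http://127.0.0.1:5001/api/v0/dht/get?arg=<key>"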

[BREAKING] Base32 Encoded v1 CIDs By Default

Both js-ipfs and go-ipfs now encode CIDv1 CIDs using base32 by default, instead of base58. Unfortunately, base58 is case-sensitive and doesn't play well with browsers (see #4143).
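To convert an existing base58 CIDv1 to the new base32 default, there's a helper subcommand. Illustrative, with <cidv1> standing in for your own CID:

> ipfs cid base32 <cidv1>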

Human Readable Numbers

The ipfs bitswap stat and ipfs object stat commands now support a --humanize flag that formats numbers with human-readable units (GiB, MiB, etc.).

Improved Errors

This release improves two types of errors:

  1. Commands that take paths/multiaddrs now include the path/multiaddr in the error message when it fails to parse.
  2. ipfs swarm connect now returns a detailed error describing which addresses were tried and why the dial failed.

Ping Improvements

The ping command has received some small improvements and fixes:

  1. It now exits with a non-zero exit status on failure.
  2. It no longer succeeds with zero successful pings if we have a zombie but non-functional connection to the peer being pinged (#6298).
  3. It now prints out the average latency when canceled with ^C (like the unix ping command).

Features

This release is primarily a bug fix release but it still includes two nice features from libp2p.

Experimental TLS1.3 support

G...


Release 0.4.20

16 Apr 18:55 · 8efc825

We're happy to release go-ipfs 0.4.20. This release includes some critical
performance and stability fixes so all users should upgrade ASAP.

This is also the first release to use go modules instead of GX. While GX has
been a great way to dogfood an IPFS-based package manager, building and
maintaining a custom package manager is a lot of work and we haven't been able
to dedicate enough time to bring the user experience of gx to an acceptable
level. You can read #5850 for
some discussion on this matter.

Docker

As of this release, it's now much easier to run arbitrary IPFS commands within
the docker container:

> docker run --name my-ipfs ipfs/go-ipfs:v0.4.20 config profile apply server # apply the server profile
> docker start my-ipfs # start the daemon

This release also reverts a change that
caused some significant trouble in 0.4.19. If you've been running into Docker
permission errors in 0.4.19, please upgrade.

WebUI

This release contains a major
WebUI release with some
significant improvements to the file browser and new opt-in, privately hosted,
anonymous usage analytics.

Commands

As usual, we've made several changes and improvements to our commands. The most
notable changes are listed in this section.

New: ipfs version deps

This release includes a new command, ipfs version deps, to list all
dependencies (with versions) of the current go-ipfs build. This should make it
easy to tell exactly how go-ipfs was built when tracking down issues.

New: ipfs add URL

The ipfs add command has gained support for URLs. This means you can:

  1. Add files with ipfs add URL instead of downloading the file first.
  2. Replace all uses of the ipfs urlstore command with a call to ipfs add --nocopy. The ipfs urlstore command will be deprecated in a future
    release.
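For example (hedged; any fetchable URL works, and the --nocopy variant may additionally require the relevant experimental flags to be enabled):

> ipfs add https://example.com/file.bin
> ipfs add --nocopy https://example.com/file.bin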

Changed: ipfs swarm connect

The ipfs swarm connect command has a few new features:

It now marks the newly created connection as "important". This should ensure
that the connection manager won't come along later and close the connection if
it doesn't think it's being used.

It can now resolve /dnsaddr addresses that don't end in a peer ID. For
example, you can now run ipfs swarm connect /dnsaddr/bootstrap.libp2p.io to
connect to one of the bootstrap peers at random. NOTE: This could connect you to
an arbitrary peer as DNS is not secure (by default). Please do not rely on
this except for testing or unless you know what you're doing.

Finally, ipfs swarm connect now returns all errors on failure. This should
make it much easier to debug connectivity issues. For example, one might see an
error like:

Error: connect QmYou failure: dial attempt failed: 6 errors occurred:
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip4/127.0.0.1/tcp/4001) dial attempt failed: dial tcp4 127.0.0.1:4001: connect: connection refused
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip6/::1/tcp/4001) dial attempt failed: dial tcp6 [::1]:4001: connect: connection refused
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip6/2604::1/tcp/4001) dial attempt failed: dial tcp6 [2604::1]:4001: connect: network is unreachable
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip6/2602::1/tcp/4001) dial attempt failed: dial tcp6 [2602::1]:4001: connect: network is unreachable
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip4/150.0.1.2/tcp/4001) dial attempt failed: dial tcp4 0.0.0.0:4001->150.0.1.2:4001: i/o timeout
	* <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip4/200.0.1.2/tcp/4001) dial attempt failed: dial tcp4 0.0.0.0:4001->200.0.1.2:4001: i/o timeout

Changed: ipfs bitswap stat

ipfs bitswap stat no longer lists bitswap partners unless the -v flag is
passed. That is, it will now return:

> ipfs bitswap stat
bitswap status
	provides buffer: 0 / 256
	blocks received: 0
	blocks sent: 79
	data received: 0
	data sent: 672706
	dup blocks received: 0
	dup data received: 0 B
	wantlist [0 keys]
	partners [197]

Instead of:

> ipfs bitswap stat -v
bitswap status
	provides buffer: 0 / 256
	blocks received: 0
	blocks sent: 79
	data received: 0
	data sent: 672706
	dup blocks received: 0
	dup data received: 0 B
	wantlist [0 keys]
	partners [203]
		QmNQTTTRCDpCYCiiu6TYWCqEa7ShAUo9jrZJvWngfSu1mL
		QmNWaxbqERvdcgoWpqAhDMrbK2gKi3SMGk3LUEvfcqZcf4
		QmNgSVpgZVEd41pBX6DyCaHRof8UmUJLqQ3XH2qNL9xLvN
        ... omitting 200 lines ...

Changed: ipfs repo stat --human

The --human flag in the ipfs repo stat command now intelligently picks a
size unit instead of always using MiB.

Changed: ipfs resolve (ipfs dns, ipfs name resolve)

All of the resolve commands now:

  1. Resolve recursively (up to 32 steps) by default to better match user
    expectations (these commands used to be non-recursive by default). To turn
    recursion off, pass -r false.
  2. When resolving non-recursively, these commands no longer fail when partially
    resolving a name. Instead, they simply return the intermediate result.

Changed: ipfs files flush

The ipfs files flush command now returns the CID of the flushed file.

Performance And Reliability

This release has the usual collection of performance and reliability
improvements.

Badger Memory Usage

Those of you using the badger datastore should notice reduced memory usage in
this release due to some upstream changes. Badger still uses significantly more
memory than the default datastore configuration but this will hopefully continue
to improve.

Bitswap

We fixed some critical CPU utilization regressions in bitswap for this release.
If you've been noticing CPU regressions in go-ipfs 0.4.19, especially when
running a public gateway, upgrading to 0.4.20 will likely fix them.

Relays

After AutoRelay was introduced in go-ipfs 0.4.19, the number of peers connecting
through relays skyrocketed to over 120K concurrent peers. This highlighted some
performance issues that we've now fixed in this release. Specifically:

  • We've significantly reduced the amount of memory allocated per-peer.
  • We've fixed a bug where relays might, in rare cases, try to actively dial a
    peer to relay traffic. By default, relays only forward traffic between peers
    already connected to the relay.
  • We've fixed quite a number of performance issues that only show up when
    rapidly forming new connections. This will actually help all nodes but will
    especially help relays.

If you've enabled relay hop (Swarm.EnableRelayHop) in go-ipfs 0.4.19 and it
hasn't burned down your machine yet, this release should improve things
significantly. However, relays are still under heavy load so running an open
relay will continue to be resource intensive.

We're continuing to investigate this issue and have a few more patches on the
way that, unfortunately, won't make it into this release.

Panics

We've fixed two notable panics in this release:

  • We've fixed a frequent panic in the DHT.
  • We've fixed an occasional panic in the experimental QUIC transport.

Content Routing

IPFS announces and finds content by sending and retrieving content routing
("provider") records to and from the DHT. Unfortunately, sending out these
records can be quite resource intensive.

This release has two changes to alleviate this: a reduced number of initial
provide workers and a persistent provider queue.

We've reduced the number of parallel initial provide workers (workers that send
out provider records when content is initially added to go-ipfs) from 512 to 6.
Each provide request (currently, due to some issues in our DHT) tries to
establish hundreds of connections, significantly impacting the performance of
go-ipfs and crashing some routers.

We've introduced a new persistent provider queue for files added via ipfs add
and ipfs pin add. When new directory trees are added to go-ipfs, go-ipfs will
add the root/final CID to this queue. Then, in the background, go-ipfs will walk
the queue, sequentially sending out provider records for each CID.

This ensures that root CIDs are sent out as soon as possible, and that records for
files added while the go-ipfs daemon isn't running are still sent out (from the
persistent queue) the next time it starts.

As an example, let's add a directory tree to go-ipfs:

> # We're going to do this in "online" mode first so let's start the daemon.
> ipfs daemon &
...
Daemon is ready
> # Now, we're going to create a directory to add.
> mkdir foo
> for i in {0..1000}; do echo $i > foo/$i; done
> # finally, we're going to add it.
> ipfs add -r foo
added QmUQcSjQx2bg4cSe2rUZyQi6F8QtJFJb74fWL7D784UWf9 foo/0
...
added QmQac2chFyJ24yfG2Dfuqg1P5gipLcgUDuiuYkQ5ExwGap foo/990
added QmQWwz9haeQ5T2QmQeXzqspKdowzYELShBCLzLJjVa2DuV foo/991
added QmQ5D4MtHUN4LTS4n7mgyHyaUukieMMyCfvnzXQAAbgTJm foo/992
added QmZq4n4KRNq3k1ovzxJ4qdQXZSrarfJjnoLYPR3ztHd7EY foo/993
added QmdtrsuVf8Nf1s1MaSjLAd54iNqrn1KN9VoFNgKGnLgjbt foo/994
added QmbstvU9mnW2hsE94WFmw5WbrXdLTu2Sf9kWWSozrSDscL foo/995
added QmXFd7f35gAnmisjfFmfYKkjA3F3TSpvUYB9SXr6tLsdg8 foo/996
added QmV5BxS1YQ9V227Np2Cq124cRrFDAyBXNMqHHa6kpJ9cr6 foo/997
added QmcXsccUtwKeQ1SuYC3YgyFUeYmAR9CXwGGnT3LPeCg5Tx foo/998
added Qmc4mcQcpaNzyDQxQj5SyxwFg9ZYz5XBEeEZAuH4cQirj9 foo/999
added QmXpXzUhcS9edmFBuVafV5wFXKjfXkCQcjAUZsTs7qFf3G foo

In 0.4.19, we would have sent out provider records for files foo/{0..1000}
before sending out a provider record for foo. If you were to ask a friend to
download /ipfs/QmUQcSjQx2bg4cSe2rUZyQi6F8QtJFJb74fWL7D784UWf9, they would
(barring other issues) be able to find it pretty quickly as this is the first CID
you'll have announced to the netwo...
