Peer storage for nodes to distribute small encrypted blobs. #1110

Status: Open — wants to merge 1 commit into master

Conversation

@adi2011 adi2011 commented Sep 15, 2023

This pull request proposes peer storage, inspired by @t-bast's peer backup storage. We create two separate messages, which users can use to distribute encrypted 'blobs' of up to 64 KB to their peers.

Users can distribute a 'blob' and update it whenever their configuration changes by sending a 'peer_storage' message. In return, they will always receive a 'your_peer_storage' message, which will serve as an acknowledgment.

Nodes can also decide whether they want to provide this service in exchange for some sats.
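To make the exchange concrete, here is a minimal framing sketch, assuming BOLT #1's big-endian 2-byte type prefix and the `[u16 length][blob]` payload proposed in this PR; the type number 7 comes from the draft below, and the helper names are illustrative, not part of any spec.

```python
import struct

PEER_STORAGE_TYPE = 7  # proposed odd message type, per this PR's draft
MAX_BLOB = 65531       # the blob limit stated in the PR's requirements

def encode_peer_storage(blob: bytes) -> bytes:
    """Frame an encrypted blob as a peer_storage message: type || length || blob."""
    if len(blob) > MAX_BLOB:
        raise ValueError("blob exceeds 65531 bytes")
    return struct.pack(">HH", PEER_STORAGE_TYPE, len(blob)) + blob

def decode_peer_storage(msg: bytes) -> bytes:
    """Parse a peer_storage message back into the stored blob."""
    msg_type, length = struct.unpack(">HH", msg[:4])
    if msg_type != PEER_STORAGE_TYPE:
        raise ValueError("not a peer_storage message")
    return msg[4:4 + length]
```

A full-size blob frames to exactly 65535 bytes, the maximum Lightning message size.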

You can check out the implementation in CLN here

Thank you so much, @rustyrussell, for all the help and guidance!

01-messaging.md Outdated

Upon receiving `peer_storage`, nodes should immediately respond with the latest `your_peer_storage`, which could work as an ack to the peer.

Nodes with `option_provide_storage` should also regularly send `your_peer_storage` to verify that they store the latest and correct form of `blob`. Nodes may want to ignore `peer_storage` messages from peers they don't have a channel with, to avoid spam.

This part is unclear to me. Based on my reading, there are two instances when a node (running option_provide_storage) will send your_peer_storage message:

  • When it gets a peer_storage message and has stored the data (like an ack)
  • When it reconnects with the peer for which it has stored data (to ensure that peer has a chance to update stale data)

The point about "regularly" sending `your_peer_storage` seems to be specified loosely.

  • What's the right frequency to send this to your peer?
  • Does it create a spam vector for your peer?

Collaborator

Agreed. You don't need to specify this, since you already said in the requirements that "MAY send peer_storage whenever necessary (usually, every time their configuration changes)".

And I made a suggestion for spam above...

Author

Yes, the two instances that you mentioned are the only times when a peer needs to send `your_peer_storage`.


@saubyk saubyk left a comment

Ack

@t-bast
Collaborator

t-bast commented Nov 7, 2023

As stated during the spec meeting, I think this should be moved to the bLIP repository: https://github.com/lightning/blips
This feels like an extension/add-on, not something that really needs to be in the BOLTs (especially if we're trying to somewhat reduce complexity here).
I wanted to move my encrypted storage proposal to a bLIP, but didn't because I was waiting for someone else to work on it first, which is happening now.

@adi2011
Author

adi2011 commented Nov 7, 2023

I think in order to make peer storage reliable, it should have maximum coverage across the network. This approach attempts to address the backup issue, which is essential for enhancing the usability of the Lightning Network. Many small node operators may be reluctant to invest in RAID systems to protect their nodes; thus, this could serve as one of the safety nets for the network. That is why we should consider adding it to the main spec imo.

01-messaging.md Outdated
- If it offered `option_provide_storage`:
  - if it has an open channel with the sender:
    - MUST store the message.
  - MAY store anyway (e.g. might get paid for the service).


(e.g. might get paid for the service)

Could provide a bit more context here w.r.t being paid for the purpose of storing blobs

Is this why if it has an open channel with the sender is a requirement?

Author

Sure, this needs more explanation, I'll add to it when I get back to my laptop.

Contributor

Is this why if it has an open channel with the sender is a requirement?

I'd imagine this also helps with spam prevention, since in order to do a drive-by and store data you'd have to commit capital.

Author

Open channel is not a requirement. Some nodes might provide this service in exchange for some sats; that is what "might get paid for the service" means.

The sender of `peer_storage`:
- MUST offer `option_want_storage`.
- MAY send `peer_storage` whenever necessary (usually, every time their configuration changes).
- MUST limit its `blob` to 65531 bytes.
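The 65531-byte cap follows from the framing: assuming the standard 65535-byte Lightning message limit, the 2-byte message type and the 2-byte blob length field leave exactly 65531 bytes for the blob.

```python
# Where 65531 comes from: Lightning messages are capped at 65535 bytes,
# and peer_storage spends 2 bytes on the message type and 2 bytes on the
# blob's length field.
MAX_LIGHTNING_MSG = 65535
TYPE_FIELD_BYTES = 2
LENGTH_FIELD_BYTES = 2

MAX_BLOB = MAX_LIGHTNING_MSG - TYPE_FIELD_BYTES - LENGTH_FIELD_BYTES
assert MAX_BLOB == 65531
```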


Could also consider padding so that the blob is always a fixed 65531 bytes.

Contributor

Why would you want to pad?

Contributor

I think the reasoning would be privacy-related: i.e. so that the receiver can't deduce anything about the peer sending the data based on size changes.

That being said: I think all of that detail doesn't need to be specced out at this layer. How the blob is structured is the responsibility of the layer above.

Especially if the layer above is a "storage for payment" layer where they are charging per byte (or byte bucket), then we really don't want to force them to use the full 65k here.

Contributor

We already handle this with the randomization of ping/pong responses. The actual messages themselves are encrypted as well. Padding the message out to 65531 actually negatively impacts privacy as best I can tell since now we can do partial inference and filter out all non-max-length messages from the anonymity set.

Contributor

Oops, I misread this. Indeed the receiver may come to learn about the contents of the payload through size analysis. My commentary on ping/pong length randomization is about third-party privacy, not two-party privacy.


Yeah, my initial thinking was that certain data sizes can be associated with certain types of backups or data in general.

That being said: I think all of that detail doesn't need to be specced out at this layer. How the blob is structured is the responsibility of the layer above.

Yes, I see what you're saying here. In this case I assume the "layer above" lies within the node implementation itself? It's not like external software would have direct access to this field.


Regardless, is it the BOLT's responsibility to document the need for padding?

Collaborator

I agree, a SHOULD requirement to pad could make sense here.
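The padding being discussed would happen before encryption, so the storing peer only ever sees a constant-size ciphertext. The 2-byte length prefix and random filler below are an illustrative scheme under that assumption, not something the spec mandates.

```python
import os

MAX_BLOB = 65531  # the blob limit from the draft requirements

def pad_for_storage(plaintext: bytes, target_len: int = MAX_BLOB) -> bytes:
    """Pad to a fixed length *before* encrypting, so the storing peer
    cannot infer anything from size changes between updates."""
    if len(plaintext) > target_len - 2:
        raise ValueError("plaintext too large for target size")
    filler = os.urandom(target_len - 2 - len(plaintext))
    return len(plaintext).to_bytes(2, "big") + plaintext + filler

def unpad_from_storage(padded: bytes) -> bytes:
    """Recover the original plaintext after decryption."""
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]
```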

@vincenzopalazzo vincenzopalazzo left a comment

I concur with @t-bast that this feature could be a bLIP.

Since this is not a one-size-fits-all solution, we can create different bLIPs for various backup use cases (if any others are proposed in the future).

If we want to add this to the BOLTs, we should address some concerns, like ElementsProject/lightning#6652.

01-messaging.md Outdated
Requirements:

The sender of `peer_storage`:
- MUST offer `option_want_storage`.
Contributor

Don't think I fully understand why we need the `option_want_storage` feature bit.

Isn't it enough that I'm connecting to a node with `option_provide_storage` and sending `peer_storage` to it? Or is there a case where someone would want to seek out a node advertising `option_want_storage`?

Contributor

I'm inclined to agree. I think in practice it is very likely that people will implement this protocol wholesale or not at all (it's pretty simple). That said, I can imagine scenarios where you may specifically not want to provide this feature but still would like to consume it. At 65k per peer though, I don't know what the big deal would be with providing it anyway.

Author

Big nodes with a large number of peers might not be interested in providing this service to all of their peers.

Contributor

Are you suggesting they will only want to offer it to some of their peers or rather in that situation they wouldn't want to offer it to any of them?

Contributor

Big nodes with a large number of peers might not be interested in providing this service to all of their peers.

I don't think having/not having `option_want_storage` changes this though?

Contributor

The only reason you would advertise `option_want_storage` is so that nodes offering the service could seek you out, but that doesn't really make sense to me. Makes more sense that if you want storage, you would seek out nodes that can offer it to you.


Nodes ask their peers to store data using the `peer_storage` message and expect peers to return the latest data to them using the `your_peer_storage` message:

1. type: 7 (`peer_storage`)
Contributor

Should we not use even types for these messages, since we are only sending these if the peer advertises `option_provide_storage`?

Contributor

Does IOKTBO ("it's OK to be odd") apply to message types? I thought it was just TLVs.

Contributor

I don't think this applies to the low message type numbers though, as far as I can tell. If it did, then things like `accept_channel` would be treated the same, right? I would support using even messages, no issue with that at all, but I think I'm not quite understanding the intricacies of IOKTBO.

Contributor

Yeah, I think for some of the very first messages this was sort of overlooked, but it wasn't a problem since nothing would work if a node didn't understand something fundamental like `accept_channel`.

Collaborator

IOKTBO definitely applies to all messages, it's just that in the very initial protocol there's no distinction because everyone supports everything :). It's definitely nice for these to be odd, because then nodes can send `peer_storage` without bothering to check if their peer supports the feature at all.
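The odd-versus-even behavior being debated can be sketched in a few lines; this is the general BOLT #1 rule for *unknown* message types, with the function name being illustrative:

```python
def action_for_unknown_message(msg_type: int) -> str:
    """BOLT #1's "it's OK to be odd" rule: a node receiving an unknown
    odd-typed message simply ignores it, while an unknown even type means
    the sender requires something we don't understand, so the connection
    must be failed. An odd type for peer_storage lets it be sent blindly."""
    if msg_type % 2 == 1:
        return "ignore"
    return "fail the connection"
```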

01-messaging.md Outdated

- If it does store the message:
- MAY delay storage to ratelimit peer to no more than one update per second.
- MUST respond with `your_peer_storage` containing `blob` after it is stored.
Collaborator

Let's not respond with the full 64KiB blob every time data gets stored. That seems incredibly wasteful. If you want an ACK, let's include an ID.

Contributor

We can also ACK with a sha256 of the data provided.
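A sketch of that suggestion, purely hypothetical (the draft ended up with no ACK at all): the storing node echoes a digest instead of the full 64 KiB blob, and the sender compares it against its latest upload.

```python
import hashlib

def ack_digest(blob: bytes) -> bytes:
    """Hypothetical compact ACK: echo sha256(blob) rather than the blob."""
    return hashlib.sha256(blob).digest()

def ack_matches(sent_blob: bytes, received_ack: bytes) -> bool:
    """Sender-side check that the peer acknowledged the latest blob."""
    return hashlib.sha256(sent_blob).digest() == received_ack
```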

Collaborator

Don't make me hash the data, either. Honestly, I'm not really sure we need an ACK at all... it's useful if you trust the counterparty and want to use it as a robust data store (but even then, I think only LDK has support for fully async persistence?, and I'm not convinced our users would prefer this over a traditional storage layer), but I'm not sure that's worth doing here when the whole design space is around "you may persist, if you want, but peers shouldn't rely on it". Maybe @t-bast has other thoughts?

Collaborator

I agree with @TheBlueMatt here, our proposal for this feature (#881) does not have an ACK because I didn't find that useful.

Author

It serves as assurance to the sender, although it's not foolproof. And it won't be excessively wasteful: updating `peer_storage` won't be a frequent occurrence.

Author

The sole reason for the ACK was to assure the sender; is there any other efficient method to do this?

Collaborator

But does it really assure the sender of anything meaningful? If used to store every channel state change (which is what we've been doing for years), we don't want to pause the channel's operation waiting for an ACK, it adds unnecessary complexity and risk of force-closing. This is mostly being used by nodes who will regularly disconnect (e.g. mobile phones), who have an opportunity of checking the back-up returned on every connection reestablishment, which is IMO enough assurance that the storing node behaves correctly?

Contributor

I'd like to remind us that we didn't need ACKs for every channel update either, and yet we are nudging ourselves towards a simplified update protocol. Granted, the purpose of this is far less consequential than ACK'ing the channel updates themselves, so we can afford to be like "🤷🏻‍♂️ who knows if they got it, we'll just assume so".

My take here is that at minimum it helps with debugging, and the cost of the ACK seems rather trivial so avoiding it seems silly. Yes, it doesn't provide much, we can always send back the packet, or the hash, or some ID, and then just yeet the data into oblivion, but does it really cost us much to just ACK it?

Collaborator

The sole reason for the ACK was to assure the sender; is there any other efficient method to do this?

To echo @t-bast's comment: to assure the sender of what? And what should the sender do with that assurance? If a given message is not actually handled by a peer, or doesn't change their behavior in some meaningful way, we should drop it.

I'd like to remind us that we didn't need ACKs for every channel update either,

Not sure what you're referring to - do you mean we don't need revoke_and_ack? We absolutely do, the protocol doesn't work without it.

My take here is that at minimum it helps with debugging, and the cost of the ACK seems rather trivial so avoiding it seems silly. Yes, it doesn't provide much, we can always send back the packet, or the hash, or some ID, and then just yeet the data into oblivion, but does it really cost us much to just ACK it?

Yes, more protocol. We have a lot of protocol; anywhere we can have just a bit less protocol is great. It's not clear to me why an ACK will help with debugging: a peer can always give us an old blob, whether they ACK'ed it or not, and if you get an old state you need to go figure out whether the state is old and which one it is. You can always check whether there was a ping/pong after the sent storage message and use that as an indication they received it.

Author

Removed the ACK as a requirement. It's still mentioned in the rationale so that implementations can decide for themselves.

@adi2011 force-pushed the peer-storage branch 2 times, most recently from 4fa753f to 3ce41b2 on May 8, 2024

- MAY store anyway.

- If it does store the message:
  - MAY delay storage to rate-limit the peer to no more than one update per second.


This is a lot of data storage. Even with no overhead, 64KiB/second will destroy almost any consumer SSD in a year. ISTM one update per second is much too much if you're not charging for it.

Agree, and I believe directly charging is too involved for a low-level feature like this. We could indirectly relate this rate to a cost, something like rate-limiting to 1 update per HTLC that has been routed over the channel.

The sender of `peer_storage_retrieval`:
- MUST include the last `blob` it stored for that peer.
- when all channels with that peer are closed:
  - SHOULD wait at least 2016 blocks before deleting the `blob`.


what's the thinking here?

Author

To make it a bit more reliable, so that users have a chance to get their peer storage back from recent former peers as well.
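The 2016-block retention window (roughly two weeks at ~10-minute blocks) can be expressed as a simple check; the function is an illustrative sketch of the SHOULD requirement, not spec text:

```python
RETENTION_BLOCKS = 2016  # ~2 weeks of ~10-minute blocks, per the draft

def may_delete_blob(last_close_height: int, current_height: int) -> bool:
    """A storing node SHOULD keep the blob for at least 2016 blocks after
    the last channel with the peer closes, giving a recovering node a
    window to reconnect and fetch its backup from recent former peers."""
    return current_height - last_close_height >= RETENTION_BLOCKS
```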


might provide this service in exchange for some sats so they may store `peer_storage`.

`peer_storage_retrieval` should not be sent after `channel_reestablish` because then the user
wouldn't have an option to recover the node and update its state in case they lost data.


wouldn't have an option to recover the node and update its state in case they lost data.

Can you elaborate a bit more here? I don't really get why sending `peer_storage_retrieval` hurts here.

Author

If data is lost, a node would need to see its storage before it can process the reestablish message.


In that case, we want "peer_storage_retrieval MUST be sent before channel_reestablish", but there is nothing problematic about sending it again after channel_reestablish; it's just unnecessary.

Comment on lines +508 to +509
their data at each reconnection, by comparing the contents of the
retrieved data with the last one they sent. However, nodes should not expect
Collaborator

If nodes can't rely on their peer having the latest data, presumably they shouldn't be checking against the latest :)

Suggested change
their data at each reconnection, by comparing the contents of the
retrieved data with the last one they sent. However, nodes should not expect
their data at each reconnection, by comparing the contents of the
retrieved data with one they sent. However, nodes should not expect


I think you should expect your peer to send you the latest data and check against the latest. However you shouldn't rely on it because your peer may be trying to cheat you. If the data your peer sent you back is not the latest you should mark them as unreliable.
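That policy — expect the latest, but treat a stale blob as a trust signal rather than a protocol failure — can be sketched as follows (the function and its return strings are illustrative):

```python
def classify_retrieved_blob(last_sent: bytes, retrieved: bytes) -> str:
    """On reconnection, compare the blob the peer returns against the
    latest one we sent. A stale blob doesn't prove data loss (the peer may
    be lazy or cheating), so the suggested reaction is to downgrade trust
    in that peer rather than treat the mismatch as fatal."""
    if retrieved == last_sent:
        return "up-to-date"
    return "stale; mark peer as unreliable"
```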



- MAY store anyway.

- If it does store the message:
  - MAY delay storage to rate-limit the peer to no more than one update per second.
Collaborator

Is it the IOPS themselves that will destroy the SSD

Yes, over one year 64KiB/second is 2 TB, which is more than most consumer SSDs' write lifetimes.

rate limiting to 1 update per htlc

that's even worse :)

ISTM a rate-limit of once per minute makes a lot more sense.
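The arithmetic behind the 2 TB figure, and the effect of the suggested once-per-minute limit, can be checked directly (ignoring write amplification and filesystem overhead):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31_536_000

def yearly_writes_tb(update_bytes: int, updates_per_second: float) -> float:
    """Terabytes written per peer per year at a given update size and rate."""
    return update_bytes * updates_per_second * SECONDS_PER_YEAR / 1e12

# One full 64 KiB update per second sustains roughly 2 TB/year per peer;
# throttling to once per minute cuts that to roughly 34 GB/year.
```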

@thomash-acinq thomash-acinq left a comment

There is no word here about what this will be used for. If we intend to use it to store critical data that will allow us to recover from a loss of channel state on our side, then we should make that explicit.
For that use-case, it's important that we get the latest blob and we should ensure that an honest peer always gives us the latest blob.

Comment on lines +512 to +513
Nodes ask their peers to store data using the `peer_storage` message & expect
peers to return the latest data to them using the `peer_storage_retrieval` message:


This seems to imply that `peer_storage_retrieval` is a reply to `peer_storage`, but the requirements below only say that `peer_storage_retrieval` must be sent after a reconnection, not as a reply to a `peer_storage` message.


- MAY store anyway.

- If it does store the message:
  - MAY delay storage to rate-limit the peer to no more than one update per second.


The problem with rate-limiting like this, combined with no ACK, is that you can't guarantee that an honest peer will store the latest data, which I think is a very desirable property.


@Roasbeef
Collaborator

There is no word here about what this will be used for. If we intend to use it to store critical data that will allow us to recover from a loss of channel state on our side, then we should make that explicit.

I think that's by design: at the surface level, it's just a blob that you send to the other party and can maybe retrieve in the future. Any higher layer that uses this feature should make minimal assumptions.

For that use-case, it's important that we get the latest blob and we should ensure that an honest peer always gives us the latest blob.

There's no way to ensure that. All we can rely on is a best-effort attempt by the peer to maybe return the blob, or not.

thomash-acinq added a commit to ACINQ/lightning-kmp that referenced this pull request Oct 23, 2024
Replaces the custom channel data backup with the new peer storage (lightning/bolts#1110)
@@ -46,6 +46,7 @@ The Context column decodes as follows:
| 26/27 | `option_shutdown_anysegwit` | Future segwit versions allowed in `shutdown` | IN | | [BOLT #2][bolt02-shutdown] |
| 28/29 | `option_dual_fund` | Use v2 of channel open, enables dual funding | IN | | [BOLT #2](02-peer-protocol.md) |
| 38/39 | `option_onion_messages` | Can forward onion messages | IN | | [BOLT #7](04-onion-routing.md#onion-messages) |
| 42/43 | `option_provide_storage` | Can store other nodes' encrypted backup data | IN | | [BOLT #1][bolt01-Peer-storage] |
Collaborator

The link to Bolt 1 doesn't work, I think it should rather be (untested):

Suggested change
| 42/43 | `option_provide_storage` | Can store other nodes' encrypted backup data | IN | | [BOLT #1][bolt01-Peer-storage] |
| 42/43 | `option_provide_storage` | Can store other nodes' encrypted backup data | IN | | [BOLT #1](01-messaging.md#peer-storage) |

retrieved data with the last one they sent. However, nodes should not expect
their peers to always have their latest data available.

Nodes ask their peers to store data using the `peer_storage` message & expect
Collaborator

nit:

Suggested change
Nodes ask their peers to store data using the `peer_storage` message & expect
Nodes ask their peers to store data using the `peer_storage` message and expect

