
Keep others’ IPNS records alive #1958

Open
ion1 opened this issue Nov 11, 2015 · 16 comments
Labels: kind/enhancement, topic/ipns

Comments


ion1 commented Nov 11, 2015

ipfs name keep-alive add <friend’s node id>

Periodically get and store the IPNS record and keep serving the latest seen version to the network until the record’s EOL.
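
A rough sketch of what such a keep-alive loop could look like with today's generic Kubo routing commands, standing in for the hypothetical keep-alive subcommand (the name and the 12-hour interval are placeholder choices):

```sh
#!/bin/sh
# Hypothetical keep-alive loop: periodically fetch a friend's IPNS
# record and re-publish the latest seen version until its EOL.
FRIEND="k51qzi5uqu5d..."   # placeholder: friend's IPNS name

while true; do
  # Fetch the current signed record (raw bytes) from the routing system.
  if ipfs routing get "/ipns/$FRIEND" > record.bin 2>/dev/null; then
    # Re-publish it as-is; the embedded signature stays valid until the
    # record's EOL, so no private key is needed.
    ipfs routing put "/ipns/$FRIEND" record.bin
  fi
  sleep 43200   # every 12 hours
done
```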


ghost commented Nov 11, 2015

You'll be able to pin IPNS records like anything else once we have IPRS

ion1 (Author) commented Nov 11, 2015

Awesome

daviddias added the topic/ipns label on Jan 2, 2016
koalalorenzo (Member) commented

Waiting for this feature 👍


Falsen commented Aug 4, 2018

But doesn't it make more sense if they are automatically pinned by nodes? Or would that be too resource-heavy?

koalalorenzo (Member) commented

Consider that, if pinned, those records would have to be updated constantly via signatures, etc...

Stebalien (Member) commented Aug 6, 2018

The issue here is that the signature on IPNS records currently expires and random nodes won't be able to re-sign them as they'd need the associated private key. We expire them because the DHT isn't persistent and will eventually forget these records anyways. When it does, an attacker would be able to replay an old IPNS record from any point in time.


lockedshadow commented Dec 4, 2018

> When it does, an attacker would be able to replay an old IPNS record from any point in time.

Is that really considered more dangerous than the possibility of all material published under a certain IPNS key effectively disappearing if one (just one!) publisher node holding the private key disappears too? Doesn't that publisher node look like a central point of failure? Are outdated but valid records really worse than no records at all?

I think the ability to replay is not a critical security issue, at least on the condition that the user is explicitly notified that the obtained result could be outdated. After all, «it will always return valid records (even if a bit stale)», as mentioned in the 0.4.18 changelog.

So what do you think about a --show-publish-time flag on the ipfs name resolve command? Do the IPNS records themselves contain this data?
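
A note for later readers: recent Kubo versions can dump a record's fields directly. The record carries a Validity (EOL) timestamp chosen by the signer rather than a publish time, so a flag like the proposed --show-publish-time would have to build on that. A minimal sketch, assuming a build that ships ipfs name inspect:

```sh
# Fetch the raw IPNS record from the DHT and print its fields,
# including Validity (the EOL the publisher signed into it).
# $IPNS_NAME is a placeholder for the name being looked up.
ipfs routing get "/ipns/$IPNS_NAME" | ipfs name inspect
```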

Stebalien added the kind/enhancement label on Mar 22, 2019
Stebalien (Member) commented

@lockedshadow I've been thinking about (and discussing) this and, well, you're right. Record authors should be able to specify a timeout, but there's no reason to remove expired records from the network. Whether or not to accept an expired record would be up to the client.

T0admomo commented

@Stebalien What is the best way to go about introducing this change to the protocol?

aschmahmann (Contributor) commented Jan 31, 2022

@T0admomo since this is mostly a client and UX change rather than a spec one, I would propose what the UX should be, along with the various changes that would need to happen in order to enable it.

Some of the work here is in ironing out the UX, and some is in implementation. Discussing your proposed plan in advance makes it easier to ensure that your work is likely to be reviewed and accepted.

Some related issues: #7572 #4435 #3117

2color (Member) commented Aug 4, 2022

> The issue here is that the signature on IPNS records currently expires and random nodes won't be able to re-sign them as they'd need the associated private key.

According to the IPNS spec, the signature is computed over the concatenated value, validity, and validityType fields.

That means that as long as validity is in the future, there's no reason why nodes wouldn't republish the IPNS record.

Moreover, since validity is controlled by the key holder when they sign the record, they have the flexibility to pick any validity, at the potential cost of users getting an expired/stale record (in the case of a new record published within the validity period that hasn't propagated to all nodes holding the previous one). This is arguably better than getting no resolution, as pointed out by @lockedshadow.

Am I understanding this correctly?
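
For what it's worth, a quick way to poke at this on a recent Kubo node: any node can fetch a record and check its signature without the private key. A minimal sketch, assuming ipfs name inspect --verify accepts the IPNS name to validate against:

```sh
# Fetch someone else's IPNS record and verify its signature locally;
# $IPNS_NAME is a placeholder for the k51... name being checked.
ipfs routing get "/ipns/$IPNS_NAME" > record.bin
ipfs name inspect --verify "$IPNS_NAME" < record.bin
```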


bertrandfalguiere commented Aug 4, 2022

> That means that as long as validity is in the future, there's no reason why nodes wouldn't republish the IPNS record.

I think this could be an attack vector: a malicious node could publish a lot of signed records with near-infinite validity. They would accumulate on the DHT and clog it sooner or later, and never be flushed out.

So other clients need to reject very old records, even if the original publisher wanted them to have a very long validity.

(An attacker could also spawn many nodes and publish records from them, with the same effect.)

2color (Member) commented Aug 4, 2022

> I think this could be an attack vector as a malicious node could publish a lot of signed records with infinite validity. They will accumulate on the DHT and clog it sooner or later, and never be flushed out.

I recently read that DHT nodes will drop stored values after ~24 hours, no matter what Lifetime and TTL you set. So it's not really possible to clog the DHT or use this as an attack vector.

As far as I understand, clients don't reject old records as they have no way of knowing a record's age; they just drop them after 24 hours, when a newer sequence number comes in, or once they expire (whichever of the three comes first).

> (An attacker could also spawn many nodes and publish records from them, with the same effect)

I believe that this is what Fierro allows you to do, though without any malicious intent.


bertrandfalguiere commented Aug 4, 2022

> As far as I understand, clients don't reject old records as they have no way of knowing a record's age; they just drop them after 24 hours, when a newer sequence number comes in, or once they expire (whichever of the three comes first).

Yes, you're right. Dropping records is not based on age; I oversimplified. The point is that they are not in the DHT after some time if they are not republished, so they can't accumulate.

> I believe that this is what Fierro allows you to do, though without any malicious intent.

Yes, but since records are dropped by clients after about 24 hours, they still can't accumulate.

cornwarecjp commented

When keeping someone else's IPNS record alive, what do you do when you learn about a new record for the same name? I see these possibilities:

  • Replace the old record with the new one, and keep the new one alive. On the surface, this seems the most reasonable behavior, since normally in IPNS, you want people to learn (only) the new record. I'd call this "pinning the name".
  • Keep the old record and ignore the new one. I'd call this "pinning the record". Sounds insane, and I'm not sure whether it's a protocol violation, but I'll give a use case below.
  • Keep the old and the new, combining the best of both worlds. Disadvantage: anyone requesting an IPNS lookup needs to specify which record they want.

An IPNS record is typically of little use without the data to which it points. I guess, in many applications, someone keeping the IPNS name alive might also want to (recursively) keep the pointed-to data alive ("recursive pinning"). If you've recursively pinned a name, and you receive an update for that name, that'd make you unpin the old pointed-to data, and pin the new pointed-to data. One potential issue with this is that the new data might be arbitrarily large, and therefore much larger than the storage space you'd be willing to spend on it. "Pinning the record" does not have this issue.

There are applications where receiving old data isn't harmful, and where receiving old data is always better than receiving no data. For such applications, "pinning the record" might be the preferred choice, in combination with an application process that gets to decide what to do with a record update. It might, for instance, make an application-level choice to pin only certain parts of the pointed-to DAG, to stay below a storage quota. And only once the pointed-to data is (partially) downloaded and pinned will the application replace the old pinned record with the new one.
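
To make the first option concrete, here is a minimal sketch of recursively "pinning the name" with existing Kubo commands (the name and interval are placeholders; note it deliberately exhibits the unbounded-storage problem described above, pinning whatever the new record points to, however large):

```sh
#!/bin/sh
# "Pinning the name": follow updates, pin the new target, unpin the old.
IPNS_NAME="k51qzi5uqu5d..."   # placeholder
OLD=""

while true; do
  if NEW=$(ipfs name resolve "/ipns/$IPNS_NAME"); then
    if [ "$NEW" != "$OLD" ]; then
      ipfs pin add --recursive "$NEW"       # pin the new pointed-to data
      [ -n "$OLD" ] && ipfs pin rm "$OLD"   # then drop the old data
      OLD="$NEW"
    fi
  fi
  sleep 3600
done
```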

cornwarecjp commented

As a poor man's solution, wouldn't it be possible to have an application run alongside Kubo which periodically polls Kubo for the name? If I understand correctly, Kubo caches objects for 24 hours after the last time they were touched, so if the application asks for the name every 12 hours, say, it'll always stay in Kubo's cache.

As a bonus, the application could store a copy of the latest record received for the name. If Kubo somehow still loses the name, the application can re-upload the last-known record to Kubo[*]. This would double the storage requirement for names, but name records shouldn't be that big.

[*] apparently /api/v0/routing/put or ipfs routing put can do that, see #10484
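
A minimal sketch of that polling approach (the name is a placeholder, and the routing put fallback assumes the behavior discussed in #10484):

```sh
#!/bin/sh
# Poll Kubo every 12 hours so the record stays warm in its ~24h cache,
# and keep a local copy of the latest record as a fallback.
IPNS_NAME="k51qzi5uqu5d..."   # placeholder

while true; do
  if ipfs routing get "/ipns/$IPNS_NAME" > record.new 2>/dev/null; then
    mv record.new record.last               # remember the latest record
  elif [ -f record.last ]; then
    # Kubo (and the DHT) lost the record: re-upload our last-known copy.
    ipfs routing put "/ipns/$IPNS_NAME" record.last
  fi
  sleep 43200
done
```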
