ipns: only use cache for prefetching then do confirmation with actual dht #545
For people interested in fixing it, the entry point is at Line 34 in 76d9292.
Through interfaces it then calls back into Line 156 in 76d9292.
Here is where the cache is checked: Lines 175 to 181 in 76d9292.
When one publishes IPNS record with

What was the TTL of the record you used in your test? My mental model is that IPNS caching based on TTL is a feature similar to TTL in DNS records.
Are you suggesting we should always do a lookup for updated records, even when we have a valid record cached with the TTL still in effect? Or did I miss some nuance around rollback? cc @hacdias, since we made cache changes related to TTL around Kubo 0.24; I believe our intention was to avoid doing a lookup for updates if we have had a valid result in cache for less than the TTL window.
No. However, this is not controlled by the user in question: they are running a gateway, and their users are complaining that IPNS records don't update as fast as on ipfs.io. My guess is that their users are being load-balanced across different ipfs.io instances and thus not experiencing caching.
Yes, maybe a 10s–1m hard cache could be useful, but yes.
Yes, that was the goal.
This has already been discussed: ipfs/specs#371. The idea was to make it similar to what is already expected from DNS, for example. I don't think the default is bad. The publisher of the website should be responsible for setting the best TTL for their use case. If they are changing the website all the time, they should also create records with a very short TTL.
Triage notes (updated by @lidel):
I had a report from a user complaining about stale IPNS records.
After trying a small repro on my machine, it is fairly easy to get 1h+ sync times, because we cache IPNS records for 1h without ever challenging them.
The IPNS code is surprising: it implements rollback-based resolution, so it can accept bad candidates early and start the next recursive resolution; then, when it gets better candidates, it can roll back and reconcile the previous name resolution with the newer, better results (or not, if the better candidates line up with the existing timeline).
This allows it to optimistically start with bad candidates and later validate them, or cancel and restart with better ones.
I think the cache should be used as a first untrusted candidate: when a cache entry is found, we add it as the first candidate in the rollback resolution process, but then proceed with the rollback resolution as usual.
This would also combine nicely with #397 (although either of the two on its own is still a good improvement).
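The proposed flow can be sketched like this. It is a simplified illustration of the idea, not the actual boxo resolver: `candidate` models only a value and a sequence number (real IPNS records also carry validity windows and signatures, and real resolution is asynchronous), and `resolve` is a hypothetical name:

```go
package main

import "fmt"

// candidate is a stripped-down stand-in for an IPNS record:
// just a resolved value and the record's sequence number.
type candidate struct {
	value string
	seq   uint64
}

// resolve seeds the candidate list with the cached entry (if any) so
// resolution can start optimistically, but still runs the full lookup:
// a network record with a higher sequence number "rolls back" the
// cached answer; otherwise the cached answer is confirmed.
func resolve(cached *candidate, network []candidate) candidate {
	var best candidate
	ok := false
	if cached != nil {
		best, ok = *cached, true // untrusted first candidate from cache
	}
	for _, c := range network {
		if !ok || c.seq > best.seq {
			best, ok = c, true // better record found: roll back to it
		}
	}
	return best
}

func main() {
	cached := &candidate{value: "/ipfs/old", seq: 3}
	network := []candidate{{value: "/ipfs/new", seq: 4}}
	fmt.Println(resolve(cached, network).value)
}
```

The cached entry still gives a fast first answer, but it is always confirmed (or superseded) by the actual lookup instead of short-circuiting it.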