
Another daft idea - musing again - DNS precaching


Weaver:
Say you are a big ISP and you run a DNS relay proxy cache for your customers. Knowing the pattern of requests hitting the cache, you could compile a ‘greatest hits’ list – a top ten or whatever – of the most requested domain names. Using this, you could actively re-query those names from an upstream source, or even from the authoritative servers, just before each cache entry expires. That way, when a user re-queries one of those names she gets an answer instantly, because you have already done the work.

A second option, a further improvement, would be to re-query the name from the authoritative servers at half the time to live, thereby getting an extension on it. That would mean a better, longer-lived result for users who re-query the name later within the original validity period. This would of course be unkind to the authoritative servers by putting unnecessary extra load on them, but since only a handful of large ISP resolvers would be doing this rather than millions of individual users, the caching itself already reduces their load by far more than this small trick adds back.
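
Very roughly, I imagine the refresh loop would look something like this – just a sketch in Python using the dnspython library, with a hand-written ‘greatest hits’ list standing in for the real cache statistics:

Code:
import time

import dns.resolver  # third-party "dnspython" package

# Hypothetical 'greatest hits' list - in reality this would come from the
# cache's own query statistics, not a hand-written list.
TOP_NAMES = ["example.com", "example.net", "example.org"]

resolver = dns.resolver.Resolver()

def refresh(name):
    # Re-query the name and work out when to refresh it next:
    # at half the TTL, so the entry never expires from a user's point of view.
    answer = resolver.resolve(name, "A")
    return time.time() + answer.rrset.ttl / 2

# Naive scheduler: always sleep until the earliest due refresh.
next_due = {name: refresh(name) for name in TOP_NAMES}
while True:
    name, due = min(next_due.items(), key=lambda item: item[1])
    time.sleep(max(0.0, due - time.time()))
    next_due[name] = refresh(name)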

Do you think it would work? Would it be any good?

If someone knew that certain domain names are important, that user could pre-query those names in advance, just to make sure the cache is always populated.

I realise that I don’t know what happens when a domain name is unnecessarily re-queried within the validity period. Does the cache hand out a response that expires at the same end time as the original – that is, with a shortened remaining TTL – because it is not re-querying the authoritative server and there is less and less of the original validity period left? I have absolutely no idea what I am talking about here. It certainly seems like a bad idea for a cache to start giving out shorter and shorter validity times. If instead a server simply ignored the whole thing and gave out certain fixed validity times, it would at least avoid the madness of TTLs eventually shrinking down to nothing, but then the idea of a properly defined validity window with a real start and end point goes out of the window too. Perhaps a vague, approximate notion of validity is very much more than good enough in practice.
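
One way to find out, rather than guessing, would be to ask the same caching resolver twice and compare the TTL it reports each time – if the second answer comes back smaller, the cache is counting down the remaining validity rather than restarting it. Something like this, again Python with the dnspython library and example.com as a stand-in name:

Code:
import time

import dns.resolver  # third-party "dnspython" package

resolver = dns.resolver.Resolver()  # whatever resolver the machine normally uses

first = resolver.resolve("example.com", "A").rrset.ttl
time.sleep(10)
second = resolver.resolve("example.com", "A").rrset.ttl

# A caching resolver normally reports the *remaining* TTL, counting down,
# so the second figure should come back roughly ten seconds smaller.
print("TTL now:", first, "TTL ten seconds later:", second)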

chenks:

--- Quote from: Weaver on October 15, 2018, 01:08:42 AM ---Do you think it would work? Would it be any good?

--- End quote ---

possibly, and no

niemand:
Probably a topic to do some more research on. Musing over things about which, in your own words, you ‘have absolutely no idea at all’ is a fairly rapid route to a head-spinning experience :)

Just as a thought, though, how much of a time saving are you expecting from this? The only time people querying a caching DNS server actually hit the delay you want to avert is the window between TTL expiry and the first query that has to go back to the authoritative, and since you've said you want a 'greatest hits' list, the percentage of requests this would actually benefit is minuscule.
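
To put a hypothetical number on it: a popular name with a 300-second TTL being queried, say, ten times a second sees roughly 3,000 queries per TTL window, and only the single query that lands just after expiry pays the full round trip to the authoritative – about 0.03% of requests.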

Weaver:
That is a very good point about the greatest hits thing. I was just thinking – or rather not thinking properly – about limiting the amount of load so that the thing doesn't go too crazy, but that turns out to be rather a backwards way of approaching it.

Chrysalis:
on my phone so shortened reply

lookup dns prefetch existing tech
also lookup unbound serve-expired < killer feature
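
for reference, the relevant unbound.conf options are roughly these (illustrative values):

Code:
server:
    # re-fetch popular cache entries shortly before they expire
    prefetch: yes
    # answer from the stale entry while the refresh happens in the background
    serve-expired: yes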

also major dns providers like google will have so many hits from clients that major dns names will stay cached a lot
