Kitz ADSL Broadband Information
Author Topic: Slow start caching  (Read 1022 times)

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 8782
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Slow start caching
« on: October 28, 2019, 06:32:07 AM »

(Serious apologies if I have raised this before)

TCP slow start is a necessity, but a horrible performance killer. Let's say you start on a wireless LAN of a few hundred Mbps, or on a Gbps or 10 Gbps LAN. That is no guide at all to what the bottleneck hop, say the next hop or the one after, might be. In the upstream case, if you are a server out on the internet you could be sending to a host that is behind a DSL bottleneck n hops away, where n is unknown, and your own first link gives no information as to what the whole path is like.
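The cost being described can be put in rough numbers. A minimal sketch, ignoring CUBIC's refinements and loss, of how many round trips the exponential-growth phase needs before the sending rate first reaches a given bottleneck (figures like the RFC 6928 initial window of 10 segments are standard; the function name is illustrative):

```python
# Rough model: round trips needed for slow start's cwnd, doubling each
# RTT from an initial window, to first reach a target rate.
MSS = 1460          # bytes per segment (typical Ethernet MSS)
INIT_CWND = 10      # RFC 6928 initial window, in segments

def rtts_to_rate(target_bps, rtt_s):
    """Round trips until cwnd * MSS / RTT first reaches target_bps."""
    cwnd, rtts = INIT_CWND, 0
    while cwnd * MSS * 8 / rtt_s < target_bps:
        cwnd *= 2   # exponential growth phase doubles cwnd each RTT
        rtts += 1
    return rtts

# e.g. filling a 100 Mbit/s path at 30 ms RTT
print(rtts_to_rate(100e6, 0.030))   # prints 5
```

Five round trips at 30 ms is 150 ms of ramp-up before the path is full, paid again on every fresh connection.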

I have been thinking about improvements to slow start that involve caching and other mechanisms. Caching the results of earlier slow starts is only really valuable if internet addresses can be classified into 'zones' according to routes and therefore bottlenecks. Consulting the routing table will help with this: multiple slow-start-derived bottleneck rate cache values can be held per routing table entry, so that when a destination address falls in a zone known to share a route, or a bottleneck, with a previously probed destination, that knowledge can be used to shortcut the slow start procedure safely and begin communication at full but safe speed.

Another idea occurs to me: DHCP or similar mechanisms could be used to propagate this kind of knowledge, i.e. slow-start parameters such as prefixes and bottleneck values. The open question is how to describe bottleneck scenarios: prefixes plus nth-hop bottleneck positions.
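A minimal sketch of the per-prefix cache described above, using longest-prefix match the way a routing table would. The class and method names are illustrative, not an existing API, and real use would need entry expiry and a proper trie rather than a linear scan:

```python
# Sketch of the proposed slow-start cache: bottleneck estimates learned
# from earlier connections are stored against routing prefixes, and a
# new destination matching a cached prefix can skip the probe.
import ipaddress

class SlowStartCache:
    def __init__(self):
        self._entries = {}  # prefix -> cached bottleneck estimate (bps)

    def store(self, prefix, bottleneck_bps):
        self._entries[ipaddress.ip_network(prefix)] = bottleneck_bps

    def lookup(self, addr):
        """Longest-prefix match, as a routing table would do."""
        ip = ipaddress.ip_address(addr)
        best = None
        for net, rate in self._entries.items():
            if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, rate)
        return best[1] if best else None

cache = SlowStartCache()
cache.store("203.0.113.0/24", 2_000_000)    # learned: ~2 Mbit/s DSL zone
cache.store("203.0.0.0/16", 50_000_000)     # wider, faster zone
print(cache.lookup("203.0.113.7"))   # longest match wins: prints 2000000
```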
Logged

Weaver

  • Addicted Kitizen
  • *****
  • Posts: 8782
  • Retd sw dev; A&A; 4 × 7km ADSL2; IPv6; Firebrick
Re: Slow start caching
« Reply #1 on: January 14, 2020, 09:04:19 AM »

Do you think this would work?

Do you think it would be very beneficial?

Consider a stupid web protocol where a TCP connection is repeatedly dropped and reconnected instead of just keeping the connection intact all the time; wasn't this the ridiculous HTTP/1.0? Wouldn't my caching idea be beneficial here, as it would get rid of all the repeated slow starts?
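Back-of-envelope arithmetic for that scenario, with illustrative numbers: reconnecting per request repays the handshake plus the slow-start ramp every time, while a kept-alive connection pays the ramp once.

```python
# Cost of HTTP/1.0-style reconnection vs a persistent connection,
# measured in round trips; figures are illustrative.
def total_rtts(requests, ramp_rtts, per_request_rtts=2):
    """per_request_rtts covers the SYN handshake plus request/response."""
    fresh = requests * (per_request_rtts + ramp_rtts)    # reconnect each time
    keepalive = requests * per_request_rtts + ramp_rtts  # ramp paid once
    return fresh, keepalive

print(total_rtts(20, ramp_rtts=5))  # prints (140, 45)
```

A warm cache would cut the `ramp_rtts` term from the reconnecting case, which is exactly the saving being asked about.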

What about difficulties of designing and implementing the cache-engine core software?
Logged

Alex Atkin UK

  • Kitizen
  • ****
  • Posts: 1293
    • My Broadband History
Re: Slow start caching
« Reply #2 on: January 14, 2020, 12:12:50 PM »

I thought slow start was largely redundant with newer TCP congestion control algorithms (specifically TCP CUBIC, which is standard in Linux and Windows 10), and that ECN (Explicit Congestion Notification) is meant to allow routers to signal where congestion is occurring?

Am I misunderstanding what you mean here?
Logged
Exchange: INTAKE (ECI) ISP/Modems: Zen (Home Hub 5A running OpenWrt) + Plusnet (VMG-3925-B10B) + Three (Huawei B535-232)
Router: pfSense (i5-7200U) WiFi: Ubiquiti nanoHD

CarlT

  • Kitizen
  • ****
  • Posts: 1614
  • Next generation network design and deployment
Re: Slow start caching
« Reply #3 on: January 14, 2020, 03:38:28 PM »

Wouldn't work, couldn't work. TCP is between endpoints; routing is on a per-hop basis (and by that I don't mean per-device, I mean per logical interface) and can change per packet even if source and destination are the same.

You would need to hold a table of data many orders of magnitude larger than the ones routers with a full IPv4 routing table hold and constantly update it. Just keeping that information up to date would overwhelm your broadband connection and would be a massive waste of bandwidth all around, networks being choked passing telemetry to millions of end users.
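The scale point can be roughed out in numbers (illustrative figures; a full public IPv4 BGP table held around 800,000 prefixes circa 2020):

```python
# Rough scale of the proposed cache versus a full IPv4 routing table.
BGP_PREFIXES_2020 = 800_000    # approximate full-table size, circa 2020
per_slash24 = 2 ** 24          # one entry for every possible /24
per_host = 2 ** 32             # one entry for every IPv4 address

print(per_slash24 // BGP_PREFIXES_2020)  # ~20x a full BGP table
print(per_host // BGP_PREFIXES_2020)     # ~5000x a full BGP table
```

And unlike routes, which change relatively slowly, bottleneck rates would need constant refresh, which is the telemetry cost described above.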

On the other matter, dropping TCP connections isn't stupid. Each connection uses RAM on the server; each server has a limited amount and can handle a limited number of connections. When a transfer is completed the connection can and should be promptly closed, not left open on the off-chance another request is incoming. Better statistical contention and use of resources that way.
Logged
WiFi: Nighthawk® AX12 RAX120 - 5Gb uplink
Routing: Ubiquiti UDM Pro - 10Gb uplink
Switching: 2 * Mikrotik CRS305-1G-4S-IN, 10Gb uplinks, various cheap and cheerful
Exchange: Wakefield
ISP: BT Full Fibre 900. Yes, BT, I know.

Alex Atkin UK

  • Kitizen
  • ****
  • Posts: 1293
    • My Broadband History
Re: Slow start caching
« Reply #4 on: January 15, 2020, 08:58:34 AM »

Not to mention that pipelining already exists to avoid dropping the connection if it's known that several pieces of data are required.
Logged
Exchange: INTAKE (ECI) ISP/Modems: Zen (Home Hub 5A running OpenWrt) + Plusnet (VMG-3925-B10B) + Three (Huawei B535-232)
Router: pfSense (i5-7200U) WiFi: Ubiquiti nanoHD

CarlT

  • Kitizen
  • ****
  • Posts: 1614
  • Next generation network design and deployment
Re: Slow start caching
« Reply #5 on: January 15, 2020, 10:06:24 AM »

Not to mention that pipelining already exists to avoid dropping the connection if it's known that several pieces of data are required.

From the client side: a single initial connection pulls the base code, then the browser makes multiple simultaneous requests for the resources required to render the page, triggering creation of a bunch of TCP flows. A nicely optimised page lets the initial request trigger parallel requests for everything else, with no dependencies on any of the secondary resources to load others. Zoom zoom.

From the server side, it sends the FIN when it knows its work is done, and it knows its work is done when its HTTP daemon tells it so. If the browser is parsing what it receives in a timely fashion and, as you said, pipelining a series of requests over that TCP connection, the server's HTTP daemon will service them.

A well-programmed browser dedicated to being as fast as possible does a combination of both: it has a maximum number of simultaneous connections it's permitted to open to a server, and it pipelines requests along all of them.

Produces efficiency gains all around - it's fine for the TCP connections to be open as long as they are doing something. Servers waiting for timers to run down aren't spending their time wisely.

That is something I have a lot of experience with professionally but is potentially outside of this thread.
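A toy model of the behaviour described in the posts above: requests are spread across a capped number of connections, and each connection's queue is a pipeline. The scheduler and the per-host limit of 6 are illustrative (6 is the limit most browsers historically used for HTTP/1.1), not any browser's actual code:

```python
# Round-robin a page's resources onto at most max_conns connections;
# the requests queued on one connection form its pipeline.
from itertools import cycle

MAX_CONNS_PER_HOST = 6

def schedule(resources, max_conns=MAX_CONNS_PER_HOST):
    """Spread resources across at most max_conns pipelines, round-robin."""
    conns = [[] for _ in range(min(max_conns, len(resources)))]
    for conn, res in zip(cycle(conns), resources):
        conn.append(res)   # queued requests on one connection = a pipeline
    return conns

assets = [f"asset{i}.js" for i in range(8)]
for i, pipeline in enumerate(schedule(assets)):
    print(i, pipeline)
```

With 8 resources and a limit of 6, two connections carry two pipelined requests each and the rest carry one, which is the "combination of both" described above.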
Logged
WiFi: Nighthawk® AX12 RAX120 - 5Gb uplink
Routing: Ubiquiti UDM Pro - 10Gb uplink
Switching: 2 * Mikrotik CRS305-1G-4S-IN, 10Gb uplinks, various cheap and cheerful
Exchange: Wakefield
ISP: BT Full Fibre 900. Yes, BT, I know.

Alex Atkin UK

  • Kitizen
  • ****
  • Posts: 1293
    • My Broadband History
Re: Slow start caching
« Reply #6 on: January 15, 2020, 10:17:08 AM »

Honestly, the biggest problem with websites today is that most of their content is out of their control. They just slap advertiser code all over the page and it inevitably ends up with the browser waiting for those to finish before rendering the final page.

And don't even get me started on how many websites have adverts that reload and reformat the whole page as they do so, making the content shift while you're trying to read it. Particularly popular with news sites that like to embed several adverts in the page content.
Logged
Exchange: INTAKE (ECI) ISP/Modems: Zen (Home Hub 5A running OpenWrt) + Plusnet (VMG-3925-B10B) + Three (Huawei B535-232)
Router: pfSense (i5-7200U) WiFi: Ubiquiti nanoHD
 
