Kitz Forum

Internet => General Internet => Topic started by: Weaver on September 15, 2018, 09:25:19 PM

Title: SCTP unfriendliness
Post by: Weaver on September 15, 2018, 09:25:19 PM
I am wondering whether or not some domestic CPE or firewalls, or some corporate firewalling policies might hate SCTP traffic and frustrate anyone trying to deploy apps that speak SCTP over the internet?

If desperate, an application developer could always as a last resort simply add a functionally useless UDP header to the SCTP packets and disguise them that way, so that the kit in question won’t have a chance to object. That doesn’t help with crazy corporate or institutional networks where no internet access is allowed, or where maybe only HTTP over TCP is permitted through some sort of proxy / gateway or proxy cache.
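
To make the disguise idea concrete: this already exists in standardised form as UDP encapsulation of SCTP (RFC 6951), which just carries the whole SCTP packet as the payload of an ordinary UDP datagram on port 9899, so middleboxes only ever see UDP. A rough, untested sketch of the framing in Python (the verification tag and checksum are placeholders; a real stack negotiates the tag and computes a CRC32c, and 192.0.2.1 is just a documentation address):

Code:
import socket
import struct

# RFC 6951-style encapsulation: the whole SCTP packet simply becomes the
# payload of a normal UDP datagram, so NAT boxes and firewalls only see UDP.
SCTP_OVER_UDP_PORT = 9899          # registered encapsulation port

def sketch_sctp_common_header(src_port, dst_port):
    # SCTP common header: source port, destination port, verification tag,
    # CRC32c checksum.  Placeholder tag/checksum only -- a real stack
    # negotiates the tag during INIT and checksums the whole packet.
    return struct.pack("!HHII", src_port, dst_port, 0, 0)

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# 192.0.2.1 is a documentation address; substitute a real peer running an
# SCTP stack that listens for UDP-encapsulated packets.
udp.sendto(sketch_sctp_common_header(5000, 5000), ("192.0.2.1", SCTP_OVER_UDP_PORT))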
Title: Re: SCTP unfriendliness
Post by: sevenlayermuddle on September 15, 2018, 11:37:25 PM
@Weaver,

You’ve mentioned a few times you are a fan of SCTP.

I was distantly involved in SCTP from its beginning in the SIGTRAN working groups. So far as I recall, SCTP was designed to address specific shortfalls of TCP, such that it could provide a robust IP-based framework for carrying telecomms traffic over IP, providing similar reliability to the traditional dedicated TDM signalling links.

I don’t think it was ever intended to be “better than” TCP, rather just an alternative that was better suited to telecomms applications. The main differences, as far as I recall, were resilience in terms of multihoming, and failure detection by virtue of ‘heartbeat’ traffic. It also claimed to address head-of-line blocking, though I’m not sure that lived up to the promise. It certainly did facilitate migration of SS7 telecoms to IP protocols, but I don’t really see how any of that would be relevant to a typical domestic (or even business) DSL connection to the public internet?

I have of course forgotten most of what I once knew, and what little I remember is far out of date.  So I’d be interested in understanding your enthusiasm for the protocol? :)
Title: Re: SCTP unfriendliness
Post by: Weaver on September 16, 2018, 01:32:49 AM
In my view everything about it is better: the fix for the head-of-line-blocking problem, delivery of messages rather than bytes (hallelujah), multiple streams, a much better CRC, and, the biggest deal, support for multihomed/roaming machines designed in properly. That matters particularly in today’s mobile world, where so very many machines are multihomed because they have 4G and LAN interfaces up simultaneously, even if they never move to a completely different LAN. The feature list is long, which is not surprising given the obvious goal of being better than TCP in every respect.
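
For anyone who fancies playing with it, the message-oriented behaviour is visible from just a few lines. A rough sketch, assuming a Linux box with the kernel SCTP module loaded and some placeholder server to talk to:

Code:
import socket

# One-to-one (TCP-style) SCTP socket on Linux.  Unlike TCP, each send() is
# delivered to the peer as a discrete message with its boundaries preserved,
# so no framing layer is needed on top.  Requires kernel SCTP support.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(("192.0.2.1", 5000))    # placeholder address and port

sock.send(b"first message")          # arrives as one message
sock.send(b"second message")         # arrives as a separate message
sock.close()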

One other not-so-obvious thing: because SCTP has taken up all the best recent advances in TCP that apply to transports in general, all of those features are guaranteed to be present in SCTP, whereas with TCP you could easily encounter rubbish old servers that lack certain important things. This minimum-feature guarantee is powerful and has been seen in a number of v2.0 technologies: IPv6 and AMD64 being examples that come to mind immediately.

I am concerned about whether crappy firewall setups, old evil networking kit or evil ISPs could cause a 1% or 0.1%-case nightmare for an app developer. Because of this I also wonder whether it might be an idea for such a developer to use SCTP only over IPv6, if that makes sense in the particular scenario. If you have to test whether SCTP works, then, as with testing NAT or firewall-busting or analysing firewalls, the cost starts to get so high that you wonder why on earth you are still thinking about SCTP at all. It might be that if you only use IPv6 you can be more confident of not encountering any evil, on the grounds that all the kit will be more modern, but even that does not mean you cannot possibly have trouble, because there still remains the case of evil firewall policy. And if you have to use IPv4 too, which is very likely for anything other than specialist limited-scenario applications, then what do you do? Fall back to TCP anyway, or do (one-off) path testing and then selectively fall back, or what? More code, more cost, more testing; and if you have to fall back to TCP, what does that say about your need for SCTP in the first place? Switching to SCTP-over-UDP would be a great option in my view, and then possibly you do not even bother with path testing when using IPv4.
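
The selective fallback need not be much code, to be fair. Something along these lines is what I have in mind, as an untested sketch (placeholder address; the result of the path test could be cached per destination so the cost is only paid once):

Code:
import socket

def connect_with_fallback(host, port, timeout=3.0):
    # Try SCTP first; if the protocol is unavailable locally or the path
    # eats it, quietly fall back to TCP.
    for proto in (socket.IPPROTO_SCTP, socket.IPPROTO_TCP):
        try:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, proto)
        except OSError:
            continue                      # no local support for this protocol
        sock.settimeout(timeout)
        try:
            sock.connect((host, port))
            return sock, proto
        except OSError:
            sock.close()
    raise OSError("neither SCTP nor TCP could reach %s:%d" % (host, port))

# conn, proto = connect_with_fallback("192.0.2.1", 5000)   # placeholder peer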
Title: Re: SCTP unfriendliness
Post by: sevenlayermuddle on September 16, 2018, 09:03:10 AM
Thank you for that explanation of your enthusiasm.

My view is that SCTP was designed to solve specific problems, providing a means of carrying SS7 traffic over IP and thus allowing SS7 to survive into the new world. But I tend to think the problems it solved are not problems that affect home networks, or even most of the applications now using TCP/UDP.

For that reason, I personally can’t see it replacing TCP/UDP; there’s not sufficient upside to justify it. But time will tell. :)
Title: Re: SCTP unfriendliness
Post by: Weaver on September 16, 2018, 10:12:57 AM
You are right. There are a number of factors at work here aside from the superiority of SCTP, which probably doesn’t matter since for most people TCP is considered good enough: availability on Windows and Apple platforms; the importance (or not) of roaming/multihomed-friendliness; the absence of blocking factors; and single-platform, controlled scenarios where the app developer knows what kit they are talking to at the other end. You could also add spreading the word/awareness.

All the bad things about TCP which made it useless for SS7 never went away. Those critics of TCP who felt sufficiently strongly that they went off and designed something else were right then and they are still right now. But most of the time it either doesn’t matter or there are workarounds.


However, to return to the topic:

I would be interested to hear if anyone knows how many systems hate protocols with less common IP protocol numbers (anything other than TCP and UDP) or uncommon ports, or even systems that only allow HTTP and nothing else.
Title: Re: SCTP unfriendliness
Post by: sevenlayermuddle on September 16, 2018, 05:34:44 PM
This thread at least inspired me to head for my bookshelf.   There it was, “Stream Control Transmission Protocol (SCTP) A Reference Guide”.   A 351 page hardback, written by core members of the SIGTRAN working group that created SCTP.   It describes not just how the protocol works, but goes to some length to explain in relatively informal discussion terms why the protocol was needed.

What struck me was the beautiful, pristine, condition of my book.   Every page is crisp and white.  Thing is, its purchase not long predated cessation of my paid career, and transition to pursuit of things varying from App development to walking, gardening, renovating old things,  to beer festivals etc.   How times change.   If the SCTP book had covered “how to rebuild a Briggs&Stratton mower engine”, it’d be covered in coffee rings and oily fingerprints. :D
Title: Re: SCTP unfriendliness
Post by: kitz on September 19, 2018, 05:30:28 PM
Quote
the superiority of SCTP, which probably doesn’t matter since for most people TCP is considered good enough:

Much of this is over my head, and I should imagine over most others' here... but AIUI, and as 7LM has already mentioned, it was built to carry PSTN signalling over IP networks.
TCP works perfectly well in most environments, therefore domestic routers have no need to support SCTP, especially when so many apps and operating systems don't either.  SCTP's biggest drawback is that it doesn't play nicely with NAT - and even if it's made to work, it causes performance overheads.

So whilst it may have a place in specialist networks, the fact remains that probably 99% of residential and a large chunk of SME networks use NAT, so it's not something the average home network is going to be using.

Quote
if anyone knows how many systems hate protocols with less common IP protocol numbers (anything other than TCP and UDP) or uncommon ports,

If there were a common need then I would imagine they would be added.  If you have a need for the more obscure or specialist types of protocol then you pays your money for higher-end kit. :/


Title: Re: SCTP unfriendliness
Post by: Weaver on September 19, 2018, 11:55:54 PM
I think Kitz has answered my question - I forgot about plain NAT. (Having been NAT-free now for ten years, I keep forgetting they exist.) That is an important point: NAT translators generally won’t recognise SCTP, because they cannot know whether or not it has ‘ports’. Therefore I suspect that SCTP can never get going in that scenario until IPv6 starts to truly push IPv4 out. If I were intending to use SCTP over the public internet, in a situation where I do not know who the end users are, then sticking to IPv6-only might be a good option. The other easy case for using SCTP is across a LAN, because then you do not have to worry about routers and firewalls.

However, I keep forgetting: you can use SCTP through NAT, but you would need to do SCTP over UDP just to give the routers/NAT translators something they can recognise.

The one-sentence summary of SCTP is that it sends messages reliably and in order, whereas TCP sends a single long stream of bytes, which may or may not be what you want; it can also cope with devices moving to a different IP address, among various other advantages.

I think I remember seeing something about sorting out how to get the web to work over SCTP instead of TCP.

I think it is a long-term thing: hopefully more developers will become aware of the possibilities and more barriers to adoption will fall; the end of IPv4 and NAT will be part of it, as will inclusion in the remaining major operating systems.

In case anyone thinks it cannot ever happen, consider HTTP/2, which was not absolutely necessary, but people just got fed up with the vile performance of HTTP/1.x. It could be that one vast company with huge clout simply decides that it is time to start using something better than TCP in certain use cases, and then it all spreads from there. The HTTP/2 thing came almost without warning: SPDY and other proposals had been talked about for a couple of years, but we had to put up with the dreadful web for well over a decade longer than we should have had to. As far as I am aware, HTTP/2 does not do anything that you cannot do with HTTP/1.x, is that true? But its performance is good, and that is all it took to make it happen.
Title: Re: SCTP unfriendliness
Post by: Weaver on September 20, 2018, 12:07:15 AM
I recently saw an article about a proposed upgrade to SCTP that supports link aggregation.

There is already an excellent thing that is a layer over TCP: Multipath TCP (MPTCP), where you can use two different links at once, and Apple uses it for Siri. Siri simultaneously exploits both 3G/4G and any current WLAN in order to get maximum reliability and performance, because they are desperate to get Siri response time down as low as possible; also, DSL has rubbish upstream, so that is another argument for using 3G/4G.

That is a perfect example of the kind of situation where something like SCTP could come in first: one where a single company is in control of both end-points.
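
For what it is worth, the Linux way of letting an application ask for multipath TCP ends up looking roughly like the sketch below, assuming a kernel built with MPTCP support (the protocol number is 262; older Python versions do not define the constant):

Code:
import socket

# Multipath TCP on Linux is requested per socket by asking for
# IPPROTO_MPTCP instead of IPPROTO_TCP; the kernel then manages subflows
# over the available interfaces (e.g. WLAN plus 4G) by itself.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)   # 262 on Linux

try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
except OSError:
    # No MPTCP support in this kernel: fall back to ordinary TCP.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

sock.settimeout(3.0)
# sock.connect(("192.0.2.1", 443))   # placeholder peer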
Title: Re: SCTP unfriendliness
Post by: sevenlayermuddle on September 20, 2018, 12:31:18 AM
Scraping at memories here, but another interesting aspect of SCTP: it actually fell far short, in terms of reliability and resilience, of the traditional TDM (copper) network it was emulating.

Twenty years ago, when SCTP was designed, that was accepted as a necessary evil. Copper was indisputably optimal for performance, nobody denied that, but an IP network was clearly the way forwards, even if technically inferior. Nowadays, I suspect, anybody under the age of 40-something would struggle to comprehend the merits of a dedicated copper network vs shared IP. :)
Title: Re: SCTP unfriendliness
Post by: kitz on September 20, 2018, 03:33:30 PM
I think things move on.  In the late 90s ATM was considered more efficient than Ethernet due to its small, fixed cell size.  ATM works better on ADSL(1) networks, where the bottleneck is the link between the modem and the DSLAM.  Somewhere around 12Mbps is the point at which ATM ceases to be the best solution, so as speeds increased (for both Ethernet and DSL) network solutions changed back to using Ethernet.

The days of ATM networks are now considered to be numbered - even the prestigious ATM Forum acknowledged as much, its work eventually being absorbed into the Broadband Forum.  It's rather ironic when at one time it was heralded that ATM was superior and would be the death of Ethernet.

There will always be specialist protocols designed to work with various types of networks/link layers depending upon the medium. Some will come, some will go and nothing stays the same forever. 

Quote
I forgot about plain NAT. (Having been NAT-free now for ten years, I keep forgetting they exist.)

I quite like NAT - it just works for the vast majority of people.
It's not just because of the lack of IP(v4) addresses: it simplifies networks and adds another layer of security.  Who wants to mess about assigning and keeping track of IP addresses, especially these days when we have so many devices connecting to our LAN?  NAT & DHCP make it so easy.  OK, I may have to port forward if I'm running server-type software, but in all honesty how many people do that?  Most residential users quite happily run home networks using NAT.  I'm not ashamed to admit I use it.  I could've easily got an IP block years ago when PN dished them out for free, by saying I ran a dedicated FTP and HTTP server (which I did back then), but I never felt the need.
 
Things may change when  IPv6 is fully implemented everywhere...  but as I said, technology moves on. :)
Title: Re: SCTP unfriendliness
Post by: Weaver on September 20, 2018, 06:31:12 PM
The reason why I have this irrational dislike of NAT is because I am thinking like a protocol designer. I think it may well have had an insidious, stifling effect on the growth of more and more kinds of internet usage and on the development of new protocols, because it can make peer-to-peer networking (that is, networking without centralised servers) such a pain. It was and is good enough for most people because all they ever do is web and email. But without things in the way we might have had additional opportunities with novel protocols and more efficient direct communication.

There, I have now confessed. Largely irrational, as I said, but when thinking up ideas as a designer I then have to keep saying to myself, "oh, but then there is probably NAT". Mind you, exactly the same could be said about firewalls. It is a shame we all have them, or rather have to have them. Think how much easier things would be for direct communication if we did not have to worry about firewalls blocking direct comms and needing annoying firewall-busting measures to be taken.

Perhaps we need an official, standardised remote and local firewall-opening protocol of some sort, with authentication, ACLs, friend-or-foe identification and databases of policy rules. I haven’t thought this through at all yet.
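
There are partial precedents already: NAT-PMP and its successor PCP let a host ask its own gateway to open a pinhole, though only on the local side and with nothing much in the way of authentication. A rough sketch of a NAT-PMP mapping request (RFC 6886), assuming your gateway is at 192.168.1.1 and actually speaks it:

Code:
import socket
import struct

# NAT-PMP (RFC 6886): ask the local gateway to map an external port to us.
# PCP (RFC 6887) is the richer successor.  Request layout: version (0),
# opcode (1 = map UDP, 2 = map TCP), reserved, internal port, suggested
# external port, requested lifetime in seconds.
GATEWAY = "192.168.1.1"          # assumption: your router's LAN address
NAT_PMP_PORT = 5351

def natpmp_map_request(internal_port, external_port, lifetime=3600, tcp=True):
    return struct.pack("!BBHHHI", 0, 2 if tcp else 1, 0,
                       internal_port, external_port, lifetime)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(natpmp_map_request(8080, 8080), (GATEWAY, NAT_PMP_PORT))
try:
    reply, _ = sock.recvfrom(64)
    print("gateway replied:", reply.hex())
except socket.timeout:
    print("no NAT-PMP listener on this gateway")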
Title: Re: SCTP unfriendliness
Post by: niemand on September 22, 2018, 12:43:20 PM
Quote
some corporate firewalling policies might hate SCTP traffic and frustrate anyone trying to deploy apps that speak SCTP over the internet?

If desperate, an application developer could always as a last resort simply add a functionally useless UDP header to the SCTP packets and disguise them that way, so that the kit in question won’t have a chance to object. That doesn’t help with crazy corporate or institutional networks where no internet access is allowed, or where maybe only HTTP over TCP is permitted through some sort of proxy / gateway or proxy cache.

I am thinking this requires some explanation.

In my experience across hundreds of enterprises, corporate firewalling policies have no particular distaste for SCTP; however, as you should be aware, the default rule at the end of access lists / firewall policies is 'deny'. Unless there's a business case to have IP protocol number 132 permitted, it will not be, and shouldn't be.

Acronym soup time.

Regarding encapsulating it in UDP, see above. SCTP has a well-known UDP port for NAT traversal and the like, documented in an RFC (RFC 6951, port 9899). Even if you try to be 'clever' and use ephemeral UDP ports or ports registered to other protocols, DPI will note the SCTP headers and at best refer back to the policy for SCTP, and at worst immediately deny the traffic as a possible attempt to exfiltrate data via a covert channel.
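
To give a flavour of how cheap that detection is, a toy heuristic, nothing like a real DPI engine, only has to glance at the start of the UDP payload:

Code:
import struct

# Toy illustration of why hiding SCTP inside UDP doesn't fool DPI: the SCTP
# common header and first chunk header sit in plain sight at the start of
# the UDP payload.  Real engines do far more than this.
SCTP_OVER_UDP_PORT = 9899
CHUNK_TYPES = set(range(15))     # DATA, INIT, SACK, ... SHUTDOWN COMPLETE

def looks_like_sctp(udp_dst_port, payload):
    if udp_dst_port == SCTP_OVER_UDP_PORT:
        return True                                # the registered port
    if len(payload) < 16:                          # 12-byte header + 4-byte chunk
        return False
    _src, _dst, _vtag, _csum = struct.unpack("!HHII", payload[:12])
    chunk_type, _flags, chunk_len = struct.unpack("!BBH", payload[12:16])
    return chunk_type in CHUNK_TYPES and 4 <= chunk_len <= len(payload) - 12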

No need for any proxies or allowing HTTP/S via well-known ports. Through a combination of simple ACLs, DPI, SPI and ALGs, usually based around traversal of zones, everything a business needs to do business can be permitted, either explicitly or implicitly. If SCTP doesn't form part of that, it should, as part of a responsible security policy, be denied.
Title: Re: SCTP unfriendliness
Post by: niemand on September 22, 2018, 01:04:12 PM
Quote
I recently saw an article about a proposed upgrade to SCTP that supports link aggregation.

There is already an excellent thing that is a layer over TCP: Multipath TCP (MPTCP), where you can use two different links at once, and Apple uses it for Siri. Siri simultaneously exploits both 3G/4G and any current WLAN in order to get maximum reliability and performance, because they are desperate to get Siri response time down as low as possible; also, DSL has rubbish upstream, so that is another argument for using 3G/4G.

That is a perfect example of the kind of situation where something like SCTP could come in first: one where a single company is in control of both end-points.

Siri uses multipathing in an active-passive configuration to permit smoother failover and failback between WiFi and 3G/4G.

A mobile phone that insists on using your, usually limited, mobile data allowance as it sees fit for purely performance reasons despite your being connected to WiFi isn't going to amuse people.

There are going to be very few instances where a company is in full control of everything on both sides of a network. Far more common is to have control over the network edge, as companies usually own their own routers or at least control what they plug into them. With that in mind, a better solution is maybe an application-driven network that takes the complexity away from the individual devices at layers 3 and 4, abstracts the transport methods being used, and runs as a software-defined overlay network that can do what SCTP can do, for all traffic: enforcing security policies, ensuring packet delivery, taking path-characteristic measurements at layer 7 rather than just layer 4, selecting best paths on a per-packet basis, and dynamically reconstructing lost packets per policy.

Could even call it a Software Defined Wide Area Network.

Relying on people using client devices from a single company to connect to devices from that same single company makes the open standard that is NAT look white box by comparison.

As far as doing it in software goes, check out how much of your traffic is using vanilla HTTPS and how much is using the (probably superior) QUIC. Just the traffic to Google, I suspect.
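
Easy enough to check from the client side, since servers that speak QUIC advertise it in the Alt-Svc response header. A rough stdlib-only sketch; the header contents vary (Google has used both 'quic' and 'h3' tokens):

Code:
import http.client

# Servers offering QUIC advertise it to ordinary HTTPS clients via the
# Alt-Svc response header, so a HEAD request is enough to check.
def advertises_quic(host):
    conn = http.client.HTTPSConnection(host, timeout=5)
    conn.request("HEAD", "/")
    alt_svc = conn.getresponse().getheader("alt-svc", "")
    conn.close()
    return "quic" in alt_svc or "h3" in alt_svc

for host in ("www.google.com", "www.youtube.com"):
    print(host, "advertises QUIC:", advertises_quic(host))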

More and more of everything is fog computing. Why increase networking complexity on client devices at a time when compute, storage, etc. are being moved off them? Put it in the router / network edge appliance / software: you cover everything connecting via it and you're done.
Title: Re: SCTP unfriendliness
Post by: kitz on September 22, 2018, 05:29:40 PM
Off topic alert, but not when talking about multiple streams & new protocols.

>> how much is using the (probably superior) QUIC


Interesting read - Meteoric Rise of Google QUIC (https://owmobility.com/blog/meteoric-rise-google-quic-worrying-mobile-operators/).
This bit caught my eye. Although hardly anyone other than Google is using it right now, I wonder how problematic it could become for networks with limited bandwidth.

Quote
an encryption-based protocol, traffic is not visible at all to the mobile operator, meaning that they cannot use traditional traffic management tools.

/snip/

The concept seems ideal; the technology, however, introduces challenges to operators as ABR is an inherently ‘greedy’ protocol, consuming the highest bit rate that it can sustain unless video playback is interrupted.

I'm thinking back to the days when p2p (another 'greedy' protocol) could cripple [say] a BE* backhaul.  Try to do something over HTTP and you might only get 3Mbps, but open up a p2p/NNTP app with multiple streams and you could easily max out a 24Mbps line, which just made the problem worse for other types of traffic.
Title: Re: SCTP unfriendliness
Post by: Chrysalis on September 22, 2018, 06:37:30 PM
HTTP/2 is nice and Google is pushing it, but with that said it's not taking over from HTTP/1.x any time soon.
There are things that Google adopted, and even introduced themselves, that later got abandoned; one recent example is public key pinning (HPKP), which has been removed from Chrome after only a few years, so Google can be too impatient to allow things to get adopted.  https://www.theregister.co.uk/2017/10/30/google_hpkp/

With that said, HTTP/2 is used on YouTube and other Google sites for performance reasons, so their motivation will be higher than it was for HPKP and I don't think it will get removed from Chrome.

For reference, the domain hosting my line stats in my sig uses HTTP/2.

Title: Re: SCTP unfriendliness
Post by: niemand on September 22, 2018, 07:36:21 PM
QUIC adapts to congestion better than normal TCP and is easily recognisable as QUIC, so although it can't be shaped in any clever way, simply dropping packets has to suffice.
Title: Re: SCTP unfriendliness
Post by: niemand on September 23, 2018, 12:37:02 AM
As far as P2P goes, the killer was the number of flows it opened. Very approximately, each TCP flow on your connection gets roughly the same bandwidth. When you've a hundred torrent flows on a 20Mb connection, eating 200kb/s each, a 101st TCP flow isn't going to get much capacity, not least because congestion control will kick in before it can ramp up very much.
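
Back-of-envelope version of that, treating the sharing as exactly per-flow, which it isn't quite, but it's close enough to show the effect:

Code:
# Rough per-flow fair-share arithmetic: a bottleneck shared roughly equally
# between TCP flows gives each user a share proportional to their flow count.
LINK_MBPS = 20.0

def per_flow_share(total_flows):
    return LINK_MBPS / total_flows

torrent_flows = 100
print(per_flow_share(torrent_flows + 1))                    # ~0.198 Mbps for the 101st flow
print(torrent_flows * per_flow_share(torrent_flows + 1))    # ~19.8 Mbps for the torrent client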

The UDP version of the protocol, uTP, was actually better behaved.
Title: Re: SCTP unfriendliness
Post by: kitz on September 23, 2018, 10:36:33 AM
Sorry, I don't think I explained it very well.  I was pondering the effect on the backhaul and on other types of traffic for other users. When I said networks I meant network operators & service providers.

I know things will have scaled up since, but I was using the BE* example because it was simple: when they started up, some of the satellite exchanges only had a 100Mbps backhaul.  After a while you might get users seeing congestion and speeds dropping to 5Mbps if a lot of those users were constantly downloading.  If everything were equal (and I know it never is, because you also get bursty traffic), that means 20 users getting 5Mbps per connection.  So there's user A using his 5Mbps, but along comes user B, who opens up p2p with multiple streams and is thus still able to get his full 24Mbps, which has the effect of pushing down the available backhaul bandwidth for other traffic (and users) even further.

The article implied that QUIC could be challenging for network providers to shape.  I think I recall reading somewhere else that if UDP (or the ports) is blocked then it can fall back to TCP.  So whilst it's only really Google using it at the moment, what's to stop other applications using it in future?
I suppose the only saving grace is that bandwidth isn't cheap on mobile networks and is quite often limited.  So if someone invents a new p2p-type file-sharing system using QUIC, you're hardly likely to use it over a mobile connection.
   
Title: Re: SCTP unfriendliness
Post by: niemand on September 23, 2018, 12:06:15 PM
Ah yes!

So with TCP the headers can be manipulated to shape the flows: mess with window sizes / MSS / ECN / whatever. Obviously with QUIC not so much, so you just have to drop packets to shape them.
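
The window trick works because a TCP sender can never have more than one receive window's worth of data in flight, so a clamped window caps throughput at roughly window divided by round-trip time:

Code:
# Why rewriting the advertised window shapes a TCP flow: at most one window
# of unacknowledged data can be in flight per RTT, so
# throughput <= window / RTT regardless of link speed.
def max_throughput_mbps(window_bytes, rtt_ms):
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

print(max_throughput_mbps(16 * 1024, 40))     # ~3.3 Mbps with a 16 KB clamp
print(max_throughput_mbps(256 * 1024, 40))    # ~52 Mbps unclamped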

To be honest, though, on most networks packets are just buffered and then dropped these days. Flow counts are getting too high to justify spending resources manipulating layer 4, unless you really, really want to invest in seriously expensive kit.

I remember the ham-fisted way upstream traffic was shaped by Virgin Media's implementation of Allot equipment. Setting MSS to 536 was the thing I, as initiator of the flow, could see. No idea what else was happening. Downstream shaping was much simpler.