Kitz ADSL Broadband Information
Pages: 1 ... 5 6 [7] 8 9 ... 17

Author Topic: High packet loss on Virgin  (Read 32557 times)

Fezster

  • Member
  • **
  • Posts: 21
Re: High packet loss on Virgin
« Reply #90 on: June 01, 2020, 06:56:27 PM »

It was absolutely fine today. Thanks again for the tip!!
Logged

Ronski

  • Helpful
  • Kitizen
  • *
  • Posts: 4300
Re: High packet loss on Virgin
« Reply #91 on: June 01, 2020, 07:34:50 PM »

Thanks from me also; only a couple of spikes over the last 24 hours.



Compared to Saturday's

Logged
Formerly restrained by ECI and ali,  now surfing along at 390/36  ;D

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7382
  • VM Gig1 - AAISP L2TP
Re: High packet loss on Virgin
« Reply #92 on: June 03, 2020, 01:02:14 PM »

I don't know what the cause of the issue Ronski discovered is.

But what I will say is this.

A while back (several years now), the FreeBSD developers decided to modify PF, the firewall imported from OpenBSD, so that it scales better across extra CPU cores. At the time this was considered a great thing because it raised the PPS performance cap, but once the patch was in, nobody wanted to merge future PF updates from OpenBSD because the code was now too different.  That standoff lasted for several years, with more and more outstanding bugs in PF/ALTQ left unresolved. Then another discussion happened and it was decided still not to update PF *sigh*, but at least the developers are now starting to maintain PF and fix its FreeBSD-specific bugs.  The OpenBSD developers maintain that the version of PF in FreeBSD is now an outdated, buggy mess.

I have been using FreeBSD since the 4.x days, a long, long time.  I migrated my servers to PF not long after it was ported over, as I considered it a large step forward from ipfw.  However, in all the years I have been using PF it has become apparent that there are a lot of bugs. Many of them are either very minor or can be worked around; some cannot be worked around and are just there. IPv6 in particular has a fair amount of buggy behaviour in FreeBSD PF; one example is that PF will not pass on fragmented packets, and if you run Cloudflare's fragment tester tool from behind a PF firewall this will be evident.

For as long as OPNsense and pfSense are based on FreeBSD they effectively have to adopt the buggy PF. OPNsense now has a couple of HardenedBSD developers on its team who have been porting over OpenBSD security features. I would absolutely love for OPNsense to move to OpenBSD: they would then have the properly fixed, modern PF, and I think that move alone would drag a big part of pfSense's user base over to them.

As for bugs that only affect pfSense but not OPNsense, a possible explanation is that pfSense patches parts of the kernel with custom networking code. That kernel source code is no longer publicly available (one reason many devs jumped ship to OPNsense), so it is possible these patches have introduced even more bugs than base FreeBSD has.

It is on my to-do list to migrate to OPNsense at home; I no longer use pfSense in datacentres, they all migrated to OPNsense a long time ago.

There are also little niggly things that are not bugs but have been implemented in OPNsense and not in pfSense.

So, for example, in OPNsense I can block outbound DNS requests that are not directed at my LAN IPv6 DNS server and actively reroute them to the LAN DNS server, the same way I do on IPv4; basically I force my whole LAN to use the firewall's DNS server on both stacks. On pfSense this is only possible on IPv4. Even though the capability is there in PF on the command line, I think one of the lead devs is massively against NAT66, NAT46 and so on, and although this is not technically NAT it is seen as a NAT-type feature, so the rdr feature is IPv4-only in pfSense.
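To give an idea of the kind of rule I mean, here is a hand-written sketch in pf.conf syntax (illustrative only; the interface name and addresses are placeholders for your own LAN interface and firewall DNS addresses, and it is not something the pfSense GUI will generate for IPv6):

Code:
  # Catch any LAN DNS query not already aimed at the local resolver and
  # redirect it there (FreeBSD pf still uses the separate rdr rule syntax).
  lan_if = "igb1"
  dns4   = "192.168.1.1"
  dns6   = "fd00::1"

  rdr on $lan_if inet  proto { tcp, udp } from any to ! $dns4 port 53 -> $dns4 port 53
  rdr on $lan_if inet6 proto { tcp, udp } from any to ! $dns6 port 53 -> $dns6 port 53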
« Last Edit: June 03, 2020, 01:08:51 PM by Chrysalis »
Logged

PhilipD

  • Reg Member
  • ***
  • Posts: 591
Re: High packet loss on Virgin
« Reply #93 on: June 03, 2020, 01:29:41 PM »

Quote
IPv6 in particular has a fair amount of buggy behaviour in FreeBSD PF; one example is that PF will not pass on fragmented packets

I thought that with IPv6, fragmented packets were a thing of the past, and so support is not required by routers? I've been using IPv6, with over 65% of all traffic leaving or arriving over the WAN being IPv6, and have had no issues here.

Quote
It is on my to-do list to migrate to OPNsense at home; I no longer use pfSense in datacentres, they all migrated to OPNsense a long time ago.

I keep wanting to give OPNsense a go, and will when I get the chance; I like the fact that it is all open source.

Quote
... one of the lead devs is massively against NAT66, NAT46 and so on, and although this is not technically NAT it is seen as a NAT-type feature, so the rdr feature is IPv4-only in pfSense.

I can understand why they are against it, as the whole point of IPv6 was to remove the need for this sort of thing, and it can break things. At the end of the day they have to decide how much development work to spend on features, and if hardly anyone is going to use a feature or really needs it, its priority just gets pushed down and down.

As for the Virgin Media chart and all the yellows, blues and spikes, that is down to VM and their network; I don't think I've ever seen one look much different. My BQM chart is below, on pfSense 2.4.5; the only difference is that it isn't Virgin Media.

Regards

Phil

« Last Edit: June 03, 2020, 01:33:39 PM by PhilipD »
Logged

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5260
    • Thinkbroadband Quality Monitors
Re: High packet loss on Virgin
« Reply #94 on: June 03, 2020, 02:09:03 PM »

Quote from: Chrysalis on June 03, 2020, 01:02:14 PM
A while back (several years now), the FreeBSD developers decided to modify PF, the firewall imported from OpenBSD, so that it scales better across extra CPU cores ... The OpenBSD developers maintain that the version of PF in FreeBSD is now an outdated, buggy mess.

I feel like some of that story is missing, as you do not mention whether OpenBSD eventually made PF multi-core, and how they addressed it differently to FreeBSD.
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7382
  • VM Gig1 - AAISP L2TP
Re: High packet loss on Virgin
« Reply #95 on: June 03, 2020, 03:26:29 PM »

You're right, it's not the complete story; if it were, it would have been a much, much longer post.

But as I understand it, the per-core performance of the latest PF is now orders of magnitude faster, which to some extent cancels out the effect of the multi-threading addition to the FreeBSD code.  I don't know if they actually added their own core scaling as well, as I haven't been following every change.  But the PF situation has been one of the biggest political issues in FreeBSD in recent years.

Quote
I thought that with IPv6, fragmented packets were a thing of the past, and so support is not required by routers? I've been using IPv6, with over 65% of all traffic leaving or arriving over the WAN being IPv6, and have had no issues here.

The PF bug in question probably only affects one use-case scenario I can think of, and it's a very uncommon one: DNSSEC. That is why you haven't noticed any issues. As I said, a lot of these bugs are trivial things that won't break a connection by themselves.

In terms of the fragmentation: routers themselves are not supposed to fragment IPv6 packets, but if they receive packets that are already fragmented, the question is whether they should forward those packets as normal. PF in FreeBSD silently drops them, with no way to change the behaviour other than editing the source code; PF in OpenBSD passes them on, and iptables passes them on. I am not 100% sure of all the technical detail on this issue, I think it only affects certain fragments, but regardless it is an issue that the vast majority of people won't notice.
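If you want a rough way to check your own path, you can ask for a DNSSEC-signed answer over UDP with a large EDNS0 buffer from a machine behind the firewall (a sketch only; the zone and resolver are just examples, and whether the reply actually ends up big enough to be fragmented depends on the zone and on the resolver's own UDP size limit):

Code:
  # Ask for a large DNSSEC answer over UDP only, advertising a 4096-byte buffer,
  # so the reply may exceed the path MTU and arrive as fragments.
  dig @2606:4700:4700::1111 isc.org DNSKEY +dnssec +bufsize=4096 +notcp +time=3

  # If that times out but the same query over TCP works, fragments are likely being dropped.
  dig @2606:4700:4700::1111 isc.org DNSKEY +dnssec +tcp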

Quote
I can understand why they are against it, as the whole point of IPv6 was to remove the need for this sort of thing, and it can break things. At the end of the day they have to decide how much development work to spend on features, and if hardly anyone is going to use a feature or really needs it, its priority just gets pushed down and down.

PF already supports it, so the code is there; it's basically just the UI frontend. NAT on IPv6 has split the industry: some people want it in, some are strongly against it. There are people who say NAT was a hack that bought IPv4 an extra decade before address exhaustion, that this was its only purpose, and that it should be nowhere near IPv6; there are others who say they have designed their networks around private address space and want to continue doing so in the future. Personally I see "some" merit in NAT, and I think the choice should be down to the administrator of the network, but it's probably only useful for corporate networks and edge cases. There is, however, certainly a case for allowing redirect to be used in the firewall.

What was interesting, though, is that I found a blog by one of the guys who has worked with address space for many years, and he really opposes IPv6 NAT, yet the article on his blog was an IPv6 NAT how-to for iptables. So how did that come about? Basically, he ordered a service from a datacentre provider and it was supplied with an IPv6 /128, a single IPv6 address. He ended up finding that there are situations where NAT might be desired; his other option, he said, was tunnelling, but NAT had the lower performance impact, so he used it and published the how-to. I think it is really silly to give a customer just one IPv6 address, but sadly it does happen; there are providers who operate that way.
« Last Edit: June 03, 2020, 03:32:46 PM by Chrysalis »
Logged

Alex Atkin UK

  • Addicted Kitizen
  • *****
  • Posts: 5260
    • Thinkbroadband Quality Monitors
Re: High packet loss on Virgin
« Reply #96 on: June 03, 2020, 05:25:05 PM »

Quote from: Chrysalis on June 03, 2020, 03:26:29 PM
But as I understand it, the per-core performance of the latest PF is now orders of magnitude faster, which to some extent cancels out the effect of the multi-threading addition to the FreeBSD code.

I'm dubious, as pfSense have supposedly been working towards being able to route at 10-gig speeds, and that's with the multi-threaded model.
Logged
Broadband: Zen Full Fibre 900 + Three 5G Routers: pfSense (Intel N100) + Huawei CPE Pro 2 H122-373 WiFi: Zyxel NWA210AX
Switches: Netgear MS510TXUP, Netgear MS510TXPP, Netgear GS110EMX My Broadband History & Ping Monitors

Ronski

  • Helpful
  • Kitizen
  • *
  • Posts: 4300
Re: High packet loss on Virgin
« Reply #97 on: June 03, 2020, 07:58:04 PM »

Quote
As for the Virgin Media chart and all the yellows, blues and spikes, that is down to VM and their network; I don't think I've ever seen one look much different. My BQM chart is below, on pfSense 2.4.5; the only difference is that it isn't Virgin Media.

Phil, you are missing the point completely: the problem I and many others were having is clearly related to some interaction between pfSense and the SH3, which manifested itself after a firmware update on the hub back in January/February.

Yes, the base yellow in the good graphs is down to Virgin's network, but that does not affect the operation of the connection.

This is my connection after the SH3 firmware update. Unfortunately I can't seem to access the chart from when it actually changed, but it literally went from something like the second graph to the first. This was having a bad effect on our connectivity, with my daughters regularly complaining; sometimes the graphs were much worse.



This is the SH3 in router mode - see the difference?



This is the SH3 in modem mode, using my old Zyxel as the router; again, can you see the difference from the top graph?



Now we are back to the SH3 in modem mode, and pfSense with lots of various adjustments. You can see we still have peaks all the way to the top of the graph, although it is clearly a lot better than the first graph.



Now we have a graph from after I changed to DNS forwarding mode, as suggested by another user who was having exactly the same problems as me.



I know of at least three other people who have changed to DNS forwarding mode and it has cured the problem: two here and one over on the Virgin Media forums.

This is the user on the VM forums; a graph showing when they were having the problem (ignore the large red chunk).




And this is now after changing to DNS Forwarding mode.






« Last Edit: June 03, 2020, 08:01:11 PM by Ronski »
Logged
Formerly restrained by ECI and ali,  now surfing along at 390/36  ;D

PhilipD

  • Reg Member
  • ***
  • Posts: 591
Re: High packet loss on Virgin
« Reply #98 on: June 03, 2020, 09:57:41 PM »

Hi

Apologies, I was reading it as the yellow still being the issue; I'd forgotten about the charts before that and the more service-affecting issues.

Regards

Phil
Logged

Ronski

  • Helpful
  • Kitizen
  • *
  • Posts: 4300
Re: High packet loss on Virgin
« Reply #99 on: June 03, 2020, 10:18:39 PM »

Easily done, no problem.
Logged
Formerly restrained by ECI and ali,  now surfing along at 390/36  ;D

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7382
  • VM Gig1 - AAISP L2TP
Re: High packet loss on Virgin
« Reply #100 on: June 04, 2020, 09:11:40 AM »

Interesting, Ronski. If you don't mind, let's try this experiment.

The difference between the DNS forwarder and the DNS resolver is that, by default, the DNS resolver sends uncached lookups directly to the authoritative DNS servers, whilst the DNS forwarder sends lookups to whatever DNS you have configured as your upstream.

However, the unbound DNS resolver can quite easily be configured to act in forwarder mode, and I am very curious whether running it that way also stops the excessive spikes. If it does, then we know that when your unit sends lookups directly to authoritative servers it is for some reason causing delays to inbound pings, which given the hardware spec of your unit is very weird, but we would at least have narrowed it down to that.

So, to do this, you would switch back to the DNS Resolver.
Then in the Services menu select DNS Resolver.
On that page there is a tick box for "Enable Forwarding Mode".
Tick that, hit Save, and hit Apply.
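For anyone wondering what that tick box actually does, it roughly corresponds to adding a forward-zone for the root to unbound's configuration, something like the sketch below (illustrative rather than pfSense's exact generated config; the upstream addresses are just examples, pfSense uses whatever DNS servers you have configured under System > General Setup):

Code:
  # Added to the unbound configuration; the rest of the resolver settings stay as they are.
  forward-zone:
      name: "."                 # forward queries for every domain
      forward-addr: 1.1.1.1     # example upstream resolvers
      forward-addr: 8.8.8.8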

Then check later whether it has the same effect. If it does, I would keep running in this mode, as unbound is newer and has better features than the forwarder service, but it is also useful for diagnosing this problem.

Also, in the advanced section of the DNS Resolver you can change the logging verbosity, which might be useful for showing any errors that are occurring, but it will flood your log, so my request is just to switch it to forwarder mode.

Just remembered: also make sure "Register DHCP leases in the DNS Resolver" is unticked. Really, that should be unticked by default in my opinion.

When it is ticked, every time a device registers on your DHCP, unbound restarts to add the hostname, which is silly, so that box should be unticked; an unbound restart could certainly cause spikes, especially when using large DNS lists, which I believe you do with pfBlockerNG, right? So this one may well be your problem. Note that with the DNS forwarder service on instead of the DNS resolver, your filter lists won't be working any more (they will still work when using the DNS resolver in forwarder mode).
« Last Edit: June 04, 2020, 09:26:22 AM by Chrysalis »
Logged

Ronski

  • Helpful
  • Kitizen
  • *
  • Posts: 4300
Re: High packet loss on Virgin
« Reply #101 on: June 04, 2020, 10:10:29 AM »

Hi Chrysalis,

I am using the DNS Resolver in forwarding mode, with "Register DHCP leases in the DNS Resolver" unticked.

I thought at first they meant to use the DNS forwarder service, but someone explained in a post on the previous page how to set it up.
Logged
Formerly restrained by ECI and ali,  now surfing along at 390/36  ;D

Fezster

  • Member
  • **
  • Posts: 21
Re: High packet loss on Virgin
« Reply #102 on: June 04, 2020, 10:38:30 AM »

Chrysalis -

Same as Ronski. I am using unbound in forwarder mode. This has also resolved the problem for me.

FYI:

1. I have UNticked "Register DHCP leases in the DNS Resolver".

2. I have ticked "Register DHCP static mappings in the DNS Resolver".
3. I have also ticked "Use SSL/TLS for outgoing DNS Queries to Forwarding Servers".


Logged

underzone

  • Reg Member
  • ***
  • Posts: 442
Re: High packet loss on Virgin
« Reply #103 on: June 04, 2020, 03:44:50 PM »

Chrysalis, just to clarify, do you mean to use the DNS Forwarder (dnsmasq) rather than the DNS Resolver (unbound) in pfSense?
Logged

Chrysalis

  • Content Team
  • Addicted Kitizen
  • *
  • Posts: 7382
  • VM Gig1 - AAISP L2TP
Re: High packet loss on Virgin
« Reply #104 on: June 04, 2020, 03:50:30 PM »

Underzone, so the DNS Forwarder service is dnsmasq.

Unbound is the DNS Resolver service, but if you enable forwarding in the DNS Resolver settings you will still be using unbound (the DNS Resolver service), just in forwarding mode instead of resolving mode. I hope that makes sense.

What this changes is only what happens when it has to go out to the internet to make a lookup. Local caching and internal DNS lists work the same regardless of whether it is in forwarding or resolving mode. I recommend forwarder mode anyway, because a busy DNS server like Cloudflare, Google or your ISP's will have far more cache hits due to the amount of traffic they get.
« Last Edit: June 04, 2020, 03:54:23 PM by Chrysalis »
Logged