Summer - Winter

Weaver:
I had an idea. I have lost a lot of downstream speed, about 10%. Looking back, if I remember correctly, the same was true in earlier years, comparing early-summer downstream sync rates with midwinter ones. The question is whether it is due to the length of the day, the temperature, or dry versus wet weather - in short, whether it is seasonal, driven by day length, driven by something else environmental, or none of these.

It might be that I could find a comparison basis in my own records or in old kitz postings, but I can't go back far because I changed modem models roughly twelve months ago; I forget exactly when.

Would anyone else be interested in doing such a comparison in the future, though? We could save comprehensive data now, on a dry day - or better still, on a wet day too - then do the same in winter and compare. We might be able to find some generalities in bit loadings and noise levels. A sketch of the kind of snapshot logging I have in mind follows below.
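
As a starting point, something like this could do the logging (a hypothetical illustration: the stats arrive on stdin from whatever capture command you use, and the regexes are placeholders - real Broadcom CLI output varies, so they will need adjusting):

    #!/usr/bin/env python3
    # Sketch: append a dated record of the headline line stats to a CSV,
    # reading a raw stats dump (captured from the modem CLI) on stdin.
    # The regexes are placeholders - adjust to whatever your modem prints.
    import csv
    import re
    import sys
    from datetime import date

    def grab(pattern: str, text: str) -> str:
        """First captured group of pattern, or '' if absent."""
        m = re.search(pattern, text)
        return m.group(1) if m else ""

    raw = sys.stdin.read()
    row = [
        date.today().isoformat(),
        grab(r"Downstream rate = (\d+)", raw),   # placeholder patterns:
        grab(r"Upstream rate = (\d+)", raw),     # edit for your modem's
        grab(r"SNR \(dB\):\s*([\d.]+)", raw),    # actual output format
        grab(r"Attn\(dB\):\s*([\d.]+)", raw),
    ]
    with open("line-stats.csv", "a", newline="") as f:
        csv.writer(f).writerow(row)

Run it on each capture, e.g. capture-command | python3 log_stats.py, and come winter the CSV rows line up for direct comparison.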

ejs:
I now have over five years of my own ADSL stats data stored. Last time something like this was mentioned, I produced a graph showing the attenuation and speed over a whole year. Attenuation increases as temperature rises, reducing the speed.
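
To put rough numbers on that, here is a crude illustration of the underlying copper effect. The 0.00393 per °C coefficient is the standard figure for copper resistivity; real line attenuation depends on frequency and skin effect, so it will not track DC resistance exactly, but it shows the scale of the seasonal swing:

    # Standard copper figure: resistance rises linearly with temperature,
    # R(T) = R20 * (1 + ALPHA * (T - 20)), ALPHA ~ 0.00393 per degC.
    ALPHA = 0.00393

    def resistance_scale(temp_c: float) -> float:
        """Loop resistance relative to its value at 20 degC."""
        return 1.0 + ALPHA * (temp_c - 20.0)

    for t in (0, 10, 20, 30):
        print(f"{t:3d} degC: R/R20 = {resistance_scale(t):.3f}")

A 0 °C to 20 °C swing moves loop resistance by about 8%.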

burakkucat:

--- Quote from: ejs on May 20, 2019, 06:48:05 PM ---Attenuation increases as temperature rises, reducing the speed.

--- End quote ---

I do not regularly monitor the statistics for my own circuit but, from observations made over the last 12 years, I agree with the above statement.

Weaver:
Ah, that makes a lot of sense. And I do have a long length of copper. The absolute temperature here is about 7% higher in summer than in winter, occasionally even more - even 9%.

I shall try to find ejs's previous thread. Sincere apologies, as usual, for repeatedly asking the same questions. The reason I ask now is that my deterioration seemed large and sudden, and temperature is probably the answer. There was snow in April, which would have cancelled out any possible effect associated with the rapidly shortening nights. Of course, this far north the difference in the length of the day is greater - but there are definitely kitizens further north than me.

Thanks for reminding me about the existence of the earlier thread.

[Moderator edited to merge three "tweet"-like posts into one.]

Weaver:
Recently I have lost a lot of upstream speed - down from 1.55 Mbps to 1.36 Mbps combined total TCP payload (I think) - at any rate, whatever speedtest2.aa.net.uk and the testmy.net upload test report.

This has had a very noticeable bad effect on the quality of FaceTime video calls. Idiot FaceTime goes out to the internet and back (I am pretty certain) even when all it needs to do is cross the 300 Mbps WLAN. In fact, the way I have the WLAN set up, I default to being on a different SSID from my wife, so if I FaceTime call her in the house then, if FaceTime were only sane, there would be a full 300 Mbps allocation for each of us and we would not be experiencing contention between the two of us. (I presume that each of us would still have to divide that bandwidth between our own tx and rx, as I am telling myself that this is not a full-duplex WLAN medium.) But incredibly I am having to make do with half of 1.3 Mbps as the FaceTime bottleneck. And there is the cost of all those bytes received and transmitted, incurred for no reason.

But in addition to the upstream loss there has been a loss of very roughly 1 Mbps of IPv4 TCP payload downstream, down from an earlier 10.6 Mbps to around 9.69 Mbps, although measurements seem to be all over the place. The downstream sync rates below were captured at around 2.2 - 3.0 dB downstream SNRM, with 6 dB upstream SNRM. Sync rates are as follows (a rough note on what SNRM changes of that size are worth follows the list):

Live sync rates:
  #1: down 2737 kbps, up 522 kbps
  #2: down 2716 kbps, up 505 kbps
  #3: down 2788 kbps, up 412 kbps
  #4: down 2883 kbps, up 499 kbps
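
For scale, the standard DMT rule of thumb is that each ~3 dB of SNR corresponds to one bit per tone per symbol, at 4000 symbols per second - so roughly 4 kbps of sync rate per active tone per bit. A rough sketch (the tone count is an assumed figure for illustration, not taken from these modems):

    # Rule of thumb: ~3 dB of SNR per bit per tone, 4000 DMT symbols/s,
    # so each bit loaded on an active tone is worth ~4 kbps of sync rate.
    SYMBOL_RATE = 4000      # DMT symbols per second
    DB_PER_BIT = 3.0        # approx. SNR needed per extra bit per tone

    def rate_loss_kbps(snr_drop_db: float, active_tones: int) -> float:
        """Approximate sync-rate loss for a uniform SNR drop across tones."""
        return (snr_drop_db / DB_PER_BIT) * active_tones * SYMBOL_RATE / 1000.0

    # e.g. a 2 dB summer SNR drop across an assumed ~200 active tones:
    print(f"{rate_loss_kbps(2.0, 200):.0f} kbps")   # ~533 kbps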

For upstream, the egress rate limiters are set to IP PDU rates of
    upstream_sync * 0.884434 * 0.965

where:

* 0.884434, the 'protocol efficiency factor', is my estimate of the protocol bloat: the expansion in the number of bytes sent due to the wasteful extra junk added by ATM and the other protocol layers below IP (exclusive), i.e. PPP (inclusive) and below, based on a 1500-byte packet being sent. This is the most optimistic, least-bad case. The waste will be far worse in some scenarios, e.g. for short packets, or for an ATM AAL5 payload whose length does not fit neatly into a whole number of 48-byte cell payloads, leaving empty space in the final cell. This factor therefore gives the fastest theoretically possible rate at which the modems can be driven, expressed as an IP PDU rate. It may well be impractical and could overload the modem, both because of considerations missing from my calculation and because in realistic traffic not all packets will be of the 1500-byte length chosen here, which was the easiest case to deal with since its efficiency is optimal; packets of other lengths would mean the modem is overloaded and cannot keep up. (The sketch after this list makes the arithmetic concrete.)

and

* 0.965, the 'modem loading factor' (MLF), is my choice for how hard to push the modem in the upstream direction: a multiplier applied to the sync rate times the protocol efficiency factor to give the IP PDU egress rate. This has been changed. It used to be 0.70 or 0.75 for the slowest line (line 3) and ~0.95 for all other lines. For reasons unknown, that 'low factor for the slowest line' strategy no longer seems to give any advantage and may in fact give slightly worse results, although despite a large number of tests it is difficult to get accurate figures, as there is always a lot of statistical noise in upstream speed results, even with quite large uploads. That strategy has therefore been abandoned, and currently all lines use the same MLF, since experiments show that equal MLFs are now optimal.
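
As promised, a small sketch reproducing the 0.884434 figure and the resulting per-line limits. It assumes PPPoA with VC-mux over AAL5 - 2 bytes of PPP header plus an 8-byte AAL5 trailer, padded into whole 48-byte cell payloads at 53 bytes per cell on the wire; a different encapsulation would mean a different overhead constant:

    import math

    def atm_efficiency(ip_bytes: int, overhead: int = 10) -> float:
        """IP payload bytes divided by bytes on the wire, assuming
        PPPoA/VC-mux: 2 bytes PPP header + 8 bytes AAL5 trailer,
        padded up to whole 48-byte cell payloads, 53 bytes per cell."""
        cells = math.ceil((ip_bytes + overhead) / 48)
        return ip_bytes / (cells * 53)

    MLF = 0.965                                        # modem loading factor
    upstream_sync = {1: 522, 2: 505, 3: 412, 4: 499}   # kbps, from above

    eff = atm_efficiency(1500)
    print(f"efficiency factor = {eff:.6f}")            # 0.884434
    for line, sync in upstream_sync.items():
        print(f"line {line}: egress limit = {sync * eff * MLF:.0f} kbps")

Trying atm_efficiency(100) shows why short packets are so much worse: a 100-byte packet needs 3 cells (159 bytes on the wire), an efficiency of only about 0.63.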

As far as I can work out, the reason things are now the way they are has to do with the effect of the unequal speeds of the lines on the behaviour of the TCP implementations at one end or the other, and with the misbehaviour of line 3's upstream, which was down to a 330 kbps upstream sync rate last week.

In earlier posts we were talking about seasonal temperature change. I think that might have to be the answer: conductivity vs temperature, or else the state of some joints vs temperature. Regarding the latter, though, it can't be the odd dodgy joint, as the results are the same across all lines. The phenomenon also appears to be frequency-independent, because the fractional change is the same (bearing in mind the error bars) for upstream and downstream. I haven't yet gone through the bits-per-bin allocations and compared them with an old snapshot, though, which I should do to check any ideas about frequency independence - a sketch of such a comparison follows below.
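
A minimal sketch of that comparison, assuming each snapshot has been saved as plain text with one 'tone bits' pair per line (the exact Broadcom CLI bit-allocation format varies, so the parser is a placeholder):

    import re
    import sys

    def load_bits(path: str) -> dict[int, int]:
        """Parse 'tone bits' pairs from a saved bit-allocation dump."""
        bits = {}
        with open(path) as f:
            for line in f:
                m = re.match(r"\s*(\d+)\s+(\d+)\s*$", line)
                if m:
                    bits[int(m.group(1))] = int(m.group(2))
        return bits

    old, new = load_bits(sys.argv[1]), load_bits(sys.argv[2])
    for tone in sorted(old.keys() | new.keys()):
        delta = new.get(tone, 0) - old.get(tone, 0)
        if delta:
            print(f"tone {tone:4d}: {old.get(tone, 0)} -> "
                  f"{new.get(tone, 0)} ({delta:+d})")

A uniform drop across the tones would support the frequency-independence idea; losses bunched at the high-frequency end would point the other way.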

* Two attached files - zipped-up full stats on all modems:

I have attached complete stats for all four modems: see the .zip files below, each simply containing zipped-up plain text taken straight from the modems' Broadcom CLI raw output.

* The first file is the raw command-line output, as captured.
* The 'Burakkucat' file is the same material, still plain text, but formatted in a way that I hope will prove helpful to the Kuro Neko's tools and save a bit of finger wear - assuming I have got the formatting correct, that is - should my friend want to peruse these stats.
[Moderator edited to remove the size zero files.]
