Kitz ADSL Broadband Information
Pages: 1 2 [3] 4 5 ... 10
 on: Today at 08:43:51 AM 
Started by afrofish - Last post by afrofish
Hey all,

Thank you for your help. I thought I'd share an update to help anyone in a similar situation.

I did receive a HomeHub2, which would not work with any non-BT router/mesh system, so I bought an Openreach G.fast modem off eBay.

When the Openreach engineer turned up, I mentioned I wanted to set up using my own Openreach modem, and it turned out he had loads in the van and gave me a brand new one! He was shocked that G.fast modems go for circa £100 on eBay, as engineers have loads in the van and happily give them out to anyone who asks.

Now happy with the setup (Openreach G.fast modem and an Asus ZenWiFi AiMesh). Thank you everyone for your help!

 on: Today at 08:39:47 AM 
Started by daveesh1 - Last post by bogof
In my case, sub-optimal config in Windows (caused by some changes the Hamachi VPN client makes on install... grrrr) was resulting in poor single-thread results at 900 Mbit on my FTTP. QoS and IDS settings in the router were also hobbling performance.
Once that was sorted (receive window autotuning and RSS), it seems it was possible to get very fast single-threaded performance.

Measured single-threaded on an Ubuntu Linux live disc, it was well over 800 Mbit/s.

Windows was limiting single-thread performance to about 60 Mbit/s (still comparatively fast in UK internet terms), so the speed tester, which defaults to 8 threads, could only achieve around 500 Mbit/s (8 x 60 = 480).
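For anyone wanting the arithmetic behind that: the classic ceiling for a single TCP stream is one receive window per round trip. This is a rough illustrative sketch (the 64 KiB window and 10 ms RTT are assumed figures for the example, not measurements from the post), but it lands close to the ~60 Mbit/s Windows figure above:

```python
def max_tcp_throughput_mbit(window_bytes: float, rtt_seconds: float) -> float:
    """Single-stream TCP ceiling: one window per round trip, in Mbit/s."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

# A fixed 64 KiB receive window (autotuning off) over a ~10 ms path:
print(round(max_tcp_throughput_mbit(64 * 1024, 0.010), 1))    # → 52.4

# With receive-window autotuning the window can grow, e.g. to 1 MiB:
print(round(max_tcp_throughput_mbit(1024 * 1024, 0.010), 1))  # → 838.9
```

This is why fixing autotuning mattered: the per-thread cap scales directly with how large the window is allowed to grow.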

And of course, if you only use wireless, then your wireless throughput is another significant factor.

I think the moral of the story is that unless you're lucky and everything aligns, you might well not get 900 Mbit/s out of a 900 Mbit connection, but there is a very large chance that the issues are in your network or computer config, and not in the service. Unfortunately many folk will never work this out, as tuning a gigabit-level network is beyond the reach of many.

 on: Today at 06:40:54 AM 
Started by Floydoid - Last post by Alex Atkin UK
My 32 bit never presented the update in windows update yet other updates downloaded and installed. I had to go to Windows Catalog and download the 32 bit updater manually. No boot problems after install though. 64 bit machines no probs.

In my experience, if an update isn't presented over Windows Update, it means Microsoft aren't convinced it's stable yet.  Quite ironic considering how many updates that did auto-update have proven to be broken.

 on: Today at 06:10:50 AM 
Started by daveesh1 - Last post by Alex Atkin UK
Ordered 300 today, Zen say it will take 4 weeks... hopefully all goes well. I didn't see the point in the extra money for 900 especially hearing people only getting 400 on their 900 lines.

I've not seen anyone saying that, unless you mean single-threaded; everyone is getting over 900 multi-threaded. And remember that even if a single client only gets 400, another client can do 400 at the same time with 100 to spare, so your latency will still be good.

In an ideal world we would always have slightly more bandwidth than we need to avoid ever getting bufferbloat/latency issues.  If you can do everything you need to do without having to enable QoS on the router, that's a win IMO and another reduction in latency.  Especially useful for audio and video chat.

Not saying it was worth 900 in your case, but if you chose 300 because you thought 900 wasn't delivering 900, you're mistaken.

 on: Today at 02:53:30 AM 
Started by Weaver - Last post by Weaver
I just did an overall combined upstream speed test. What is so great about this tool is the simple, straightforward nature of its design (apparently); it seems to be just a simple TCP transfer, but you can also choose the size of the test file it sends in either direction. I chose a much larger upload file this time, 12 MB and then 25 MB. On one of the 12 MB uploads it scored an 890 kbps upload, which is nearly twice the speed of a lot of earlier 400-500k tests. Unsurprisingly, a larger transfer seems to give a more reliable result, but the earlier tests were not bogus either, because the results were fairly consistent across multiple speed testers; roughly speaking, within the noise generally present in each tester. I'm now using these large test uploads to generate enough traffic that I can see the whole thing clearly in the AA CQM ('clueless') graphs, because their duration is now long enough to show up properly.
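As a quick check on why the bigger files help (my arithmetic, not figures from the post): at roughly 890 kbps, the 12 MB and 25 MB uploads run for around two to four minutes, which is plenty long enough to register clearly on the CQM graphs:

```python
def upload_seconds(megabytes: float, kbps: float) -> float:
    """Duration of a transfer at a given rate (decimal MB and kbps)."""
    return megabytes * 8_000 / kbps

print(round(upload_seconds(12, 890)))  # → 108 seconds
print(round(upload_seconds(25, 890)))  # → 225 seconds
```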

 on: Today at 02:50:32 AM 
Started by Weaver - Last post by Weaver

In this later graph, on the right you can see some transfers made with the higher upstream target SNRM of 9 dB and a lower modem loading factor of 94%; this results in an FB XML config 'auto-percent' attribute of approximately 83% instead of 84%, set for line 1 only. Unfortunately changing two things at the same time is a bad idea, because you don't know which change caused any difference in results, and I have knowingly committed that sin here. Looking at the graphs it seems that one or t'other change has cured the major dripping-blood problem. I can soon reverse one of the changes to find out.

So extremely good news.

 on: Today at 02:42:40 AM 
Started by daveesh1 - Last post by thesmileyone
Ordered 300 today, Zen say it will take 4 weeks... hopefully all goes well. I didn't see the point in the extra money for 900 especially hearing people only getting 400 on their 900 lines.

Upgrading from FTTC which only runs at 38/12 so a rather big upgrade.

 on: Today at 01:46:51 AM 
Started by Weaver - Last post by Weaver
I will definitely reduce the auto-percent value and retest. And I need to do the quiet line test as advised by meritez.

Line 1 has been coming up with errors: sporadic packet loss, as well as the aforementioned heavy packet loss whenever there is a flat-out upload. The latter may actually be ok (or not) if it's just a necessary part of the workings of TCP. But packet loss when there's no flat-out transfer is of course not ok, and I have just very occasionally had a notification "line down - line up (7 s)", a period so short that it's not really a case of a modem dropping sync at all; nothing went into the 'down' state, and a modem couldn't possibly reconnect that quickly, it takes ~70-80 secs iirc. Despite this 7-secs-down event, I am not seeing any ES in the modem stats. It seems to me that enough data is getting corrupted sometimes that n successive LCP echo replies are getting lost, or alternatively the downstream is bad and the incoming LCP echo requests are getting corrupted, but I would put my money on the former.

The downstream side has a 6 dB target SNRM and L2ReTX (in the form of Broadcom PhyR, like G.INP), so it is very heavily protected against corruption, yet the upstream side doesn't have L2ReTX, so if anything is going to get corrupted it would be on the upstream side anyway. Upstream was set to a 6 dB SNRM too, but in view of the bogus 7 sec reports I am increasing the upstream SNRM target of line 1 from 6 dB to 9 dB, which should help. It's a real shame there's no L2ReTX, because it would speed up the upstream and/or make it vastly more reliable.

The 7 secs duration could be because I have set the upstream line probing from the Firebrick to be every 6 s; it sends out regular PPP LCP echo request probes to check that a line is really up and truly working. (In the opposite direction, AA's servers send downstream PPP LCP echo requests as well, for a similar reason, and their success or failure to get a reply also shows up on the CQM graphs as bright red dots at the top of the graph - the so-called "dripping blood" - meaning 'failure'.) Coming back to the Firebrick: if a certain number of upstream echo requests get no replies (I forget the exact details), the Firebrick puts the link into the down state, and where bonding is in use it takes that link out of the bonded set, so its share of upstream traffic can go to the other links rather than being sent into a non-functioning link to go nowhere. I've set the duration to be as short as possible without triggering bogus reports when all is working 100% properly, because I want to switch bad links out of the bonded set as quickly as possible so the other links can take over. I can set the duration this short only because I have multiple lines; if there were only one line, I would want the duration set longer so that the line doesn't go into the down state but hangs on until things improve.
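The probing logic described above can be sketched generically. This is not real Firebrick code, and the three-missed-replies threshold is a made-up placeholder (the post itself says the exact count is forgotten):

```python
from dataclasses import dataclass

@dataclass
class LinkMonitor:
    """Generic LCP-style keepalive: probe every `interval_s` seconds and
    declare the link down after `max_missed` consecutive unanswered echoes."""
    interval_s: float = 6.0   # probe period used in the post
    max_missed: int = 3       # hypothetical threshold
    missed: int = 0
    up: bool = True

    def on_probe(self, got_reply: bool) -> bool:
        """Record one probe result; return whether the link counts as up."""
        if got_reply:
            self.missed = 0
            self.up = True    # link rejoins the bonded set
        else:
            self.missed += 1
            if self.missed >= self.max_missed:
                self.up = False   # link leaves the bonded set
        return self.up

mon = LinkMonitor()
# Three lost echoes take the link down; the next reply brings it back.
print([mon.on_probe(r) for r in [False, False, False, True]])  # → [True, True, False, True]
```

With a short threshold and a 6 s probe period, a burst of corrupted echoes followed by a quick recovery would produce exactly the kind of brief "down - up (7 s)" event described, with no modem resync involved.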

If you look at the top of the graph very carefully, you will see the occasional faint bright red single pixel in the yellow; that is the sporadic packet loss problem. Where there is heavy activity you will also see the distinct larger patches of dripping blood, and I think these appear only when there is upstream traffic activity (key on the right; dark red is upstream traffic, darkish green is downstream). I can also separate these graphs out into multiple ones showing traffic in each direction alone, and ones showing latency only, so the display is not as crowded as it is in this combined graph.

I'm supposed to be able to click on the display with my non-existent mouse to get a reading of all the graph data values at that point, but due to the iPad-unfriendly nature of the design, clicking on the graph does bring up this data but also switches to a new page showing that line only, rather than the current 'all lines overview' page, so I immediately can't see the data it has displayed. AA really needs to sort that website out for modern device-agnostic behaviour, given the number of mouseless machines out there now.

The dark red upstream transfers are due to machines backing up their state to the Apple iCloud. On the right there's a white marker, shaped a bit like a balloon, which shows the sync drop when I changed from 6 dB upstream to 9 dB. That caused the upstream line 1 sync rate to drop quite a bit, but it's still very fast. The upstream sync rate of line 1 was definitely the fastest before, and I suspect that it was just running too hot.

It seems to me that without upstream L2ReTX, an upstream 6 dB target SNRM is only just adequate if you want a really low error rate, and perhaps 9 dB is always required for total upstream reliability. If a user is always using TCP, then total reliability is arguably not a sensible goal, as you lose so much speed in trying to achieve it and TCP fixes any problems anyway. But even a supposed 'total TCP' user will use other protocols, such as DNS lookups, which may not be using TCP. Now that I'm using Zoom all the time, a non-TCP application afaik, I'm suddenly no longer in the all-TCP user category and I can't really use a 3 dB downstream target SNRM any more, I think; it may also be the case that I have to look closely at upstream reliability and review whether 6 dB upstream is enough in every case.
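For a sense of scale (plain dB arithmetic, not a claim from the post): going from a 6 dB to a 9 dB target SNRM is a 3 dB step, which is very nearly a doubling of the linear noise headroom, and that extra headroom is what the sync rate drop pays for:

```python
def db_to_linear(db: float) -> float:
    """Convert a decibel power ratio to a linear power ratio."""
    return 10 ** (db / 10)

# 9 dB vs 6 dB target SNRM: a 3 dB increase in margin.
print(round(db_to_linear(9 - 6), 2))  # → 2.0 (about twice the headroom)
```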

 on: Today at 12:48:12 AM 
Started by GigabitEthernet - Last post by adslmax
I live in a G.Fast area but I'm really too far from the cabinet to make it worth my while.

My area was one of the first G.Fast enabled areas; will Openreach be enabling FTTP in these kinds of areas, or will we be stuck for good?

It was not available at 43 Greenfield Road, Great Barr, Birmingham B43 5AR but was available at 47 Greenfield Road, Great Barr, Birmingham B43 5AR. So, Openreach put FTTP on both houses there.

 on: April 15, 2021, 11:37:27 PM 
Started by Floydoid - Last post by banger
I'm at 19042.867 (32 bit version).

My 32 bit never presented the update in windows update yet other updates downloaded and installed. I had to go to Windows Catalog and download the 32 bit updater manually. No boot problems after install though. 64 bit machines no probs.
