Referring to
https://forum.kitz.co.uk/index.php/topic,24614.msg414054.html, here’s a pretty ugly picture of the DSL blight I’ve named “the hollow curve phenomenon”, for want of a better term. It has affected my lines many times over the last two years.
In reality I’d be working on y = bit-loading values in the range 0…15, not a smooth SNR curve.
Generality: unlike the example above, there may well be non-zero values at x < 32. Either way, any values at x < 32 belong to the upstream band and are data to be ignored.
What I want to do is write some pseudo-code to start with, or C, that detects the presence of this illness by inspecting a graph and deciding whether it looks like the above or looks normal, normal being a single hump, while ignoring any sharp dips due to DSL weirdness such as a pilot tone or narrow interference. I’m hoping you all can contribute some mathematical ideas that will make it highly robust and very fast. The finished code will target the iOS Shortcuts engine, which is painfully slow these days in iPadOS 15, so minimising the number of lines of code is the way to go.
There are various ways of detecting the double maximum (ill; hollow curve) versus the single maximum (healthy), but these only work properly if you first filter out the aforementioned sharp dips. I wonder whether such filtering might be the way to go before anything else is done; at least life is simple after that. The alternative is to design a single algorithm that is immune to such dips while still detecting the ‘genuine’ double maxima.
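If pre-filtering turns out to be the way to go, my suggestion would be a median filter rather than a moving average: a median-of-three pass deletes one-bin notches outright without smearing broad features. A sketch in C (array length and types are assumptions):

```c
/* Return the median of three values. */
static int med3(int a, int b, int c) {
    int t;
    if (a > b) { t = a; a = b; b = t; }
    if (b > c) { t = b; b = c; c = t; }
    if (a > b) { t = a; a = b; b = t; }
    return b;
}

/* Median-of-three prefilter: a one-bin notch (pilot tone, narrow
   interference) is replaced by a neighbouring value, while broad
   features such as the hollow between two humps pass through
   almost untouched. */
void median_filter(const int y[], int out[], int n) {
    out[0] = y[0];
    out[n - 1] = y[n - 1];
    for (int i = 1; i < n - 1; i++)
        out[i] = med3(y[i - 1], y[i], y[i + 1]);
}
```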
The simplest (‘cheapo’) test I can think of is just three y-coordinate tests, at x==40, x==60 and x==85. I’m not sure that that is general enough; in the example above there’s a small sharp dip worryingly close to x==85. Something like:

hollow = y[60] < y[40] && y[60] < y[85];

The mathematical ‘heavyweight’ method I can think of would be:
- a moving-average filter to remove the sharp dips (though it affects the entire data set, which is a worry);
- testing for local maxima by calculating the first derivative at x==40, 60 and 85; and
- checking that the first derivative goes from negative to positive somewhere in the region around x==60 [vague].
The problem with the latter ‘heavyweight’ method is where and how to apply the sign-change test. It is also a lot of code, and overkill unless the first, cheap method proves too fragile and the extra robustness is actually needed.
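For what it’s worth, the heavyweight steps above are only a couple of short loops in C; the window width and the scan range are guesses on my part, not settled values:

```c
#define WIN 5  /* moving-average width, odd; an assumption, tune to taste */

/* Step 1: smooth with a short moving average (shrinks at the edges). */
void smooth(const double y[], double out[], int n) {
    int h = WIN / 2;
    for (int i = 0; i < n; i++) {
        double sum = 0;
        int cnt = 0;
        for (int j = i - h; j <= i + h; j++)
            if (j >= 0 && j < n) { sum += y[j]; cnt++; }
        out[i] = sum / cnt;
    }
}

/* Step 2: return 1 if the smoothed curve falls and then rises again
   anywhere inside [lo, hi], i.e. the first difference changes sign
   from negative to positive, which is the hollow. */
int hollow_between(const double s[], int lo, int hi) {
    int falling = 0;
    for (int i = lo + 1; i <= hi; i++) {
        double d = s[i] - s[i - 1];          /* first-derivative estimate */
        if (d < 0) falling = 1;              /* heading into the hollow   */
        else if (d > 0 && falling) return 1; /* ...and climbing out again */
    }
    return 0;
}
```

`hollow_between(s, 40, 85)` would then be the “negative to positive around x==60” test, with the vagueness pushed into the choice of bounds.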
One kind of sharp dip, an extremely severe one, is the pilot tone: it drops all the way from y == whatever the surrounding bit loading is (11, maybe) down to y == 2, at an unpredictable x, and its width is extremely narrow. This can really mess up the simple test: it scores a false positive if the test reads y[60] and the pilot tone happens to sit at x==60, even though the curve is otherwise a single smooth hump with a single local maximum.
Comments, suggestions? Ways to ensure method 1 (‘cheapo’) can be made reliable?
The y data is presented as a list of ASCII decimal numbers separated by newlines. I also need a very cheap way of converting all those decimal strings into numbers in an array, as otherwise I can’t think of any way of reaching line 40, 60 or 85, say. I’m not sure I can afford a loop with 80 iterations or worse. I could do with some help with this problem, which despite sounding simple could be a showstopper given the limitations of the iOS Shortcuts engine.
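In C the conversion is one `strtol` walk (sketch below); in Shortcuts the rough equivalent would presumably be a Split Text on new lines followed by picking items by index, which avoids an explicit 80-iteration loop:

```c
#include <stdlib.h>

/* Parse a newline-separated list of ASCII decimal numbers into out[],
   returning the count. strtol leaves 'end' on the first unparsed
   character, so stepping past each newline walks the whole buffer. */
int parse_lines(const char *text, int out[], int max) {
    int n = 0;
    const char *p = text;
    while (n < max && *p) {
        char *end;
        long v = strtol(p, &end, 10);
        if (end == p) break;          /* no digits left: stop */
        out[n++] = (int)v;
        p = (*end == '\n') ? end + 1 : end;
    }
    return n;
}
```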