45

Suppose you have a list of floating point numbers that are approximately multiples of a common quantity, for example

2.468, 3.700, 6.1699

which are approximately all multiples of 1.234. How would you characterize this "approximate gcd", and how would you proceed to compute or estimate it?

Strictly related to my answer to this question.

Federico A. Ramponi
    From your other question, you're detecting the frequency of piano tones. Note that pianos are not harmonic. The higher frequencies were never integer multiples of the fundamental to begin with: they're slightly sharp because the string behaves as if it is shorter at higher frequencies. Because of this, the piano tuner stretches the scale slightly to minimize beating between partials and maximize consonance: https://en.wikipedia.org/wiki/Inharmonicity#Pianos – endolith Jul 23 '12 at 14:00

8 Answers

25

You can run Euclid's gcd algorithm with anything smaller than 0.01 (or a small number of your choice) treated as a pseudo 0. With your numbers:

3.700 = 1 * 2.468 + 1.232,
2.468 = 2 * 1.232 + 0.004. 

So the pseudo gcd of the first two numbers is 1.232. Now you take the gcd of this with your last number:

6.1699 = 5 * 1.232 + 0.0099.

So 1.232 is the pseudo gcd, and the multiples are 2, 3, 5. To improve this result, you may take the linear regression on the data points:

(2,2.468), (3,3.7), (5,6.1699).

The slope is the improved pseudo gcd.

Caveat: the first part of this algorithm is numerically unstable - if you start with very dirty data, you are in trouble.
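The steps above can be sketched in Python as follows; the 0.01 pseudo-zero threshold and the regression-through-the-origin refinement follow the answer, but the function names are my own.

```python
EPS = 0.01  # "pseudo zero" threshold from the answer

def fuzzy_gcd2(a, b, eps=EPS):
    """Euclid's algorithm, treating any remainder below eps as zero."""
    a, b = max(a, b), min(a, b)
    while b > eps:
        a, b = b, a % b
    return a

def fuzzy_gcd(values, eps=EPS):
    """Fold the pairwise pseudo-gcd over the whole list."""
    g = values[0]
    for v in values[1:]:
        g = fuzzy_gcd2(g, v, eps)
    return g

g = fuzzy_gcd([2.468, 3.700, 6.1699])
print(g)  # ~1.232

# Refinement: least-squares slope through the origin on (multiple, value)
pts = [(round(v / g), v) for v in [2.468, 3.700, 6.1699]]  # multiples 2, 3, 5
slope = sum(m * v for m, v in pts) / sum(m * m for m, _ in pts)
print(slope)  # ~1.2338
```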

David Lehavi
14

Express your measurements as multiples of the lowest one. Thus your list becomes 1.00000, 1.49919, 2.49996. The fractional parts of these values will be very close to multiples of 1/N, for some value of N dictated by how close your lowest value is to the fundamental frequency. I would suggest looping through increasing N until you find a sufficiently refined match. In this case, for N=1 (that is, assuming X=2.468 is your fundamental frequency) you would find a standard deviation of 0.3333 (two of the three values are .5 off of a multiple of X), which is unacceptably high. For N=2 (that is, assuming 2.468/2 is your fundamental frequency) you would find a standard deviation of virtually zero (all three values are within .001 of a multiple of X/2), thus 2.468/2 is your approximate GCD.

The major flaw in my plan is that it works best when the lowest measurement is the most accurate, which is likely not the case. This could be mitigated by performing the entire operation multiple times, discarding the lowest value from the list of measurements each time, then using the list of results from each pass to determine a more precise result. Another way to refine the results would be to adjust the GCD to minimize the standard deviation between integer multiples of the GCD and the measured values.
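A minimal sketch of this search, assuming a 0.01 absolute tolerance, trying N up to 10, and using the maximum absolute deviation in place of the answer's standard deviation for brevity:

```python
def find_fundamental(values, max_n=10, tol=0.01):
    """Try lowest/N as the fundamental for N = 1, 2, ... until all values fit."""
    lowest = min(values)
    for n in range(1, max_n + 1):
        candidate = lowest / n
        # absolute distance of each value from its nearest multiple of candidate
        devs = [abs(v - round(v / candidate) * candidate) for v in values]
        if max(devs) < tol:
            return candidate
    return None

print(find_fundamental([2.468, 3.700, 6.1699]))  # 2.468/2 = 1.234
```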

Sparr
14

This reminds me of the problem of finding good rational-number approximations of real numbers. The standard technique is a continued-fraction expansion:

def rationalizations(x):
    """Yield successive continued-fraction convergents (numerator, denominator) of x."""
    assert 0 <= x
    ix = int(x)
    yield ix, 1
    if x == ix:
        return
    for numer, denom in rationalizations(1.0 / (x - ix)):
        yield denom + ix * numer, numer

We could apply this directly to Jonathan Leffler's and Sparr's approach:

>>> import itertools
>>> a, b, c = 2.468, 3.700, 6.1699
>>> b/a, c/a
(1.4991896272285252, 2.4999594813614263)
>>> list(itertools.islice(rationalizations(b/a), 3))
[(1, 1), (3, 2), (925, 617)]
>>> list(itertools.islice(rationalizations(c/a), 3))
[(2, 1), (5, 2), (30847, 12339)]

picking off the first good-enough approximation from each sequence. (3/2 and 5/2 here.) Or instead of directly comparing 3.0/2.0 to 1.499189..., you could notice that 925/617 uses much larger integers than 3/2, making 3/2 an excellent place to stop.

It shouldn't much matter which of the numbers you divide by. (Using a/b and c/b you get 2/3 and 5/3, for instance.) Once you have integer ratios, you could refine the implied estimate of the fundamental using shsmurfy's linear regression. Everybody wins!

Darius Bacon
5

I'm assuming all of your numbers are integer multiples of a common root value. For the rest of my explanation, A will denote the "root" frequency you are trying to find and B will be an array of the numbers you have to start with.

What you are trying to do is superficially similar to linear regression. You are trying to find a linear model y=mx+b that minimizes the average distance between the model and a set of data. In your case, b=0, m is the root frequency, and y represents the given values. The biggest problem is that the independent variables X are not explicitly given. The only thing we know about X is that all of its members must be integers.

Your first task is to determine these independent variables. The best method I can think of at the moment assumes that the given frequencies have nearly consecutive indexes (x_1=x_0+n). So B_0/B_1=(x_0)/(x_0+n) for a (hopefully) small integer n. You can then take advantage of the fact that x_0 = n*B_0/(B_1-B_0): start with n=1 and keep ratcheting it up until x_0-rnd(x_0) is within a certain threshold. After you have x_0 (the initial index), you can approximate the root frequency (A = B_0/x_0). Then you can approximate the other indexes by finding x_n = rnd(B_n/A). This method is not very robust and will probably fail if the error in the data is large.

If you want a better approximation of the root frequency A, you can use linear regression to minimize the error of the linear model now that you have the corresponding dependent variables. The easiest method to do so uses least squares fitting. Wolfram's Mathworld has an in-depth mathematical treatment of the issue, but a fairly simple explanation can be found with some googling.
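Both steps can be sketched as follows; `tol` and `max_n` are assumed parameters, and the final line is the least-squares fit with the intercept fixed at zero:

```python
def estimate_root(B, tol=0.01, max_n=20):
    """Estimate A assuming B holds noisy integer multiples of A."""
    B = sorted(B)
    for n in range(1, max_n + 1):
        # from B_0/B_1 = x_0/(x_0 + n):  x_0 = n*B_0/(B_1 - B_0)
        x0 = n * B[0] / (B[1] - B[0])
        if abs(x0 - round(x0)) < tol:
            break
    A = B[0] / round(x0)               # first estimate of the root
    X = [round(b / A) for b in B]      # indexes x_n = rnd(B_n / A)
    # least squares with the intercept fixed at 0: A = sum(x*y) / sum(x*x)
    return sum(x * b for x, b in zip(X, B)) / sum(x * x for x in X)

print(estimate_root([2.468, 3.700, 6.1699]))  # ~1.2338
```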

shsmurfy
4

Interesting question...not easy.

I suppose I would look at the ratios of the sample values:

  • 3.700 / 2.468 = 1.499...
  • 6.1699 / 2.468 = 2.4999...
  • 6.1699 / 3.700 = 1.6675...

And I'd then be looking for a simple ratio of integers in those results.

  • 1.499 ~= 3/2
  • 2.4999 ~= 5/2
  • 1.6675 ~= 5/3

I haven't chased it through, but somewhere along the line, you decide that an error of 1:1000 or something is good enough, and you back-track to find the base approximate GCD.
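With Python's standard library this ratio hunt can be sketched directly; the denominator limit of 10 and the use of the lowest value as the reference are my own choices (and `math.lcm` needs Python 3.9+):

```python
from fractions import Fraction
from math import lcm  # Python 3.9+

values = [2.468, 3.700, 6.1699]
lowest = min(values)
# approximate each ratio by a fraction with a small denominator
ratios = [Fraction(v / lowest).limit_denominator(10) for v in values]
# ratios are 1, 3/2 and 5/2 here, so the base is lowest divided by lcm(2, 2) = 2
base = lowest / lcm(*(r.denominator for r in ratios))
print(base)  # 1.234
```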

Jonathan Leffler
3

The solution which I've seen and used myself is to choose some constant, say 1000, multiply all numbers by this constant, round them to integers, find the GCD of these integers using the standard algorithm and then divide the result by the said constant (1000). The larger the constant, the higher the precision.
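A minimal sketch, with the constant 1000 from the answer:

```python
from math import gcd
from functools import reduce

def approx_gcd(values, scale=1000):
    """Scale to integers, take the exact integer gcd, scale back."""
    ints = [round(v * scale) for v in values]
    return reduce(gcd, ints) / scale

print(approx_gcd([1.234, 2.468, 6.170]))   # 1.234 on clean data
print(approx_gcd([2.468, 3.700, 6.1699]))  # collapses to 0.002: 3.700 is not
                                           # an exact multiple at this scale
```

The second call illustrates the fragility pointed out in the comment below: one value that rounds off a multiple drags the integer gcd down.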

quant_dev
  • This won't work if there's a very slight error in just one of the numbers -- for example, `1.234,2.468` gives `1.234` but `1.234,2.467` gives `0.001`. – user202729 Feb 23 '19 at 10:27
1

This is a reformulation of shsmurfy's solution when you choose 3 positive tolerances (e1,e2,e3) a priori.
The problem is then to search for the smallest positive integers (n1,n2,n3), and thus the largest root frequency f, such that:

f1 = n1*f +/- e1
f2 = n2*f +/- e2
f3 = n3*f +/- e3

We assume 0 <= f1 <= f2 <= f3.
If we fix n1, then we get these relations:

f  is in interval I1=[(f1-e1)/n1 , (f1+e1)/n1]
n2 is in interval I2=[n1*(f2-e2)/(f1+e1) , n1*(f2+e2)/(f1-e1)]
n3 is in interval I3=[n1*(f3-e3)/(f1+e1) , n1*(f3+e3)/(f1-e1)]

We start with n1 = 1, then increment n1 until the intervals I2 and I3 each contain an integer - that is, until floor(I2min) differs from floor(I2max), and likewise for I3.
We then choose the smallest integer n2 in interval I2 and the smallest integer n3 in interval I3.

Assuming normal distribution of floating point errors, the most probable estimate of root frequency f is the one minimizing

J = (f1/n1 - f)^2 + (f2/n2 - f)^2 + (f3/n3 - f)^2

That is

f = (f1/n1 + f2/n2 + f3/n3)/3

If there are several integers n2, n3 in intervals I2, I3, we could also choose the pair that minimizes the residue

min(J)*3/2=(f1/n1)^2+(f2/n2)^2+(f3/n3)^2-(f1/n1)*(f2/n2)-(f1/n1)*(f3/n3)-(f2/n2)*(f3/n3)

Another variant could be to continue the iteration and try to minimize another criterion like min(J(n1))*n1, until f falls below a certain frequency (n1 reaches an upper limit)...
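A direct sketch of the interval search above, with tolerances e1 = e2 = e3 = 0.01 and the n1 upper limit assumed:

```python
import math

def interval_gcd(f1, f2, f3, e1=0.01, e2=0.01, e3=0.01, max_n1=100):
    """Increment n1 until intervals I2 and I3 both contain an integer."""
    for n1 in range(1, max_n1 + 1):
        lo2, hi2 = n1 * (f2 - e2) / (f1 + e1), n1 * (f2 + e2) / (f1 - e1)
        lo3, hi3 = n1 * (f3 - e3) / (f1 + e1), n1 * (f3 + e3) / (f1 - e1)
        if math.ceil(lo2) <= math.floor(hi2) and math.ceil(lo3) <= math.floor(hi3):
            n2, n3 = math.ceil(lo2), math.ceil(lo3)  # smallest integers in I2, I3
            # the f minimizing J = sum (fi/ni - f)^2 is the mean of the fi/ni
            return (f1 / n1 + f2 / n2 + f3 / n3) / 3
    return None

print(interval_gcd(2.468, 3.700, 6.1699))  # ~1.2338 with (n1, n2, n3) = (2, 3, 5)
```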

aka.nice
1

I found this question while looking for answers to mine on Math StackExchange (here and here).

So far I've only managed to measure the appeal of a fundamental frequency given a list of harmonic frequencies (following the sound/music nomenclature), which can be useful if you have a reduced number of options and it is feasible to compute the appeal of each one and then choose the best fit.

C&P from my question in MSE (there the formatting is prettier):

  • let v be the list {v_1, v_2, ..., v_n}, ordered from lowest to highest
  • mean_sin(v, x) = sum(sin(2*pi*v_i/x), for i in {1, ...,n})/n
  • mean_cos(v, x) = sum(cos(2*pi*v_i/x), for i in {1, ...,n})/n
  • gcd_appeal(v, x) = 1 - sqrt(mean_sin(v, x)^2 + (mean_cos(v, x) - 1)^2)/2, which yields a number in the interval [0,1].

The goal is to find the x that maximizes the appeal. Here is the gcd_appeal graph for your example [2.468, 3.700, 6.1699], where you find that the optimum GCD is at x = 1.2337899957639993.
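For reference, a minimal Python sketch of gcd_appeal together with a brute-force grid search over x (the grid bounds and step are my own choices):

```python
import math

def gcd_appeal(v, x):
    """Appeal of x as a common divisor of the values in v (1 = perfect)."""
    n = len(v)
    mean_sin = sum(math.sin(2 * math.pi * vi / x) for vi in v) / n
    mean_cos = sum(math.cos(2 * math.pi * vi / x) for vi in v) / n
    return 1 - math.sqrt(mean_sin ** 2 + (mean_cos - 1) ** 2) / 2

v = [2.468, 3.700, 6.1699]
# brute-force grid search over x in [1, 2] with step 1e-4
best = max((k / 10000 for k in range(10000, 20001)),
           key=lambda x: gcd_appeal(v, x))
print(best)  # ~1.2338
```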

Edit: You may find this Java code handy for calculating the (fuzzy) divisibility (aka gcd_appeal) of a divisor relative to a list of dividends; you can use it to test which of your candidates makes the best divisor. The code looks ugly because I tried to optimize it for performance.

    //returns the mean divisibility of dividend/divisor as a value in the range [0, 1]
    // 0 means no divisibility at all
    // 1 means full divisibility
    public double divisibility(double divisor, double... dividends) {
        double n = dividends.length;
        double factor = 2.0 / divisor;
        double sum_x = -n;
        double sum_y = 0.0;
        double[] coord = new double[2];
        for (double v : dividends) {
            coordinates(v * factor, coord);
            sum_x += coord[0];
            sum_y += coord[1];
        }
        double err = 1.0 - Math.sqrt(sum_x * sum_x + sum_y * sum_y) / (2.0 * n);
        //Might happen due to approximation error
        return err >= 0.0 ? err : 0.0;
    }

    private void coordinates(double x, double[] out) {
        //Bhaskara performant approximation to
        //out[0] = Math.cos(Math.PI*x);
        //out[1] = Math.sin(Math.PI*x);
        long cos_int_part = (long) (x + 0.5);
        long sin_int_part = (long) x;
        double rem = x - cos_int_part;
        if (cos_int_part != sin_int_part) {
            double common_s = 4.0 * rem;
            double cos_rem_s = common_s * rem - 1.0;
            double sin_rem_s = cos_rem_s + common_s + 1.0;
            out[0] = (((cos_int_part & 1L) * 8L - 4L) * cos_rem_s) / (cos_rem_s + 5.0);
            out[1] = (((sin_int_part & 1L) * 8L - 4L) * sin_rem_s) / (sin_rem_s + 5.0);
        } else {
            double common_s = 4.0 * rem - 4.0;
            double sin_rem_s = common_s * rem;
            double cos_rem_s = sin_rem_s + common_s + 3.0;
            double common_2 = ((cos_int_part & 1L) * 8L - 4L);
            out[0] = (common_2 * cos_rem_s) / (cos_rem_s + 5.0);
            out[1] = (common_2 * sin_rem_s) / (sin_rem_s + 5.0);
        }
    }
jmmurillo