
Basically, given a function that produces outputs like this for different parameters:

[Image: example plots of the function for several parameter settings; each curve decreases with x and then stays at zero after it first reaches it (e.g. the blue curve reaches zero near x = 134, the green curve near x = 56)]

I want to quickly find the first x at which the function equals 0. So with parameters that produce the blue curve over x, I want to find x=134. For the green curve, I want to find x=56, etc.

The function is not necessarily monotonically decreasing before it hits zero (I originally thought it would be, but I'm not sure of that).

I am sure that it will only hit 0 once, and then remain zero.

Currently I'm brute-forcing it by iterating through x values until I hit zero, but I want something that will be better at making educated guesses (based on slope?) and iterating.
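
A minimal sketch of that brute-force scan, for reference (the function `f`, the range, and the step size here are all placeholders):

def first_zero_bruteforce(f, start=0.0, stop=500.0, step=0.1):
    # Walk x forward in fixed steps and return the first x where f(x) == 0.
    x = start
    while x <= stop:
        if f(x) == 0:
            return x
        x += step
    return None  # no zero found in the scanned range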

Ideally I want to use something already-baked (since 90% of programmers can't even write a binary search correctly), like something from scipy.optimize, but it seems like those all want to find either a global minimum or a zero-crossing.

(This function is sort of a distance_to_the_wall of the RGB cube for a given chroma in LCh color space (so basically I'm building a "sanely clip to RGB" function), but since the mapping between RGB and LCh can vary by library and with parameters like the illuminant, I think it's best to just try a few values until the right one is found rather than trying to reverse-calculate the value directly?)

endolith
  • Is the function integer-ranged (or similarly discrete)? Otherwise, how can you possibly iterate through x values? – abarnert Apr 11 '13 at 23:30
  • Also, if you have a global minimum algorithm, and you can't find a zero-crossing algorithm… you can always use `global_minimum(lambda x: abs(foo(x)))`. (I'm not saying that's the _right_ solution, of course.) – abarnert Apr 11 '13 at 23:31
  • I do not suppose the equations are such that you can just solve them using calculus. In other words, you must use numerical methods to approximate? – vossad01 Apr 11 '13 at 23:31
  • Also, if I understand you correctly, given "I am sure that it will only hit 0 once, and then remain zero", there's exactly one point that's both a 0 and an inflection point. In other words, if you solve for `f(x) == 0 and Df(x) == 0`, that's your answer. Right? (And you can fake that without derivatives, which is what Powell, Nelder-Mead, etc. are for.) – abarnert Apr 11 '13 at 23:35
  • @abarnert: Ah, true. I was iterating by integers and calling it good enough, but they're really floats. I replaced the image with a more representative one. Function takes floats as inputs and outputs floats. – endolith Apr 11 '13 at 23:45
  • @vossad01: They could be calculated directly, but they're going through 3 different transformations, some of which have variable conditions, so I think this is simpler. – endolith Apr 11 '13 at 23:48
  • @abarnert: Yes, I think you're right. – endolith Apr 11 '13 at 23:51
  • Now that I think about it… if you just use a stock zero-finding algorithm, then binary-search between 0 and whatever it finds until you find the first non-zero, that'll probably be efficient enough. And dead simple. – abarnert Apr 11 '13 at 23:53 (a code sketch of this follows the comments)
  • Another approach to LCh-to-RGB is to build a table L, h -> Cmax, rgb. Cmax is on the surface of the rgb cube, so you can walk that and build a table once, e.g. 101 L x 180 h. Then given L, C, h, lookup Cmax: C > Cmax -> rgb, else the usual formulas -> rgb inside or very near the cube. (Ask a new question ~ "out-of-gamut LCh to RGB", with tag color-space ?) – denis Jun 12 '13 at 10:10
  • @denis: Yes, that was my previous approach, but it sometimes failed because "very near" could be outside the cube. Currently using a hybrid of this (as a first guess) and bisection. – endolith Jun 12 '13 at 13:37
  • Right. Fwiw, there's a simple constrain_rgb (better than clipping) under http://www.fourmilab.ch/documents/specrend . – denis Jun 12 '13 at 15:29
  • @denis: That sounds like what I'm doing, except in Lch space. My intent was to produce *only* colors on the surface of the cube, for a given lightness and hue: http://flic.kr/p/e3YdyZ – endolith Jun 12 '13 at 18:21
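
Fleshing out that last suggestion with a rough sketch: a bracketing root finder needs a sign change, which this function never has (it only touches zero and stays there), so this instead uses scipy.optimize.minimize_scalar to land somewhere in the zero region and then bisects back to the boundary. The function name `f`, the bracket `[a, b]`, and the assumption that f is non-negative with f(a) > 0 and stays exactly zero past the boundary are placeholders/assumptions, not details from the question:

from scipy import optimize

def first_zero_via_scipy(f, a, b, tol=1e-6):
    # Step 1: find *some* point where f is zero by minimizing it over [a, b]
    # (f >= 0 everywhere, so any point with value 0 is inside the zero region).
    res = optimize.minimize_scalar(f, bounds=(a, b), method='bounded')
    if f(res.x) != 0:
        raise ValueError("no zero found in [a, b]")
    lo, hi = a, res.x  # f(lo) > 0 assumed; f(hi) == 0
    # Step 2: bisect between the nonzero end and the known zero to find
    # where the zero region starts.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) == 0:
            hi = mid
        else:
            lo = mid
    return hi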

3 Answers


Try bisection: Check if it's 0 in the middle of your interval; if it is, continue on the left, otherwise, on the right. Do the same thing with the reduced interval recursively until you're close enough. This method is of complexity O(log n) compared to yours, which is O(n).

Elmar Peise
  • Just to clarify - The function doesn't even need to be monotonically decreasing. The "once zero, always zero" means you essentially already have a "sorted" list where all the zeros are on one side and you're just searching for the first. – job Apr 16 '13 at 17:59

Here is some code to flesh out @ExP's bisection/binary search answer:

def find_first_zero(func, lo, hi, tol=1e-3):
    """Bisect [lo, hi] for the first x at which func(x) == 0.

    Assumes func(lo) != 0, func(hi) == 0, and that func stays at zero
    once it first reaches zero.
    """
    lo, hi = float(lo), float(hi)
    assert (hi + tol) > hi  # tol must still be significant at this magnitude
    while (hi - lo) > tol:
        mid = (lo + hi) / 2
        if func(mid) == 0:
            hi = mid  # mid is already in the zero region; the boundary is to its left
        else:
            lo = mid  # mid is still nonzero; the boundary is to its right
    return hi
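
For example, with a hypothetical test function whose zero region starts at x = 134 (like the blue curve in the question), this would be called as:

# Placeholder function: positive below 134, exactly zero from 134 onward.
f = lambda x: max(134.0 - x, 0.0)

print(find_first_zero(f, 0, 500))  # ~134.0, to within the default 1e-3 tolerance
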
endolith
Bi Rico

If not for the fact that the right-hand part of the curve is 0 everywhere, Newton's method ( https://en.wikipedia.org/wiki/Newton's_method ) would work great. But I think a variant will still work fine:

1) Pick a point.

2) If we are on a slope, take the local gradient, trace the tangent line from there to its x-intercept, and take that as your new point (go to 1).

3) If we are on the flat plain (f(x) = 0, derivative = 0), then if a bit to the left is a slope (you'd have to tune how far left to check), do a local search (probably a binary search with a tolerance) to find the point at which the function first equals zero. If not, take the midpoint between this point and the last point on a slope we tried (go to 1 with this new point).

To estimate the derivative (to determine if you are on a slope or not), you can sample a point to the left and to the right, just far enough away that you're confident you'll get a smooth approximation of the derivative.
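
A rough sketch of this variant (not Patashu's exact code: `f`, the finite-difference step `h`, the tolerance, and the iteration limit are placeholder choices, and it assumes the starting point is on the nonzero part of the curve):

def first_zero_newtonish(f, x, h=1e-3, tol=1e-6, max_iter=100):
    # Modified Newton iteration as described above: follow the slope while
    # f is nonzero, back off when we overshoot onto the flat (all-zero) region.
    last_slope_x = x  # last point seen where f was nonzero
    for _ in range(max_iter):
        fx = f(x)
        if fx != 0:
            # On the slope: estimate the derivative with central differences
            # and jump to the x-intercept of the local tangent line.
            last_slope_x = x
            dfx = (f(x + h) - f(x - h)) / (2 * h)
            if dfx == 0:
                break  # locally flat but nonzero; Newton can't proceed
            x = x - fx / dfx
        else:
            # On the flat region: check a little to the left.
            if f(x - h) != 0:
                # The boundary is within h of x; finish with a local bisection.
                lo, hi = x - h, x
                while hi - lo > tol:
                    mid = 0.5 * (lo + hi)
                    if f(mid) == 0:
                        hi = mid
                    else:
                        lo = mid
                return hi
            # Still deep in the flat region: step halfway back toward the
            # last point that was on the slope.
            x = 0.5 * (x + last_slope_x)
    return x  # fallback: best estimate after max_iter iterations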

Patashu
  • Newton as it is won't work: It's gradient based (so you need to get that from somewhere in the first place) and the function's gradient is 0 once its value hits 0. This would lead to division by 0 --> failure. The wiki article you reference also shows other cases in which Newton won't work. – Elmar Peise Apr 12 '13 at 00:01
  • @ExP That is why I suggest a modified version of Newton's method. Read my answer :) – Patashu Apr 12 '13 at 00:05
  • "If we are on a slope" and "if we are on a plain" is not something that you know after picking a point, though, it needs to be calculated as a separate step, no? – endolith Jun 12 '13 at 13:39
  • @endolith You can determine it by sampling a point to your left and a point to your right at sufficient distance such that you're confident the graph is smooth on that scale. – Patashu Jun 12 '13 at 21:05
  • @Patashu: I mean that you should change your directions to specify this in detail. :) – endolith Jun 12 '13 at 21:46