Jumping off from a previous question I asked a while back:
Why is `1000000000000000 in range(1000000000000001)` so fast in Python 3?
If you do this:

```python
1000000000000000.0 in range(1000000000000001)
```

...it is clear that `range` has not been optimized to check whether `float`s are within the specified range.
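To make the asymmetry concrete, here is a small timing sketch (the specific numbers and range sizes are my own arbitrary choices): an exact `int` is checked arithmetically in constant time, while a `float` triggers a linear scan, so the float check below is run on a far smaller range just so it finishes promptly.

```python
import timeit

# O(1) fast path: membership on an exact int is pure arithmetic,
# so even an astronomically large range answers instantly.
int_time = timeit.timeit(
    lambda: 1000000000000000 in range(1000000000000001), number=100)

# Fallback path: a float forces an element-by-element == scan,
# so we use a much smaller range to keep this measurable.
float_time = timeit.timeit(
    lambda: 99999.0 in range(100000), number=100)

print(f"int:   {int_time:.6f}s")
print(f"float: {float_time:.6f}s")
```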
I think I understand that the intended purpose of `range` is to work with `int`s only - so you cannot, for example, do something like this:

```python
1000000000000 in range(1000000000000001.0)
# error: float object cannot be interpreted as an integer
```

Or this:

```python
1000000000000 in range(0, 1000000000000001, 1.0)
# error: float object cannot be interpreted as an integer
```
However, the decision was made, for whatever reason, to allow things like this:

```python
1.0 in range(1)
```
It seems clear that `1.0` (and `1000000000000000.0` above) are not being coerced into `int`s, because then the `int` optimization would work for those as well.
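For what it's worth, CPython's `range.__contains__` (in `Objects/rangeobject.c`) only takes the arithmetic fast path when the operand is an exact `int` (or `bool`); anything else falls through to the generic sequence search. A rough pure-Python sketch of that dispatch (the function name is mine, and the real code is C):

```python
def range_contains(r, ob):
    # Approximate sketch of CPython's range.__contains__ logic.
    if type(ob) is int or type(ob) is bool:
        # Fast path: pure arithmetic, O(1) regardless of range size.
        if r.step > 0:
            in_bounds = r.start <= ob < r.stop
        else:
            in_bounds = r.stop < ob <= r.start
        return in_bounds and (ob - r.start) % r.step == 0
    # Slow path: floats (and everything else) are compared with ==
    # against each element in turn.
    return any(ob == item for item in r)
```

Note that `1.0` lands on the slow path even though `1.0 == 1`, because the dispatch is on the operand's exact type, not its value.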
My question is: why the inconsistency, and why no optimization for `float`s? Or, alternatively, what is the rationale for why the above code does not produce the same error as the previous examples?
This seems like an obvious optimization to include alongside the optimization for `int`s. I'm guessing there are some nuanced issues preventing a clean implementation of such an optimization, or alternatively there is some rationale for why you would not actually want to include it. Or possibly both.
EDIT: To clarify the issue here a bit, all of the following statements evaluate to `False` as well:

```python
3.2 in range(5)
'' in range(1)
[] in range(1)
None in range(1)
```
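One way to confirm that these checks fall back to an element-by-element `==` comparison (rather than erroring out or converting the operand) is to pass in an object that counts how often it is compared; the `Probe` class below is mine, purely for illustration:

```python
class Probe:
    # Counts how many times it is compared for equality;
    # never claims to equal anything.
    def __init__(self):
        self.comparisons = 0

    def __eq__(self, other):
        self.comparisons += 1
        return False

p = Probe()
print(p in range(5))   # False
print(p.comparisons)   # 5: one == per element of the range
```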
This seems like unexpected behavior to me, but so far there is definitely no inconsistency. However, the following evaluates to `True`:

```python
1.0 in range(2)
```
And as shown previously, constructions similar to the above have not been optimized.

This does seem inconsistent: at some point in the evaluation, the value `1.0` (or `1000000000000.0` as in the earlier example) is being coerced into an `int`. This makes sense, since it is a natural thing to convert a `float` ending in `.0` to an `int`. However, the question still remains: if it is being converted to an `int` anyway, why has

```python
1000000000000.0 in range(1000000000001)
```

not been optimized?
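For completeness, the optimization the question asks about is easy to sketch at the Python level (the helper name `in_range` is mine): an integral `float` can be converted and sent down the `int` fast path, and a non-integral one can never match an `int` element.

```python
def in_range(r, x):
    # Hypothetical float-aware membership check, sketching the
    # optimization the question asks about.
    if isinstance(x, float):
        if not x.is_integer():
            return False   # 3.2 can never equal any int element
        x = int(x)         # 1000000000000.0 -> 1000000000000
    return x in r          # exact ints take range's O(1) path

print(in_range(range(1000000000001), 1000000000000.0))  # True, instantly
print(in_range(range(5), 3.2))                          # False
```

One wrinkle that hints at why CPython doesn't do this by default: a subclass of `float` (or any other object) can override `__eq__`, so silently converting the operand could change the result of the membership test, which is presumably why the built-in fast path only fires for *exact* `int`s.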