
This isn't the first time that "can Math.random() equal" has been asked:

Will JavaScript random function ever return a 0 or 1?

Is it possible for Math.random() === Math.random()

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/random - higher-level explanation

https://hackernoon.com/how-does-javascripts-math-random-generate-random-numbers-ef0de6a20131 - lower-level explanation

So, my question: can JavaScript's Math.random() ever exactly equal .5?

It fits within the documented range of >= 0 && < 1. But in practice, I've tried a few different approaches, the last one being:

while (Math.random() !== .5); // spins until an exact .5 turns up (may never terminate)

They all either time out or run forever without ever producing exactly .5.

Billions of attempts and several browser crashes later (Firefox 60+, x64), it still hasn't happened. Is it possible? Is it browser- or system-dependent? Or is it my lack of comprehension regarding statistical probabilities?
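For anyone who wants to reproduce this without hanging their browser, here's a capped version of the loop (the 1e9 cap is an arbitrary choice so the experiment always terminates):

```js
// Search for an exact 0.5 from Math.random(), giving up after a cap
// so the tab doesn't hang or crash.
const MAX_ATTEMPTS = 1e9; // arbitrary cap for illustration
let found = false;

for (let i = 0; i < MAX_ATTEMPTS; i++) {
  if (Math.random() === 0.5) {
    console.log(`Hit exactly 0.5 after ${i + 1} attempts`);
    found = true;
    break;
  }
}

if (!found) {
  console.log(`No exact 0.5 in ${MAX_ATTEMPTS} attempts`);
}
```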

asked by PoorlyWrittenCode

1 Answer


Great question! I am not a statistician, so anyone with more experience should feel free to hop in and correct me.

That being said, as far as I know, the answer to your question:

So, my question: can JavaScript's Math.random() ever exactly equal .5?

Is no! This answer was somewhat surprising to me, as it is counterintuitive; but I feel that this website offers a good explanation. The reason the answer is no is that you are constrained to the range (0, 1) (exclusive). This is a continuous distribution. So your question is essentially asking:

What is the probability of selecting a specific value from a continuous distribution?

The answer is 0. There are infinitely many possibilities when sampling from a continuous distribution (by definition). The probability of getting a specified value is therefore 1/infinity, which asymptotically approaches zero.
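To illustrate the limit argument (this is just the 1/n reasoning above, not a model of how Math.random() actually works):

```js
// The probability of picking one specific value out of n equally likely
// values is 1/n; it shrinks toward 0 as n grows without bound.
for (const n of [10, 1e3, 1e6, 1e9, 1e12]) {
  console.log(`n = ${n}: p = ${1 / n}`);
}
```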

answered by campellcl
    The range is [0, 1), and there are _finitely_ many IEEE-754 double-precision numbers (which is what JS is using). Also, in a continuous range, probability 0 does not imply impossible, so the answer “no” is definitely false. – Sebastian Simon Aug 28 '18 at 02:05
  • Appreciate your response. Not sure about the down votes; this question is open discourse. But I'll disagree with you due to Zeno's paradox. Infinity isn't a number, it's a concept. Therefore 1/infinity is also just a concept, and doesn't apply correctly to real-world math. – PoorlyWrittenCode Aug 28 '18 at 02:16
  • @Xufox Good catch. I am wrong on both accounts. OP did specify a programming language, the answer should be yes. It is possible, but incredibly unlikely. You are looking at a probability of 1/(2^{1023}) for signed floating point 64 bit IEEE 754 without a loss in precision (I believe). – campellcl Aug 28 '18 at 02:18
  • @PoorlyWrittenCode Well, the OP did specify a programming language. I was coming at it from a math standpoint instead of a CS standpoint. In IEEE 754, there is a finite number of unique values. That changes the answer completely. I am not sure that we can say infinity doesn't apply to real math given its importance in the realm of computational feasibility [(see Cantor's diagonalisation argument)](https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument). – campellcl Aug 28 '18 at 02:21
  • I think infinity has important implications for computer theory as well. Also notice that there are entire subfields of mathematics devoted to differentiating between types of infinities [(see Aleph number)](https://en.wikipedia.org/wiki/Aleph_number). – campellcl Aug 28 '18 at 02:23
  • @campellcl I randomly decided to teach myself JavaScript. That's the only reason I specified that language. If it works differently in other languages, that could be fun to discuss too. In my mind, the apogee of the bell curve would be at `.5`. But curiously that doesn't seem to be the case. – PoorlyWrittenCode Aug 28 '18 at 02:28
  • And 1/(2^{1023}) is definitely wrong as well, whoops. You would have to look up the bit allocation for IEEE 754 (which I have long since forgotten). There is a certain number of bits used to represent the exponent. We can exclude half of the signed numbers because they are negative, but we do have to put an upper bound on the range (I forgot that the question was only constrained to [0, 1)). – campellcl Aug 28 '18 at 02:29
  • @PoorlyWrittenCode It works pretty much the same in other languages as well. It depends on the data type (single, double, float, long), the number of bits used in the representation (32 vs 64), whether it is signed or unsigned, and the standards that dictate how many of the bits go to the sign, mantissa, and exponent. IEEE 754 is a pretty common standard, so this is pretty much language-agnostic. But again, it depends on the above. – campellcl Aug 28 '18 at 02:32
  • @PoorlyWrittenCode perhaps the apogee of the bell curve would be at .5 if we were sampling from a normal/Gaussian discrete distribution. I am unsure, though. I would love for someone to post a correct answer. – campellcl Aug 28 '18 at 02:35
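To make the IEEE-754 point in the comments above concrete: 0.5 is exactly representable as a binary64 double, and under the common model where an engine produces Math.random() as a uniform 53-bit integer divided by 2^53 (an assumption for illustration; real engines differ in their bit counts and algorithms), exactly one of those integers maps to 0.5:

```js
// 0.5 is exactly 2^-1, so it is representable without rounding error
// in IEEE-754 binary64; equality against it is meaningful, unlike 0.3:
console.log(0.5 === 1 / 2);       // true
console.log(0.1 + 0.2 === 0.3);   // false (classic rounding artifact)

// Hypothetical generator model: k / 2^53 for a uniform 53-bit k.
// Exactly one k (namely 2^52) maps to 0.5, so under this model the
// chance per call is 1 / 2^53, astronomically small but not 0.
const k = 2 ** 52;
console.log(k / 2 ** 53 === 0.5); // true
console.log(1 / 2 ** 53);         // 1.1102230246251565e-16
```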