I am trying to generate N sets of independent random numbers. The simple code below reproduces the problem for 3 sets of 10 random numbers each. Even though I call tf.set_random_seed to set the seed, the results of different runs do not look alike. Any help or comments are greatly appreciated.
(py3p6) bash-3.2$ cat test.py
import tensorflow as tf

for i in range(3):
    tf.set_random_seed(1234)
    generate = tf.random_uniform((10,), 0, 10)
    with tf.Session() as sess:
        b = sess.run(generate)
        print(b)
This is the output of the code:
[9.604688 5.811516 6.4159 9.621765 0.5434954 4.1893444 5.8865128
7.9785547 8.296125 8.388672 ]
[8.559105 3.2390785 6.447526 8.316823 1.6297233 1.4103293 2.647568
2.954973 6.5975866 7.494894 ]
[2.0277488 6.6134906 0.7579422 4.6359386 6.97507 3.3192968 2.866236
2.2205782 6.7940736 7.2391043]
I want something like this:
[9.604688 5.811516 6.4159 9.621765 0.5434954 4.1893444 5.8865128
7.9785547 8.296125 8.388672 ]
[9.604688 5.811516 6.4159 9.621765 0.5434954 4.1893444 5.8865128
7.9785547 8.296125 8.388672 ]
[9.604688 5.811516 6.4159 9.621765 0.5434954 4.1893444 5.8865128
7.9785547 8.296125 8.388672 ]
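For comparison, the behavior I am after is easy to demonstrate in plain NumPy (this sketch is independent of TensorFlow): re-creating a generator with the same seed before each draw reproduces the identical sequence every time.

```python
import numpy as np

draws = []
for i in range(3):
    # fresh generator with the same seed on every iteration
    rng = np.random.RandomState(1234)
    draws.append(rng.uniform(0, 10, size=10))

# all three rows are identical
assert all(np.array_equal(draws[0], d) for d in draws)
```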
Update 1: The reason I put the seed initializer inside the for loop is that I want to set the seeds differently (think of different MCMC runs, for instance). The code below does the job, but I am not sure whether it is efficient. Basically, I generate a few random seeds between 0 and 2^32-1 and use a different one for each run. Any help or comments to make it more memory/RAM efficient are greatly appreciated.
import numpy as np
import tensorflow as tf

global_seed = 42
N_chains = 5

np.random.seed(global_seed)
# high is exclusive, so 2**32 makes the largest possible seed 2**32 - 1
seeds = np.random.randint(0, 2**32, size=N_chains)

for i in range(N_chains):
    tf.set_random_seed(seeds[i])
    # .... some stuff ....
    kernel_initializer = tf.random_normal_initializer(seed=seeds[i])
    # .... some stuff ....
    with tf.Session() as sess:
        # .... some stuff ....
        # .
        # .
        # .
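As a possibly cleaner alternative for deriving the per-chain seeds (a sketch, assuming NumPy >= 1.17; SeedSequence is not in the original code), np.random.SeedSequence.spawn produces statistically independent child seeds from one global seed, which avoids the small chance of collisions when drawing seeds with randint:

```python
import numpy as np

global_seed = 42
N_chains = 5

# one independent child SeedSequence per chain
ss = np.random.SeedSequence(global_seed)
child_seeds = ss.spawn(N_chains)

# reduce each child to a plain uint32 integer, e.g. to pass
# to tf.set_random_seed inside the loop
int_seeds = [int(cs.generate_state(1)[0]) for cs in child_seeds]
```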