Say I have the following graph:
images, labels = load_batch(...)
with slim.arg_scope(inception_resnet_v2_arg_scope()):
logits, end_points = inception_resnet_v2(images, num_classes = dataset.num_classes, is_training = True)
predictions = tf.argmax(end_points['Predictions'], 1)
accuracy, accuracy_update = tf.contrib.metrics.streaming_accuracy(predictions, labels)
....
train_op = slim.learning.create_train_op(...)
and, within the graph context of a supervisor managed_session as sess, I run the following every once in a while:
print sess.run(logits)
print sess.run(end_points['Predictions'])
print sess.run(predictions)
print sess.run(labels)
Does each of these sess.run calls actually pull in a different batch, given that the batch tensor has to flow from load_batch onwards before it ever reaches logits, predictions, or labels? I ask because when I run each of these, I get very confusing results: the printed predictions do not even match tf.argmax(end_points['Predictions'], 1), and despite the model reporting high accuracy, the predictions do not remotely match the labels in a way that could produce that accuracy. I therefore suspect that the results of each sess.run come from different batches of data.
This brings me to my next question: is there a way to inspect the results of different parts of the graph as a single batch from load_batch flows all the way to the train_op, where the actual sess.run happens? In other words, is there a way to do what I want without issuing additional sess.run calls?
Also, if I were to inspect the results with extra sess.run calls in this way, would it affect my training by consuming batches of data that then never reach the train_op?
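To make the behavior I suspect concrete, here is a plain-Python analogy (not TensorFlow): run is a hypothetical stand-in for sess.run, and batches plays the role of the input queue, which yields a fresh batch every time the graph is evaluated.

```python
# Analogy: the input queue yields a new batch each time it is advanced.
batches = iter(range(100))  # pretend each int is one batch of data

def run(fetches):
    # One call advances the pipeline once; every fetch passed in the
    # same call sees the same batch.
    batch = next(batches)
    return [fetch(batch) for fetch in fetches]

predictions = lambda b: b  # "predictions" computed from the batch
labels = lambda b: b       # "labels" belonging to the batch

# Separate calls draw separate batches, so the values can't be compared:
(p,) = run([predictions])
(l,) = run([labels])
assert (p, l) == (0, 1)  # came from different batches

# A single call fetching a list sees one consistent batch:
p, l = run([predictions, labels])
assert p == l == 2

# Note: the three calls above consumed batches 0-2, so a subsequent
# train_op evaluation would start from batch 3 and never see them.
```

If this analogy is right, it would explain both the mismatched predictions/labels and the concern about skipped batches.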