
That is to say, suppose I have the following graph:

images, labels = load_batch(...)

with slim.arg_scope(inception_resnet_v2_arg_scope()):
    logits, end_points = inception_resnet_v2(images, num_classes=dataset.num_classes, is_training=True)

predictions = tf.argmax(end_points['Predictions'], 1)
accuracy, accuracy_update = tf.contrib.metrics.streaming_accuracy(predictions, labels)

....

train_op = slim.learning.create_train_op(...)

and, inside the graph context, within a supervisor's managed_session as sess, I run the following every once in a while:

print sess.run(logits)
print sess.run(end_points['Predictions'])
print sess.run(predictions)
print sess.run(labels)

Do these actually pull in a different batch for each sess.run, given that each batch tensor has to flow from load_batch onwards before it ever reaches logits, predictions, or labels? When I run each of these, I get very confusing results: the printed predictions do not even match tf.argmax(end_points['Predictions'], 1), and although the model reports high accuracy, the predictions do not come anywhere near matching the labels in a way that would explain that accuracy. I therefore suspect that each sess.run result comes from a different batch of data.
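
For example, even with a plain FIFOQueue (just a toy illustration, not my actual graph), every sess.run of a queue-fed tensor consumes a fresh element:

import tensorflow as tf

# Toy example only: each sess.run of a queue-fed tensor dequeues a new element,
# so two separate runs never see the same data.
q = tf.FIFOQueue(capacity=4, dtypes=tf.int32)
enqueue = q.enqueue_many([[1, 2, 3, 4]])
value = q.dequeue()

with tf.Session() as sess:
    sess.run(enqueue)
    print sess.run(value)  # prints 1
    print sess.run(value)  # prints 2 -- a different element on each call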

This brings me to my next question: is there a way to inspect the results of different parts of the graph as a single batch from load_batch flows all the way through to the train_op, where sess.run is actually executed? In other words, is there a way to do what I want without calling another sess.run?

Also, if I check the results with extra sess.run calls in this way, will it affect my training, in that some batches of data will be consumed by the inspection and never reach the train_op?

kwotsin

1 Answer


I realized the problem with using separate sess.run calls: each call loads a different batch of data. Instead, when I did something like the following, fetching everything in a single sess.run:

# Fetch all the values in a single sess.run so they all come from the same
# batch (probabilities here would be end_points['Predictions'] from the graph
# above); new names on the left avoid shadowing the graph tensors with numpy arrays.
np_logits, np_probabilities, np_predictions, np_labels = sess.run(
    [logits, probabilities, predictions, labels])
print 'Logits: \n', np_logits
print 'Probabilities: \n', np_probabilities
print 'Predictions: \n', np_predictions
print 'Labels: \n', np_labels

All the quantities coincide exactly as I had expected. I also tried tf.Print by writing something like:

logits = tf.Print(logits, [logits], message='logits: \n', summarize=100)

immediately after defining logits, so that the values get printed in the same session call that runs the train_op. However, the output is rather messy, so I prefer the first method: run everything in a single sess.run to obtain the values, and then print them normally as numpy arrays.
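
To answer the last part of my question directly: if the tensors I want to inspect are fetched in the same sess.run as the train_op, nothing extra is executed and no batches are skipped. Below is a minimal sketch of what I mean (sv, num_steps, and the logging interval are hypothetical placeholders; it assumes the training loop is driven manually inside the supervisor's managed_session, and relies on the fact that the op returned by slim.learning.create_train_op yields the total loss when run):

with sv.managed_session() as sess:
    for step in xrange(num_steps):
        # Running train_op together with the other tensors evaluates everything
        # on the same batch; no extra sess.run calls, so no batches are skipped.
        total_loss, np_logits, np_predictions, np_labels = sess.run(
            [train_op, logits, predictions, labels])
        if step % 100 == 0:
            print 'Step %d, loss: %.4f' % (step, total_loss)
            print 'Predictions:\n', np_predictions
            print 'Labels:\n', np_labels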

kwotsin