15

I'm using the tf.Print op in a Jupyter notebook. It works as required, but will only print the output to the console, without printing in the notebook. Is there any way to get around this?

An example would be the following (in a notebook):

import tensorflow as tf

a = tf.constant(1.0)
a = tf.Print(a, [a], 'hi')
sess = tf.Session()
a.eval(session=sess)

That code will print 'hi[1]' in the console, but nothing in the notebook.
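For context: tf.Print writes from the C++ runtime directly to the OS-level stderr file descriptor, while Jupyter only replaces the Python-level sys.stdout/sys.stderr objects. A minimal demonstration of the difference, in plain Python with no TensorFlow needed:

```python
import os
import sys

# Python-level write: goes through the sys.stderr object, which Jupyter
# replaces, so in a notebook this text appears in the output cell.
sys.stderr.write("python-level write\n")

# OS-level write: goes straight to file descriptor 2, bypassing the
# sys.stderr object entirely. In a notebook this lands in the terminal
# that launched the kernel, which is effectively what tf.Print does.
os.write(2, b"os-level write\n")
```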

ldz
fjhj2

5 Answers

8

Update Feb 3, 2017: I've wrapped this into the memory_util package. Example usage:

# install memory util
import urllib.request
response = urllib.request.urlopen("https://raw.githubusercontent.com/yaroslavvb/memory_util/master/memory_util.py")
open("memory_util.py", "wb").write(response.read())

import memory_util

sess = tf.Session()
a = tf.random_uniform((1000,))
b = tf.random_uniform((1000,))
c = a + b
with memory_util.capture_stderr() as stderr:
    sess.run(c.op)

print(stderr.getvalue())

**Old stuff**

You could reuse the FD redirector from IPython core (idea from Mark Sandler):

import os
import sys

STDOUT = 1
STDERR = 2

class FDRedirector(object):
    """ Class to redirect output (stdout or stderr) at the OS level using
        file descriptors.
    """ 

    def __init__(self, fd=STDOUT):
        """ fd is the file descriptor of the output you want to capture.
            It can be STDOUT or STDERR.
        """
        self.fd = fd
        self.started = False
        self.piper = None
        self.pipew = None

    def start(self):
        """ Setup the redirection.
        """
        if not self.started:
            self.oldhandle = os.dup(self.fd)
            self.piper, self.pipew = os.pipe()
            os.dup2(self.pipew, self.fd)
            os.close(self.pipew)

            self.started = True

    def flush(self):
        """ Flush the captured output, similar to the flush method of any
        stream.
        """
        if self.fd == STDOUT:
            sys.stdout.flush()
        elif self.fd == STDERR:
            sys.stderr.flush()

    def stop(self):
        """ Unset the redirection and return the captured output. 
        """
        if self.started:
            self.flush()
            os.dup2(self.oldhandle, self.fd)
            os.close(self.oldhandle)
            f = os.fdopen(self.piper, 'r')
            output = f.read()
            f.close()

            self.started = False
            return output
        else:
            return ''

    def getvalue(self):
        """ Return the output captured since the last getvalue, or the
        start of the redirection.
        """
        output = self.stop()
        self.start()
        return output

import tensorflow as tf
x = tf.constant([1, 2, 3])
a = tf.Print(x, [x])

redirect = FDRedirector(STDERR)

sess = tf.InteractiveSession()
redirect.start()
a.eval()
print("Result")
print(redirect.stop())
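The same fd-duplication trick can also be wrapped in a context manager so the redirection is always undone. A self-contained sketch (the name capture_fd is mine, not part of IPython):

```python
import os
import tempfile
from contextlib import contextmanager

STDERR = 2

@contextmanager
def capture_fd(fd=STDERR):
    """Temporarily redirect an OS-level file descriptor into a temp file.

    This catches writes made below the Python layer (e.g. from C++ code
    such as tf.Print), which replacing sys.stderr alone cannot see.
    """
    saved = os.dup(fd)                    # remember where fd pointed
    tmp = tempfile.TemporaryFile(mode="w+b")
    os.dup2(tmp.fileno(), fd)             # point fd at the temp file
    try:
        yield tmp
    finally:
        os.dup2(saved, fd)                # restore the original target
        os.close(saved)

with capture_fd() as tmp:
    os.write(STDERR, b"captured at the OS level\n")

tmp.seek(0)
print(tmp.read().decode())
```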
Yaroslav Bulatov
4

I ran into the same problem and got around it by using a function like this in my notebooks:

def tf_print(tensor, transform=None):

    # Insert a custom python operation into the graph that does nothing but print a tensor's value
    def print_tensor(x):
        # x is typically a numpy array here so you could do anything you want with it,
        # but adding a transformation of some kind usually makes the output more digestible
        print(x if transform is None else transform(x))
        return x
    log_op = tf.py_func(print_tensor, [tensor], [tensor.dtype])[0]
    with tf.control_dependencies([log_op]):
        res = tf.identity(tensor)

    # Return the given tensor
    return res


import numpy as np

# Now define a tensor and use the tf_print function much like the tf.identity function
tensor = tf_print(tf.random_normal([100, 100]), transform=lambda x: [np.min(x), np.max(x)])

# This will print the transformed version of the tensor's actual value
# (which was summarized to just the min and max for brevity)
sess = tf.InteractiveSession()
sess.run([tensor])
sess.close()

FYI, using a logger instead of calling "print" in my custom function worked wonders for me, since stdout is often buffered by Jupyter and not flushed before "Loss is NaN"-style errors, which were the whole reason for using that function in the first place in my case.
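A sketch of that logger variant (the logger name is arbitrary and the TensorFlow pieces are omitted so it stands alone); the idea is to swap the print() inside print_tensor for a log call whose handler flushes on every record:

```python
import logging
import sys

# A logger whose StreamHandler writes and flushes on every record, so
# values reach the notebook even if the kernel dies right afterwards.
log = logging.getLogger("tf_print")
log.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
log.addHandler(handler)

def print_tensor(x, transform=None):
    # Drop-in replacement for the body of print_tensor above.
    log.info("%s", x if transform is None else transform(x))
    return x
```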

Eric Czech
3

You can check the terminal where you launched the Jupyter notebook to see the message:

import tensorflow as tf

tf.InteractiveSession()

a = tf.constant(1)
b = tf.constant(2)

opt = a + b
opt = tf.Print(opt, [opt], message="1 + 2 = ")

opt.eval()

In the terminal, I can see:

2018-01-02 23:38:07.691808: I tensorflow/core/kernels/logging_ops.cc:79] 1 + 2 = [3]
Dat
1

A simple way; tried it in regular Python, but not in Jupyter yet:

os.dup2(sys.stdout.fileno(), 1)
os.dup2(sys.stdout.fileno(), 2)

Explanation is here: In python, how to capture the stdout from a c++ shared library to a variable

0

The issue I faced was that one can't run a session inside a TensorFlow graph, as in training or evaluation. That's why using sess.run(opt) or opt.eval() was not a solution for me. The best option was to use tf.Print() and redirect the logging to an external file. I did this using a temporary file, which I transferred to a regular file like this:

import os
import sys
import tempfile

STDERR = 2

class captured:
    def __init__(self, fd=STDERR):
        self.fd = fd
        self.prevfd = None
    def __enter__(self):
        t = tempfile.NamedTemporaryFile()
        self.prevfd = os.dup(self.fd)
        os.dup2(t.fileno(), self.fd)
        return t
    def __exit__(self, exc_type, exc_value, traceback):
        os.dup2(self.prevfd, self.fd)

with captured(fd=STDERR) as tmp:
    ...
    classifier.evaluate(input_fn=input_fn, steps=100)

with open('log.txt', 'w') as f:
    print(open(tmp.name).read(), file=f)

And then in my evaluation I do:

a = tf.constant(1)
a = tf.Print(a, [a], message="a: ")
tsveti_iko