
error_giving_notebook

non_problematic_notebook

As can be seen, I have used the tf.function decorator in 'error_giving_notebook', and it throws a ValueError, while the same notebook, unchanged except for the removal of the tf.function decorator, runs smoothly in 'non_problematic_notebook'. What could be the reason?

Gaurav Singh
  • It seems you're calling the functions more than once, and the functions are trying to create new variables when they should only create new variables on the first call. --- By the way, I never used `@tf.function` for training loops; is there a special reason you want to use it? – Daniel Möller Oct 12 '19 at 12:27
  • https://www.tensorflow.org/tutorials/customization/performance#variables --- I'm not sure what is creating a new var in your code inside these functions, but maybe the gradient tape is doing that.... – Daniel Möller Oct 12 '19 at 12:45
  • Thinking better about it, I believe that a graph version of the training should use `tf.gradients` instead of gradient tape. But for `tf.gradients` to work, the entire model from start to end must be a graph too. (Which seems ok in your case). Now, if your code is only what is in the notebook, you might really consider just using `model.fit()` with a callback instead of a custom training loop. – Daniel Möller Oct 12 '19 at 13:24

2 Answers


As you are trying to use the tf.function decorator in TF 2.0, enable eager execution of functions by adding the line below after importing TensorFlow:

tf.config.experimental_run_functions_eagerly(True)

Since the above is deprecated (it is no longer experimental), please use the following instead:

tf.config.run_functions_eagerly(True)

If you want to know more, refer to this link.
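For illustration, a minimal configuration sketch of where the call belongs (assuming TF 2.x; `train_step` is a made-up function name, not part of any API) — it must run before any decorated function is first traced:

```python
import tensorflow as tf

# Must be called before any @tf.function-decorated function is traced
tf.config.run_functions_eagerly(True)

@tf.function
def train_step(x):
    # Runs eagerly now, so Python-side debugging (prints, breakpoints) works
    return x * 2
```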

NelsonGon
Apoorv Mishra
  • I was actually using a shared layer that was causing the error. This saved me from madness, thank you! – Lamberto Basti Nov 12 '20 at 19:57
  • This error can happen when using Keras. In order to use this solution just do `import tensorflow as tf` then do `tf.config...` – YScharf Apr 05 '21 at 13:33

The problem here lies in the return values of the `call` method of the `conv2d` class:

if self.bias:
  if self.pad == 'REFLECT':
    self.p = (self.filter_size - 1) // 2
    self.x = tf.pad(inputs, [[0, 0], [self.p, self.p], [self.p, self.p], [0, 0]], 'REFLECT')
    return Conv2D(filters=self.filter_num, kernel_size=(self.filter_size, self.filter_size),
                  strides=(self.stride, self.stride), padding='VALID', use_bias=True,
                  kernel_initializer=self.w, bias_initializer=self.b)(self.x)
  else:
    return Conv2D(filters=self.filter_num, kernel_size=(self.filter_size, self.filter_size),
                  strides=(self.stride, self.stride), padding=self.pad, use_bias=True,
                  kernel_initializer=self.w, bias_initializer=self.b)(inputs)
else:
  if self.pad == 'REFLECT':
    self.p = (self.filter_size - 1) // 2
    self.x = tf.pad(inputs, [[0, 0], [self.p, self.p], [self.p, self.p], [0, 0]], 'REFLECT')
    return Conv2D(filters=self.filter_num, kernel_size=(self.filter_size, self.filter_size),
                  strides=(self.stride, self.stride), padding='VALID', use_bias=False,
                  kernel_initializer=self.w)(self.x)
  else:
    return Conv2D(filters=self.filter_num, kernel_size=(self.filter_size, self.filter_size),
                  strides=(self.stride, self.stride), padding=self.pad, use_bias=False,
                  kernel_initializer=self.w)(inputs)

Because every branch constructs and returns a brand-new Conv2D object, fresh tf.Variables (the layer's kernel and bias) are created each time you call

predictions = model(images)

inside your tf.function-decorated function. Hence the exception.
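To make the failure mode concrete, here is a TensorFlow-free sketch that mimics the rule tf.function enforces — variables may only be created during the first call. All names here (`Variable`, `trace_once`, `model`) are illustrative stand-ins, not the TensorFlow API:

```python
# TensorFlow-free sketch of tf.function's rule: a decorated function
# may create variables only on its first call.

class Variable:
    """Stand-in for tf.Variable."""
    def __init__(self, value):
        self.value = value

created = []  # variables created during the current call

def trace_once(fn):
    """Mimic @tf.function: reject variable creation after the first call."""
    state = {"first_call_done": False}
    def wrapper(*args):
        created.clear()
        result = fn(*args)
        if state["first_call_done"] and created:
            raise ValueError("new variables created on a non-first call")
        state["first_call_done"] = True
        return result
    return wrapper

@trace_once
def model(x):
    # Constructing a fresh "layer" each call (like returning a new
    # Conv2D from call()) creates fresh variables every time.
    w = Variable(0.5)
    created.append(w)
    return x * w.value

first = model(2.0)        # first call: creation allowed
try:
    model(2.0)            # second call: new variable -> ValueError
    second_raised = False
except ValueError:
    second_raised = True
```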

One possible way to solve this problem is by changing the build and call methods in your conv2d class as follows:

def build(self, inputs):
  self.w = tf.random_normal_initializer(mean=0.0, stddev=1e-4)
  if self.bias:
    self.b = tf.constant_initializer(0.0)
  else:
    self.b = None

  self.conv_a = Conv2D(filters=self.filter_num, kernel_size=(self.filter_size, self.filter_size), strides=(self.stride, self.stride), padding='VALID', use_bias=True, kernel_initializer=self.w, bias_initializer=self.b)
  self.conv_b = Conv2D(filters=self.filter_num, kernel_size=(self.filter_size, self.filter_size), strides=(self.stride, self.stride), padding=self.pad, use_bias=True, kernel_initializer=self.w, bias_initializer=self.b)
  self.conv_c = Conv2D(filters=self.filter_num, kernel_size=(self.filter_size, self.filter_size), strides=(self.stride, self.stride), padding='VALID', use_bias=False, kernel_initializer=self.w)
  self.conv_d = Conv2D(filters=self.filter_num, kernel_size=(self.filter_size, self.filter_size), strides=(self.stride, self.stride), padding=self.pad, use_bias=False, kernel_initializer=self.w)

def call(self, inputs):
  if self.bias:
    if self.pad == 'REFLECT':
      self.p = (self.filter_size - 1) // 2
      self.x = tf.pad(inputs, [[0, 0], [self.p, self.p], [self.p, self.p], [0, 0]], 'REFLECT')
      return self.conv_a(self.x)
    else:
      return self.conv_b(inputs)
  else:
    if self.pad == 'REFLECT':
      self.p = (self.filter_size - 1) // 2
      self.x = tf.pad(inputs, [[0, 0], [self.p, self.p], [self.p, self.p], [0, 0]], 'REFLECT')
      return self.conv_c(self.x)
    else:
      return self.conv_d(inputs)
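The fix works because sublayers (and their variables) are now created exactly once. Here is a TensorFlow-free sketch of this build-once / call-many pattern, with made-up `Dense` and `MyLayer` stand-ins rather than the real Keras classes:

```python
# Build-once / call-many: sublayers are constructed a single time,
# then reused on every call. Class names are illustrative stand-ins.

class Dense:
    """Stand-in sublayer whose construction allocates variables."""
    instances = 0
    def __init__(self):
        Dense.instances += 1  # count variable-creating constructions
    def __call__(self, x):
        return x + 1

class MyLayer:
    def __init__(self):
        self.built = False
    def build(self):
        # Mirrors self.conv_a ... self.conv_d in the answer:
        # variables are allocated here, exactly once.
        self.dense = Dense()
        self.built = True
    def __call__(self, x):
        if not self.built:
            self.build()
        return self.dense(x)  # reuse; no new variables per call

layer = MyLayer()
out1 = layer(1)  # triggers build on first call
out2 = layer(1)  # reuses the same Dense instance
```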

To better understand AutoGraph and how @tf.function works, I suggest taking a look at this.

user9342787