
I'm trying to convert the flow output of an existing optical flow network from a NumPy array back into a tensor, so that I can run it through a differentiable interpolation network.

The PWC-Net code takes two input images of the same size and computes a flow correspondence between them. For a single pair of images, I believe the flow is a pixelwise displacement in x and y, with dimensions [1, h, w, 2]. However, a batch can contain a different number of image pairs; calling the batch size b, the 4D volume becomes [b, h, w, 2].
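For instance, if every pair in the batch has the same resolution, I would expect the per-pair flows to stack into that 4D volume (a sketch with dummy zero arrays standing in for real predictions):

import numpy as np

h, w = 436, 1024  # Sintel 'clean' frame size
# One [h, w, 2] flow map (dx, dy per pixel) per image pair; dummy data here
flow_per_pair = [np.zeros((h, w, 2), dtype=np.float32) for _ in range(2)]
batch_flow = np.stack(flow_per_pair)  # shape (2, 436, 1024, 2)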

I am trying to convert the prediction returned by the script at https://github.com/philferriere/tfoptflow/blob/master/tfoptflow/pwcnet_predict_from_img_pairs.py to a tensor by using

pred_labels_tensor = tf.convert_to_tensor(pred_labels)

I have read these questions:

Convert Python sequence to NumPy array, filling missing values

How to input a list of lists with different sizes in tf.data.Dataset

but I still don't understand what I need to do in order to get this to work.

I have also looked at the code that this script calls, and it does use np.asarray.

The two links make me think the problem has to do with pred_labels being a list of lists, or perhaps needing some zero-padding. How can I figure out what exactly the problem is, and how can I solve it?
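As a first diagnostic step (my own sketch, assuming pred_labels is a plain Python list of NumPy arrays), printing each element's shape should show whether the entries really differ:

for i, flow in enumerate(pred_labels):
    flow = np.asarray(flow)
    print(i, flow.shape, flow.dtype)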

To recreate the problem from https://github.com/philferriere/tfoptflow/blob/master/tfoptflow/pwcnet_predict_from_img_pairs.py, you can use the sample images provided with the GitHub download, with this code in place of the for loop:

# First image pair from the provided Sintel samples
image_path1 = './samples/mpisintel_test_clean_ambush_1_frame_0001.png'
image_path2 = './samples/mpisintel_test_clean_ambush_1_frame_0002.png'
image1, image2 = imread(image_path1), imread(image_path2)
img_pairs.append((image1, image2))

# Second image pair
image_path1 = './samples/mpisintel_test_clean_ambush_1_frame_0003.png'
image_path2 = './samples/mpisintel_test_clean_ambush_1_frame_0004.png'
image1, image2 = imread(image_path1), imread(image_path2)
img_pairs.append((image1, image2))

pred_labels = nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)

pred_labels_tensor = tf.convert_to_tensor(pred_labels)

I expected to get a tensor as output; instead, I get this error in the VS Code terminal:

ValueError: Argument must be a dense tensor: [array([[[ 0.32990038, -0.11566047],
        [ 0.35661912, -0.09227534],
        [ 0.38333783, -0.06889021],
        ...,
        [-0.1237613 ,  0.07946336],
        [-0.1237613 ,  0.07946336],
        [-0.1237613 ,  0.07946336]],

       [[ 0.34405386, -0.09286585],
        [ 0.36766803, -0.07679807],
        [ 0.39128217, -0.06073029],
        ...,
        [-0.10938472,  0.08551764],
        [-0.10938472,  0.08551764],
        [-0.10938472,  0.08551764]],

       [[ 0.35820735, -0.07007124],
        [ 0.37871695, -0.0613208 ],
        [ 0.39922655, -0.05257037],
        ...,
        [-0.09500815,  0.09157193],
        [-0.09500815,  0.09157193],
        [-0.09500815,  0.09157193]],

       ...,

       [[ 0.9003515 ,  1.0893728 ],
        [ 0.93065804,  1.0662789 ],
        [ 0.96096456,  1.0431851 ],
        ...,
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378]],

       [[ 0.9003515 ,  1.0893728 ],
        [ 0.93065804,  1.0662789 ],
        [ 0.96096456,  1.0431851 ],
        ...,
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378]],

       [[ 0.9003515 ,  1.0893728 ],
        [ 0.93065804,  1.0662789 ],
        [ 0.96096456,  1.0431851 ],
        ...,
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378]]], dtype=float32), array([[[ 0.49922907,  0.08599953],
        [ 0.5034714 ,  0.1123561 ],
        [ 0.5077137 ,  0.13871266],
        ...,
        [-0.3719127 ,  0.1080336 ],
        [-0.3719127 ,  0.1080336 ],
        [-0.3719127 ,  0.1080336 ]],

       [[ 0.49763823,  0.11536887],
        [ 0.4972613 ,  0.13717887],
        [ 0.49688435,  0.15898886],
        ...,
        [-0.36932352,  0.11556612],
        [-0.36932352,  0.11556612],
        [-0.36932352,  0.11556612]],

       [[ 0.49604735,  0.14473821],
        [ 0.4910512 ,  0.16200164],
        [ 0.48605505,  0.17926508],
        ...,
        [-0.36673436,  0.12309864],
        [-0.36673436,  0.12309864],
        [-0.36673436,  0.12309864]],

       ...,

       [[ 0.46260613, -0.47470346],
        [ 0.46841043, -0.46383834],
        [ 0.47421476, -0.4529732 ],
        ...,
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ]],

       [[ 0.46260613, -0.47470346],
        [ 0.46841043, -0.46383834],
        [ 0.47421476, -0.4529732 ],
        ...,
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ]],

       [[ 0.46260613, -0.47470346],
        [ 0.46841043, -0.46383834],
        [ 0.47421476, -0.4529732 ],
        ...,
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ]]], dtype=float32)] - [2, 436, 1024, 2], but wanted [2]

If I reduce the batch size to one, I get this error:

ValueError: Argument must be a dense tensor: - got shape [1, 436, 1024, 2], but wanted [1].
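If the list really does hold a single [436, 1024, 2] array in that case, one thing I plan to try (untested sketch) is indexing the array out before converting:

single_flow = np.asarray(pred_labels[0])  # shape (436, 1024, 2)
pred_labels_tensor = tf.convert_to_tensor(single_flow)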

For a minimal reproducible example, you need the following:

Python 3.7.3
TensorFlow 1.13.1 (latest stable version)

You'll also need to download the repository and copy and paste the script below over the existing pwcnet_predict_from_img_pairs.py.

There's also a model to download, pwcnet-lg-6-2-multisteps-chairsthingsmix, from https://drive.google.com/drive/folders/1iRJ8SFF6fyoICShRVWuzr192m_DgzSYp

"""
pwcnet_predict_from_img_pairs.py
Run inference on a list of images pairs.
Written by Phil Ferriere
Licensed under the MIT License (see LICENSE for details)
"""
from __future__ import absolute_import, division, print_function

from voxel_flow_geo_layer_utils import bilinear_interp
from voxel_flow_geo_layer_utils import meshgrid

from copy import deepcopy
from skimage.io import imread
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_TEST_OPTIONS
#from visualize import display_img_pairs_w_flows
import visualize
import numpy as np
import tensorflow as tf

# TODO: Set device to use for inference
# Here, we're using a GPU (use '/device:CPU:0' to run inference on the CPU)
gpu_devices = ['/device:GPU:0']  
controller = '/device:GPU:0'

# TODO: Set the path to the trained model (make sure you've downloaded it first https://drive.google.com/drive/folders/1iRJ8SFF6fyoICShRVWuzr192m_DgzSYp)
ckpt_path = './models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000'

# Build a list of image pairs to process (in this case it's just one image pair)
img_pairs = []
image_path1 = './samples/mpisintel_test_clean_ambush_1_frame_0001.png'
image_path2 = './samples/mpisintel_test_clean_ambush_1_frame_0002.png'
image1, image2 = imread(image_path1), imread(image_path2)
img_pairs.append((image1, image2))

# Configure the model for inference, starting with the default options
nn_opts = deepcopy(_DEFAULT_PWCNET_TEST_OPTIONS)
nn_opts['verbose'] = True
nn_opts['ckpt_path'] = ckpt_path
nn_opts['batch_size'] = 1
nn_opts['gpu_devices'] = gpu_devices
nn_opts['controller'] = controller

# We're running the PWC-Net-large model in quarter-resolution mode
# That is, with a 6 level pyramid, and upsampling of level 2 by 4 in each dimension as the final flow prediction
nn_opts['use_dense_cx'] = True
nn_opts['use_res_cx'] = True
nn_opts['pyr_lvls'] = 6
nn_opts['flow_pred_lvl'] = 2

# The sizes of the images in this dataset are not multiples of 64, while the model generates flows padded to
# multiples of 64. Hence, we need to crop the predicted flows back to the original image size
nn_opts['adapt_info'] = (1, 8, 8, 2)

# Instantiate the model in inference mode and display the model configuration
nn = ModelPWCNet(mode='test', options=nn_opts)
nn.print_config()

# Generate the predictions
pred_labels = nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)
# pred_labels is a list with one [436, 1024, 2] flow array per image pair,
# so it has len 1 here (a single pair)
pred_labels_tensor = tf.convert_to_tensor(pred_labels)
  • `pred_labels` contains two 3D NumPy arrays with different shapes. TensorFlow interprets that you want to make a 4D tensor out of those (stacking along a new first dimension), but that is not possible because the shapes are different. It's impossible to tell without a complete minimal reproducible example, but it is probably because the input images have different sizes, and so do their predictions. – jdehesa Jun 21 '19 at 15:31
  • @jdehesa, I've updated the question with another trial below, with one image pair instead of a batch of two. I've also added some information about the versions I am using. Once the GitHub code has been downloaded, does this now form a complete reproducible example? It's my first time posting to Stack Overflow. – CogT Jun 23 '19 at 23:04
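Following up on jdehesa's diagnosis that TensorFlow is trying to stack arrays of different shapes along a new first axis, one way to both test and work around this might be to stack the list explicitly in NumPy first, since np.stack raises a clear shape error if the per-pair predictions really do differ (untested sketch):

pred_array = np.stack([np.asarray(p) for p in pred_labels])  # expect shape (2, 436, 1024, 2)
pred_labels_tensor = tf.convert_to_tensor(pred_array)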
