
I want to use Lucid to analyze the feature extraction of a detection model that I trained on my own dataset with the TensorFlow Object Detection API. The model is faster_rcnn_resnet101 from the TensorFlow detection model zoo.

I followed the Lucid tutorial on importing your own model and saved a frozen graph of the model with the node /all_class_predictions_with_background as the output node.

I'm having trouble finding the graph's input node so that I can run Lucid on it.

Furthermore, I'm not sure my approach is right. Maybe I should first extract the classification part of the detection model and freeze a new graph containing only that part before moving to Lucid.

Or maybe I should just import a resnet_101 classification model and copy the corresponding weights from the detection model onto it?

But I don't really know how to do either of those things.

Can someone help me? I really want to try running Lucid on my detection network.

Greenonline
leoDup

1 Answer


Yes, you should export an inference (frozen) graph to work with in Lucid.

I use the following script to export a graph from the training checkpoint files. Useful information about the nodes in the exported graph is logged to the console.

training_model="ssd_mnet_v2_ppn_512x288.config"

model_signature="eb_13_v09_ppmn2_13_256_adam_512x288_tf_1.14_200k"

# the specific checkpoint to export from
checkpoint_path="/TRAIN/models/model/train/model.ckpt-200000"

# directory to export into
output_path="/XYZ/graphs/${model_signature}"

# ensure these graph nodes are exported, and everything in between
additional_output_tensor_names="Preprocessor/sub,concat_1"

# run the Object Detection API exporter
python export_inference_graph.py \
   --input_type=image_tensor \
   --pipeline_config_path /TRAIN/models/model/$training_model \
   --trained_checkpoint_prefix=$checkpoint_path \
   --output_directory=$output_path \
   --additional_output_tensor_names=$additional_output_tensor_names
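Before wiring anything into Lucid, it helps to confirm the actual node names in the exported graph (for example, to locate the input node). This is a sketch with helper names of my own invention, not part of Lucid or the Object Detection API, and it assumes TensorFlow 1.x:

```python
def load_node_names(frozen_graph_path):
    # Lazy import so the helpers are usable without TensorFlow installed.
    import tensorflow as tf
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(frozen_graph_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return [node.name for node in graph_def.node]

def filter_nodes(names, keywords):
    # Keep only node names that contain one of the given substrings.
    return [n for n in names if any(k in n for k in keywords)]

# Example, using the path exported above:
# names = load_node_names("/XYZ/graphs/eb_13_v09_ppmn2_13_256_adam_512x288_tf_1.14_200k/frozen_inference_graph.pb")
# for name in filter_nodes(names, ["Preprocessor", "expanded_conv", "concat"]):
#     print(name)
```

Grepping the printed names for "Preprocessor" and the feature-extractor blocks is how I settled on the input node and layer list used below.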

I found it convenient to write my own Lucid Model class, following the examples in the Lucid model zoo. You have to examine your graph carefully, since you need to specify the input node and provide a list of layers that Lucid can work with.

from lucid.modelzoo.vision_base import Model, _layers_from_list_of_dicts


# the input node "Preprocessor/sub" is appropriate for image injection
class SSD_Mnet2_PPN( Model ):

    def __init__(self, image_shape=None, graph_path=None, labels_path=None ):        
        self.model_path = graph_path
        self.labels_path = labels_path

        self.image_shape = image_shape
        self.image_value_range = (-1, 1) 
        self.input_name = "Preprocessor/sub"

        super().__init__()

# a hand-crafted list of layers - by inspection of the graph
SSD_Mnet2_PPN.layers = _layers_from_list_of_dicts(SSD_Mnet2_PPN, [
  { 'id':  0, 'tags': ['conv'], 'name': 'FeatureExtractor/MobilenetV2/expanded_conv_2/add', 'depth': 24, 'shape': [ 1, 72, 128, 24 ], 'transform_id': 2 },  
  { 'id':  2, 'tags': ['conv'], 'name': 'FeatureExtractor/MobilenetV2/expanded_conv_5/add', 'depth': 32, 'shape': [ 1, 36, 64, 32 ], 'transform_id': 2 },  
  { 'id':  5, 'tags': ['conv'], 'name': 'FeatureExtractor/MobilenetV2/expanded_conv_9/add', 'depth': 64, 'shape': [ 1, 18, 32, 64 ], 'transform_id': 2 },  
  { 'id':  7, 'tags': ['conv'], 'name': 'FeatureExtractor/MobilenetV2/expanded_conv_12/add', 'depth': 96, 'shape': [ 1, 18, 32, 96 ], 'transform_id': 2 },  
  { 'id':  9, 'tags': ['conv'], 'name': 'FeatureExtractor/MobilenetV2/expanded_conv_15/add', 'depth': 160, 'shape': [ 1, 9, 16, 160 ], 'transform_id': 2 },  
  { 'id': 11, 'tags': ['concat'], 'name': 'concat_1', 'depth': 13, 'shape': [ 1, 1212, 13 ], 'transform_id': 4 },
])


def model_for_version( version=None, path=None ):

    if "320x180" in version:
        return SSD_Mnet2_PPN( graph_path=path, image_shape=[ 320, 180, 3 ] )

    if "480x270" in version:
        return SSD_Mnet2_PPN( graph_path=path, image_shape=[ 480, 270, 3 ] )

    if "512x288" in version:
        return SSD_Mnet2_PPN( graph_path=path, image_shape=[ 512, 288, 3 ] )        

    if "720x405" in version:
        return SSD_Mnet2_PPN( graph_path=path, image_shape=[ 720, 405, 3 ] )

    raise ValueError( "No model for graph_version: {}".format( version ) )

Then you can write code as follows:

from lucid.optvis import render

model = model_for_version( 
    version = "eb_13_v09_ppmn2_13_256_adam_512x288_tf_1.14", 
    path = "/XYZ/graphs/eb_13_v09_ppmn2_13_256_adam_512x288_tf_1.14_200k/frozen_inference_graph.pb" 
)

model.load_graphdef()

_ = render.render_vis( model, "FeatureExtractor/MobilenetV2/expanded_conv_15/add:17", thresholds=( 32, 256, 1024 ) )    

Inevitably, one has to experiment quite a bit.
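In practice I sweep many channels of a layer rather than picking one by hand. A small helper for building the "layer:channel" objective strings keeps that loop tidy; the helper name and the channel range here are my own, not part of Lucid's API:

```python
def channel_objectives(layer_name, channels):
    # Lucid accepts objectives as "layer_name:channel_index" strings,
    # e.g. "FeatureExtractor/MobilenetV2/expanded_conv_15/add:17".
    return ["{}:{}".format(layer_name, c) for c in channels]

# Hypothetical sweep, reusing the model loaded above:
# from lucid.optvis import render
# for obj in channel_objectives("FeatureExtractor/MobilenetV2/expanded_conv_15/add",
#                               range(0, 160, 16)):
#     _ = render.render_vis(model, obj, thresholds=(32, 256, 1024))
```

Rendering a coarse sweep first (every 16th channel, low thresholds) is a cheap way to find which channels are worth a longer optimization run.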