
Custom Faster R-CNN Inception V2 TensorFlow to Movidius NCS graph

Hello dear community,

First of all congratulations on the great job done with this VPU stick, it is really amazing!

I am working on a project that started with TensorFlow on CPU, but since I needed faster inference, I decided to move to the Movidius stick.
I have been able to run the typical object detector, using the "video_objects" example provided in the NC App Zoo. So far so good.

The problem I have is that my trained network is based on "Faster-RCNN-Inception-V2" ("http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz"), and I don't know whether it is possible to run this kind of network on the NCS, or what the steps to follow would be. I am running this network with tensorflow-cpu on a Raspberry Pi and it is very slow (8 minutes per inference... quite a lot, although I only need to do one inference), which is why I would like to move to a Movidius graph.

I have seen in some posts that this network is not supported, but I don't know if that is still the case. If it is now supported, could you give me some guidance on how to export this frozen model to a Movidius graph?

Thanks in advance

Comments

  • 6 Comments
  • Update: This is what I am trying to do. As a proof of concept I am taking two re-trained models: one with the "Faster-RCNN-Inception-V2" network and the other with "ssd_mobilenet_v1_coco_2017_11_17".

    Compiling RCNN:

    sudo mvNCCompile -s 12 model.ckpt-21306.meta -in=image_tensor -on=detection_boxes,detection_scores,detection_classes,num_detections
    

    Throws:

    Traceback (most recent call last):
      File "/usr/local/bin/mvNCCompile", line 118, in <module>
        create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
      File "/usr/local/bin/mvNCCompile", line 104, in create_graph
        net = parse_tensor(args, myriad_config)
      File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 213, in parse_tensor
        saver = tf.train.import_meta_graph(path, clear_devices=True)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1810, in import_meta_graph
        **kwargs)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/meta_graph.py", line 660, in import_scoped_meta_graph
        producer_op_list=producer_op_list)
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 292, in import_graph_def
        op_def = op_dict[node.op]
    KeyError: 'ParallelInterleaveDataset'
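
    For what it's worth, the failure mode is just a dictionary lookup: the .meta file describes the whole training graph, including tf.data input-pipeline ops, and the importer looks every node's op up in a registry of known ops. A minimal stdlib sketch of that mechanism (the op names are real, this miniature `op_dict` is hypothetical):

    ```python
    # Hypothetical miniature of the importer's op registry: it knows
    # ordinary inference ops but not tf.data input-pipeline ops.
    op_dict = {"Placeholder": ..., "Conv2D": ..., "Relu": ...}

    # A training .meta graph also serializes the input pipeline,
    # so the parser eventually hits a node like this one:
    node_op = "ParallelInterleaveDataset"

    try:
        op_def = op_dict[node_op]  # same lookup as importer.py's op_dict[node.op]
    except KeyError as e:
        print("KeyError:", e)  # KeyError: 'ParallelInterleaveDataset'
    ```
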
    

    This is because ParallelInterleaveDataset is not among the items contained in the default op_dict of protos, and I don't know how to proceed here, so I will use the frozen graph instead:

    sudo mvNCCompile -s 12 inference_graph/frozen_inference_graph_great.pb -in=image_tensor -on=detection_boxes,detection_scores,detection_classes,num_detections
    

    Which throws:

    [Error 13] Toolkit Error: Provided OutputNode/InputNode name does not exist or does not match with one contained in model file Provided: detection_boxes,detection_scores,detection_classes,num_detections:0
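
    The toolkit appears to take the -on argument as one literal node name (with ":0" appended), so the whole comma-separated string is compared against the graph's node names and never matches. A quick sketch of that mismatch (the node set here is hypothetical, matching the Object Detection API's usual output names):

    ```python
    # Hypothetical set of node names in the frozen graph.
    graph_nodes = {"image_tensor", "detection_boxes", "detection_scores",
                   "detection_classes", "num_detections"}

    # What the toolkit seems to look up: the raw argument plus ":0",
    # rather than the individual names.
    provided = "detection_boxes,detection_scores,detection_classes,num_detections" + ":0"

    print(provided in graph_nodes)  # False -> [Error 13] Toolkit Error

    # The individual names do exist in the graph:
    names = "detection_boxes,detection_scores,detection_classes,num_detections".split(",")
    print(all(n in graph_nodes for n in names))  # True
    ```
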
    

    Ok, this is because it's reading the output as a single name rather than as separate output nodes, so let's try a single output node:

    sudo mvNCCompile -s 12 inference_graph/frozen_inference_graph_great.pb -in=image_tensor -on=detection_boxes
    

    Which throws:

    mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:766: DeprecationWarning: builtin type EagerTensor has no __module__ attribute
      EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
      if d.decorator_argspec is not None), _inspect.getargspec(target))
    Traceback (most recent call last):
      File "/usr/local/bin/mvNCCompile", line 118, in <module>
        create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
      File "/usr/local/bin/mvNCCompile", line 104, in create_graph
        net = parse_tensor(args, myriad_config)
      File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 259, in parse_tensor
        input_data = np.random.uniform(0, 1, shape)
      File "mtrand.pyx", line 1302, in mtrand.RandomState.uniform
      File "mtrand.pyx", line 242, in mtrand.cont2_array_sc
    TypeError: 'NoneType' object cannot be interpreted as an integer
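
    This one can be reproduced with NumPy alone: the parser draws random input data from the graph's input shape, and the image_tensor placeholder of an Object Detection API model leaves batch, height and width undefined (None), so np.random.uniform chokes on the shape:

    ```python
    import numpy as np

    # Shape of the 'image_tensor' placeholder in Object Detection API
    # models: batch, height and width are all left undefined.
    shape = (None, None, None, 3)

    try:
        np.random.uniform(0, 1, shape)  # same call as in TensorFlowParser.py
    except TypeError as e:
        print(e)  # 'NoneType' object cannot be interpreted as an integer

    # With every dimension pinned to a concrete value the call succeeds,
    # which suggests the compiler needs a fully defined input shape.
    data = np.random.uniform(0, 1, (1, 300, 300, 3))
    print(data.shape)  # (1, 300, 300, 3)
    ```
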
    

    Let's move to a supported network, MobileNet SSD. Doing the same, I get basically the same output.

    What am I missing? Thanks in advance

  • Same here, with RCNN_Resnet101.

    $ mvNCCompile -s 12 rcnn_frozen_inference_graph.pb -in=image_tensor -on=detection_boxes
    mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:766: DeprecationWarning: builtin type EagerTensor has no __module__ attribute
      EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
      if d.decorator_argspec is not None), _inspect.getargspec(target))
    Traceback (most recent call last):
      File "/usr/local/bin/mvNCCompile", line 118, in <module>
        create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
      File "/usr/local/bin/mvNCCompile", line 104, in create_graph
        net = parse_tensor(args, myriad_config)
      File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 259, in parse_tensor
        input_data = np.random.uniform(0, 1, shape)
      File "mtrand.pyx", line 1302, in mtrand.RandomState.uniform
      File "mtrand.pyx", line 242, in mtrand.cont2_array_sc
    TypeError: 'NoneType' object cannot be interpreted as an integer
    
  • @JoseSecmotic @azmath We haven't added any updates regarding Faster R-CNN support in the NCSDK. As an alternative, you can check out Intel's OpenVINO Toolkit, which offers some support for Faster R-CNNs and is compatible with the NCS device. https://software.intel.com/en-us/articles/OpenVINO-RelNotes

  • Thanks a lot @Tome_at_Intel , I will give it a try!

  • Hi again @Tome_at_Intel
    I have successfully installed OpenVINO and tried to use its Model Optimizer to obtain a framework-agnostic graph. Nevertheless, it keeps failing. Is there any other solution?

    For example, I would be interested in exporting my SSD MobileNet to a Movidius graph, but I get the same errors as with the Faster R-CNN model. Could you give me some guidance on how to achieve this?

    Thanks in advance, and best regards

  • @JoseSecmotic If you are using SSD MobileNet for TensorFlow, we don't have support for that yet in the NCSDK. I may have been mistaken about Faster R-CNNs being supported by OpenVINO on the NCS. It isn't very clear yet which models are supported on which hardware in OpenVINO.
