Is it possible to use TensorFlow SSD-MobileNet on NCS?

I'm working with an object detection model and I would like to use the TensorFlow version of SSD-MobileNet. I saw the Caffe version and tried to retrain it, but the results were very poor: after training for 100 hours the mAP was still below 0.03. I tried tweaking the learning rate and aspect ratios to better suit my dataset (my objects are mostly square), but that didn't help. Then I switched to the TensorFlow Object Detection API to see if there was a problem in my dataset. After training for just 6 hours I already got a mAP of 0.5. The TensorFlow version is also much faster on my machine: 0.6 s/iteration vs. 2 s/iteration for Caffe. Since the TensorFlow version works much better, I'd like to use it instead if possible.

Is there any way to convert the model for the NCS? If direct conversion from TensorFlow to the NCS is not possible, would it be possible to convert the model to Caffe format first and then to the NCS? Or could I just copy the TensorFlow model weights into the equivalent Caffe model?


Comments

  • 47 Comments, sorted by Votes
  • @WuXinyang Try the following:
    ./mo_tf.py --input_model=<path_to_frozen.pb> --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections"

  • @alex_z For usage on the NCS, I needed to add one flag to your command: --data_type FP16,
    since the MYRIAD plugin does not support FP32 and the converted models are FP32 by default.
    Thanks again for the pointers! I had been searching for a long time for a way to make a
    TensorFlow object detection model run on the NCS! The combined command is shown below.
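
    Combining the two comments above, the full conversion command would presumably be:

    ./mo_tf.py --input_model=<path_to_frozen.pb> --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections" --data_type FP16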

  • I'm also interested in this - any feedback on this from anyone?

  • Me too! I've got a working model on object detection API. Would love to find a way to do this.

  • You can try the Intel OpenVINO™ toolkit. It supports inference of SSD MobileNet from the TensorFlow Object Detection model zoo on the NCS using the Myriad plugin.

  • @alex_z Hi, can you explain more or point to some GitHub repositories?

  • @WuXinyang Hi! Sorry for the delayed response. Download and install the latest version of the OpenVINO toolkit (https://software.intel.com/en-us/openvino-toolkit/choose-download). Inside the installation folder you can find C++/Python examples and several pre-trained models. Both Windows and Linux are supported by the toolkit. The main idea is the same as with the Movidius SDK: you convert a trained model into the Intermediate Representation format using the Model Optimizer, then the Inference Engine reads, loads, and infers the Intermediate Representation on different devices such as the CPU, Intel GPU, or Myriad 2 VPU. A minimal Python sketch of that flow follows.
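
    To make the flow above concrete, here is a minimal sketch against the 2018-era OpenVINO Python API (IENetwork/IEPlugin); exact class and attribute names vary between releases, and the model and image file names are placeholders:

    from openvino.inference_engine import IENetwork, IEPlugin
    import cv2

    # Read the Intermediate Representation produced by the Model Optimizer
    net = IENetwork(model="Model.xml", weights="Model.bin")

    # Target the NCS via the MYRIAD plugin ("CPU" works for testing on the host)
    plugin = IEPlugin(device="MYRIAD")
    exec_net = plugin.load(network=net)

    input_blob = next(iter(net.inputs))
    out_blob = next(iter(net.outputs))

    # SSD-MobileNet IRs expect a 300x300 BGR image in NCHW layout
    n, c, h, w = net.inputs[input_blob].shape
    frame = cv2.imread("car.png")
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

    res = exec_net.infer(inputs={input_blob: blob})
    # Each DetectionOutput row: [image_id, label, confidence, x_min, y_min, x_max, y_max]
    for detection in res[out_blob][0][0]:
        if detection[2] > 0.5:
            print("label %d, confidence %.2f" % (int(detection[1]), detection[2]))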

  • @alex_z Hi, thanks for your reply! In fact, I already successfully set up the OpenVINO SDK and used it to convert a trained SSD object detection model, but I ran into some problems. Have you tried any models on the NCS with OpenVINO?

  • @WuXinyang Yes, I have converted the ssd_mobilenet_v1_coco model from the TensorFlow detection model zoo, as well as a custom-trained model based on SSD-MobileNet v1 that I previously used with the OpenCV DNN module. Both models then ran on the NCS successfully.

  • @alex_z OMG, amazing! Would you mind giving some instructions on how to implement it? Maybe you could post them on your blog. I am sure many people want to make TensorFlow SSD models work on the NCS!

  • @alex_z I just set up the SDK and tried some sample applications, but I don't know how to compile the TensorFlow model into their IR format. And after the conversion, I guess I need to use some API in my code, like:

    // Read the IR topology and weights produced by the Model Optimizer
    auto netBuilder = new InferenceEngine::CNNNetReader();
    netBuilder->ReadNetwork("Model.xml");
    netBuilder->ReadWeights("Model.bin");

    Is my understanding right?

  • PS. SSD MobileNet V2 is working on NCS via the OpenVINO SDK too.
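
    For V2 the conversion should presumably use the matching replacement config; later Model Optimizer releases ship an ssd_v2_support.json for this (treating the file name as an assumption here):

    ./mo_tf.py --input_model=<path_to_frozen.pb> --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --output="detection_boxes,detection_scores,num_detections" --data_type FP16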

  • @WuXinyang I am not good at C++, so I use the Python API.

  • @alex_z
    Hi, the command I use to convert the TF model is the following:

    python3 mo_tf.py --input_model /home/wuxy/Downloads/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb --output_dir ~/models_VINO

    and it returns an error: [ ERROR ] Graph contains a cycle. Can not proceed.
    Can you please tell me how I can make it work?

  • @alex_z I found the solution now! Thanks!

  • @WuXinyang Yes, you are right, I forgot to mention it.

  • @alex_z Great to hear that it's possible to run TF object detection models on the NCS. What FPS do you get with that, or what is the inference time? For example, with the ssd_mobilenet_v1_coco model.

  • @mantu I have got about 10 FPS with the ssd_mobilenet_v1_coco model.

  • @alex_z Awesome! Thanks!

  • Is there any way to get the OpenVINO SDK on a Raspberry Pi?

  • @alex_z Hi, I tried to use your command to convert an SSD net I trained for detecting heads. Unfortunately, I'm getting a different error.

    sudo ./mo_tf.py --input_model=/work/22_movidus/ncappzoo/tensorflow/custom_tf/ssd_frozen_inference_graph.pb  --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections" --data_type FP16
    

    It returned

    [ ERROR ]  Failed to determine the pre-processed image size from the original TensorFlow graph. Please, specify "preprocessed_image_width" and "preprocessed_image_height" in the topology replacement configuration file in the "custom_attributes" section of the "PreprocessorReplacement" replacer. This value is defined in the configuration file samples/configs/*.config of the model in the Object Detection model zoo as "min_dimension".
    

    So I opened ssd_support.json and added this at the top of the file:

    {
        "custom_attributes": {
            "preprocessed_image_width": 300,
            "preprocessed_image_height": 300
        },
        "id": "PreprocessorReplacement",
        ...


    But now, I'm getting a different error

    InvalidArgumentError (see above for traceback): NodeDef mentions attr 'index_type' not in Op<name=Fill; signature=dims:int32, value:T -> output:T; attr=T:type>; NodeDef: MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/Reshape_port_0_ie_placeholder_0_0, _arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones/Const_port_0_ie_placeholder_0_1). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
         [[Node: MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]
    (_arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/Reshape_port_0_ie_placeholder_0_0, _arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones/Const_port_0_ie_placeholder_0_1)]]

    Any clues? Thanks a lot!

  • @azmath Hi! Are you using SSD or SSD-MobileNet?

  • @alex_z, @WuXinyang
    Hi all! I am using SSD MobileNet (TensorFlow) on a Raspberry Pi, but it is so slow that it cannot be used for real-time apps. I hear there is a magic called NCS for fast processing. How can I use it? Just point me in the right direction and give a GitHub link. Thank you!

  • @alex_z ssd_mobilenet_v1_coco. I used the one from the TensorFlow Object Detection model zoo. TF version is 1.8.

  • @azmath What archive file did you use for model training? I use ssd_mobilenet_v1_coco_2017_11_17.tar.gz and TF 1.4. (The NodeDef error above usually means the graph was generated with a newer TF than the one interpreting it, as the message itself suggests.)

  • @alex_z

    Thank you!
    OK, so I managed to get it converted, but I am not able to run it using the Inference Engine.

    /opt/intel/computer_vision_sdk/deployment_tools/demo$../inference_engine/samples/build/intel64/Release/classification_sample -d CPU -i car.png -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
    [ INFO ] InferenceEngine: 
        API version ............ 1.1
        Build .................. 11653
    [ INFO ] Parsing input parameters
    [ INFO ] Loading plugin
    
        API version ............ 1.1
        Build .................. lnx_20180510
        Description ....... MKLDNNPlugin
    [ INFO ] Loading network files:
        ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
        ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.bin
    [ INFO ] Preparing input blobs
    [ WARNING ] Image is resized from (787, 259) to (300, 300)
    [ INFO ] Batch size is 1
    [ INFO ] Preparing output blobs
    [ ERROR ] Incorrect output dimensions for classification model
    

    What can be done?

    Thanks a lot for your time.

  • @azmath Try the object_detection_demo_ssd_async sample; classification_sample expects a single classification output rather than SSD's DetectionOutput blob, hence the dimension error. A sample invocation is sketched below.
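
    Presumably it is invoked like the other samples (IR path as in your log; the async demo expects a video or camera input, and -d MYRIAD would target the NCS):

    ../inference_engine/samples/build/intel64/Release/object_detection_demo_ssd_async -d CPU -i <video_file> -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml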

  • Oh! I had to use object_detection_sample_ssd.

    But I still get this error:

    ../inference_engine/samples/build/intel64/Release/object_detection_sample_ssd -d CPU -i car.png -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
    [ INFO ] InferenceEngine: 
        API version ............ 1.1
        Build .................. 11653
    Parsing input parameters
    [ INFO ] Loading plugin
    
        API version ............ 1.1
        Build .................. lnx_20180510
        Description ....... MKLDNNPlugin
    [ INFO ] Loading network files:
        ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
        ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.bin
    [ INFO ] Preparing input blobs
    [ INFO ] Batch size is 1
    [ INFO ] Preparing output blobs
    [ INFO ] Loading model to the plugin
    [ ERROR ] Supported primitive descriptors list is empty for node: Postprocessor/convert_scores
    

    Can anyone help?
