
Multiple output nodes for mvNCCompile and tf object detection

I would like to compile a trained TensorFlow model for object detection (based on tensorflow/models: models/research/object_detection). Such a model has multiple outputs: boxes, scores, classes, number of detections.
Is it possible to pass multiple parameters to the -on key of mvNCCompile?
Is there any advice on using tf object detection models?
Thank you


  • 10 Comments
  • @ostfor As of NCSDK 1.12, the -on option for mvNCCompile only takes one output node. For TensorFlow object detection, currently there is only one network that I know of that works with the NCSDK: Tiny Yolo V2. You can generate a model for use with an NC device by following the steps at

  • Hi @Tome_at_Intel, I'm trying to do a similar thing. I have seven outputs from my model and I don't know how to transform my .pb file into a graph file for the Movidius Neural Compute Stick.

    If it's possible for Yolo, I can't understand why it isn't possible for any other detector; Yolo also has multiple outputs.
    I've followed the link you provided, but there's no code to generate Yolo's graph.
    Thank you.

  • @EscVM To compile an NCS-compatible graph file, you must have the NCSDK installed and use the mvNCCompile tool to compile a graph file for your model. For example: mvNCCompile my_model.pb -s 12. For more information you can visit

    As for TensorFlow object detectors, there are still some TensorFlow operations required by these object detector models that are not yet supported.

  • Thanks @Tome_at_Intel for your reply.

    I know that I have to use the mvNCCompile my_model.pb -s 12 command, but my application's architecture has seven output nodes, and as far as I know mvNCCompile can't handle that. However, Yolo has several outputs too, so I'd like to know how it was possible to generate the graph for the Neural Compute Stick in that case.

  • @EscVM Although Yolo has multiple outputs like bounding box coordinates, scores, classes, etc., it shouldn't be an issue. The graph file should still generate, and when running an inference with your application, it will return an array with all of the information you need (number of objects detected, bounding box coordinates, scores, etc.).
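To illustrate the single-array return described above, here is a hedged numpy sketch of unpacking such a flat result on the host. The layout (element 0 = detection count, then 7 floats per detection: image_id, class_id, score, four box coordinates) is an assumption modeled on SSD-style packed outputs; check your model's actual format before relying on it.

```python
import numpy as np

def parse_flat_detections(output, floats_per_det=7):
    """Unpack a flat detection array: [count, det0..., det1..., ...].
    The record layout is an assumption; adjust to your model's output."""
    num = int(output[0])
    dets = []
    for i in range(num):
        base = 1 + i * floats_per_det
        rec = output[base:base + floats_per_det]
        dets.append({
            "class_id": int(rec[1]),            # rec[0] would be image_id
            "score": float(rec[2]),
            "box": rec[3:7].tolist(),           # x1, y1, x2, y2 (normalized)
        })
    return dets

# Two fake detections packed into one flat array, for demonstration only.
flat = np.array([2,
                 0, 3, 0.9, 0.1, 0.2, 0.5, 0.6,
                 0, 7, 0.8, 0.3, 0.3, 0.7, 0.9], dtype=np.float32)
print(parse_flat_detections(flat))
```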

  • Is there example code showing how the Yolo graph was generated using the NCSDK toolkit? Thanks

  • @EscVM Here is an example using Tiny Yolo v1 on Caffe. The results, with all of the output, can be interpreted as described in my post at

  • I'm trying to use the AIY Vision Kit (VisionBonnet), which is designed by Google and built around the Myriad 2.
    The Vision Bonnet seems to support TensorFlow SSD MobileNetV1 and can produce multiple outputs. (I already ran the demo, but still have some issues with my own model.)
    Maybe someone has some experience to share. @Tome_at_Intel

  • I have the same issue: I can't find the syntax to fill the -on argument with multiple outputs. I'm working with a Faster R-CNN model and it has 4 outputs. I need something like this: mvNCCompile -s 12 frozen_inference_graph.pb -in=image_tensor -on=[Layer1, Layer2, Layer3, Layer4] (this is just an example, it doesn't work!).

  • I was thinking of modifying the graph (pb) file like this:
    collect all the outputs from all required nodes and create a single tensor as the final output. That tensor could then be fed as the output layer to mvNCCompile.
    I'm not sure if it works.
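That pack-everything-into-one-output idea can be sketched in plain numpy (the names, shapes, and packing order here are all assumptions; in the actual graph you would do the equivalent with TensorFlow reshape/concat ops before freezing):

```python
import numpy as np

def pack_outputs(boxes, scores, classes, num_detections):
    """Flatten the four detector outputs into one 1-D tensor so that a
    single node could be named with -on. Assumed shapes: boxes (N, 4),
    scores (N,), classes (N,), num_detections scalar."""
    return np.concatenate([
        np.array([num_detections], dtype=np.float32),
        boxes.astype(np.float32).ravel(),
        scores.astype(np.float32),
        classes.astype(np.float32),
    ])

def unpack_outputs(flat, n_max):
    """Invert pack_outputs on the host after inference."""
    num = int(flat[0])
    boxes = flat[1:1 + 4 * n_max].reshape(n_max, 4)
    scores = flat[1 + 4 * n_max:1 + 5 * n_max]
    classes = flat[1 + 5 * n_max:1 + 6 * n_max]
    return boxes[:num], scores[:num], classes[:num], num
```

Whether the added reshape/concat ops themselves compile under the NCSDK is a separate question; this only illustrates the packing and host-side unpacking scheme.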

    I wanted to check whether one of the nodes (out of detection_boxes, detection_scores, num_detections, detection_classes) compiles with mvNCCompile [TensorFlow MobileNet-SSD]:
    mvNCCompile model.ckpt.meta -s 12 -in=image_tensor -on=num_detections -o output.graph

    Traceback (most recent call last):
    File "/usr/local/bin/mvNCCompile", line 118, in
    create_graph(, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
    File "/usr/local/bin/mvNCCompile", line 104, in create_graph
    net = parse_tensor(args, myriad_config)
    File "/usr/local/bin/ncsdk/Controllers/", line 259, in parse_tensor
    input_data = np.random.uniform(0, 1, shape)
    File "mtrand.pyx", line 1307, in mtrand.RandomState.uniform
    File "mtrand.pyx", line 242, in mtrand.cont2_array_sc
    TypeError: 'NoneType' object cannot be interpreted as an integer

    how to solve this error?
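One possible reading of that traceback (an assumption, not a confirmed diagnosis): mvNCCompile appears to feed the placeholder's shape straight into np.random.uniform, and TF object-detection exports typically declare image_tensor with dynamic (None) dimensions. Plain numpy reproduces the exact error, and substituting concrete dimensions, as re-exporting the model with a fixed input size would, makes it go away:

```python
import numpy as np

# A TF object-detection placeholder often has shape (None, None, None, 3);
# numpy cannot draw a random array from None-sized dimensions.
dynamic_shape = (None, None, None, 3)
try:
    np.random.uniform(0, 1, dynamic_shape)
except TypeError as e:
    print(e)  # 'NoneType' object cannot be interpreted as an integer

# With concrete dimensions (e.g. batch 1, 300x300 RGB as for SSD-MobileNet)
# the same call succeeds:
fixed_shape = (1, 300, 300, 3)
sample = np.random.uniform(0, 1, fixed_shape)
print(sample.shape)  # (1, 300, 300, 3)
```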

    The second-best alternative is to convert the TensorFlow model to Caffe and then pass it to mvNCCompile.
    I don't know why TensorFlow support is so poor!

    Anyway, was anybody able to compile the TensorFlow MobileNet-SSD with the Movidius SDK?

    I'm just asking because it's easier to **retrain** the TensorFlow MobileNet-SSD to detect custom objects.
