
How to use my own TF Object Detection API model in the NCSDK 2?

Hi everyone. I now have frozen_inference_graph.pb, model.ckpt.data-00000-of-00001, model.ckpt.index, and model.ckpt.meta.
What should I do next?

Comments

  • Hi @YoucanBaby

    Thank you for reaching out! Now that you have your TensorFlow model, you will need to convert it to an Intel Movidius graph file that can be used with the Neural Compute Stick.
    The Neural Compute SDK includes the mvNCCompile tool, which compiles networks into that graph file.

    Take a look at the Tools provided with the NCSDK.

    Once you have a graph file, you will need to write a C or Python program to load the graph into the Neural Compute Stick. I would start by looking at the examples included in the NCSDK and NCAPPZOO.
    https://github.com/movidius/ncsdk/tree/master/examples
    https://github.com/movidius/ncappzoo/tree/master/tensorflow
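To make the second step concrete, here is a minimal sketch of loading a compiled graph and running one inference through the NCSDK v2 Python API. This is hedged: the function and class names (`enumerate_devices`, `Device`, `Graph`, `allocate_with_fifos`, `queue_inference_with_fifo_elem`, `read_elem`) follow the mvncapi v2 documentation as I recall it, so verify them against the SDK version you actually have installed; `'inception_v2.graph'` and `'my_graph'` are placeholder names, and running it requires numpy and an attached Neural Compute Stick.

```python
def top_k(probs, k=1):
    """Indices of the k largest values (pure Python, no SDK needed)."""
    return sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]

def run_inference(graph_path, input_tensor):
    """Load a compiled graph onto the stick and run one inference."""
    # Import here so top_k stays usable without the SDK installed.
    from mvnc import mvncapi

    devices = mvncapi.enumerate_devices()
    if not devices:
        raise RuntimeError('No Neural Compute device found')
    device = mvncapi.Device(devices[0])
    device.open()
    with open(graph_path, 'rb') as f:
        graph_buffer = f.read()
    graph = mvncapi.Graph('my_graph')
    # Allocate the graph on the device and create input/output FIFO queues.
    input_fifo, output_fifo = graph.allocate_with_fifos(device, graph_buffer)
    graph.queue_inference_with_fifo_elem(input_fifo, output_fifo,
                                         input_tensor, None)
    output, _user_obj = output_fifo.read_elem()
    # Release device resources.
    input_fifo.destroy()
    output_fifo.destroy()
    graph.destroy()
    device.close()
    device.destroy()
    return output
```

With hardware attached, usage would look like `probs = run_inference('inception_v2.graph', img)` where `img` is a float32 numpy array preprocessed to the network's input shape, then `top_k(list(probs))[0]` gives the top-1 class index.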

    Hope this helps!

    Regards,
    Jesus

  • Hi @Jesus_at_Intel
    Thanks! It works! :) But I have run into an error. :'(
    When I run mvNCCompile frozen_graph_slim_87%.pb -s 12 -in=input -on=InceptionV2/Predictions/Reshape_1 -is 224 224 -o inception_v2.graph , I get the following error.

    Caused by op 'InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise', defined at:
    File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
    File "/usr/local/bin/mvNCCompile", line 104, in create_graph
    net = parse_tensor(args, myriad_config)
    File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 211, in parse_tensor
    tf.import_graph_def(graph_def, name="")
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
    op_def=op_def)
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

    InvalidArgumentError (see above for traceback): NodeDef mentions attr 'dilations' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_input_0_0, InceptionV2/Conv2d_1a_7x7/depthwise_weights). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
    [[Node: InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_input_0_0, InceptionV2/Conv2d_1a_7x7/depthwise_weights)]]

    I used Bazel to check the input/output nodes of my model. The output follows:
    Found 1 possible inputs: (name=input, type=float(1), shape=[1,224,224,3])
    No variables spotted.
    Found 1 possible outputs: (name=InceptionV2/Predictions/Reshape_1, op=Reshape)
    Found 10187114 (10.19M) const parameters, 0 (0) variable parameters, and 0 control_edges
    Op types used: 357 Const, 278 Identity, 70 Conv2D, 69 Relu, 68 FusedBatchNorm, 10 ConcatV2, 8 AvgPool, 5 MaxPool, 2 BiasAdd, 2 Reshape, 1 DepthwiseConv2dNative, 1 Placeholder, 1 Softmax, 1 Squeeze
    To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
    bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/xuyifang/Desktop/ncs/frozen_graph_slim_87%.pb --show_flops --input_layer=input --input_layer_type=float --input_layer_shape=1,224,224,3 --output_layer=InceptionV2/Predictions/Reshape_1

    Can you help me? Thank you very much!

  • Hi @YoucanBaby

    Could you share the model you are trying to compile? I would like to try this myself.
    Which Neural Compute SDK version are you using? Are you using the Neural Compute Stick 1 or 2?
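In the meantime, one observation: the 'dilations' attribute on DepthwiseConv2dNative was introduced in newer TensorFlow releases, so this error usually means the graph was frozen with a newer TensorFlow than the one the NCSDK parser was built against (the message itself hints at this: "Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary."). Re-freezing the model with the TensorFlow version the SDK supports is the cleanest fix. Since dilations=[1, 1, 1, 1] is the no-op default, a common workaround is to strip that attribute from the frozen GraphDef before compiling. A sketch, assuming TensorFlow is installed and with placeholder file names:

```python
from tensorflow.core.framework import graph_pb2

def strip_attr(graph_def, attr_name):
    """Remove attr_name from every node that carries it; return the count."""
    removed = 0
    for node in graph_def.node:
        if attr_name in node.attr:
            del node.attr[attr_name]
            removed += 1
    return removed

def strip_file(in_path, out_path, attr_name='dilations'):
    """Rewrite a frozen .pb with the given attribute deleted everywhere."""
    graph_def = graph_pb2.GraphDef()
    with open(in_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    count = strip_attr(graph_def, attr_name)
    with open(out_path, 'wb') as f:
        f.write(graph_def.SerializeToString())
    return count
```

After running something like strip_file('frozen_graph.pb', 'frozen_graph_stripped.pb'), try mvNCCompile again on the stripped file.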

    Regards,
    Jesus
