
Different results from the same program on NCS1 and NCS2

Hi all,

I have a first-generation NCS and recently purchased an NCS2, and I have been setting up the software environment on Windows (with an Intel i7 CPU). I've installed the software dependencies, including Visual Studio 2017, OpenVINO, and the MYRIAD driver.

I'm testing a simple network of densely connected layers with 1D inputs/outputs, e.g. [1,X] as input and [1,Y] as output; all variables are floats. I use the network converted to FP16 for MYRIAD devices and FP32 for the CPU. The NCS1 works fine and outputs results close to the CPU's. However, when I replace the NCS1 with the NCS2, the network runs but outputs incorrect results: the correct outputs are floats between 0 and 1, which is what the NCS1 and CPU return, but the NCS2 outputs numbers far from these values, such as negative numbers and two-digit numbers.

The thing is, I'm using exactly the same program (and the same input arguments, -m "FP16/my_model.xml" -d MYRIAD), so I cannot understand why the NCS1 and NCS2 would output different results. Oddly, the SqueezeNet classification sample returns similar results on the NCS1 and NCS2, and the only parts of my program that differ from the sample are the input and output parsing, so my guess is that the NCS2 handles data parsing differently. (In the host program I've specified FP32 precision, so it reads and writes floats, and a float* pointer is used to marshal data into the input buffer, while the network itself is FP16. This works with no problem on the NCS1, but does the NCS2 perhaps accept input/output data differently?)
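As a back-of-envelope check on whether FP16 rounding alone could explain the discrepancy (this is my own numpy sketch, not part of the sample code), I compared the CPU and NCS1 outputs quoted at the end of this post against a simple FP32→FP16→FP32 round trip:

```python
import numpy as np

# CPU (FP32) and NCS1 (FP16 network) outputs, copied from the logs below.
cpu  = np.array([0.0766812, 0.271392, 0.0494225,
                 0.265109, 0.0571161, 0.472295], dtype=np.float32)
ncs1 = np.array([0.0767212, 0.27124, 0.0493469,
                 0.266113, 0.0570068, 0.471924], dtype=np.float32)

# Round-tripping the FP32 values through half precision shows roughly how
# much error FP16 storage alone can introduce for values in this range.
fp16_roundtrip = cpu.astype(np.float16).astype(np.float32)

print(np.max(np.abs(fp16_roundtrip - cpu)))  # well under 1e-3
print(np.max(np.abs(ncs1 - cpu)))            # about 1e-3
```

The NCS1-vs-CPU differences are on the order of 1e-3, consistent with FP16 rounding, whereas the NCS2 values (e.g. 20.7813 and -21.1563 where ~0.077 and ~0.265 are expected) are orders of magnitude off, so this looks like a data-layout or parsing issue rather than precision loss.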

Or, to make the question shorter: what are the key differences one needs to address in software when programming for the NCS1 versus the NCS2? Are they compatible with exactly the same software, or does the NCS2 need a special format or arguments? Could this also be a version problem with OpenVINO or the driver (although I installed them a week ago, so they should be fairly up to date)?

Thanks a lot!

P.S. The network is a Keras model, exported with the freeze-graph method (converting variables to constants) to a TensorFlow .pb file. The .pb file is converted with mo.py, specifying FP16 or FP32 as well as an input shape of [1,X]. (In the output samples below the network has an input of [1,4] and an output of [1,6].) I wonder if the IR needs to be compiled differently for the NCS2, or whether the same xml & bin files should suffice?
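For reference, the conversion step looks roughly like this (file names and output directories here are placeholders; the flags are the standard Model Optimizer ones):

```shell
# FP16 IR for the MYRIAD devices (NCS1/NCS2)
python mo.py --input_model my_model.pb --input_shape "[1,4]" \
             --data_type FP16 --output_dir FP16

# FP32 IR for the CPU plugin
python mo.py --input_model my_model.pb --input_shape "[1,4]" \
             --data_type FP32 --output_dir FP32
```

The same FP16 xml/bin pair is what I load on both the NCS1 and the NCS2.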


Attached are the outputs of the inference program running on CPU, NCS1, and NCS2:

CPU:
[ INFO ] InferenceEngine:
API version ............ 1.4
Build .................. 19154
arguments:1
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] car.png
[ INFO ] Loading plugin

    API version ............ 1.5
    Build .................. win_20181005
    Description ....... MKLDNNPlugin

[ INFO ] Loading network files:
FP32/my_model.xml
FP32/my_model.bin
[ INFO ] Preparing input blobs
input dim0 (size):1
input dim1 (variables):4
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs


output data:

0:0.0766812
1:0.271392
2:0.0494225
3:0.265109
4:0.0571161
5:0.472295

[ INFO ] Inference complete.


NCS1:
[ INFO ] InferenceEngine:
API version ............ 1.4
Build .................. 19154
arguments:1
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] car.png
[ INFO ] Loading plugin

    API version ............ 1.5
    Build .................. 19154
    Description ....... myriadPlugin

[ INFO ] Loading network files:
FP16/my_model.xml
FP16/my_model.bin
[ INFO ] Preparing input blobs
input dim0 (size):1
input dim1 (variables):4
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs


output data:

0:0.0767212
1:0.27124
2:0.0493469
3:0.266113
4:0.0570068
5:0.471924

[ INFO ] Inference complete.


NCS2:
[ INFO ] InferenceEngine:
API version ............ 1.4
Build .................. 19154
arguments:1
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] car.png
[ INFO ] Loading plugin

    API version ............ 1.5
    Build .................. 19154
    Description ....... myriadPlugin

[ INFO ] Loading network files:
FP16/my_model.xml
FP16/my_model.bin
[ INFO ] Preparing input blobs
input dim0 (size):1
input dim1 (variables):4
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs


output data:

0:20.7813
1:14.4219
2:6.92188
3:-21.1563
4:-8.58594
5:7.01563

[ INFO ] Inference complete.

Comments

  • 4 Comments
  • edited March 7

    Update:

    I've tested with other samples, such as the text_detection_demo program in the sample solution, which uses "text-detection-0001.xml" to find text fields in an image.
    Again, both the CPU and the NCS1 output correct results, while the NCS2 fails to load. (The program runs up to the point where the infer request is submitted to the NCS2, and then gets stuck without any output.)

    This again makes me wonder whether there is any difference in the Inference Engine API calls, program arguments, or network IR compilation when using the NCS versus the NCS2, since the same program and arguments apparently return different results from the two devices. For instance, do they support different sets of network layers? Or do they have different internal precision implementations?

    Alternatively, could it just be a faulty device? Is there some way to check and debug the NCS/NCS2 devices?

    Thanks.

  • Hi @Kyme

    I ran a quick test with the text_detection_demo sample on both the NCS 1 and NCS 2. The results are very similar on both devices; however, the NCS 2 did take a little longer to show the results. Could you tell me which version of the OpenVINO toolkit you are using? I tested on a Linux install; I will set up my Windows environment as well.

    Have you also opened a thread on the Computer Vision forum? The NCS forum is mainly for the Neural Compute SDK.

    Regards,
    Jesus

  • edited March 7

    Thanks for the reply!

    The version is "computer_vision_sdk_2018.5.456", and I'm using Visual Studio 2017 version 15.4.5 on Windows 10. I've used the sample project from "deployment_tools\inference_engine\samples\text_detection_demo" and the FP16 model from "deployment_tools\intel_models\text-detection-0001\FP16". The sample runs with the arguments -r -i input.png -m "FP16\text-detection-0001.xml" -d MYRIAD (or with the FP32 model and -d CPU).

    On the CPU and NCS1 there is no problem with inference (a window pops up with green boxes around the text in the image), but with the NCS2 the program appeared to freeze. As you suggested, I waited a while to see if it was just taking more time, and it did eventually infer and return results, but it took significantly longer (a whole minute) for the pop-up window to appear, while the NCS1 took about 2 s. Is this normal and similar to your test, or should it usually not take a minute (in which case, might this be a faulty unit)?

    With other models, some (classification) run okay on the NCS2 and return results, though I don't know if the inferred probabilities are correct (they look plausible but differ from the NCS1's numbers, despite using the same model and precision); others return completely incorrect results on the NCS2 but work fine on the NCS1 and CPU (like the simple densely connected network with 1D input/output that I tested).

    Overall, may I ask whether the NCS1 and NCS2 should normally return similar results and support exactly the same programs and models (without modified arguments)? Alternatively, would incorrect results (or much longer inference times) suggest a hardware problem?

    Thanks again. Looking forward to your thoughts!

    P.S. Thanks for the suggestion too. I actually tried to post the question on the CV forum yesterday, but the server returned an error and wouldn't let me register as a new user. I tried again today and was able to register just now. I'll post the question on the other forum as well.

  • Hi @Kyme

    I don't believe it is a hardware issue with your NCS2, as I am seeing the same results with my NCS2 and OpenVINO on Windows/Linux. If you haven't already, please open a bug report on the Computer Vision forum and paste the link here for reference.

    Regards,
    Jesus
