
Issue with LightenedCNN running on Myriad 2 - Output results are not matching GPU's results

High-level summary: we are seeing different outputs for the same input image and the same network when run on a GPU versus the Movidius stick (Myriad 2). The network definition comes from the GitHub project "A Light CNN for Deep Face Representation with Noisy Labels". We would like to understand why the outputs differ and how we can fix this so the stick returns the same values as the GPU.

1) The initial step was to compile this neural network: https://github.com/AlfredXiangWu/face_verification_experiment/blob/master/proto/LightenedCNN_C_deploy.prototxt

2) The Movidius parser/compiler returned an error around the slice layer. The definition was modified per the post HERE.

3) The new prototxt compiles properly after modification, see new definition in LightenedCNN_C_deploy.prototxt.txt (attached to this post).

4) Finally, the network is fed a grayscale image and outputs the results in movidius_op.txt (attached to this post).

5) Comparing against the GPU results for the same network definition and the same input image, the outputs do not match; please compare gnu_op.txt (attached to this post).
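To put a number on the mismatch between the two attached output files, a small helper like the following can report both the elementwise gap and the cosine similarity (the function name is mine, and it assumes both files hold whitespace-separated floats in the same order, e.g. loaded with np.loadtxt):

```python
import numpy as np

def compare_outputs(a, b):
    """Return (max absolute difference, cosine similarity) between two
    flattened output vectors, e.g. np.loadtxt("movidius_op.txt") and
    np.loadtxt("gnu_op.txt")."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    max_diff = float(np.max(np.abs(a - b)))
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max_diff, cos
```

For face-representation vectors, cosine similarity is usually the more meaningful metric: a near-1.0 similarity with small elementwise differences would point to precision loss rather than a wrong computation.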

Questions:

a) Can you confirm this type of CNN can actually run on the Movidius stick?
b) Can you confirm the slice operation is supported on the Movidius stick?
c) Is there a tool that can emulate the Movidius stick's behavior on a GPU?
d) We are concerned about the data types used to represent CNN parameters on the stick; we suspect the values may be truncated during inference. Please comment on this.
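On question d: the Myriad 2 reportedly performs inference in 16-bit floating point (FP16), while the GPU run presumably used FP32, so every weight and activation is rounded during conversion. A minimal sketch of that round-trip error (the sample value is arbitrary):

```python
import numpy as np

# FP16 has a 10-bit mantissa (~3 decimal digits of precision), so
# converting an FP32 value to FP16 and back generally loses accuracy.
x32 = np.float32(0.1234567)
x16 = np.float16(x32)             # value as it would be stored in FP16
err = abs(float(x16) - float(x32))
print(f"fp32={x32!r} fp16={float(x16)!r} abs error={err:.2e}")
```

Small per-parameter errors like this can accumulate across the layers of a deep network, which would be consistent with outputs that are close to, but not identical with, the GPU's.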

//////////////////////////////////////////////////

Note: the input picture used for testing is attached to this post. The image underwent two transformations (grayscale normalization and reshaping) before going through the network. The Python code is below:

import numpy as np

im_data = (img_gray - 127.5) * 0.0078125          # scale to roughly [-1, 1]; 0.0078125 == 1/128
ip_data = np.reshape(im_data, (1, 1, 128, 128))   # (N, C, H, W) layout
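As a sanity check, the two preprocessing steps above can be exercised on a synthetic stand-in image (assuming `img_gray` is an 8-bit 128×128 NumPy array, as the reshape implies):

```python
import numpy as np

img_gray = np.full((128, 128), 200, dtype=np.uint8)  # synthetic stand-in image
im_data = (img_gray - 127.5) * 0.0078125
ip_data = np.reshape(im_data, (1, 1, 128, 128))

assert ip_data.shape == (1, 1, 128, 128)
# Pixel 0 maps to (0 - 127.5)/128 ≈ -0.996 and pixel 255 to ≈ +0.996,
# so all values land strictly inside (-1, 1).
assert -1.0 < ip_data.min() and ip_data.max() < 1.0
```

It is worth confirming that the exact same preprocessed array is fed to both the GPU and the stick, so that any remaining difference is attributable to the device itself.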


Trained model is available upon request (big file, 126MB).
