Hey, I'm new to ML, so the answer to my question might be trivial, but unfortunately it isn't for me at the moment.
Here's the situation: I have a Caffe model that detects faces in images. When I run it on my PC it works well, and the output shape is something like (1, 1, 123, 7). But when I compile the model to a graph file with mvNCCompile and run it on the NCS, the output shape for the same input image is something like (1407,). Because of that, I can't use the same post-processing I used to get the bounding boxes and the number of detected faces on the PC, so my questions are:
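For context, on the PC I parse the (1, 1, N, 7) output roughly like this (assuming the standard SSD DetectionOutput layout, where each 7-value row is [image_id, label, confidence, xmin, ymin, xmax, ymax] with normalized coordinates; the dummy array and threshold below are just placeholders):

```python
import numpy as np

# Dummy stand-in for net.forward()['detection_out'] — shape (1, 1, N, 7).
# Assumed row layout: [image_id, label, confidence, xmin, ymin, xmax, ymax],
# with box coordinates normalized to [0, 1].
detections = np.array([[[
    [0, 1, 0.98, 0.10, 0.20, 0.30, 0.40],
    [0, 1, 0.15, 0.50, 0.50, 0.60, 0.60],  # low confidence, gets filtered out
]]], dtype=np.float32)

img_w, img_h = 640, 480   # original image size (example values)
conf_threshold = 0.5

boxes = []
for det in detections[0, 0]:
    image_id, label, conf, x1, y1, x2, y2 = det
    if conf < conf_threshold:
        continue
    # Scale normalized coordinates back to pixel coordinates.
    boxes.append((int(x1 * img_w), int(y1 * img_h),
                  int(x2 * img_w), int(y2 * img_h)))

print(len(boxes), boxes)
```

This gives me the face count (`len(boxes)`) and the pixel-space bounding boxes directly, which is exactly what I can't do with the flat NCS output.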
Is the output supposed to be different? If so, how can I interpret the format of the output on the NCS?
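My current guess, since 1407 is divisible by 7, is that the NCS just flattens the same 7-value detection rows, so something like this might recover them (the buffer below is fake; I don't know whether the first row is real data or metadata such as a detection count):

```python
import numpy as np

# Hypothetical flat NCS output buffer of length 1407 (= 201 * 7).
# Guess: the same 7-value detection rows as on the PC, just flattened,
# so reshaping to (-1, 7) should recover the per-detection structure.
flat = np.zeros(1407, dtype=np.float32)
flat[:7] = [0, 1, 0.9, 0.1, 0.1, 0.2, 0.2]   # one fake detection row

rows = flat.reshape(-1, 7)
print(rows.shape)
```

Is that reshape the right way to read it, or does the flat buffer use a different layout?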
Also, if you happen to know any good tutorial on training Caffe models, I would appreciate it.