Has anyone come across this before? Using NCSDK version 1, I call mvncLoadTensor to load the input pattern, then mvncGetResult to obtain the results (just like the AlexNet C++ example). It all works perfectly when run from the main program, using at most 2% of a single ARM CPU core on a quad-core board. The board is an ASUS Tinker Board, but the same effect is demonstrable on a Raspberry Pi 3B+. It also works fine when the two functions are called from a secondary thread rather than the main thread.

However, if I package the inference code into a dynamic library (.so file), load the library with dlopen(), and obtain the internal inference function (which calls the two NCSDK functions) via dlsym(), the results are exactly the same but CPU usage jumps to 98-100% of all four cores. I find this very strange. Has anyone else seen this behavior?