I am getting this error on compiling a Caffe Network
[Error 35] Setup Error: Not enough resources on Myriad to process this network.
How to resolve this?
I meet the same problem, and my caffe model is about 200MB. Does anybody have any ideas to solve it?
@AshwinVijayakumar Did you manage to solve this problem?
@Tome_at_Intel Can you assist here?
@gauravmoon, can you please share your model so that we can try to reproduce & analyze it on our bench?
@xxys @gauravmoon We are investigating this problem at the moment. As soon as we find the root cause, I will contact you both. Thanks.
@Tome_at_Intel Any update? When can we expect a fix?
@Tome_at_Intel any update? Getting the same error when trying to compile a VGG-based SSD.
@macsz If you are receiving this error, this likely means that the model's intermediate processing memory requirements are too large to be processed on the NCS device.
I'm getting the same error: "Setup Error: Not enough resources on Myriad to process this network". This is for my custom network, though. I have tried a couple of .pb files, 15 MB and 50 MB in size.
Are there any debug switches I can enable to get more information? Please provide an email address to which I can send my model.
Since it's my custom network, I can modify it to fit onto the NCS, but I need to understand what should be modified.
@email@example.com Please send me a PM with your model link and I can take a look at it for you.
When I use mvNCCompile on a .meta file of about 320 KB, I get the same error :-/
I use API version 2 and printed the current memory usage with "print('#################', device.get_option(mvncapi.DeviceOption.RO_CURRENT_MEMORY_USED))" in the TensorFlowParser.py file (at line 370, in the parse_tensor part). I suspected that the memory usage would grow while the mvNCCompile script runs, but the output shows that the memory usage stays the same. In particular, it is lower than the total memory size, which I got with "total_memory = device.get_option(mvncapi.DeviceOption.RO_MEMORY_SIZE)".
@Tome_at_Intel so which "memory" do you mean in your post: "this likely means that the model's intermediate processing memory requirements are too large to be processed on the NCS device."?
@sneey The NCS comes with 4 Gb (i.e. 500 MB) of DRAM (NCS specs here). A portion of the 500 MB is set aside for graph file allocation and another portion for intermediate processing. If you are receiving this error, your model is likely exceeding one of these memory limits.
You can check your model's intermediate processing memory requirement by editing the FileIO.py file in your /usr/local/bin/ncsdk/Controllers directory. There is a debug flag, and if you set it to True, you can see the memory requirements for your model when running mvNCCheck or mvNCCompile.
Thanks for your help @Tome_at_Intel! The output shows that the stick has 133 MB of storage for intermediate processing, and my model needs 148 MB...
So a solution could be to get a stick with more than 500 MB DRAM, so the 4GB Version?
The "NCS specs here" link says "VPU includes 4Gbits of LPDDR3 DRAM"... so now I am a bit confused :-/ A 500 MB version is not mentioned.
@sneey In my previous message I meant that 4 Gigabits (Gb), not to be confused with 4 Gigabytes (GB), is equal to 500 Megabytes (MB).
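For anyone else tripped up by the units: the conversion can be checked with a few lines of Python (assuming decimal SI prefixes, as the datasheet uses):

```python
# Convert the datasheet's "4 Gbits of LPDDR3 DRAM" to megabytes.
BITS_PER_BYTE = 8

dram_gigabits = 4
dram_bits = dram_gigabits * 1_000_000_000   # 4 Gb in bits (SI, decimal)
dram_bytes = dram_bits // BITS_PER_BYTE     # 500,000,000 bytes
dram_megabytes = dram_bytes // 1_000_000    # 500 MB

print(dram_megabytes)  # 500
```

So "4 Gb" (gigabits) on the spec sheet is the same 500 MB quoted in this thread, not 4 GB.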
Aaaah ok, now it's all clear! I confused bits and bytes... In my defense, I'm a mathematician, not a computer scientist... xD
Is there a solution to this problem? Has anyone succeeded in running OpenPose on the Movidius?