Sample is not working using more than 5 SHAVEs on TX2 as host

I have already asked this question here, but haven't received a proper response yet. In the hope of getting a useful answer, I am raising it again.

Since the model compiled with 12 SHAVEs runs on x86 but not on TX2, I suspect it is a hardware issue.

Please help if anyone knows the reason. Educated guesses are welcome too if you are not sure.


  • 8 Comments
  • Hi @Akhilesh ,

    Thanks for contacting us. I've attempted to install the NCSDK on a TX2, and although I was able to work around the errors/warnings thrown by the install process, I haven't been able to get it to function properly (or at least as far as you have gotten it to work).

    My guess is that it may be a hardware problem, as one of the prerequisites in the installation guide states "x86-64 with Ubuntu (64 bit) 16.04 Desktop". This might be the reason, as I don't think the install on a non-x86 board has been tested (aside from an RPi 3). That said, I am interested in getting the install further; I'd appreciate it if you could share a detailed list of the steps you followed to get the NCSDK to "work". Maybe there is something I can do to help.

    Best Regards,

  • Hi @Luis_at_Intel ,
    I have some more information to share.
    I tried a small network, roughly 2 convolution and 2 dense layers, and it works with 12 SHAVEs. But a big network like VGG16 does not work with 12 SHAVEs.

    If you read the 7th point of the Errata, you will see "Convolution may fail to find a solution for very large inputs." I think this may be the reason VGG16 is not working.

    If you have any hints, please let me know.

  • If the size of the intermediate generated data is too large relative to the size of internal memory, it will not work well.
    Internal memory is only 500 MB.
    According to results I benchmarked a while ago, it consumes roughly "(number of SHAVEs) x (model size)".
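    The rule of thumb above can be turned into a quick budget check. This is only a sketch of the heuristic from the comment (the 500 MB figure and the "(number of SHAVEs) x (model size)" rule come from the thread); the compiler's real memory behavior is not documented here.

```python
# Heuristic memory-budget check based on the rule of thumb above.
INTERNAL_MEMORY_MB = 500.0  # internal memory figure cited in the thread

def estimated_usage_mb(model_size_mb: float, num_shaves: int) -> float:
    """Rule of thumb: usage grows as (number of SHAVEs) x (model size)."""
    return num_shaves * model_size_mb

def fits_in_memory(model_size_mb: float, num_shaves: int) -> bool:
    return estimated_usage_mb(model_size_mb, num_shaves) <= INTERNAL_MEMORY_MB

# The 35.9 MB compiled model from this thread with 12 SHAVEs:
print(round(estimated_usage_mb(35.9, 12), 1))  # 430.8
print(fits_in_memory(35.9, 12))                # True: weights alone still fit
```

    By this rule the compiled model should still fit at 12 SHAVEs, so if the heuristic holds, something other than the model weights alone must be consuming the remaining memory.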

  • Thanks @PINTO , this is really helpful information. But my actual model uses VGG16 as a feature extractor (up to the flatten layer) plus 2 dense layers to classify 7 classes, so the total size of the model is only 35.9 MB.

  • @Akhilesh

    My explanation was inadequate.
    The intermediate data generated is larger than the model file itself.
    There is no way to verify what size the actual intermediate data is.
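    As a rough illustration of why intermediate data can dwarf the model file, even a single activation tensor from an early VGG16 conv layer is sizeable. The 224x224 input and fp16 storage below are assumptions; the compiler's actual buffering strategy is unknown.

```python
# Size of one early VGG16 activation tensor, assuming a 224x224 input
# and fp16 storage (illustrative; actual compiler buffering is unknown).
h, w, c = 224, 224, 64          # output shape of VGG16's first conv block
bytes_per_elem = 2              # fp16
tensor_mb = h * w * c * bytes_per_elem / 1024**2
print(tensor_mb)                # 6.125 MB for a single activation tensor
```

    With several such buffers live at once (layer inputs, outputs, per-SHAVE working sets), peak usage could plausibly exceed the 35.9 MB model file many times over.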

  • @PINTO , I understood. The actual size of the TensorFlow model is around 72 MB, which becomes around 35.9 MB after compiling with mvNCCompile.
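    The roughly 2x shrink (72 MB down to about 35.9 MB) is consistent with the compiler storing weights in fp16, which the Myriad VPU uses natively. This is an inference from the numbers in the thread, not something stated in it; a quick sanity check:

```python
# Sanity check: converting fp32 weights to fp16 halves their storage.
fp32_model_mb = 72.0                        # TensorFlow model size reported above
expected_fp16_mb = fp32_model_mb * 2 / 4    # 2 bytes (fp16) vs 4 bytes (fp32)
print(expected_fp16_mb)                     # 36.0, close to the reported 35.9 MB
```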

  • Hi @Luis_at_Intel and @PINTO ,
    So far, I had been working with a SOM carrying the TX2 chip. But when I checked it on a TX2 board, it works properly.

This discussion has been closed.