Not able to compile with a variable-sized image as input.

I have made an object detector using Keras. This is the model summary:

Layer (type)                                 Output Shape
input_1 (InputLayer)                         (None, None, None, 3)
fcn_conv0 (Conv2D)                           (None, None, None, 32)
fcn_conv1 (Conv2D)                           (None, None, None, 64)
fcn_conv2 (Conv2D)                           (None, None, None, 128)
max_pooling2d_1 (MaxPooling2D)               (None, None, None, 128)
fcn_conv3 (Conv2D)                           (None, None, None, 64)
max_pooling2d_2 (MaxPooling2D)               (None, None, None, 64)
fcn_conv4 (Conv2D)                           (None, None, None, 32)
max_pooling2d_3 (MaxPooling2D)               (None, None, None, 32)
fcn_conv5 (Conv2D)                           (None, None, None, 32)
fcn_conv6 (Conv2D)                           (None, None, None, 512)
fcn_conv7 (Conv2D)                           (None, None, None, 6)
global_max_pooling2d_1 (GlobalMaxPooling2D)  (None, 6)
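
For reference, the summary above corresponds to roughly the following definition (a sketch only: the kernel sizes, padding, and activations are my assumptions; the layer names, channel counts, and the None height/width come from the summary):

from keras.layers import Input, Conv2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model

# Height and width are left as None so the network accepts variable-sized images
inputs = Input(shape=(None, None, 3), name='input_1')
x = Conv2D(32, (3, 3), padding='same', activation='relu', name='fcn_conv0')(inputs)
x = Conv2D(64, (3, 3), padding='same', activation='relu', name='fcn_conv1')(x)
x = Conv2D(128, (3, 3), padding='same', activation='relu', name='fcn_conv2')(x)
x = MaxPooling2D(name='max_pooling2d_1')(x)
x = Conv2D(64, (3, 3), padding='same', activation='relu', name='fcn_conv3')(x)
x = MaxPooling2D(name='max_pooling2d_2')(x)
x = Conv2D(32, (3, 3), padding='same', activation='relu', name='fcn_conv4')(x)
x = MaxPooling2D(name='max_pooling2d_3')(x)
x = Conv2D(32, (3, 3), padding='same', activation='relu', name='fcn_conv5')(x)
x = Conv2D(512, (1, 1), activation='relu', name='fcn_conv6')(x)
x = Conv2D(6, (1, 1), name='fcn_conv7')(x)
outputs = GlobalMaxPooling2D(name='global_max_pooling2d_1')(x)
model = Model(inputs, outputs)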

Since I am dealing with variable-sized objects, I have kept the input dimensions as None and made a custom loss. But after converting my Keras model to a TensorFlow model and compiling it with the mvNCCompile command-line tool, it throws this error:

input_data = np.random.uniform(0, 1, shape)
File "mtrand.pyx", line 1307, in mtrand.RandomState.uniform
File "mtrand.pyx", line 242, in mtrand.cont2_array_sc
TypeError: 'NoneType' object cannot be interpreted as an integer
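
Judging from the traceback, mvNCCompile generates a random test input using the input shape stored in the graph, and with a variable-sized input that shape still contains None, which NumPy cannot interpret as a dimension. A minimal reproduction of the failure:

import numpy as np

shape = (1, None, None, 3)       # what the compiler sees when height and width are left as None
np.random.uniform(0, 1, shape)   # TypeError: 'NoneType' object cannot be interpreted as an integer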

The command I am using is:
mvNCCompile tf_model.meta -in=input_1 -on=fcn_conv11/BiasAdd

I have the TF_Model folder, which contains the checkpoint file, the .meta file, the .data file, and the .index file.
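
For reference, these files can be written from a Keras model with something like the following (a sketch assuming TensorFlow 1.x with the Keras TensorFlow backend; the path ./TF_Model/tf_model mirrors the folder and .meta name used above):

import tensorflow as tf
from keras import backend as K

K.set_learning_phase(0)                              # export the graph in inference mode
# 'model' is the Keras model built above, already constructed in this session
saver = tf.train.Saver()                             # captures all variables in the default graph
saver.save(K.get_session(), './TF_Model/tf_model')   # writes tf_model.meta, .index and .data-* files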

Kindly help. I have been stuck on this for the past week.

Thanks

Comments

  • @bumzo The NCSDK doesn't support models that use variable input sizes. For reference, the NCSDK takes a model and compiles a version of that model in the form of a static Movidius graph file. The input size is set for the graph file and cannot be changed. You can always resize input images to a set input resolution, but I understand that based on what you are doing, it may not be suitable. Currently there isn't a workaround for implementing variable input sizes (a sketch of rebuilding the model with a fixed input size before export is given at the end of this thread).

  • @Tome_at_Intel But then how is it supporting Yolo?
    It also works on the principle of R-FCN and variable input sizes.

  • @bumzo The input size for Tiny Yolo is set at 448x448. This doesn't change. When using input images that are of a different resolution, the images are resized and passed through the model.

  • @Tome_at_Intel thanks for the information.

  • Hi, I am getting the exact same problem, but I took a Resnet Inception V2 model from the TensorFlow Model Zoo, which should be supported, and retrained it on my custom dataset. After retraining, I took the weights from the inference folder and tried to use mvNCCompile on them, and got the exact same error. How can I change the input size to 1 in this case? @bumzo @Tome_at_Intel

  • Hi @ali18997
    I've answered your question on your other thread here.

    Best Regards,
    Sahira
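
Following up on @Tome_at_Intel's answer: since the NCSDK needs a fixed input size, one option is to keep training with the variable-sized input but rebuild the graph on a fixed-size input before exporting for mvNCCompile. A rough sketch, assuming 'model' is the trained variable-size Keras model from the original post and using 448x448 purely as an example resolution:

from keras.layers import Input
from keras.models import Model

# Rebuild the same architecture on a fixed-size input; reusing the trained
# layer objects keeps their weights.
fixed_input = Input(shape=(448, 448, 3))
fixed_output = fixed_input
for layer in model.layers[1:]:           # skip the original variable-size InputLayer
    fixed_output = layer(fixed_output)
fixed_model = Model(fixed_input, fixed_output)

# Export this fixed-shape graph to a TensorFlow checkpoint (as in the original
# post) and run mvNCCompile on the new .meta file. The new Input layer gets its
# own name (e.g. input_2) because input_1 already exists in the session, so
# check the actual input node name before passing it to -in=.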

This discussion has been closed.