
Support Keras?

Since Movidius now supports TensorFlow, does mvNCCompile also support Keras with the TensorFlow backend?

Comments

  • I would also like to know the answer to this. It must be possible, but is any coding or configuration required?

  • I am still working on it, but I think the input and output should be implemented in raw TensorFlow, and you can stack your layers with Keras in between; a rough sketch is below.
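
    A minimal sketch of what I mean (assuming TensorFlow 1.x with the standalone Keras package; the layer and node names here, like "x" and "output", are just my own choices):

    import tensorflow as tf
    from keras import backend as K
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    # Raw TensorFlow placeholder as the input; Keras layers accept it directly.
    img_input = tf.placeholder(tf.float32, shape=(None, 224, 224, 3), name="x")

    # Everything in between is stacked from Keras layers.
    x = Conv2D(32, (3, 3), activation='relu', padding='same', name='conv1')(img_input)
    x = MaxPooling2D((2, 2), name='pool1')(x)
    x = Flatten(name='flatten')(x)
    output = Dense(2, activation='softmax', name='output')(x)

    with tf.Session() as sess:
        K.set_session(sess)  # let Keras share this session
        sess.run(tf.global_variables_initializer())
        tf.train.Saver().save(sess, './model')  # writes ./model.meta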

  • As I said above, I use raw TensorFlow to implement the input and output, and everything in between is built with Keras. I then trained my model on my laptop. Everything went well and the model was saved as a .meta file, but when I use mvNCCompile to compile the model, this error occurs:

    InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'y_true' with dtype float and shape [?,2]
    [[Node: y_true = Placeholder[dtype=DT_FLOAT, shape=[?,2], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]


    import tensorflow as tf
    from keras.layers import Conv2D, MaxPooling2D, Dense

    img_input = tf.placeholder(tf.float32, shape=(None, 224, 224, 3), name="x")
    y_true = tf.placeholder(tf.float32, shape=(None, 2), name="y_true")

    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)

    # dense layers are removed
    predictions = Dense(2, activation="softmax")(x)

    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=predictions))
    train_step = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)
    correct_pred = tf.equal(tf.argmax(predictions, 1), tf.argmax(y_true, 1))
    acc_value = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

    And I think I do feed a value to y_true:


    for j in range(n_batch):
        start = j * BATCH_SIZE
        batch_train = train[start:start+BATCH_SIZE]
        X_batch_train = np.array([i[0] for i in batch_train])
        y_batch_train = np.array([i[1] for i in batch_train])
        feed_dict_train = {img_input: X_batch_train, y_true: y_batch_train}
        _, acc = sess.run([train_step, acc_value], feed_dict=feed_dict_train)
        if j % 10 == 0:
            print("training accuracy:{}".format(acc))

    Is it a problem with the compile command, or with my code itself?
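
    My current guess is that it is the saved graph itself: mvNCCompile loads the whole .meta file, and the training subgraph (y_true, the loss, and the Adam optimizer) is still in it, so the compiler runs into the y_true placeholder and demands a feed. A workaround I have seen suggested (not an official answer) is to rebuild an inference-only graph, restore the trained weights into it, and save a fresh .meta for the compiler. A self-contained sketch with a toy model; the names "x" and "output" are my own choices, not anything required by the SDK:

    import numpy as np
    import tensorflow as tf
    from keras import backend as K
    from keras.layers import Conv2D, Flatten, Dense

    def forward(img_input):
        # Shared forward pass, reused for training and for export.
        x = Conv2D(8, (3, 3), activation='relu', padding='same', name='conv1')(img_input)
        x = Flatten(name='flatten')(x)
        return Dense(2, activation='softmax', name='output')(x)

    # Training graph: forward pass plus y_true / loss / optimizer.
    train_graph = tf.Graph()
    with train_graph.as_default(), tf.Session() as sess:
        K.set_session(sess)
        img_input = tf.placeholder(tf.float32, (None, 32, 32, 3), name='x')
        y_true = tf.placeholder(tf.float32, (None, 2), name='y_true')
        predictions = forward(img_input)
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=predictions))
        train_step = tf.train.AdamOptimizer(1e-3).minimize(loss)
        sess.run(tf.global_variables_initializer())
        sess.run(train_step, feed_dict={
            img_input: np.zeros((4, 32, 32, 3), np.float32),
            y_true: np.eye(2, dtype=np.float32)[[0, 1, 0, 1]]})
        tf.train.Saver().save(sess, './trained')

    # Export graph: the forward pass ONLY -- no y_true, no loss, no Adam.
    export_graph = tf.Graph()
    with export_graph.as_default(), tf.Session() as sess:
        K.set_session(sess)
        img_input = tf.placeholder(tf.float32, (None, 32, 32, 3), name='x')
        forward(img_input)
        saver = tf.train.Saver()          # only knows the forward-pass variables
        saver.restore(sess, './trained')  # weights match up by name
        saver.save(sess, './inference')   # inference-only .meta for mvNCCompile

    If that works, the compile step should then be something like mvNCCompile inference.meta -in x -on output/Softmax -s 12 (check the actual output node name first, e.g. by printing the graph's operations).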
