
Wrong weights conversion during graph generation

Hi everyone,

I'm trying to get a very simple neural network working, with the following structure:
- Input of 3×84×3 (rows, cols, channels)
- Conv2D with a 3×3 kernel, 1 filter, and strides of (1, 3)
- A Flatten layer, whose output has only 28 elements
- A series of Dense layers, 28 → 256 → 128 → 64 → 32 → 5, all with ReLU activations except for the last one, which is linear (sketched below)
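
For context, the dense stack inside stage_4_1510.h5 should look roughly like the following. This is a minimal sketch: the Sequential structure and the layer names sequential_1 / dense_5 are assumptions inferred from the output tensor name in the compiler log further down.

from keras.models import Sequential
from keras.layers import Dense

# Assumed contents of stage_4_1510.h5:
# 28 -> 256 -> 128 -> 64 -> 32 -> 5, ReLU everywhere except the linear head.
initial_model = Sequential(name='sequential_1')
initial_model.add(Dense(256, activation='relu', input_shape=(28,)))
initial_model.add(Dense(128, activation='relu'))
initial_model.add(Dense(64, activation='relu'))
initial_model.add(Dense(32, activation='relu'))
initial_model.add(Dense(5, activation='linear', name='dense_5'))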

import numpy as np
from keras.models import load_model, Model
from keras.layers import Input, Conv2D, Flatten

# Load the pretrained dense stack and freeze it.
initial_model = load_model('stage_4_1510.h5')
#initial_model.load_weights('stage_4_1510_weigths.h5')
initial_model.summary()
initial_model.trainable = False

# Rebuild the convolutional front end on top of it.
input = Input(shape=(3, 84, 3), name='input')
x = Conv2D(filters=1, kernel_size=(3, 3), use_bias=False,
           strides=(1, 3), padding='valid', trainable=False,
           name='conv2d')(input)
x = Flatten()(x)
x = initial_model(x)

model = Model(inputs=input, outputs=x)
#model.compile(loss='mse', optimizer='rmsprop')

model.load_weights('stage_4_1510.h5', by_name=True)

# A Conv2D kernel has shape (kernel_h, kernel_w, in_channels, filters),
# and set_weights expects a list with one array per weight tensor.
weights = np.ones((3, 3, 3, 1))
model.get_layer('conv2d').set_weights([weights])

model.summary()

The network was trained in Keras, so I had to convert the model from .h5 to a .pb file. At that point, to verify that everything was correct, I printed out the weights and they were all as expected.
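
For what it's worth, the .h5 → .pb conversion I used was along these lines. This is a minimal sketch assuming TF 1.x (which the NCSDK toolchain targets); the model.pb filename is a placeholder, and the output node name is taken from the compiler log below.

import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K

# Freeze the Keras session graph into a single .pb, replacing
# variables with constants (TF 1.x style, as used with the NCSDK).
sess = K.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ['sequential_1/dense_5/BiasAdd'])
tf.train.write_graph(frozen, '.', 'model.pb', as_text=False)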

So I used mvNCCompile to generate the .graph file, and it completed without problems, printing the following message:

No Bias
No Bias
No Bias
No Bias
No Bias
Fusing DeptwiseConv + Pointwise Convolution into plain Convolution
Fusing Add and Batch after Convolution
Eliminate layers that have been parsed as NoOp
Fusing Pad and Convolution2D
Fusing Scale after Convolution or FullyConnect
Fusing standalone postOps
Fusing Permute and Flatten
Fusing Eltwise and Relu
Fusing Concat of Concats

Evaluating input and weigths for each hw layer

Network Input tensors ['input:0#17']
Network Output tensors ['sequential_1/dense_5/BiasAdd:0#33']
Blob generated
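
For reference, the compile command was along these lines (a sketch: the model.pb filename and the output graph name are placeholders, and the -in/-on node names come from the tensors reported above):

mvNCCompile model.pb -in input -on sequential_1/dense_5/BiasAdd -o stage_4.graph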

Everything seems fine so far, but unfortunately, at inference time, the network gives random outputs (not coherent with the original network's).
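
For context, I run inference on the stick roughly like this. This is a sketch using the NCSDK v1 Python API; the stage_4.graph filename and the all-ones input are placeholders.

import numpy as np
from mvnc import mvncapi as mvnc

# Open the first attached NCS device.
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load the compiled blob and run one inference (the NCS takes fp16 input).
with open('stage_4.graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())
dummy = np.ones((3, 84, 3), dtype=np.float16)  # placeholder input
graph.LoadTensor(dummy, 'user object')
output, userobj = graph.GetResult()
print(output)

graph.DeallocateGraph()
device.CloseDevice()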

I'm fairly sure the problem is in the conversion, and in particular in the fully connected part, because with some tests I verified that the output of the convolutional part is correct.
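
The test was roughly the following: build a truncated Keras model that stops at the Flatten output and compare its prediction against what the device returns for the same input (on the device side, that means compiling a second graph with -on pointing at the Flatten node). This is a sketch; the flatten_1 layer name is an assumption, since Keras auto-names the unnamed Flatten layer.

# Truncated model: input -> conv2d -> flatten, to check the
# convolutional part in isolation.
partial = Model(inputs=model.input,
                outputs=model.get_layer('flatten_1').output)
sample = np.ones((1, 3, 84, 3), dtype=np.float32)
print(partial.predict(sample))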

So, considering that I can get my hands on the compiler and dig into it, what should I do? Do you have any ideas? Thanks!
