
Completely wrong values from NCS

I have been working on implementing Caffe MTCNN on the NCS.
After compiling the PNet prototxt with its model and running a sample application, I found that the values are vastly different from what I get from a Caffe application running the same model.

Given the PNet weights (det1.caffemodel, acquired from https://github.com/DuinoDu/mtcnn/tree/master/model ), all output values should lie in the range [-1, 1]; however, the output acquired via the NCS ranges over [-65344, 62912].
There is no correlation between the NCS output and the Caffe output.

Is there a problem with the Caffe softmax layer on NCS?
Please help. Thanks.


name: "PNet"
input: "data"
input_shape {
dim: 1
dim: 3
dim: 328
dim: 148
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 10
kernel_size: 3
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "PReLU1"
type: "PReLU"
bottom: "conv1"
top: "conv1"
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}

layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 16
kernel_size: 3
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "PReLU2"
type: "PReLU"
bottom: "conv2"
top: "conv2"
}

layer {
name: "conv3"
type: "Convolution"
bottom: "conv2"
top: "conv3"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 32
kernel_size: 3
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "PReLU3"
type: "PReLU"
bottom: "conv3"
top: "conv3"
}

layer {
name: "conv4-1"
type: "Convolution"
bottom: "conv3"
top: "conv4-1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 2
kernel_size: 1
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
}

layer {
name: "conv4-2"
type: "Convolution"
bottom: "conv3"
top: "conv4-2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 4
kernel_size: 1
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "prob1"
type: "Softmax"
bottom: "conv4-1"
top: "prob1"
}

layer {
name: "concat"
type: "Concat"
bottom: "conv4-2"
bottom: "prob1"
top: "concat"
}

layer {
name: "output"
type: "Flatten"
bottom: "concat"
top: "output"
}
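As a sanity check (a minimal numpy sketch, not NCS-specific), the prob1 layer is a softmax over the two channels of conv4-1, so its values must lie in [0, 1] and sum to 1 at every spatial position; any output far outside that range points to a conversion problem rather than to the model itself:

```python
import numpy as np

def channel_softmax(x):
    """Softmax over axis 0 (channels) of a C x H x W feature map,
    matching Caffe's Softmax layer with its default axis."""
    e = np.exp(x - x.max(axis=0, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=0, keepdims=True)

# Hypothetical conv4-1 activations: 2 channels, arbitrary spatial size.
conv4_1 = np.random.randn(2, 5, 5) * 10.0
prob1 = channel_softmax(conv4_1)

# Properties any correct softmax output must satisfy:
assert prob1.min() >= 0.0 and prob1.max() <= 1.0
assert np.allclose(prob1.sum(axis=0), 1.0)
```

If the NCS result for prob1 violates these properties, the softmax (or something upstream of it) is not being converted correctly.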

Comments

  • 4 Comments
  • @sejunkim Caffe on the NCS basically supports configurations with 3 channels and square images. You can try changing the input_shape to {1, 3, X, X} and check whether the issue persists.

  • @sejunkim I think the input shapes should be 12*12, 24*24, and 48*48. Do you have a demo running on the NCS?

  • I found that something goes wrong when converting the prototxt to a graph file if a layer's "top" and "bottom" have the same name. You can change the top names of the PReLU layers and try again.

  • @Tome_at_Intel Did you find the problem I mentioned above?

    Regarding PNet:

    name: "PNet"
    input: "data"
    input_shape {
      dim: 1
      dim: 3
      dim: 328
      dim: 148
    }
    layer {
      name: "conv1"
      type: "Convolution"
      bottom: "data"
      top: "conv1"
      param {
        lr_mult: 1
        decay_mult: 1
      }
      param {
        lr_mult: 2
        decay_mult: 0
      }
      convolution_param {
        num_output: 10
        kernel_size: 3
        stride: 1
        weight_filler {
          type: "xavier"
        }
        bias_filler {
          type: "constant"
          value: 0
        }
      }
    }
    
    layer {
      name: "prelu1"
      type: "PReLU"
      bottom: "conv1"
      top: "prelu1"
    }
    
    layer {
      name: "pool1"
      type: "Pooling"
      bottom: "prelu1"
      top: "pool1"
      pooling_param {
        pool: MAX
        kernel_size: 2
        stride: 2
      }
    }
    
    layer {
      name: "conv2"
      type: "Convolution"
      bottom: "pool1"
      top: "conv2"
      param {
        lr_mult: 1
        decay_mult: 1
      }
      param {
        lr_mult: 2
        decay_mult: 0
      }
      convolution_param {
        num_output: 16
        kernel_size: 3
        stride: 1
        weight_filler {
          type: "xavier"
        }
        bias_filler {
          type: "constant"
          value: 0
        }
      }
    }
    
    layer {
      name: "prelu2"
      type: "PReLU"
      bottom: "conv2"
      top: "prelu2"
    }
    
    layer {
      name: "conv3"
      type: "Convolution"
      bottom: "prelu2"
      top: "conv3"
      param {
        lr_mult: 1
        decay_mult: 1
      }
      param {
        lr_mult: 2
        decay_mult: 0
      }
      convolution_param {
        num_output: 32
        kernel_size: 3
        stride: 1
        weight_filler {
          type: "xavier"
        }
        bias_filler {
          type: "constant"
          value: 0
        }
      }
    }
    
    layer {
      name: "prelu3"
      type: "PReLU"
      bottom: "conv3"
      top: "prelu3"
    }
    
    layer {
      name: "conv4-1"
      type: "Convolution"
      bottom: "prelu3"
      top: "conv4-1"
      param {
        lr_mult: 1
        decay_mult: 1
      }
      param {
        lr_mult: 2
        decay_mult: 0
      }
      convolution_param {
        num_output: 2
        kernel_size: 1
        stride: 1
        weight_filler {
          type: "xavier"
        }
        bias_filler {
          type: "constant"
          value: 0
        }
      }
    }
    
    layer {
      name: "conv4-2"
      type: "Convolution"
      bottom: "prelu3"
      top: "conv4-2"
      param {
        lr_mult: 1
        decay_mult: 1
      }
      param {
        lr_mult: 2
        decay_mult: 0
      }
      convolution_param {
        num_output: 4
        kernel_size: 1
        stride: 1
        weight_filler {
          type: "xavier"
        }
        bias_filler {
          type: "constant"
          value: 0
        }
      }
    }
    
    layer {
      name: "prob1"
      type: "Softmax"
      bottom: "conv4-1"
      top: "prob1"
    }
    
    layer {
      name: "concat"
      type: "Concat"
      bottom: "conv4-2"
      bottom: "prob1"
      top: "concat"
    }
    
    layer {
      name: "output"
      type: "Flatten"
      bottom: "concat"
      top: "output"
    }
    

    I have modified the places related to PReLU: each PReLU layer now writes to its own top blob instead of operating in place. The two prototxt files produce the same results in Caffe but different results on the Movidius.
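One way to localize where the NCS output diverges (a sketch assuming NCSDK v1 is installed; the file and graph names below follow the repository linked above) is mvNCCheck, which runs the network through both Caffe and the NCS and compares the two outputs layer by layer:

```shell
# Compare NCS inference against the Caffe reference for the same network.
# det1.prototxt / det1.caffemodel are the PNet files from the repository above;
# -s sets the number of SHAVE cores, -on names the output node to compare.
mvNCCheck det1.prototxt -w det1.caffemodel -s 12 -on prob1

# Once the check passes, compile the network to a graph file for deployment.
mvNCCompile det1.prototxt -w det1.caffemodel -s 12 -o pnet.graph
```

Running the check against the in-place and non-in-place PReLU variants of the prototxt should show directly which one the converter mishandles.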
