Hi all, I've been playing with the examples in the model zoo and have a question about image aspect ratio: how do I ensure maximum accuracy? For example, the SSD MobileNet app resizes input images to 300x300, a square (1:1) ratio:
# Neural network assumes input images are these dimensions.
SSDMN_NETWORK_IMAGE_WIDTH = 300
SSDMN_NETWORK_IMAGE_HEIGHT = 300
But the example video it downloads and uses is 960x540, a 16:9 ratio, for example:
The code uses OpenCV's resize function (cv2.resize). Now, let's compare the results of resizing alone vs. cropping and then resizing. First, this is what squashing that video from 16:9 to 1:1 looks like:
This is what it looks like when you crop first and then resize:
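To make the two preprocessing paths concrete, here is a minimal sketch of the difference. The helper names are my own, and I've used a NumPy nearest-neighbor resize as a stand-in so the snippet is self-contained; in the actual app this would be cv2.resize with its default bilinear interpolation:

```python
import numpy as np

def center_crop_square(img):
    """Crop the largest centered square out of the frame (no distortion)."""
    h, w = img.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    return img[y0:y0 + s, x0:x0 + s]

def resize_nearest(img, size):
    """Nearest-neighbor resize to size x size (stand-in for cv2.resize)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

frame = np.zeros((540, 960, 3), dtype=np.uint8)  # one 16:9 video frame

# Path 1: squash 16:9 directly to 1:1 -- everything is kept but distorted.
squashed = resize_nearest(frame, 300)

# Path 2: crop to 1:1 first, then resize -- shapes stay true but the
# left/right edges of the frame are discarded.
cropped = resize_nearest(center_crop_square(frame), 300)
```

Both paths end at 300x300, but the squashed version compresses objects horizontally (a circle becomes an ellipse), while the cropped version keeps proportions at the cost of losing the sides of the frame.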
These are clearly quite different, and my question is: how does this affect accuracy? Should I crop my video to 1:1 first, or do the example models expect 16:9 video squashed into a 1:1 ratio? To the eye, it seems the model should perform better on images that are not distorted by resizing to a different aspect ratio.
Thanks in advance.