Error with the security-cam example under the ncappzoo


I am struggling to run this example.

The error I am getting is "AttributeError: 'numpy.ndarray' object has no attribute 'show'".

Note that when I comment out the offending line (137) then I get the screenshots appearing in the capture directory. But no annotated video appears.

I am using a Raspberry Pi 3 B+, a Pi Cam and version 1 of the SDK. The example is running in a terminal on the Raspbian Desktop. Note that the example runs fine on my Ubuntu machine.

Any suggestions for how to fix this? I just want to get this running with a Pi Camera :smile:

Best Regards


  • I will let you in on something I found out while working on this. The USB web camera performs better because the GPU-to-system-memory path on the Pi is slow. Pi Cam frames go into GPU memory, then come over to system memory where you modify them, and are then sent back to the GPU for display. USB frames come straight into system memory, you modify them, and only then are they sent to the GPU to be displayed.
    That's roughly what I have seen and found out.

  • There was a post someplace in the Pi forums where they spilled the beans on how it works: the Pi Camera goes through GPU memory while USB goes to ordinary system memory. It was telling. I am going to switch hardware platforms - a LattePanda or something with more get-up-and-go..
    That is, once I can actually successfully RE-TRAIN an InceptionV3 network and use it on the stick. So far no such luck.

  • @MarkWest1972 , good catch! I developed on, and have always run on a headless system, so I didn't run into this issue. I am unable to get to this code immediately, but try rerunning the code after commenting out lines 136 and 137. This should let the script run, but the results won't be visualized on the display.

    Based on the error you are reporting, I suspect the issue happens when there are zero detections, which is when img is a numpy array rather than an image object. Try converting img to an image object after line 86 and before line 100.
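For what it's worth, the conversion being suggested here can be sketched like this (a hypothetical helper, not code from the example itself):

```python
import numpy as np
from PIL import Image

def ensure_pil_image(img):
    # When there are zero detections the pipeline may leave img as a raw
    # numpy array, which has no show() method. Wrap it in a PIL Image first.
    if isinstance(img, np.ndarray):
        img = Image.fromarray(img)
    return img

# A dummy 480x640 RGB frame stands in for a captured camera frame
frame = np.zeros((480, 640, 3), dtype=np.uint8)
pil_img = ensure_pil_image(frame)
print(type(pil_img).__name__)  # Image objects support .show()
```

The helper is a no-op when img is already a PIL Image, so it is safe to call on every frame.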

  • Thanks for your reply!

    Your assumption seems to be correct. Converting img to a PIL.Image.Image object removes the error, but there is still no image displayed on the Raspbian desktop...

  • Additional information - after some googling I installed ImageMagick. Now the show() command works, but instead of a real-time video I get lots of snapshots popping up on my desktop. All I want is the same behavior the example shows when I run it on my Ubuntu machine....

  • I've come to the conclusion that I need to install OpenCV and fix this. I'll post back here when I have a working solution :)

  • FYI it works great. Best example in the entire bunch.

  • Oh I agree with you 100% @chicagobob123 - it's a great example! Apologies to @AshwinVijayakumar if I came across as negative!

    My only issue is that the Pi Camera version doesn't display the annotated images as a video when running in the Raspbian Desktop. This isn't strictly needed functionality, but it would be nice to compare with the USB camera version for performance and so on.

    Anyway I'm working on fixing this (I just need a free evening). As soon as I do I'll post the results here.

  • @chicagobob123 Very interesting!

    I got the annotated video display running with a Pi Camera and this seems to confirm what you are saying.

    How I did it

    First I made sure that OpenCV was installed on the Pi, along with the relevant dependencies

    My installation of the NCS SDK is API-only (and version 1), so I didn't have OpenCV installed. Building from source is a pain on the Pi, so I elected to install a pre-compiled version. This took around 5 minutes or so.

    sudo apt-get install python-opencv
    sudo pip3 install opencv-python==
    sudo apt-get install libjasper-dev
    sudo apt-get install libqtgui4
    sudo apt-get install libqt4-test

    Then I made a copy of the file and replaced its main function with the following code

    def main():
        device = open_ncs_device()
        graph = load_graph( device )
        # Main loop: Capture live stream & send frames to NCS
        with picamera.PiCamera() as camera:
            with picamera.array.PiRGBArray( camera ) as frame:
                while( True ):
                    camera.resolution = ( 640, 480 )
                    camera.capture( frame, ARGS.colormode, use_video_port=True )
                    img = pre_process_image( frame.array )
                    infer_image( graph, img, frame.array )
                    # Clear PiRGBArray, so you can re-use it for next capture
                    frame.truncate( 0 )
                    # Display the frame for 5ms, and close the window so that the next
                    # frame can be displayed. Close the window if 'q' or 'Q' is pressed.
                    if( cv2.waitKey( 5 ) & 0xFF == ord( 'q' ) ):
                        break
        close_ncs_device( device, graph )

    Finally I added the following import statements to the head of the new file

    import picamera
    import picamera.array

    The resulting file combines code from the two existing example files.

    Running the new file results in the annotated video being displayed, albeit at a seemingly slower rate than the USB camera version.

    Note that I'm not an expert in using the Pi Camera so it might be possible to tune this for better performance. YMMV :)

    EDIT : Added missing instructions to add import statements.

  • One more thing : I see also that all code using the 'camera' variable APART FROM THAT IN THE MAIN METHOD can also be removed from the new file, as this code is no longer required.

    EDIT : Added text in bold.

  • @chicagobob123 I can now see that you've done more or less the same as I did over on another thread. Great minds think alike :)

  • In case it is of interest to anyone reading this thread, I'm getting around 5 FPS with a Raspberry Pi Camera by using the imutils library (and specifically the VideoStream class) for handling the video stream. This class also allows one to easily switch between a USB and Raspberry Pi Camera.

    From what I can tell the improvement is due to threading. More information is available here.
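As a rough illustration of why threading helps here, this is the idea behind imutils' VideoStream, sketched with a fake camera in place of real hardware (FakeCamera and ThreadedStream are made-up names for this example):

```python
import threading
import time

class FakeCamera:
    """Stand-in for a real camera with slow, blocking capture I/O."""
    def __init__(self):
        self.count = 0
    def read(self):
        time.sleep(0.01)  # simulate ~10 ms of capture latency
        self.count += 1
        return self.count

class ThreadedStream:
    """Grab frames on a background thread so read() never blocks on I/O."""
    def __init__(self, camera):
        self.camera = camera
        self.frame = None
        self.stopped = False
        self.thread = threading.Thread(target=self._update, daemon=True)
    def start(self):
        self.thread.start()
        return self
    def _update(self):
        # Keep overwriting self.frame with the newest capture
        while not self.stopped:
            self.frame = self.camera.read()
    def read(self):
        return self.frame  # returns the latest frame instantly
    def stop(self):
        self.stopped = True
        self.thread.join()

stream = ThreadedStream(FakeCamera()).start()
time.sleep(0.1)               # let the background thread grab some frames
print(stream.read() is not None)
stream.stop()
```

The main loop never waits for the camera, which is why the Pi Camera numbers improve so much with threading.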

  • 5 fps at 640x480 or 320x240?

  • Here's a quick benchmark I did with 640x480:

    USB Webcam:
    Non-Threaded: 4.48 FPS
    Threaded: 4.82 FPS

    Pi Camera:
    Non-Threaded: 1.4 FPS
    Threaded: 4.15 FPS

    At 320x240 I got 5.13 FPS with the Raspberry Pi Camera by using threading!
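For anyone wanting to reproduce numbers like these, a minimal FPS measurement can be done with a simple timer loop (a generic sketch, not the exact benchmark used above):

```python
import time

def measure_fps(process_frame, n_frames=50):
    # Time n_frames calls of a frame-processing callable and
    # return the average frames per second
    start = time.time()
    for _ in range(n_frames):
        process_frame()
    elapsed = time.time() - start
    return n_frames / elapsed

# A stand-in workload that sleeps ~5 ms per "frame"; with a real camera,
# process_frame would capture, infer, and display one frame
fps = measure_fps(lambda: time.sleep(0.005))
print(fps > 0)
```

In the real benchmark, process_frame would wrap one capture-infer-display iteration of the security-cam loop.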

  • Those numbers look a little quicker than I got. I think I was averaging about 3.x fps at 640x480
    using the Pi Cam. I did not try the web cam because I was confined by form factor -
    I wanted something small and compact.

  • Did you try threading? If not, it might help. Good luck in any case!

    One last thing - how did you work out the reason for the difference in USB vs PiCam performance? Is there a blog post or article out there somewhere? I'm thinking about writing up my experiences...
