

YOLO on Movidius using Darkflow error

Here is the Github to the repository: https://github.com/fernandodelacalle/yolo-darkflow-movidius

original img dim (720, 1280, 3)
Traceback (most recent call last):
  File "test_movidius.py", line 111, in <module>
    main()
  File "test_movidius.py", line 108, in main
    args.threshold)
  File "test_movidius.py", line 70, in inference_video
    boxes = yolo_utils.procces_out(y_out, meta, img_orig_dimensions)
  File "/home/pi/yolo-darkflow-movidius/yolo_utils.py", line 48, in procces_out
    boxes = findboxes_meta(meta, out)
  File "/home/pi/yolo-darkflow-movidius/yolo_utils.py", line 22, in findboxes_meta
    boxes = box_constructor(meta, net_out)
  File "darkflow/cython_utils/cy_yolo2_findboxes.pyx", line 67, in darkflow.cython_utils.cy_yolo2_findboxes.box_constructor
    float[:, :, :, ::1] net_out = net_out_in.reshape([H, W, B, net_out_in.shape[2]/B])
  File "stringsource", line 653, in View.MemoryView.memoryview_cwrapper
  File "stringsource", line 348, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only

Why am I getting this ValueError?
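The traceback points at the Cython typed memoryview in cy_yolo2_findboxes.pyx, which requires a writable buffer. A minimal sketch of how a read-only array can arise and be worked around (the `raw` bytes object below is a stand-in for the real NCS output, and the shapes just mirror the log above):

```python
import numpy as np

# Stand-in for the raw bytes returned by the device API; frombuffer over
# an immutable bytes object yields a READ-ONLY array.
raw = bytes(4 * 13 * 13 * 30)
y_out = np.frombuffer(raw, dtype=np.float32).reshape(13, 13, 30)
print(y_out.flags.writeable)   # False: a Cython memoryview rejects this buffer

# Workaround: make a writable copy before handing it to box_constructor.
y_out = np.array(y_out, copy=True)
print(y_out.flags.writeable)   # True
```

If the copy makes the error go away, the root cause is that the inference output arrives as a read-only buffer on this setup.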

Comments

  • 6 Comments
  • Hi @user123

    I was able to run YOLOv2 with Darkflow successfully (I followed the link you provided). Could you please share more details about how you're getting this error and what your environment is? With that information I can help you out.

    Sincerely,
    Sahira

  • Sure and thank you in advance.

    I first trained the network using Google Colab; here is the notebook: https://colab.research.google.com/drive/1fBK29gdEwXjl76D6O1PXZZGL5H5ZWTT0. I uploaded 299x299 .jpg images and their associated .xml files to Google Colab. Here is the link to my data (fewer than 200 images): http://s000.tinyupload.com/index.php?file_id=02342227694131079024

    I only trained the network on one class and it took a couple of minutes. Make sure to go to Edit -> Notebook Settings -> Choose GPU as Hardware Accelerator before training.

    After training, I got the .pb and .meta files and transferred them to my Raspberry Pi. After putting the two files in the appropriate directory, I ran the line below in the terminal.

    mvNCCompile built_graph/tiny-yolo-voc-1c.pb -s 12 -in input -on output -o built_graph/tiny-yolo-voc-1c.graph

    That works fine.

    However, this line: python3 test_movidius.py -i tide.mp4 throws the error below.

    pi@raspberrypi:~/Desktop/yolo-darkflow-movidius $ python3 test_movidius.py -i tide.mp4
    D: [ 0] ncDeviceCreate:308 ncDeviceCreate index 0

    D: [ 0] resetAll:228 Found stalled device 1.3-

    I: [ 0] resetAll:251 Stalled devices found, Reseting...
    I: [ 0] resetAll:275 ...
    D: [ 0] ncDeviceCreate:308 ncDeviceCreate index 1

    D: [ 0] ncDeviceOpen:524 File path /usr/local/lib/mvnc/MvNCAPI-ma2450.mvcmd

    I: [ 0] ncDeviceOpen:530 ncDeviceOpen() XLinkBootRemote returned success 0

    I: [ 0] ncDeviceOpen:568 XLinkConnect done - link Id 1

    D: [ 0] ncDeviceOpen:582 done

    I: [ 0] ncDeviceOpen:584 Booted 1.3-ma2450 -> VSC

    I: [ 0] getDevAttributes:383 Device attributes

    I: [ 0] getDevAttributes:386 Device FW version: 2.8.2450.16e

    I: [ 0] getDevAttributes:388 mvTensorVersion 2.8

    I: [ 0] getDevAttributes:389 Maximum graphs: 10

    I: [ 0] getDevAttributes:390 Maximum fifos: 20

    I: [ 0] getDevAttributes:392 Maximum graph option class: 1

    I: [ 0] getDevAttributes:394 Maximum device option class: 1

    I: [ 0] getDevAttributes:395 Device memory capacity: 522059056

    I: [ 0] ncGraphAllocate:960 Starting Graph allocation sequence

    I: [ 0] ncGraphAllocate:1026 Sent graph
    D: [ 0] ncGraphAllocate:1051 Graph Status 0 rc 0

    D: [ 0] ncGraphAllocate:1097 Input tensor w 416 h 416 c 3 n 1 totalSize 1038336 wstide 6 hstride 2496 cstride 2 layout 0

    D: [ 0] ncGraphAllocate:1110 output tensor w 13 h 13 c 30 n 1 totalSize 10140 wstide 60 hstride 780 cstride 2 layout 0

    I: [ 0] ncGraphAllocate:1164 Graph allocation completed successfully

    I: [ 0] ncFifoCreate:2164 Init fifo
    I: [ 0] ncFifoAllocate:2359 Creating fifo
    I: [ 0] ncFifoCreate:2164 Init fifo
    I: [ 0] ncFifoAllocate:2359 Creating fifo
    D: [ 0] ncFifoWriteElem:2630 No layout conversion is needed 0

    D: [ 0] convertDataTypeAndLayout:170 src data type 1 dst data type 0

    D: [ 0] convertDataTypeAndLayout:172 SRC: w 416 h 416 c 3 w_s 12 h_s 4992 c_s 4

    D: [ 0] convertDataTypeAndLayout:174 DST: w 416 h 416 c 3 w_s 6 h_s 2496 c_s 2

    D: [ 0] ncFifoWriteElem:2655 write count 0 num_elements 2 userparam 0x6e177c88

    I: [ 0] ncGraphQueueInference:3090 trigger start

    I: [ 0] ncGraphQueueInference:3187 trigger end

    D: [ 0] ncFifoReadElem:2724 No layout conversion is needed 0

    D: [ 0] convertDataTypeAndLayout:170 src data type 0 dst data type 1

    D: [ 0] convertDataTypeAndLayout:172 SRC: w 13 h 13 c 30 w_s 60 h_s 780 c_s 2

    D: [ 0] convertDataTypeAndLayout:174 DST: w 13 h 13 c 30 w_s 120 h_s 1560 c_s 4

    D: [ 0] ncFifoReadElem:2756 num_elements 2 userparam 0x6e177e18 output length 20280

    FPS: 3.60
    Traceback (most recent call last):
      File "test_movidius.py", line 108, in <module>
        main()
      File "test_movidius.py", line 105, in main
        args.threshold)
      File "test_movidius.py", line 67, in inference_video
        boxes = yolo_utils.procces_out(y_out, meta, img_orig_dimensions)
      File "/home/pi/Desktop/yolo-darkflow-movidius/yolo_utils.py", line 48, in procces_out
        boxes = findboxes_meta(meta, out)
      File "/home/pi/Desktop/yolo-darkflow-movidius/yolo_utils.py", line 22, in findboxes_meta
        boxes = box_constructor(meta, net_out)
      File "darkflow/cython_utils/cy_yolo2_findboxes.pyx", line 67, in darkflow.cython_utils.cy_yolo2_findboxes.box_constructor
        float[:, :, :, ::1] net_out = net_out_in.reshape([H, W, B, net_out_in.shape[2]/B])
      File "stringsource", line 653, in View.MemoryView.memoryview_cwrapper
      File "stringsource", line 348, in View.MemoryView.memoryview.__cinit__
    ValueError: buffer source array is read-only

    If you guys can help me, I will love you guys forever.

  • Hi @user123

    Can you please provide your code? Looking through the error messages, I think something in the preprocessing code might be throwing the error.

    Sincerely,
    Sahira

  • Code for test_movidius.py:

    import json
    import time
    import argparse
    import cv2
    import numpy as np
    import mvnc.mvncapi as mvncapi
    import movidus_utils
    import yolo_utils
    
    def inference_image(graph_file,
                        meta_file,
                        img_in_name,
                        img_out_name,
                        threshold):
        meta = yolo_utils.get_meta(meta_file)
        meta['thresh'] = threshold
        dev = movidus_utils.get_mvnc_device()
        graph, input_fifo, output_fifo = movidus_utils.load_graph(dev, graph_file)
        img = cv2.imread(img_in_name)
        img = img.astype(np.float32)
        img_orig = np.copy(img)
        img_orig_dimensions = img_orig.shape
        img = yolo_utils.pre_proc_img(img, meta)    
        graph.queue_inference_with_fifo_elem(
            input_fifo, output_fifo, img, 'user object')
        output, _ = output_fifo.read_elem()
        y_out = np.reshape(output, (13, 13, 125))
        y_out = np.squeeze(y_out)
        boxes = yolo_utils.procces_out(y_out, meta, img_orig_dimensions)
        yolo_utils.add_bb_to_img(img_orig, boxes)
        cv2.imwrite(img_out_name, img_orig)
    
    def inference_video(graph_file, 
                        meta_file, 
                        video_in_name, 
                        video_out_name, 
                        threshold): 
        meta = yolo_utils.get_meta(meta_file)
        meta['thresh'] = threshold   
        dev = movidus_utils.get_mvnc_device()
        graph, input_fifo, output_fifo = movidus_utils.load_graph(dev, graph_file)
        cap = cv2.VideoCapture()
        cap.open(video_in_name)
        fps = int(cap.get(cv2.CAP_PROP_FPS))  
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH) ) 
        height= int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) 
        fourcc = cv2.VideoWriter_fourcc(*'XVID')
        out = cv2.VideoWriter(video_out_name, fourcc, fps, (width,height))
        times = []
        while True:
            ret, frame = cap.read()
            if not ret:
                print("Video Ended")
                break   
            frame_orig = np.copy(frame)
            img_orig_dimensions = frame_orig.shape
            frame = yolo_utils.pre_proc_img(frame, meta)
            start = time.time()
            graph.queue_inference_with_fifo_elem(
                input_fifo, output_fifo, frame, 'user object')
            output, _ = output_fifo.read_elem()
            end = time.time()
            print('FPS: {:.2f}'.format((1 / (end - start))))
            times.append((1/ (end - start)))
            y_out = np.reshape(output, (13, 13,30))
            y_out = np.squeeze(y_out)
            boxes = yolo_utils.procces_out(y_out, meta, img_orig_dimensions)
            yolo_utils.add_bb_to_img(frame_orig, boxes)
            out.write(frame_orig)
        cap.release()
        out.release()
    
    def main():
        ap = argparse.ArgumentParser()
        ap.add_argument(
            "-i", "--input_video", 
            required=True, 
            help="path to input video")
        ap.add_argument(
            "-o", "--output_video", 
            required=False,
            default='out.avi',
            help="path to output video")
        ap.add_argument(
            "-m", "--meta_file", 
            required=False,
            default='built_graph/tiny-yolo-voc-1c.meta',
            help="path to meta file")
        ap.add_argument(
            "-mg", "--movidius_graph", 
            required=False,
            default= 'built_graph/tiny-yolo-voc-1c.graph',
            help="path to movidius graph")
        ap.add_argument(
            "-th", "--threshold",
            required=False,
            type=float,
            default=0.3,
            help="threshold")
        args = ap.parse_args()
        inference_video(
            args.movidius_graph, 
            args.meta_file, 
            args.input_video, 
            args.output_video, 
            args.threshold)
    
    if __name__ == '__main__':
        main()
    
  • Hi @user123 ,

    I apologize for the delay in our response. I have tested your instructions and your program and had no issues whatsoever, using an Ubuntu 16.04 host with the latest NCSDK v2.10. Here is the terminal output of the test run; afterwards I can see detections in the video.avi output file created by the run.

    luis@ubuntu:~/movidius/yolo-darkflow-movidius$ python3 movtest.py -i tide.mp4
    D: [ 0] ncDeviceCreate:308 ncDeviceCreate index 0
    D: [ 0] ncDeviceCreate:308 ncDeviceCreate index 1
    D: [ 0] ncDeviceOpen:524 File path /usr/local/lib/mvnc/MvNCAPI-ma2450.mvcmd
    I: [ 0] ncDeviceOpen:530 ncDeviceOpen() XLinkBootRemote returned success 0
    I: [ 0] ncDeviceOpen:568 XLinkConnect done - link Id 0
    D: [ 0] ncDeviceOpen:582 done
    I: [ 0] ncDeviceOpen:584 Booted 4.1-ma2450 -> VSC
    I: [ 0] getDevAttributes:383 Device attributes
    I: [ 0] getDevAttributes:386 Device FW version: 2.a.2450.8a
    I: [ 0] getDevAttributes:388 mvTensorVersion 2.10
    I: [ 0] getDevAttributes:389 Maximum graphs: 10
    I: [ 0] getDevAttributes:390 Maximum fifos: 20
    I: [ 0] getDevAttributes:392 Maximum graph option class: 1
    I: [ 0] getDevAttributes:394 Maximum device option class: 1
    I: [ 0] getDevAttributes:395 Device memory capacity: 522047856
    I: [ 0] ncGraphAllocate:960 Starting Graph allocation sequence
    I: [ 0] ncGraphAllocate:1026 Sent graph
    D: [ 0] ncGraphAllocate:1051 Graph Status 0 rc 0
    D: [ 0] ncGraphAllocate:1097 Input tensor w 416 h 416 c 3 n 1 totalSize 1038336 wstide 6 hstride 2496 cstride 2 layout 0
    D: [ 0] ncGraphAllocate:1110 output tensor w 13 h 13 c 30 n 1 totalSize 10140 wstide 60 hstride 780 cstride 2 layout 0
    I: [ 0] ncGraphAllocate:1164 Graph allocation completed successfully
    I: [ 0] ncFifoCreate:2164 Init fifo
    I: [ 0] ncFifoAllocate:2359 Creating fifo
    I: [ 0] ncFifoCreate:2164 Init fifo
    I: [ 0] ncFifoAllocate:2359 Creating fifo
    D: [ 0] ncFifoWriteElem:2630 No layout conversion is needed 0
    D: [ 0] convertDataTypeAndLayout:170 src data type 1 dst data type 0
    D: [ 0] convertDataTypeAndLayout:172 SRC: w 416 h 416 c 3 w_s 12 h_s 4992 c_s 4
    D: [ 0] convertDataTypeAndLayout:174 DST: w 416 h 416 c 3 w_s 6 h_s 2496 c_s 2
    D: [ 0] ncFifoWriteElem:2655 write count 0 num_elements 2 userparam 0x7f113ce1c560
    I: [ 0] ncGraphQueueInference:3090 trigger start
    I: [ 0] ncGraphQueueInference:3187 trigger end
    D: [ 0] ncFifoReadElem:2724 No layout conversion is needed 0
    D: [ 0] convertDataTypeAndLayout:170 src data type 0 dst data type 1
    D: [ 0] convertDataTypeAndLayout:172 SRC: w 13 h 13 c 30 w_s 60 h_s 780 c_s 2
    D: [ 0] convertDataTypeAndLayout:174 DST: w 13 h 13 c 30 w_s 120 h_s 1560 c_s 4
    D: [ 0] ncFifoReadElem:2756 num_elements 2 userparam 0x7f113ce1c780 output length 20280
    /usr/local/lib/python3.5/dist-packages/mvnc/mvncapi.py:416: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
    tensor = numpy.fromstring(tensor.raw, dtype=numpy.float32)
    FPS: 4.70
    D: [ 0] ncFifoWriteElem:2630 No layout conversion is needed 0
    D: [ 0] convertDataTypeAndLayout:170 src data type 1 dst data type 0
    D: [ 0] convertDataTypeAndLayout:172 SRC: w 416 h 416 c 3 w_s 12 h_s 4992 c_s 4
    D: [ 0] convertDataTypeAndLayout:174 DST: w 416 h 416 c 3 w_s 6 h_s 2496 c_s 2
    D: [ 0] ncFifoWriteElem:2655 write count 1 num_elements 2 userparam 0x7f113ce1c4d8
    I: [ 0] ncGraphQueueInference:3090 trigger start
    I: [ 0] ncGraphQueueInference:3187 trigger end
    D: [ 0] ncFifoReadElem:2724 No layout conversion is needed 0
    D: [ 0] convertDataTypeAndLayout:170 src data type 0 dst data type 1
    D: [ 0] convertDataTypeAndLayout:172 SRC: w 13 h 13 c 30 w_s 60 h_s 780 c_s 2
    D: [ 0] convertDataTypeAndLayout:174 DST: w 13 h 13 c 30 w_s 120 h_s 1560 c_s 4
    D: [ 0] ncFifoReadElem:2756 num_elements 2 userparam 0x7f113ce1c5e8 output length 20280
    FPS: 5.00

    I haven't tried this on a Raspberry Pi yet, but that will be my next test; I will let you know the results. May I ask which version of the NCSDK you are using? Also, since you are using an RPi, are you using a powered USB hub? If not, I suggest using one, as problems have been reported in the past when running without a powered hub. Find a discussion of that topic here for your reference.

    Regards,
    @Luis_at_Intel

  • Hi @user123 ,

    I completed the testing on an RPi 3 Model B+ and it runs the same way; I am not able to reproduce the problem you are encountering. I can see the object detections in the out.avi output file created by the run. I'm not sure what could be causing the problem on your end; I followed the steps you mentioned and it works just fine. If you have any additional information that can help us reproduce the issue, or any other questions on this topic, don't hesitate to ask.

    Regards,
    @Luis_at_Intel
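Given that the same script runs on one setup and crashes on the other, and that the two logs show different NCSDK firmware versions, one way to sidestep the crash regardless of environment is to guarantee that the array handed to Darkflow is writable. The helper below is a sketch; the name `ensure_writable` is my own and not part of the repository:

```python
import numpy as np

def ensure_writable(output):
    """Reshape the FIFO output to the YOLO grid and return a writable
    array, copying only when the underlying buffer is read-only."""
    y_out = np.reshape(output, (13, 13, 30))
    if not y_out.flags.writeable:
        # A copy owns its memory, so the Cython memoryview in
        # cy_yolo2_findboxes accepts it.
        y_out = y_out.copy()
    return y_out
```

In `inference_video`, this would replace the two reshape/squeeze lines before the `procces_out` call, i.e. `y_out = ensure_writable(output)`.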

This discussion has been closed.