
I run a YOLOv3 model and get the detections as a dictionary with 3 entries:

  1. "detector/yolo-v3/Conv_22/BiasAdd/YoloRegion" : numpy.ndarray with shape (1,255,52,52),
  2. "detector/yolo-v3/Conv_6/BiasAdd/YoloRegion" : numpy.ndarray with shape (1,255,13,13),
  3. "detector/yolo-v3/Conv_14/BiasAdd/YoloRegion" : numpy.ndarray with shape (1,255,26,26).

I know that each entry in the dictionary corresponds to a different object-detection scale: Conv_22 is for small objects, Conv_14 is for medium objects, and Conv_6 is for big objects.
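
For context (and this is my assumption, based on the standard 80-class COCO model): 255 = 3 anchors x (4 box coordinates + 1 objectness score + 80 class scores), so each array can be reshaped to separate those pieces, e.g. for the 13x13 output (calling the dictionary detections):

import numpy as np

out = detections["detector/yolo-v3/Conv_6/BiasAdd/YoloRegion"]  # shape (1, 255, 13, 13)
out = out.reshape(1, 3, 85, 13, 13)  # 3 anchors x (4 + 1 + 80) values per grid cell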


How can I convert this dictionary output into bounding-box coordinates, labels, and confidences?


1 Answer


Presuming you use Python and OpenCV,

please find the code below, with comments wherever required, to extract the output using the cv2.dnn module.

import cv2
import numpy as np

# 'net', 'blob', 'ln' (the output layer names), the original image size (W, H)
# and the confidence 'threshold' are assumed to come from your earlier setup,
# e.g. blob = cv2.dnn.blobFromImage(...) and ln = net.getUnconnectedOutLayersNames()
net.setInput(blob)
layerOutputs = net.forward(ln)

boxes = []
confidences = []
classIDs = []

for output in layerOutputs:
    # loop over each of the detections
    for detection in output:
        # extract the class ID and confidence (i.e., probability) of
        # the current object detection
        scores = detection[5:]
        classID = np.argmax(scores)
        confidence = scores[classID]

        # filter out weak predictions by ensuring the detected
        # probability is greater than the minimum probability
        if confidence > threshold:
            # scale the bounding box coordinates back relative to the
            # size of the image, keeping in mind that YOLO actually
            # returns the center (x, y)-coordinates of the bounding
            # box followed by the boxes' width and height
            box = detection[0:4] * np.array([W, H, W, H])
            (centerX, centerY, width, height) = box.astype("int")

            # use the center (x, y)-coordinates to derive the top
            # and left corner of the bounding box
            x = int(centerX - (width / 2))
            y = int(centerY - (height / 2))

            # update our list of bounding box coordinates, confidences,
            # and class IDs
            boxes.append([x, y, int(width), int(height)])
            confidences.append(float(confidence))
            classIDs.append(classID)

# apply non-maximum suppression; the third argument is the score threshold
# and the fourth is the NMS (IoU) threshold (0.4 is just a common default)
idxs = cv2.dnn.NMSBoxes(boxes, confidences, threshold, 0.4)
# results are stored in idxs, boxes, confidences, classIDs
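
To get the final bounding boxes, labels, and confidences the question asks for, you can then loop over the surviving indices. A minimal sketch, assuming LABELS is your list of class names (e.g. loaded from a coco.names file):

if len(idxs) > 0:
    for i in np.array(idxs).flatten():
        (x, y, w, h) = boxes[i]
        label = LABELS[classIDs[i]]
        conf = confidences[i]
        # (x, y) is the top-left corner, (x + w, y + h) the bottom-right
        print(label, conf, (x, y, x + w, y + h))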
    I just want to note something: The proper math for calculating the X/Y coordinates of all four corners of the box is in the original YOLO source code, here: https://github.com/pjreddie/darknet/blob/810d7f797bdb2f021dbe65d2524c2ff6b8ab5c8b/src/image.c#L283-L291 – Mitch McMabers Sep 26 '19 at 01:21
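
For reference, a rough Python equivalent of the darknet snippet linked in the comment above (the function name here is mine), converting a normalized center/width/height box to the four pixel corners and clamping them to the image, as image.c does:

def yolo_box_to_corners(cx, cy, w, h, img_w, img_h):
    # normalized center/size -> pixel corners
    left   = int((cx - w / 2) * img_w)
    right  = int((cx + w / 2) * img_w)
    top    = int((cy - h / 2) * img_h)
    bottom = int((cy + h / 2) * img_h)
    # clamp to the image bounds, as the darknet code does
    left, top = max(left, 0), max(top, 0)
    right, bottom = min(right, img_w - 1), min(bottom, img_h - 1)
    return left, top, right, bottom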