
I don't necessarily need source code but am looking for pseudocode or ideas on how to approach this. I have a camera pointed at my water meter and want to log the usage to a database. I've been playing with OpenCV the past couple of weeks and have digit detection working okay. (The last digit poses problems because it spins continuously instead of snapping to the next number like the first five, so the full digit is in view less than half the time.)

Anyway, for the best resolution I want to track the actual big red needle. Right now I'm going in a circle and finding the "reddest" pixel, and using that to calculate the angle of the needle. It works... as long as the light is decent and the camera doesn't move.

I'm guessing there is a trick I can do with Canny edge detection... ultimately I'm just trying to find the angle of the longest line, maybe? And the needle being a unique color should help eliminate a lot of false positives, but how do I actually apply a filter for that?

(Note that there's a little red spinner on the bottom left of the meter, same color as the needle. I need to make sure that doesn't false trigger something also.)
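A sketch of the circle-scan method described above (`cx`, `cy`, and `r` are assumed known from calibration; the "redness" score here is just one of several possibilities):

```python
import numpy as np

def needle_angle(img, cx, cy, r, samples=360):
    """Angle (degrees) of the reddest pixel on a circle around the dial centre."""
    thetas = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    xs = (cx + r * np.cos(thetas)).astype(int)
    ys = (cy + r * np.sin(thetas)).astype(int)
    pix = img[ys, xs].astype(int)          # BGR samples along the circle
    # Redness = red minus the average of blue/green, so white/grey scores low.
    redness = pix[:, 2] - (pix[:, 0] + pix[:, 1]) // 2
    return np.degrees(thetas[np.argmax(redness)])
```

Note that in image coordinates y grows downward, so this angle runs clockwise from the positive x axis.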

Thank you! -Adam

[image: water meter example]

  • I don't think Canny is what you want here. Ultimately your original idea for grabbing the red color to start with is good, but it sounds like lighting changes can happen throughout the video which makes your red detector go off track. What I would try here is to use your initial method to get the region of red pixels, and then `floodFill` from the center of those blobs with some tolerance to get a selection of all the red pixels, which you can then calculate the mean and standard deviation of the colors for good current red vals. If you do that every frame you could track color changes. – alkasm Oct 01 '17 at 14:33
  • [This example](http://docs.opencv.org/3.2.0/d5/d26/ffilldemo_8cpp-example.html) from the OpenCV docs show how `floodFill` works (it's what the Photoshop Magic Wand selection tool does, if you've ever used it). As long as the brightness changes between each single frame is small enough, you should be able to track the color changes pretty well. Also, how far can the gauge move in a single frame? A lot or a little? If it's relatively slow (vs. frame rate), then you should be able to narrow in an ROI to look for that color based on the last position. – alkasm Oct 01 '17 at 14:40

1 Answer


Happy National Day and happy Mid-Autumn Festival! O(∩_∩)O

The new answer:

Hi, today I did some research on this question. After many rounds of trial and error, I now have a pretty good solution. The result is shown below.

[image: final detection result]

The basic steps are as follows:

  1. Convert to grayscale, threshold, and project onto the axes, so we can find the watch region.
  2. Convert to Lab color space, so the red needle (a warm color) can easily be distinguished in the a channel.
  3. Normalize the a channel, threshold, and do a morphology operation to get a binary mask. These operations make it easy to find the potential red-needle region.
  4. Find contours in the mask, filter them by some properties, and you'll get it.

1. Find the watch region

import cv2
import numpy as np

# https://i.stack.imgur.com/5oOGL.jpg
imgname = "stkdata/5oOGL.jpg"
img = cv2.imread(imgname)
## Threshold in grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
retval, threshed = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY )

## Find the watch region by projecting and counting bright pixels
h,w = img.shape[:2]
x = np.sum(threshed, axis=0)
y = np.sum(threshed, axis=1)
yy = np.nonzero(y>(w/5*255))[0]
xx = np.nonzero(x > (h/5*255))[0]
region = img[yy[0]:yy[-1], xx[0]:xx[-1]]
#cv2.imwrite("region.png", region)

[image: extracted watch region]
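The projection trick used above is easy to see on a toy array: summing the binary image along each axis makes the rows and columns that contain the bright watch face stand out, and their first/last indices bound the region.

```python
import numpy as np

# Toy "thresholded" image: a bright 2x3 block inside a 6x8 frame.
threshed = np.zeros((6, 8), np.uint8)
threshed[2:4, 3:6] = 255

x = np.sum(threshed, axis=0)      # column sums
y = np.sum(threshed, axis=1)      # row sums
yy = np.nonzero(y > 0)[0]         # rows containing bright pixels -> [2, 3]
xx = np.nonzero(x > 0)[0]         # cols containing bright pixels -> [3, 4, 5]
# Region bounds: rows yy[0]..yy[-1], cols xx[0]..xx[-1].
```

The `w/5*255` threshold in the real code just demands that at least a fifth of a row or column be bright, which rejects stray specks outside the watch face.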

2. Convert to LAB

## Change to LAB space
lab = cv2.cvtColor(region, cv2.COLOR_BGR2LAB)
l,a,b = cv2.split(lab)
imglab = np.hstack((l,a,b))
cv2.imwrite("region_lab.png", imglab)

In Lab color space, the red needle can easily be distinguished.

[image: region in the L, a, b channels]

3. Normalize the a channel, threshold, and do the morphology operation.

## Normalize the a channel to the full dynamic range
na = cv2.normalize(a, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8UC1)
cv2.imwrite("region_a_normalized.png", na)

## Threshold to binary
retval, threshed = cv2.threshold(na, thresh = 180,  maxval=255, type=cv2.THRESH_BINARY)

## Do morphology
kernel = cv2.getStructuringElement( cv2.MORPH_ELLIPSE , (3,3))
opened = cv2.morphologyEx(threshed, cv2.MORPH_OPEN,kernel)
res = np.hstack((threshed, opened))
cv2.imwrite("region_a_binary.png", res)

The normalized a channel:

[image: normalized a channel]

After thresholding and the morphology operation, we get the binary mask shown on the right side.

[image: thresholded mask (left) and opened mask (right)]

4. Find contours, and filter them by some properties (such as area).

## Find contours
contours = cv2.findContours(opened, mode=cv2.RETR_LIST, method=cv2.CHAIN_APPROX_SIMPLE)[-2]

## Draw Contours
res = region.copy()
cv2.drawContours(res, contours, -1, (255,0,0), 1)
cv2.imwrite("region_contours.png", res)

## Filter Contours
for idx, contour in enumerate(contours):
    bbox = cv2.boundingRect(contour)
    area = bbox[-1]*bbox[-2]
    if area < 100:
        continue
    rot_rect = cv2.minAreaRect(contour)
    (cx,cy), (w,h), rot_angle = rot_rect
    rbox = np.int0(cv2.boxPoints(rot_rect))
    cv2.drawContours(res, [rbox], 0, (0,255,0), 1)
    text="#{}: {:2.3f}".format(idx, rot_angle)
    org=(int(cx)-10,int(cy)-10)
    cv2.putText(res, text=text, org = org, fontFace = 1, fontScale=0.8, color=(0,0,255), thickness = 1, lineType=16)

cv2.imwrite("region_result.png", res)

The contours found:

[image: drawn contours]

The result:

[image: final result with rotated boxes and angles]

We can see that contour #1 belongs to the red needle, and its angle is -85.601°.

(0° points to the right; counter-clockwise angles are negative, clockwise angles are positive.)
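Note that `minAreaRect`'s angle is only defined up to the rectangle's symmetry (it cannot tell the needle's tip from its tail), so for a full 0–360° reading you may prefer the direction from the dial centre to the needle blob's centre. A hypothetical sketch, assuming both centres are known:

```python
import numpy as np

def needle_heading(dial_cx, dial_cy, cx, cy):
    """Full 0-360 degree heading from the dial centre to the needle centre."""
    # Image y grows downward, so negate dy to get the usual convention:
    # 0 deg = right, counter-clockwise positive.
    ang = np.degrees(np.arctan2(-(cy - dial_cy), cx - dial_cx))
    return ang % 360.0
```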

All Python code is presented here:

#!/usr/bin/python3
# 2017.10.01 22:59:02 CST
# 2017.10.03 23:49:18 CST

import numpy as np
import cv2

# https://i.stack.imgur.com/5oOGL.jpg
imgname = "stkdata/5oOGL.jpg"
img = cv2.imread(imgname)
## Threshold in grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
retval, threshed = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY )

## Find the watch region by projecting and counting bright pixels
h,w = img.shape[:2]
x = np.sum(threshed, axis=0)
y = np.sum(threshed, axis=1)
yy = np.nonzero(y>(w/5*255))[0]
xx = np.nonzero(x > (h/5*255))[0]
region = img[yy[0]:yy[-1], xx[0]:xx[-1]]
cv2.imwrite("region.png", region)

## Change to LAB space
lab = cv2.cvtColor(region, cv2.COLOR_BGR2LAB)
l,a,b = cv2.split(lab)
imglab = np.hstack((l,a,b))
cv2.imwrite("region_lab.png", imglab)

## normalized the a channel to all dynamic range
na = cv2.normalize(a, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8UC1)
cv2.imwrite("region_a_normalized.png", na)

## Threshold to binary
retval, threshed = cv2.threshold(na, thresh = 180,  maxval=255, type=cv2.THRESH_BINARY)

## Do morphology
kernel = cv2.getStructuringElement( cv2.MORPH_ELLIPSE , (3,3))
opened = cv2.morphologyEx(threshed, cv2.MORPH_OPEN,kernel)
res = np.hstack((threshed, opened))
cv2.imwrite("region_a_binary.png", res)

## Find contours
contours = cv2.findContours(opened, mode=cv2.RETR_LIST, method=cv2.CHAIN_APPROX_SIMPLE)[-2]

## Draw Contours
res = region.copy()
cv2.drawContours(res, contours, -1, (255,0,0), 1)
cv2.imwrite("region_contours.png", res)

## Filter Contours
for idx, contour in enumerate(contours):
    bbox = cv2.boundingRect(contour)
    area = bbox[-1]*bbox[-2]
    if area < 100:
        continue
    rot_rect = cv2.minAreaRect(contour)
    (cx,cy), (w,h), rot_angle = rot_rect
    rbox = np.int0(cv2.boxPoints(rot_rect))
    cv2.drawContours(res, [rbox], 0, (0,255,0), 1)
    text="#{}: {:2.3f}".format(idx, rot_angle)
    org=(int(cx)-10,int(cy)-10)
    #cv2.putText(res, text=text, org = org, fontFace = cv2.FONT_HERSHEY_PLAIN, fontScale=0.7, color=(0,0,255), thickness = 1, lineType=cv2.LINE_AA)
    cv2.putText(res, text=text, org = org, fontFace = 1, fontScale=0.8, color=(0,0,255), thickness = 1, lineType=16)

cv2.imwrite("region_result.png", res)
cv2.imshow("result", res); cv2.waitKey();cv2.destroyAllWindows()

The original answer:

I think you can first extract the watch region, then search for the red needle in that region; changing the color space may help. Here is my sample code in Python.

import cv2
import numpy as np

# https://i.stack.imgur.com/5oOGL.jpg
imgname = "5oOGL.jpg"
img = cv2.imread(imgname)

## Threshold in grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th, imgbin = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY )

## Find watch region by projecting and counting
h,w = img.shape[:2]
x = np.sum(imgbin, axis=0)
y = np.sum(imgbin, axis=1)
yy = np.nonzero(y>(w/5*255))[0]
xx = np.nonzero(x > (h/5*255))[0]
region = img[yy[0]:yy[-1], xx[0]:xx[-1]]
cv2.imwrite("region.png", region)
cv2.imshow("region", region); cv2.waitKey(0);# cv2.destroyAllWindows()

## Change to LAB space
lab = cv2.cvtColor(region, cv2.COLOR_BGR2LAB)
l,a,b = cv2.split(lab)
imglab = np.hstack((l,a,b))
cv2.imwrite("imglab.png", imglab)

The watch region: [image]

The watch region in Lab space: [image]


Some links:

  1. Choosing the correct upper and lower HSV boundaries for color detection with `cv::inRange` (OpenCV)

  2. How to use `cv2.findContours` in different OpenCV versions?

  3. How to find the RED color regions using OpenCV?

Kinght 金
  • Nice use of the `Lab` space, red needle is clear as day in the `a` channel. – alkasm Oct 01 '17 at 16:20
  • Yeah, I tried splitting the color in **RGB**, **HSV**, and **LAB**. Finally, I found that the **a** channel is the clearest in this situation. – Kinght 金 Oct 01 '17 at 16:22
  • Note that the reason it works so well for this image in particular is because the `a` channel of the `Lab` space has greens, blues in lower values; oranges, yellows in middle values; and reds, pinks in the higher values. So for an image with hardly any bright red or magenta, this will pop right out. Basically in `Lab` the `a` channel ranges from "cool" to "warm" colors, more or less. Check out [this chart](https://en.wikipedia.org/wiki/Lab_color_space#/media/File:Lab_color_space.png) for e.g. – alkasm Oct 01 '17 at 16:28
  • Wow, you explain the Lab color space so clearly. I also read the wiki roughly, but English is not my mother language so I skipped it. I will add the color-space split to the answer. – Kinght 金 Oct 01 '17 at 16:47
  • First off, thank you *both* so much for your effort and detail. You gave me a lot to consider and learn from, and I spent the past week thinking and testing. In the end I am actually going back to my original method of simply finding the reddest pixel in a circle. By solving the lighting problem BEFORE it gets to software, I am getting perfect recognition. I plan to write up an answer explaining my method and weigh the pros and cons. – Adamlive Oct 09 '17 at 05:36