19

Let's say I query for

http://images.google.com.sg/images?q=sky&imgcolor=black

and I get all the images of black skies. How does the algorithm behind this actually work?

SteD
  • Here is a [question](http://stackoverflow.com/questions/677395/how-to-check-if-an-rgb-image-contains-only-one-color) about how to find out if an image has only one colour. This can serve as a basic idea. – Shoban Mar 26 '09 at 04:36

4 Answers

32

Based on this paper published by Google engineers Henry Rowley, Shumeet Baluja, and Dr. Yushi Jing, it seems the work most relevant to your question about recognizing colors in images is Google's "saferank" algorithm for pictures, which can detect flesh tones without relying on any surrounding text.

The paper begins by describing the "classical" methods, which are typically based on normalizing color brightness and then fitting a Gaussian distribution, or on a three-dimensional histogram built up from the RGB values of the pixels (each channel is an 8-bit integer from 0-255 representing how much of that color is present in the pixel). Methods have also been introduced that rely on properties such as "luminance" (often incorrectly called "luminosity"), which is the luminous intensity per unit area as perceived by the naked eye.
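
As a rough illustration (a minimal sketch, not anything from the paper's actual code), building such a three-dimensional histogram in Python with NumPy might look like this:

```python
import numpy as np

def rgb_histogram(image, bins_per_channel=8):
    """Coarse 3-D color histogram of an RGB image.

    image: H x W x 3 uint8 array, each channel an 8-bit value 0-255.
    Returns a (bins, bins, bins) array of normalized pixel counts.
    """
    pixels = image.reshape(-1, 3)
    # Map each 0-255 channel value to a bin index 0..bins_per_channel-1.
    indices = (pixels.astype(np.uint32) * bins_per_channel) // 256
    hist, _ = np.histogramdd(indices, bins=(bins_per_channel,) * 3,
                             range=((0, bins_per_channel),) * 3)
    return hist / hist.sum()
```

The bin with the largest count then points at the image's dominant color range.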

The Google paper mentions that they need to process roughly 10^9 images with their algorithm, so it needs to be as efficient as possible. To achieve this, they perform the majority of their calculations on an ROI (region of interest): a rectangle centered in the image and inset by 1/6 of the image dimensions on all sides. Once they've determined the ROI, they apply many different algorithms to it, including face detection, color constancy, and others, which together find statistical trends in the image's coloring and, most importantly, the color shades that occur most frequently in the distribution.
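
To make the ROI step concrete, here is a minimal sketch of that crop (the 1/6 inset is from the paper; the function name and NumPy-style slicing are my own):

```python
def region_of_interest(image):
    """Central ROI: the image inset by 1/6 of its dimensions on all sides."""
    h, w = image.shape[:2]
    top, left = h // 6, w // 6
    return image[top:h - top, left:w - left]
```

Restricting the statistics to this center rectangle covers only 4/9 of the pixels and tends to skip borders, watermarks, and background clutter.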

They also use features such as entropy, edge detection, and texture definitions. To extract lines from the images, they use the OpenCV implementation (Bradski, 2000) of the probabilistic Hough transform (Kiryati et al., 1991), computed on the edges of the skin-color connected components. This lets them find straight lines that are probably not body parts, and it also helps them determine which colors are most important in an image, a key factor in their Image Color Search.
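
For the line-extraction step, a generic OpenCV sketch might look like the following. Note this runs the probabilistic Hough transform on Canny edges of the whole image for simplicity, whereas the paper applies it to the edges of skin-color connected components:

```python
import math
import cv2

def detect_straight_lines(image_bgr):
    """Straight line segments via OpenCV's probabilistic Hough transform."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Returns an (N, 1, 4) array of [x1, y1, x2, y2] segments, or None.
    lines = cv2.HoughLinesP(edges, 1, math.pi / 180, 50,
                            minLineLength=30, maxLineGap=5)
    return [] if lines is None else lines.reshape(-1, 4)
```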

For more on the technical details of this topic, including the math, read the Google paper linked at the beginning and look at the Research section of Google's website.

Very interesting question and subject!

HipsterZipster
7

Images are just pixels. Pixels are just RGB values. We know what black is in RGB, so we can look for it in an image.
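
For instance, here is a toy sketch of "looking for black" (the threshold is arbitrary, and a production system would be far more involved):

```python
import numpy as np

def fraction_black(image, threshold=40):
    """Fraction of pixels whose R, G, and B are all below `threshold`.

    image: H x W x 3 uint8 RGB array.
    """
    dark = (image < threshold).all(axis=2)
    return float(dark.mean())
```

An image where this fraction is high would be a plausible candidate for imgcolor=black.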

Instance Hunter
  • But how does Google know which bit is the sky? Try http://images.google.com.sg/images?q=car&imgcolor=red - it's clearly picking up red cars and ignoring the background – MrTelly Mar 26 '09 at 02:55
  • Not really, it's just searching for "cars" and then weighting in favour of ones that are significantly red. If you can find one that produces a search focusing on an *insignificant* region of the picture then I'll be interested – Kent Fredric Mar 26 '09 at 03:04
  • Note, of course, that if I google for just "red car" without the colour filter, I get 90% of the same results. – Kent Fredric Mar 26 '09 at 03:05
  • http://images.google.com.sg/images?imgcolor=green&gbv=1&hl=en&sa=1&q=red+car&btnG=Search+Images # note how it's returning red cars but the background is mostly green. That's because it's now searching for the string "red car" and filtering on the green background. – Kent Fredric Mar 26 '09 at 03:10
  • http://images.google.com.sg/images?imgcolor=green&gbv=1&hl=en&sa=1&q=%22car+red%22&btnG=Search+Images # Another good example – Kent Fredric Mar 26 '09 at 03:11
3

Well, one method is, in very basic terms:

Given a corpus of images, determine the high concentrations of a given color range (this is actually fairly trivial), store this data, and index accordingly (index the images according to the colors determined in the previous step). Now you have essentially the same sort of thing as finding documents that contain certain words.
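
A minimal sketch of that indexing step, treating dominant color labels exactly like words in an inverted index (the labels and image ids here are made up for illustration):

```python
from collections import defaultdict

# Inverted index: color label -> set of image ids,
# analogous to word -> set of documents in text search.
color_index = defaultdict(set)

def index_image(image_id, dominant_colors):
    """Register an image under each of its dominant color labels."""
    for color in dominant_colors:
        color_index[color].add(image_id)

def search_by_color(color):
    """All images previously indexed under the given color label."""
    return color_index.get(color, set())

index_image("sky1.jpg", {"black", "blue"})
index_image("sky2.jpg", {"blue"})
print(search_by_color("black"))  # {'sky1.jpg'}
```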

This is a very, very basic description of one possible method.

BobbyShaftoe
0

There are various ways of extracting color from an image, and I think other answers addressed them (K-Means, distributions, etc).

Assuming you have extracted the colors, there are a few ways to search by color. One slow but obvious approach would be to calculate the distance between the search color and the dominant colors of the image using some metric (e.g. a color-difference formula), and then weight the results based on "closeness."
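
A minimal sketch of that scoring, using plain Euclidean distance in RGB as the metric (a real system would more likely work in a perceptual space such as CIELAB):

```python
import math

def color_distance(c1, c2):
    """Euclidean distance between two RGB triples (0-255 per channel)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def closeness(query_color, dominant_colors):
    """Smaller is better: distance from the query to the nearest dominant color."""
    return min(color_distance(query_color, c) for c in dominant_colors)

# Rank images by how close their dominant colors are to pure black.
images = {"a.jpg": [(10, 12, 8), (200, 30, 30)], "b.jpg": [(240, 240, 250)]}
ranked = sorted(images, key=lambda name: closeness((0, 0, 0), images[name]))
print(ranked)  # ['a.jpg', 'b.jpg']
```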

Another, much faster, approach would be to essentially downscale the resolution of your color space. Rather than deal with all possible RGB color values, limit the extraction to a smaller range like Google does (just Blue, Green, Black, Yellow, etc). Then the user can search with a limited set of color swatches, and calculating color distance becomes trivial.
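
A sketch of that quantization, snapping an extracted color to the nearest entry in a small hand-picked palette (these swatch values are illustrative, not Google's):

```python
# Illustrative swatch palette; a real service would define its own values.
SWATCHES = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
}

def nearest_swatch(rgb):
    """Name of the palette entry closest (in squared RGB distance) to `rgb`."""
    return min(SWATCHES,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(rgb, SWATCHES[name])))

print(nearest_swatch((20, 15, 25)))  # black
```

With every image tagged by a handful of swatch names at index time, the color filter reduces to an exact match on those tags.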