96

I need to create fingerprints of many images (about 100,000 existing, 1,000 new per day, RGB, JPEG, max size 800x800) to compare every image to every other image very fast. I can't use binary comparison methods because images that are nearly similar should also be recognized.

An existing library would be best, but some hints about existing algorithms would also help me a lot.

skaffman
Philip Dreyer

11 Answers

59

Normal hashing or CRC calculation algorithms do not work well with image data. The dimensional nature of the information must be taken into account.

If you need extremely robust fingerprinting, such that affine transformations (scaling, rotation, translation, flipping) are accounted for, you can use a Radon transformation on the image source to produce a normative mapping of the image data - store this with each image and then compare just the fingerprints. This is a complex algorithm and not for the faint of heart.
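
Just to give a flavor (this is nowhere near the full normative mapping described above), here is a rough sketch of deriving a Radon-based signature in Python, assuming Pillow, NumPy and scikit-image are available; the function names and the per-angle statistic are my own placeholders:

```python
import numpy as np
from PIL import Image
from skimage.transform import radon

def radon_signature(path, size=64, angles=180):
    """Very rough Radon-based signature: project a small grayscale copy of
    the image over many angles and keep one statistic per angle."""
    img = Image.open(path).convert("L").resize((size, size))
    arr = np.asarray(img, dtype=float)
    theta = np.linspace(0.0, 180.0, angles, endpoint=False)
    sinogram = radon(arr, theta=theta, circle=False)   # shape: (projection_len, angles)
    profile = sinogram.std(axis=0)                     # one value per projection angle
    return profile / (np.linalg.norm(profile) + 1e-9)

def radon_similarity(sig_a, sig_b):
    """Cosine similarity of two normalized signatures; 1.0 means most similar."""
    return float(np.dot(sig_a, sig_b))
```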

A few simple solutions are possible:

  1. Create a luminosity histogram for the image as a fingerprint
  2. Create scaled down versions of each image as a fingerprint
  3. Combine technique (1) and (2) into a hybrid approach for improved comparison quality

A luminosity histogram (especially one that is separated into RGB components) is a reasonable fingerprint for an image, and it can be implemented quite efficiently. Subtracting one histogram from another produces a new histogram which you can process to decide how similar two images are. Because histograms only evaluate the distribution and occurrence of luminosity/color information, they handle affine transformations quite well. If you quantize each color component's luminosity information down to an 8-bit value, 768 bytes of storage are sufficient for the fingerprint of an image of almost any reasonable size.

Luminosity histograms produce false negatives when the color information in an image is manipulated: transformations like contrast/brightness adjustment, posterization, or color shifting change the luminosity information. False positives are also possible with certain types of images, such as landscapes and images where a single color dominates the others.
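
For illustration, a minimal sketch of such a histogram fingerprint, assuming Pillow and NumPy (the function names and the absolute-difference metric are placeholders, not from any particular library):

```python
import numpy as np
from PIL import Image

def histogram_fingerprint(path, bins=256):
    """Per-channel RGB histogram, normalized so the image size does not matter."""
    arr = np.asarray(Image.open(path).convert("RGB"))
    fp = []
    for channel in range(3):                               # R, G, B
        hist, _ = np.histogram(arr[..., channel], bins=bins, range=(0, 256))
        fp.append(hist / hist.sum())
    return np.concatenate(fp)                              # 3 * bins values (768 by default)

def histogram_distance(fp_a, fp_b):
    """Sum of absolute bin differences; 0.0 means identical distributions."""
    return float(np.abs(fp_a - fp_b).sum())
```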

Using scaled-down images is another way to reduce the information density of the image to a level that is easier to compare. Reductions below 10% of the original image size generally lose too much information to be of use, so an 800x800 pixel image can be scaled down to 80x80 and still provide enough information for decent fingerprinting. Unlike with histogram data, you have to perform anisotropic scaling of the image data when the source images have varying aspect ratios. In other words, reducing a 300x800 image to an 80x80 thumbnail deforms the image, so that comparing it with a (very similar) 300x500 image will produce false negatives. Thumbnail fingerprints also often produce false negatives when affine transformations are involved: if you flip or rotate an image, its thumbnail will be quite different from the original's and the comparison will fail.
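
A comparable sketch for the thumbnail approach (again Pillow and NumPy; the 80x80 size and the mean-absolute-difference metric are just placeholders):

```python
import numpy as np
from PIL import Image

def thumbnail_fingerprint(path, size=(80, 80)):
    """Scale the image down to a fixed size and keep the raw pixel data.
    Note: forcing a fixed size distorts images with other aspect ratios."""
    img = Image.open(path).convert("RGB").resize(size, Image.LANCZOS)
    return np.asarray(img, dtype=np.int16)

def thumbnail_distance(fp_a, fp_b):
    """Mean absolute per-pixel difference; 0 means identical thumbnails."""
    return float(np.abs(fp_a - fp_b).mean())
```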

Combining both techniques is a reasonable way to hedge your bets and reduce the occurrence of both false positives and false negatives.
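
One possible way to combine the two, reusing the helpers from the sketches above (the weights and the 255 normalization are arbitrary and would need tuning):

```python
def hybrid_distance(path_a, path_b, w_hist=0.5, w_thumb=0.5):
    """Weighted combination of the histogram and thumbnail distances;
    lower values mean more similar images."""
    d_hist = histogram_distance(histogram_fingerprint(path_a),
                                histogram_fingerprint(path_b))
    d_thumb = thumbnail_distance(thumbnail_fingerprint(path_a),
                                 thumbnail_fingerprint(path_b)) / 255.0
    return w_hist * d_hist + w_thumb * d_thumb
```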

LBushkin
  • Regarding CRC, agreed. However, if one wants to use it, it's better to use MD5 hash than CRC32 – mloskot Jan 24 '10 at 02:00
  • 6
    You wouldn't want to use MD5 because it is a one way cryptographic hash. You need to use a hash method that will produce a similar result for a similar input so that you can directly compare the differences between the hashes. – AJ Quick Aug 05 '15 at 18:22
  • I think this is called [locality-sensitive hashing](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) – xdhmoore Dec 02 '20 at 07:16
34

There is a much less ad-hoc approach than the scaled-down image variants proposed here, one that retains their general flavor but gives a much more rigorous mathematical basis for what is going on.

Take a Haar wavelet of the image. Basically, the Haar wavelet is the succession of differences from the lower-resolution images to each higher-resolution image, but weighted by how deep you are in the 'tree' of mipmaps. The calculation is straightforward. Then, once you have the Haar wavelet appropriately weighted, throw away all but the k largest coefficients (in terms of absolute value), normalize the vector and save it.

If you take the dot product of two of those normalized vectors it gives you a measure of similarity with 1 being nearly identical. I posted more information over here.
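
A rough sketch of the idea in Python, assuming Pillow and NumPy; this is my own simplification (an unweighted transform on raw coefficient values) rather than the exact scheme from the linked write-up:

```python
import numpy as np
from PIL import Image

def haar_2d(arr):
    """Plain (unweighted) 2-D Haar transform of a square, power-of-two array."""
    out = arr.astype(float)
    n = out.shape[0]
    while n > 1:
        for axis in (0, 1):
            block = out[:n, :n]
            if axis == 0:
                a, b = block[0::2, :], block[1::2, :]
            else:
                a, b = block[:, 0::2], block[:, 1::2]
            # Averages go in the first half, differences in the second half.
            out[:n, :n] = np.concatenate([(a + b) / 2.0, (a - b) / 2.0], axis=axis)
        n //= 2
    return out

def haar_fingerprint(path, size=64, k=60):
    """Keep only the k largest-magnitude coefficients, then normalize."""
    img = Image.open(path).convert("L").resize((size, size))
    coeffs = haar_2d(np.asarray(img, dtype=float)).ravel()
    keep = np.argsort(np.abs(coeffs))[-k:]          # indices of the k largest
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]
    return sparse / (np.linalg.norm(sparse) + 1e-9)

def haar_similarity(fp_a, fp_b):
    """Dot product of normalized vectors; values near 1.0 mean nearly identical."""
    return float(np.dot(fp_a, fp_b))
```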

Edward KMETT
20

You should definitely take a look at pHash.

For image comparison, there is this PHP project: https://github.com/kennethrapp/phasher

And my little JavaScript clone: https://redaktor.me/phasher/demo_js/index.html

Unfortunately this is "bitcount"-based, but it will recognize rotated images. Another approach in JavaScript was to build a luminosity histogram from the image with the help of canvas. You can visualize a polygon histogram on the canvas and compare that polygon against your database (e.g. MySQL spatial ...)
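
To give a flavor of what a "bitcount"-style hash does, here is a minimal average-hash sketch in Python (this is the simpler aHash idea, not the actual pHash algorithm, and assumes Pillow and NumPy):

```python
import numpy as np
from PIL import Image

def average_hash(path, hash_size=8):
    """Shrink to hash_size x hash_size grayscale, then set one bit per pixel
    depending on whether it is above or below the mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=float)
    return (pixels > pixels.mean()).ravel()     # 64 booleans for the default size

def hamming_distance(hash_a, hash_b):
    """Number of differing bits; small values suggest visually similar images."""
    return int(np.count_nonzero(hash_a != hash_b))
```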

sebilasse
  • is this on npm? I'm looking for a way to compare similarity between two images using javascript – chovy Mar 22 '16 at 09:11
  • Hm, I thought it was "too cheap for npm". It was really just a demo, quickly written from scratch. However, feel free to do whatever you want with the source. If I can make it, I'll look into it later and push it to github https://github.com/redaktor/ ... – sebilasse Mar 22 '16 at 12:35
  • @SebastianLasse I just checked out your JS port and it's fantastic! I just wish you could pass an image URI to the `Compare()` function instead of having to download the image first. Also, from my testing, the threshold for "a very similar image" should be >90%, not >98%. – thdoan Jul 16 '16 at 07:13
12

A long time ago I worked on a system that had some similar characteristics, and this is an approximation of the algorithm we followed:

  1. Divide the picture into zones. In our case we were dealing with 4:3 resolution video, so we used 12 zones. Doing this takes the resolution of the source images out of the picture.
  2. For each zone, calculate an overall color - the average of all pixels in the zone
  3. For the entire image, calculate an overall color - the average of all zones

So for each image, you're storing n + 1 integer values, where n is the number of zones you're tracking.

For comparisons, you also need to look at each color channel individually.

  1. For the overall image, compare the color channels for the overall colors to see if they are within a certain threshold - say, 10%
  2. If the images are within the threshold, next compare each zone. If all zones also are within the threshold, the images are a strong enough match that you can at least flag them for further comparison.

This lets you quickly discard images that are not matches; you can also use more zones and/or apply the algorithm recursively to get stronger match confidence.
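
A rough sketch of the zone idea in Python, assuming Pillow and NumPy; the 4x3 grid and the 10% threshold follow the numbers above, but the helper names are my own:

```python
import numpy as np
from PIL import Image

ZONE_ROWS, ZONE_COLS = 3, 4                    # 12 zones, as for 4:3 material

def zone_fingerprint(path):
    """Average RGB color per zone plus one overall average (n + 1 entries)."""
    arr = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    h, w, _ = arr.shape
    zones = []
    for r in range(ZONE_ROWS):
        for c in range(ZONE_COLS):
            block = arr[r * h // ZONE_ROWS:(r + 1) * h // ZONE_ROWS,
                        c * w // ZONE_COLS:(c + 1) * w // ZONE_COLS]
            zones.append(block.reshape(-1, 3).mean(axis=0))
    zones = np.array(zones)                    # shape: (12, 3)
    return zones.mean(axis=0), zones           # overall color, per-zone colors

def zones_match(fp_a, fp_b, threshold=0.10):
    """Compare the overall color first, then every zone, channel by channel."""
    overall_a, zones_a = fp_a
    overall_b, zones_b = fp_b
    tol = 255 * threshold
    if np.any(np.abs(overall_a - overall_b) > tol):
        return False                           # cheap early rejection
    return bool(np.all(np.abs(zones_a - zones_b) <= tol))
```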

GalacticCowboy
6

Similar to lc's answer, you might try comparing the images at multiple resolutions. So each image gets saved as 1x1, 2x2, 4x4, ..., 800x800. If the lowest resolution doesn't match (subject to a threshold), you can immediately reject it. If it does match, you can compare them at the next higher resolution, and so on...
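
A sketch of that coarse-to-fine idea, assuming Pillow and NumPy; the power-of-two sizes and the threshold value are placeholders:

```python
import numpy as np
from PIL import Image

LEVELS = (1, 2, 4, 8, 16, 32, 64)          # resolutions to precompute per image

def pyramid(path):
    """Precompute grayscale versions of the image at several resolutions."""
    img = Image.open(path).convert("L")
    return [np.asarray(img.resize((s, s)), dtype=float) for s in LEVELS]

def pyramids_match(pyr_a, pyr_b, threshold=10.0):
    """Compare the coarsest level first; bail out as soon as a level disagrees."""
    for level_a, level_b in zip(pyr_a, pyr_b):
        if np.abs(level_a - level_b).mean() > threshold:
            return False                   # rejected cheaply at a low resolution
    return True
```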

Also - if the images share any similar structure, such as medical images, you might be able to extract that structure into a description that is easier/faster to compare.

allclaws
3

As of 2015 (back to the future... on this 2009 question which is now high-ranked in Google) image similarity can be computed using Deep Learning techniques. The family of algorithms known as Auto Encoders can create a vector representation which is searchable for similarity. There is a demo here.
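
As a toy illustration of the idea only (a tiny fully connected autoencoder on thumbnails, nothing like the systems behind the linked demo; assumes PyTorch, Pillow and NumPy):

```python
import numpy as np
import torch
import torch.nn as nn
from PIL import Image

SIZE = 32                                  # thumbnail edge length
DIM_IN = SIZE * SIZE * 3                   # flattened RGB thumbnail
DIM_CODE = 64                              # fingerprint length

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(DIM_IN, 512), nn.ReLU(),
                                     nn.Linear(512, DIM_CODE))
        self.decoder = nn.Sequential(nn.Linear(DIM_CODE, 512), nn.ReLU(),
                                     nn.Linear(512, DIM_IN))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def load_batch(paths):
    """Flattened, normalized thumbnails stacked into one tensor."""
    arrays = [np.asarray(Image.open(p).convert("RGB").resize((SIZE, SIZE)),
                         dtype=np.float32).ravel() / 255.0 for p in paths]
    return torch.from_numpy(np.stack(arrays))

def train(model, batch, epochs=200):
    """Train the autoencoder to reconstruct its input; the bottleneck
    activations then act as compact fingerprints."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        reconstruction, _ = model(batch)
        loss = loss_fn(reconstruction, batch)
        loss.backward()
        optimizer.step()

def fingerprints(model, batch):
    """Encode images and L2-normalize so a dot product gives cosine similarity."""
    with torch.no_grad():
        _, codes = model(batch)
    return torch.nn.functional.normalize(codes, dim=1)
```

The normalized bottleneck codes can then be compared with a dot product or indexed with an approximate nearest-neighbor library.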

Alex R
  • Is it possible to generate a fingerprint image from binary data? – SwR Jun 23 '16 at 04:57
  • Sure, there are ANNs for this task, but your answer doesn't seem to actually answer anything. Question is: How is that done? The linked page doesn't disclose any information and the term "Auto Encoders" doesn't help either. – Simon Steinberger Nov 30 '17 at 23:31
  • the Original Question does not say "How is that done?", but it does say "some hints to existing algorithms would help me a lot" which is what I provided. – Alex R Dec 01 '17 at 23:10
  • You didn't link a "hint" to an algorithm, in fact the linked page says, "it works, but no one knows why. Please don't expect too much about the result"... – odyth Feb 18 '18 at 07:52
  • This https://deeplearning4j.org/deepautoencoder#use-cases provides more clarity on how Auto Encoders can be used to create a finger print and then how you can use that finger print to find similarities in other images based on how similar the vertices are. – odyth Feb 18 '18 at 08:03
  • It is easy to compare two images to determine if they are roughly the same, but how would the fingerprints be roughly the same if the images are only roughly the same? – user1631977 Apr 27 '18 at 03:55
2

So you want to do "fingerprint matching"; that's pretty different from "image matching". Fingerprint analysis has been studied in depth over the past 20 years, and several interesting algorithms have been developed to ensure the right detection rate (with respect to FAR and FRR measures - False Acceptance Rate and False Rejection Rate).

I suggest you take a look at the LFA (Local Feature Analysis) class of detection techniques, mostly built on minutiae inspection. Minutiae are specific characteristics of any fingerprint and have been classified into several classes. Mapping a raster image to a minutiae map is what most public authorities actually do to file criminals or terrorists.

See here for further references

ZZambia
  • Do you know how to calculate the False Acceptance Rate if you have a Gaussian distribution of scores for a given biometric system? – GobiasKoffi Oct 11 '10 at 17:21
  • 2
    OP wants to "create fingerprints of many images". Not compare images of human fingerprints. – Navin Jul 25 '19 at 18:05
2

For iPhone image comparison and image similarity development check out: http://sites.google.com/site/imagecomparison/

To see it in action, check out eyeBuy Visual Search on the iTunes AppStore.

Brett
2

One way you can do this is to resize the image and drop the resolution significantly (to 200x200, maybe?), storing a smaller (pixel-averaged) version for doing the comparison. Then define a tolerance threshold and compare each pixel. If the RGB values of all pixels are within the tolerance, you've got a match.
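
A minimal sketch of that scheme, assuming Pillow and NumPy (the tolerance value is just a placeholder):

```python
import numpy as np
from PIL import Image

def reduced(path, size=(200, 200)):
    """Pixel-averaged, fixed-size version of the image kept for comparisons."""
    img = Image.open(path).convert("RGB").resize(size, Image.LANCZOS)
    return np.asarray(img, dtype=np.int16)

def is_match(small_a, small_b, tolerance=10):
    """Match only if every pixel's R, G and B values are within the tolerance."""
    return bool(np.all(np.abs(small_a - small_b) <= tolerance))
```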

Your initial run-through is O(n^2), but if you catalog all matches, each new image is just an O(n) comparison (you only have to compare it to each previously inserted image). It will eventually break down, however, as the list of images to compare grows, but I think you're safe for a while.

After 400 days of running, you'll have 500,000 images, which means (discounting the time to resize the image down) 200(H)*200(W)*500,000(images)*3(RGB) = 60,000,000,000 comparisons. If every image is an exact match, you're going to be falling behind, but that's probably not going to be the case, right? Remember, you can discount an image as a match as soon as a single comparison falls outside your threshold.

lc.
2

Do you literally want to compare every image against the others? What is the application? Maybe you just need some kind of indexing and retrieval of images based on certain descriptors? Then you could look, for example, at the MPEG-7 standard for Multimedia Content Description Interface. You could then compare the different image descriptors, which will not be as accurate but will be much faster.

Anonymous
0

It seems that specialised image hashing algorithms are an area of active research, but perhaps a normal hash calculation of the image bytes would do the trick.

Are you seeking byte-identical images, or are you looking for images that are derived from the same source but may be in a different format or resolution (which strikes me as a rather hard problem)?

Ian Hopkinson