
I'm looking for a way to detect which of two (similar) images is sharper.

I'm thinking this could use some measure of overall sharpness to generate a score (hypothetical example: image1 has a sharpness score of 9, image2 has a sharpness score of 7, so image1 is sharper).

I've done some searches for sharpness detection/scoring algorithms, but have only come across ones that will enhance image sharpness.

Has anyone done something like this, or have any useful resources/leads?

I would be using this functionality in the context of a webapp, so PHP or C/C++ is preferred.

econstantin
    Are they two images of the same object/distance but one is sharper than the other? – Assaf Lavie Jul 11 '11 at 06:22
    interesting paper: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=4697259 (Image sharpness measure using eigenvalues) – Assaf Lavie Jul 11 '11 at 06:25
  • @gigantt, thanks, will check it out. For the most part, I imagine the images will be *mostly* similar. Perhaps slight changes in distance that could cause small variations in subject size, or a narrow depth of field that could cause different parts to be in/out of focus. – econstantin Jul 11 '11 at 08:21

5 Answers


As shown, for example, on this Matlab Central page, sharpness can be estimated by the average gradient magnitude.

I used this in Python as

from PIL import Image
import numpy as np

im = Image.open(filename).convert('L') # to grayscale
array = np.asarray(im, dtype=np.int32)

gy, gx = np.gradient(array)
gnorm = np.sqrt(gx**2 + gy**2)
sharpness = np.average(gnorm)

A similar number can be computed with the simpler numpy.diff instead of numpy.gradient; the resulting array sizes need to be adapted accordingly:

dx = np.diff(array)[1:,:] # remove the first row
dy = np.diff(array, axis=0)[:,1:] # remove the first column
dnorm = np.sqrt(dx**2 + dy**2)
sharpness = np.average(dnorm)
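
To compare two images directly, the gradient computation above can be wrapped in a small helper, reusing the PIL/NumPy imports from the first snippet (the file names below are just placeholders):

def sharpness(filename):
    im = Image.open(filename).convert('L')  # to grayscale
    array = np.asarray(im, dtype=np.int32)
    gy, gx = np.gradient(array)
    return np.average(np.sqrt(gx**2 + gy**2))

if sharpness('image1.jpg') > sharpness('image2.jpg'):
    print('image1 is sharper')
else:
    print('image2 is sharper')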
Robert Pollak
    Yes, less sharpness means more blur. – Robert Pollak Apr 10 '15 at 21:59
  • Does this need to be done on the grayscale image, as in the Matlab code? Or should it work on a color image as well? (I am assuming that array = list(img.getdata()); is this correct?) – faerubin Dec 08 '16 at 12:00
  • @faerubin, my code is for grayscale. I have now extended the snippet to show this. However, a similar method would work on color image data. – Robert Pollak Dec 12 '16 at 10:27

The simple method is to measure contrast -- the image with the largest differences between pixel values is the sharpest. You can, for example, compute the variance (or standard deviation) of the pixel values, and whichever produces the larger number wins. That looks for maximum overall contrast, which may not be what you want though -- in particular, it will tend to favor pictures with maximum depth of field.
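
For example, a minimal Python sketch of this variance measure, in the same spirit as the NumPy snippet in the other answer (the function name and file names are placeholders):

from PIL import Image
import numpy as np

def contrast_score(filename):
    # Higher variance of the grayscale pixel values = more overall contrast.
    array = np.asarray(Image.open(filename).convert('L'), dtype=float)
    return array.var()

# Whichever image produces the larger number wins.
print(contrast_score('image1.jpg'), contrast_score('image2.jpg'))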

Depending on what you want, you may prefer to use something like an FFT, to see which displays the highest frequency content. This allows you to favor a picture that's extremely sharp in some parts (but less so in others) over one that has more depth of field, so more of the image is reasonably sharp, but the maximum sharpness is lower (which is common, due to diffraction with smaller apertures).
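
A rough Python sketch of the FFT idea, scoring by the fraction of spectral energy beyond an arbitrary cutoff radius (the cutoff value and function name are assumptions, not part of the answer):

import numpy as np
from PIL import Image

def high_freq_fraction(filename, cutoff=0.1):
    array = np.asarray(Image.open(filename).convert('L'), dtype=float)
    # Magnitude spectrum with the zero frequency shifted to the centre.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(array)))
    h, w = array.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    # Share of energy lying farther than cutoff * min(h, w) from the centre.
    return spectrum[dist > cutoff * min(h, w)].sum() / spectrum.sum()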

Jerry Coffin
  • About the FFT method: interesting approach! Do you mean to compare the brightnesses of the images in certain parts of the FFT-transformed image? Would higher frequencies be located in the centre or on the edges of the image? – ellockie Mar 19 '15 at 13:00
    @ellockie: After the FFT what you have is data describing the image, but no longer an actual image. The higher frequencies would depend on the content of the image, not the location in the image (i.e., could be anywhere--the idea is that it would happen in the parts that were sharpest). – Jerry Coffin Mar 20 '15 at 04:32
  • So in the FFT-transformed image, can you say that the pixels further from the centre represent higher frequencies related to more detailed features? Thank you for the explanation. – ellockie Mar 20 '15 at 12:18

A simple practical approach would be to use edge detection (more edges == sharper image).

A quick and dirty hands-on example using PHP GD:

function getBlurAmount($image) {
    $size = getimagesize($image);
    $image = imagecreatefromjpeg($image);
    imagefilter($image, IMG_FILTER_EDGEDETECT);    
    $blur = 0;
    for ($x = 0; $x < $size[0]; $x++) {
        for ($y = 0; $y < $size[1]; $y++) {
            $blur += imagecolorat($image, $x, $y) & 0xFF;
        }
    }
    return $blur;
}

$e1 = getBlurAmount('http://upload.wikimedia.org/wikipedia/commons/thumb/5/51/Jonquil_flowers_at_f32.jpg/800px-Jonquil_flowers_at_f32.jpg');
$e2 = getBlurAmount('http://upload.wikimedia.org/wikipedia/commons/thumb/0/01/Jonquil_flowers_at_f5.jpg/800px-Jonquil_flowers_at_f5.jpg');

echo "Relative blur amount: first image " . $e1 / min($e1, $e2) . ", second image " . $e2 / min($e1, $e2);

(The image with less blur is sharper.) A more efficient approach would be to detect edges in your own code using the Sobel operator; see this PHP example (rewriting it in C++ should give a huge performance boost, I guess).
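
For comparison, here is a rough Python sketch of the Sobel idea (using scipy.ndimage; the function name and the use of the mean gradient magnitude are illustrative assumptions, not part of the original answer):

import numpy as np
from PIL import Image
from scipy import ndimage

def sobel_sharpness(filename):
    array = np.asarray(Image.open(filename).convert('L'), dtype=float)
    # Sobel gradients along each axis, combined into a gradient magnitude.
    gx = ndimage.sobel(array, axis=1)
    gy = ndimage.sobel(array, axis=0)
    return np.mean(np.hypot(gx, gy))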

lxa
  • The last byte returned from `imagecolorat()` contains the blue component. To consider red and green as well, apply the filter `imagefilter($image, IMG_FILTER_GRAYSCALE);` first. – hermannk Mar 14 '16 at 14:29
  • The `imagefilter($image, IMG_FILTER_EDGEDETECT)` call returns values around 127. If there is more contrast in the picture, the values differ more from that value _locally_; nevertheless, the mean value is always close to 127. To fix this, calculate the variance of the grey values. – hermannk Mar 14 '16 at 14:33

This paper describes a method for computing a blur factor using the DWT. It looked pretty straightforward, but instead of detecting sharpness it detects blurriness. It seems to detect edges first (a simple convolution) and then uses the DWT to accumulate and score them.
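
This is not the paper's algorithm, but a rough illustration of the general DWT idea in Python using PyWavelets; treating the detail-band energy fraction as a sharpness proxy is an assumption:

import numpy as np
import pywt
from PIL import Image

def dwt_detail_fraction(filename):
    array = np.asarray(Image.open(filename).convert('L'), dtype=float)
    # Single-level Haar decomposition: approximation + three detail sub-bands.
    cA, (cH, cV, cD) = pywt.dwt2(array, 'haar')
    detail = (cH**2).sum() + (cV**2).sum() + (cD**2).sum()
    # A blurrier image concentrates its energy in the approximation band,
    # so a larger detail fraction loosely indicates a sharper image.
    return detail / (detail + (cA**2).sum())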

gordy

Check Contrast Transfer Functions (CTF)

Here's an implementation
Here's an explanation

Eric Fortis
    As far as I can see, the paper and the implementation apply to Electron Microscopy. Extrapolation to normal photography does not seem straightforward. I downvoted because I am very interested in knowing how the CTF could be used here, and I'll remove my vote after you edit and enhance your answer. Thanks! – Dr. belisarius Jul 11 '11 at 11:35
    links are broken :( – Alessandro Mariani Feb 22 '17 at 14:41
  • You should post both of them rather than providing links, which may change. – M.Innat Nov 13 '20 at 18:50