31

I am trying to count the number of hairs transplanted in the following image. So practically, I have to count the number of spots I can find in the center of the image. (I've uploaded the inverted image of a bald scalp on which new hairs have been transplanted, because the original image is bloody and absolutely disgusting! To see the original non-inverted image, click here. To see the larger version of the inverted image, just click on it.) Is there any known image processing algorithm to detect these spots? I've found out that the Circle Hough Transform algorithm can be used to find circles in an image, but I'm not sure it's the best algorithm for finding the small spots in the following image.

enter image description here

P.S. Following one of the answers, I tried to extract the spots using ImageJ, but the outcome was not satisfactory:

  1. I opened the original non-inverted image (Warning! It's bloody and disgusting to see!).
  2. Split the channels (Image > Color > Split Channels) and selected the blue channel to continue with.
  3. Applied a Closing filter (Plugins > Fast Morphology > Morphological Filters) with these values: Operation: Closing, Element: Square, Radius: 2px.
  4. Applied a White Top Hat filter (Plugins > Fast Morphology > Morphological Filters) with these values: Operation: White Top Hat, Element: Square, Radius: 17px. enter image description here

However, I don't know exactly what to do after this step to count the transplanted spots as accurately as possible. I tried Process > Find Maxima, but the result does not seem accurate enough to me (with these settings: Noise tolerance: 10, Output: Single Points, Exclude Edge Maxima, Light Background):

enter image description here

As you can see, some white spots have been ignored and some white areas that are not actually hair transplant spots have been marked.

What set of filters do you recommend to accurately find the spots? Using ImageJ seems like a good option since it provides most of the filters we need. Feel free, however, to suggest approaches using other tools or libraries (like OpenCV). Any help would be highly appreciated!
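For reference, here is a rough OpenCV (Python) equivalent of the ImageJ steps I tried above. It is only a sketch: the kernel sizes mirror the radii I used in ImageJ, the file name is hypothetical, and the Otsu-threshold-plus-connected-components step is a crude stand-in for Find Maxima rather than an exact reproduction.

```python
# Rough OpenCV (Python) equivalent of the ImageJ steps above -- a sketch only.
# The kernel sizes mirror the ImageJ radii; the final counting step is a crude
# stand-in for Find Maxima, and the file name is hypothetical.
import cv2

img = cv2.imread("scalp_original.jpg")            # hypothetical file name
blue = img[:, :, 0]                               # OpenCV stores channels as B, G, R

# Step 3: closing with a small square element (radius 2 -> 5x5 kernel)
k_close = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
closed = cv2.morphologyEx(blue, cv2.MORPH_CLOSE, k_close)

# Step 4: white top-hat with a larger square element (radius 17 -> 35x35 kernel)
k_tophat = cv2.getStructuringElement(cv2.MORPH_RECT, (35, 35))
tophat = cv2.morphologyEx(closed, cv2.MORPH_TOPHAT, k_tophat)

# Crude stand-in for Find Maxima: Otsu-threshold the top-hat image and count blobs
_, binary = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n_labels, _ = cv2.connectedComponents(binary)
print("approximate spot count:", n_labels - 1)    # label 0 is the background
```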

B Faley
  • Instead of implementing it on your own, maybe try the Emgu CV library (OpenCV in .NET). I used it a little in the past, but unfortunately not enough to be of more help. http://www.emgu.com/wiki/index.php/Main_Page – M G Oct 03 '15 at 18:01

5 Answers

36

I do think you are approaching the problem in a slightly wrong way. That might sound groundless, so I'd better show my results first.

Below is a crop of your image on the left and the discovered transplants on the right. Green is used to highlight areas with more than one transplant.

enter image description here

The overall approach is very basic (I will describe it later), but it still provides reasonably accurate results. Please note, this was a first try, so there is a lot of room for improvement.

Anyway, let's get back to the initial statement that your approach is wrong. There are several major issues:

  1. the quality of your image is awful
  2. you say you want to find spots, but actually you are looking for hair transplant objects
  3. you completely ignore the fact that the average head is far from flat
  4. it looks like you think filters will add some important details to your initial image
  5. you expect algorithms to do magic for you

Let's review all these items one by one.

1. Image quality

It might be a very obvious statement, but before the actual processing you need to make sure you have the best possible initial data. You might spend weeks trying to find a way to process the photos you have without any significant results. Here are some problematic areas:

enter image description here

I bet it is hard for you to "read" those crops, despite the fact that you have the most advanced object recognition algorithms in your brain.

Also, your time is expensive, and you still need the best possible accuracy and stability. So, for any reasonable price, try to get proper contrast, sharp edges, better colors, and good color separation.

2. Better understanding of the objects to be identified

Generally speaking, you have 3D objects to identify, so you can analyze shadows in order to improve accuracy. BTW, it is almost like analyzing the surface of Mars :)

enter image description here

3. The shape of the head should not be ignored

Because of the shape of the head, the image is distorted. Again, in order to get proper accuracy, those distortions should be corrected before the actual analysis. Basically, you need to flatten the analyzed area.

enter image description here

3D model source

4. Filters might not help

Filters do not add information, but they can easily remove some important details. You've mentioned the Hough transform, so here is an interesting question: Find lines in shape

I will use this question as an example. Basically, you need to extract geometry from a given picture. The lines in the shape look a bit complex, so you might decide to use skeletonization:

enter image description here

All of a sudden, you have even more complex geometry to deal with and virtually no chance of understanding what was actually in the original picture.

5. Sorry, no magic here

Please be aware of the following:

enter image description here

You must try to get better data in order to achieve better accuracy and stability. The model itself is also very important.

Results explained

As I said, my approach is very simple: the image was posterized, and then I used a very basic algorithm to identify areas with a specific color.

enter image description here

Posterization can be done in a more clever way, area detection can be improved, etc. For this PoC, I just have a simple rule to highlight areas with more than one implant. Once the areas are identified, more advanced analysis can be performed.
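If you want to reproduce the posterization step outside GIMP, here is a minimal sketch. It assumes simple per-channel quantization, which is roughly what GIMP's Posterize does; the number of levels and the file names are illustrative choices, not the settings used above.

```python
# Minimal per-channel posterization sketch (roughly what GIMP's Posterize does).
# The number of levels and the file names are assumptions for illustration.
import cv2
import numpy as np

def posterize(image, levels=3):
    # Map each channel value to one of `levels` evenly spaced output values
    idx = np.minimum(image.astype(np.int32) * levels // 256, levels - 1)
    return (idx * 255 // (levels - 1)).astype(np.uint8)

img = cv2.imread("scalp_inverted.png")            # hypothetical file name
cv2.imwrite("posterized.png", posterize(img, levels=3))
```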

Anyway, better image quality will let you use even a simple method and get proper results.

Finally

How did the clinic manage to get Yondu as a client? :)

enter image description here

Update (tools and techniques)

  • Posterization - GIMP (default settings, minimum number of colors)
  • Transplant identification and visualization - Java program, no libraries or other dependencies
  • Once the areas are identified, it is easy to find the average size, compare each area to it, and mark significantly bigger areas as multiple transplants.

Basically, everything is done "by hand": a horizontal and vertical scan, with intersections giving the areas. Vertical lines are sorted and used to restore the actual shape. The solution is homegrown and the code is a bit ugly, so I do not want to share it, sorry.
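For readers who want to experiment with the same idea without the homegrown scanner, here is a rough approximation (explicitly not the author's code): connected-component labelling on the posterized image stands in for the horizontal/vertical scan, and areas noticeably larger than the average are flagged as probable multiple transplants. The threshold value, the mask polarity, and the 1.5x factor are assumptions.

```python
# Rough approximation of the described approach (NOT the author's code):
# connected components stand in for the horizontal/vertical scan, and regions
# noticeably larger than the average area are flagged as multiple transplants.
# The threshold value, mask polarity, and 1.5x factor are assumptions.
import cv2
import numpy as np

poster = cv2.imread("posterized.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
_, mask = cv2.threshold(poster, 127, 255, cv2.THRESH_BINARY)  # invert if spots are dark

n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
areas = stats[1:, cv2.CC_STAT_AREA]               # skip label 0 (the background)
mean_area = areas.mean()

single = int(np.sum(areas <= 1.5 * mean_area))
multiple = int(np.sum(areas > 1.5 * mean_area))
print(f"single-transplant areas: {single}, likely multiple transplants: {multiple}")
```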

The idea is pretty obvious and well explained (at least I think so). Here is an additional example with a different scan step:

enter image description here

Yet another update

A small piece of code, developed to verify a very basic idea, has evolved a bit, so now it can handle 4K video segmentation in real time. The idea is the same: horizontal and vertical scans, areas defined by intersecting lines, etc. Still no external libraries, just a lot of fun and somewhat more optimized code.

enter image description here

enter image description here

Additional examples can be found on YouTube: RobotsCanSee

or follow the progress on Telegram: RobotsCanSee

Renat Gilmanov
  • Hi, I really appreciate the effort you have put into preparing such a great answer! Could you please elaborate on what software/library you used to posterize the image? If you coded it, could you please provide the sample code as well? If you used a software tool, what settings did you apply? How did you identify areas with more than one transplant? And finally, what did you use to draw those lines around each identified area? – B Faley Oct 10 '15 at 20:27
  • Hello @Meysam. NP, interesting task. See the update. The code is really simple, an int[][] represents an image and so on, nothing to share. Please don't miss the main point of my answer - with proper data you'll find several methods that solve your problem. I implemented the basic scanning just for fun, and it shows pretty good results when the input data is OK. – Renat Gilmanov Oct 10 '15 at 22:40
  • Thank you, your solution is a good starting point to work on. Let me see if I can improve it :) I will keep you posted. – B Faley Oct 11 '15 at 12:19
5

I've just tested this solution using ImageJ, and it gave a good preliminary result:

  1. On the original image, work on each channel separately.
  2. Small closing (radius 1 or 2) in order to get rid of the hairs (the black part in the middle of each white spot).
  3. White top-hat of radius 5 in order to detect the white part around each black hair.
  4. Small closing/opening in order to clean up the image a little (you can also use a median filter).
  5. Ultimate erode in order to count the number of white blobs remaining. You can also certainly use a LoG (Laplacian of Gaussian) or a distance map. (A rough OpenCV sketch of these steps follows below.)
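Here is a rough OpenCV/Python sketch of those steps (not FiReTiTi's code): the kernel sizes mirror the listed radii, counting peaks of the distance transform stands in for ImageJ's ultimate erode, and the 0.4 peak factor and file name are assumptions.

```python
# Rough OpenCV/Python sketch of the steps above (not FiReTiTi's code).
# Kernel sizes mirror the listed radii; counting peaks of the distance map
# stands in for ImageJ's ultimate erode; the 0.4 peak factor is an assumption.
import cv2
import numpy as np

channel = cv2.imread("scalp_original.jpg")[:, :, 0]       # hypothetical file, blue channel

# 2. small closing (radius 2 -> 5x5) to remove the dark hair inside each white spot
k_small = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(channel, cv2.MORPH_CLOSE, k_small)

# 3. white top-hat of radius 5 (11x11) to keep the bright ring around each hair
k_tophat = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
tophat = cv2.morphologyEx(closed, cv2.MORPH_TOPHAT, k_tophat)

# 4. small opening plus a median filter to clean up residual noise
opened = cv2.morphologyEx(tophat, cv2.MORPH_OPEN, k_small)
clean = cv2.medianBlur(opened, 3)

# 5. binarize, then count peaks of the distance map (ultimate-erode stand-in)
_, binary = cv2.threshold(clean, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 3)
peaks = (dist > 0.4 * dist.max()).astype(np.uint8)
n_blobs, _ = cv2.connectedComponents(peaks)
print("estimated spot count:", n_blobs - 1)               # label 0 is the background
```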

[EDIT] You don't detect all the white spots using the maxima function because, after the closing, some zones are flat, so the maximum is not a point but a zone. At this point, I think that an ultimate opening or an ultimate erosion would give you the center of each white spot. But I am not sure that there is a function/plugin doing it in ImageJ. You can take a look at Mamba or SMIL.

An H-maxima (after the white top-hat) may also clean up your results a little more and improve the contrast between the white spots.

FiReTiTi
  • Could you please provide a link to the corresponding function of ImageJ you have used in each step? Is it possible to share your sample code? – B Faley Oct 05 '15 at 06:50
  • I used the plugin filter "Fast Morphology" in ImageJ. Then you just call the methods by the names I indicated, with the given parameter (only one, the radius). – FiReTiTi Oct 05 '15 at 08:36
  • You certainly have the same functions in OpenCV – FiReTiTi Oct 05 '15 at 08:36
  • Could you please elaborate on which menu items you have selected in each step? It's very hard for someone who has never used ImageJ before to understand the meaning of "small closing". – B Faley Oct 05 '15 at 14:34
  • Ok, I could find all of them except for the last one. Where is "Ultimate erode"? I could only find "Process/Binary/Erode". And for the first step, I used "Image/Color/Split Channels" and selected one of the three channels to work on. Is that exactly what you meant? – B Faley Oct 05 '15 at 15:55
  • Meysam, are you familiar with the Markov random walk? I have an idea that I think will work. Let me know if the thread is still active. – Saeed Oct 10 '15 at 15:16
  • @Saeed, the thread is extremely active :) Please share your findings, there are a lot of interested people here. – Renat Gilmanov Oct 10 '15 at 22:49
3

As Renat mentioned, you should not expect algorithms to do magic for you; however, I'm hopeful we can come up with a reasonable estimate of the number of spots. Here, I'm going to give you some hints and resources; check them out and get back to me if you need more information.

First, I'm fairly hopeful about morphological operations, but I think a good pre-processing step may boost the accuracy they yield dramatically. I want to put my finger on the pre-processing step, so I'm going to work with this image:

Enter image description here

That's the idea:

Collect and concentrate the mass around the spot locations. What do I mean by concentrating the masses? Let's open the book from the other side: as you can see, the provided image contains some salient spots surrounded by noisy gray-level dots.

By dots, I mean the pixels that are not part of a spot but whose gray values are larger than zero (pure black), scattered around the spots. Clearly, if you remove these noisy dots, you will come up with a good estimate of the spots using other processing tools such as morphological operations.

Now, how do we make the image sharper? What if we could make the dots move toward their nearest spots? This is what I mean by concentrating the masses over the spots. Doing so, only the prominent spots will remain in the image, and we will have made a significant step toward counting them.

How do we do the concentrating? Well, the idea I just explained is available in this paper, whose code is luckily available; see Section 2.2. The main idea is to use a random walker that walks on the image forever. The formulation is stated such that the walker visits the prominent spots far more often, which leads to identifying them. The algorithm is modeled as a Markov chain, and the equilibrium hitting times of the ergodic Markov chain hold the key to identifying the most salient spots.
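This is not the code from the paper, but a toy sketch of the equilibrium-distribution idea on a heavily downsampled image: transition weights favour moving toward dissimilar-but-nearby pixels, and the stationary distribution of the resulting Markov chain serves as a crude saliency map. The graph construction, downsample size, and sigma are simplified assumptions, so treat it only as an illustration of the mechanism.

```python
# Toy sketch of the equilibrium-distribution idea (NOT the code from the paper):
# nodes are pixels of a heavily downsampled image, transition weights favour
# dissimilar-but-nearby nodes, and the stationary distribution of the Markov
# chain is used as a crude saliency map. Downsample size and sigma are assumed.
import cv2
import numpy as np

gray = cv2.imread("scalp_inverted.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
small = cv2.resize(gray, (32, 32), interpolation=cv2.INTER_AREA).astype(np.float64)

h, w = small.shape
ys, xs = np.mgrid[0:h, 0:w]
coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
vals = small.ravel()

# Edge weight = intensity dissimilarity * spatial closeness (simplified)
d_int = np.abs(vals[:, None] - vals[None, :])
d_sp = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=2)
sigma = 0.15 * w
W = d_int * np.exp(-d_sp / (2.0 * sigma ** 2))
np.fill_diagonal(W, 0.0)

# Row-normalize to a Markov matrix and power-iterate to the stationary distribution
P = W / (W.sum(axis=1, keepdims=True) + 1e-12)
v = np.full(h * w, 1.0 / (h * w))
for _ in range(200):
    v = v @ P

saliency = v.reshape(h, w)
saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("saliency_small.png", cv2.resize(saliency, gray.shape[::-1]))
```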

What I described above is just a hint; you should read that short paper to get the detailed version of the idea. Let me know if you need more info or resources.

It is a pleasure to think about such interesting problems. Hope it helps.

Saeed
  • Hi. Concentrating the masses over the spots is a great idea, but we have to see how feasible it is to apply the algorithm to this image. You said the source code is available, but I cannot seem to find the source code. Could you please point me to it? – B Faley Oct 11 '15 at 12:24
  • Here is the link to the code. Note that you have to use the part of the code related to the mass concentration. Let me know how the results turn out. http://www.vision.caltech.edu/~harel/share/gbvs.php – Saeed Oct 11 '15 at 13:25
  • Hello again! I don't know whether I've used the correct function (gbvs), but the produced saliency map is as follows: http://i.stack.imgur.com/0tVBA.jpg Do you know what I've done wrong? – B Faley Oct 12 '15 at 20:21
  • @Meysam Well, let me check. I'll update you on the results. – Saeed Oct 13 '15 at 07:59
  • Thank you. Much appreciated! – B Faley Oct 13 '15 at 08:25
2

You could do the following:

  1. Threshold the image using cv::threshold
  2. Find connected components using cv::findContours
  3. Reject connected components larger than a certain size, since you seem to be concerned with small circular regions only.
  4. Count all the valid connected components.
  5. Hopefully, you will have a decent approximation of the actual number of spots.
  6. To be statistically more accurate, you could repeat steps 1-4 for a range of thresholds and take the average (a rough sketch follows below).
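A minimal OpenCV/Python sketch of those steps. The maximum blob area and the threshold range are illustrative guesses rather than tuned values, and the file name is hypothetical.

```python
# Minimal sketch of the steps above: threshold, find contours, reject oversized
# components, count, and average over several thresholds.
# MAX_AREA, the threshold range, and the file name are illustrative assumptions.
# (Uses the OpenCV 4.x return signature of findContours.)
import cv2
import numpy as np

gray = cv2.imread("scalp_inverted.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
MAX_AREA = 400   # reject blobs bigger than this many pixels (assumed)

counts = []
for t in range(100, 200, 10):                    # step 6: a range of thresholds
    _, binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)          # step 1
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)             # step 2
    valid = [c for c in contours if cv2.contourArea(c) <= MAX_AREA]     # step 3
    counts.append(len(valid))                                           # step 4

print("estimated number of spots:", int(np.mean(counts)))               # steps 5-6
```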
Ajay
2

This is what you get after applying an unsharp mask (radius 22, amount 5, threshold 2) to your image.

This increases the contrast between the dots and the surrounding areas. I used the ballpark assumption that the dots are somewhere between 18 and 25 pixels in diameter.

Now you can take a local maximum of white as a "dot" and fill it in with a black circle until the circular neighborhood of the dot (a circle of radius 10-12) erases the dot. This should let you "pick off" the dots joined to each other in clusters of more than 2. Then look for local maxima again. Rinse and repeat.

The actual "dot" areas are in stark contrast to the surrounding areas, so this should let you pick them off as well as you would by eyeballing it.

Unsharpen radius 22, amount 5, threshold 2.
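If you want to try this without GIMP, here is a rough sketch of the two stages: an unsharp mask approximating the radius 22 / amount 5 setting, then greedy peak picking where each found dot is blanked out with a radius-11 disc before the next search. The intensity cut-off and the file name are assumptions.

```python
# Rough sketch of the two stages described above (not an exact GIMP recipe):
# an unsharp mask approximating radius 22 / amount 5, then greedy peak picking
# where each found "dot" is blanked with a radius-11 disc before the next search.
# The intensity cut-off (80) and the file name are assumptions.
import cv2
import numpy as np

gray = cv2.imread("scalp_inverted.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Unsharp mask: sharpened = original + amount * (original - blurred)
blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=22)
sharp = np.clip(gray + 5.0 * (gray - blurred), 0, 255).astype(np.uint8)

# Greedy local-maxima picking with suppression, as described above
work = sharp.copy()
dots = []
while True:
    _, max_val, _, max_loc = cv2.minMaxLoc(work)
    if max_val < 80:              # stop once no bright dot remains (assumed cut-off)
        break
    dots.append(max_loc)
    cv2.circle(work, max_loc, 11, 0, thickness=-1)   # erase the dot's neighborhood

print("picked dots:", len(dots))
```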

Dmitry Rubanovich