I'm trying to separate connected objects. It seems that Python and the watershed algorithm (scipy implementation) are well-suited to handle this.
Here is my image and automatically generated watershed seed points (local maxima of the thresholded and distance-transformed image):
seeds = myGenSeeds( image_grey )
So far, so good; there is a seed for every object.
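In case it helps, here is roughly what `myGenSeeds` does — threshold, distance-transform, then label the local maxima as markers. The threshold and neighbourhood size below are illustrative stand-ins, not my exact values:

```python
import numpy as np
from scipy import ndimage

def myGenSeeds(image_grey, threshold=128, min_distance=5):
    """Sketch of the seed generator: threshold, distance transform,
    local maxima labelled as watershed markers."""
    # Foreground mask (assumes bright objects on a dark background)
    mask = image_grey > threshold
    # Distance from each foreground pixel to the nearest background pixel
    dist = ndimage.distance_transform_edt(mask)
    # A pixel is a local maximum if it equals the max of its neighbourhood
    footprint = np.ones((2 * min_distance + 1,) * 2)
    local_max = (dist == ndimage.maximum_filter(dist, footprint=footprint)) & mask
    # Give every maximum its own integer marker id
    seeds, num_seeds = ndimage.label(local_max)
    return seeds, num_seeds
```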
Things break down when I run the watershed though:
segmented = ndimage.watershed_ift(255 - image_grey, seeds)
Both the top-middle cluster and the centre cluster are poorly separated. In the top cluster, one object flooded around the other two; in the centre cluster, the centre seed flooded into only a few pixels (though that may be too small to see here).
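For completeness, here is a self-contained version of the call, including the dtype constraints `watershed_ift` imposes (the cost image must be uint8 or uint16, the markers an integer array). The synthetic image and marker positions are of course stand-ins for my real data:

```python
import numpy as np
from scipy import ndimage

def run_watershed(image_grey, seeds):
    """Run scipy's IFT watershed on a greyscale image.
    Assumes image_grey is uint8 and seeds holds integer marker ids."""
    # watershed_ift floods from low values, so invert the image:
    # bright object interiors become the basins.
    cost = (255 - image_grey).astype(np.uint8)  # input must be uint8/uint16
    markers = seeds.astype(np.int16)            # markers must be integer
    return ndimage.watershed_ift(cost, markers)
```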
I have two questions:
- Is the watershed algorithm a good choice for separating objects like this?
- If so, is there some sort of pre-processing that I've got to do to make the image more suitable for watershed segmentation?