
I'm converting a CIImage to monochrome, cropping with CICrop, and running a Sobel filter to detect edges; the #if section at the bottom is used to display the result:

// Wrap the source UIImage in a CIImage
CIImage *ci = [[CIImage alloc] initWithCGImage:uiImage.CGImage];

// Desaturate to monochrome
CIImage *gray = [CIFilter filterWithName:@"CIColorMonochrome" keysAndValues:
      @"inputImage", ci, @"inputColor", [[CIColor alloc] initWithColor:[UIColor whiteColor]],
      nil].outputImage;



// Normalize the extent to a zero origin
CGRect rect = [ci extent];
rect.origin = CGPointZero;

// Left 20% of the image, both as a CGRect and as the CIVector CICrop expects
CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width * 0.2, rect.size.height);
CIVector *cropRect = [CIVector vectorWithX:rect.origin.x Y:rect.origin.y Z:rect.size.width * 0.2 W:rect.size.height];
CIImage *left = [gray imageByCroppingToRect:cropRectLeft];

// Run the crop through CICrop as well
CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
[cropFilter setValue:left forKey:@"inputImage"];
[cropFilter setValue:cropRect forKey:@"inputRectangle"];

// The Sobel convolution will produce an image that is (0.5, 0.5, 0.5, 0.5) wherever the image is flat.
// On edges the image will contain values that deviate from that based on the strength and
// direction of the edge.
// Horizontal Sobel kernel; inputBias of 0.5 re-centers the signed edge response
const double g = 1.;
const CGFloat weights[] = { 1*g, 0, -1*g,
                            2*g, 0, -2*g,
                            1*g, 0, -1*g };
left = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
      @"inputImage", cropFilter.outputImage,
      @"inputWeights", [CIVector vectorWithValues:weights count:9],
      @"inputBias", @0.5,
      nil].outputImage;

#define VISUALHELP 1
#if VISUALHELP
// gcicontext is a CIContext; draw the filtered strip into the current CGContext
CGImageRef imageRefLeft = [gcicontext createCGImage:left fromRect:cropRectLeft];
CGContextDrawImage(context, cropRectLeft, imageRefLeft);
CGImageRelease(imageRefLeft);
#endif

Now, whenever the 3x3 convolution is not part of the CIImage pipeline, the portion of the image I run edge detection on shows up gray; but whenever the CIConvolution3X3 step is appended to the processing pipeline, the colors magically come back. This happens regardless of whether I use CIColorMonochrome or CIPhotoEffectMono at the start to remove the color. Any ideas how to keep the color out all the way to the bottom of the pipeline? Thanks.

UPD: Not surprisingly, running a crude custom monochrome kernel such as this one

kernel vec4 gray(sampler image)
{
    vec4 s = sample(image, samplerCoord(image));
    // Rec. 601 luma weights, premultiplied by alpha
    float r = (s.r * .299 + s.g * .587 + s.b * .114) * s.a;
    s = vec4(r, r, r, 1.);
    return s;
}

instead of using the standard mono filters from Apple results in the exact same issue, with the color coming back when the 3x3 convolution is part of my CI pipeline.
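
For reference, the kernel is applied roughly like this (a minimal sketch using the iOS 8 CIKernel API; the variable names and the pass-through ROI callback are just illustrative):

NSString *src = @"kernel vec4 gray(sampler image)"
      " { vec4 s = sample(image, samplerCoord(image));"
      "   float r = (s.r * .299 + s.g * .587 + s.b * .114) * s.a;"
      "   return vec4(r, r, r, 1.); }";
CIKernel *grayKernel = [CIKernel kernelWithString:src];
CIImage *grayImage = [grayKernel applyWithExtent:ci.extent
                                     roiCallback:^CGRect(int index, CGRect destRect) {
                                         // Color-only kernel: each output pixel reads just its own input pixel
                                         return destRect;
                                     }
                                       arguments:@[ci]];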

Anton Tropashko

2 Answers


The issue is that Core Image convolution operations (e.g. CIConvolution3X3, CIConvolution5X5, and CIGaussianBlur) operate on all four channels of the input image. This means that, in your code example, the resulting alpha channel will be 0.5 where you probably want it to be 1.0. Try adding a simple kernel after the convolution to set the alpha back to 1.
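
For example, a minimal sketch of such an alpha-restoring pass (assuming the iOS 8 CIColorKernel API; the kernel and variable names are illustrative):

CIColorKernel *opaqueKernel = [CIColorKernel kernelWithString:
      @"kernel vec4 opaque(__sample s) { return vec4(s.rgb, 1.0); }"];
// Force alpha back to 1.0 after the convolution
left = [opaqueKernel applyWithExtent:left.extent arguments:@[left]];

Alternatively, the built-in CIColorMatrix filter can do the same without a custom kernel: zero out inputAVector and set inputBiasVector to (0, 0, 0, 1).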

David Hayward

To follow up: I gave up on Core Image for this task. It seems that using two instances of CIFilter or CIKernel causes a conflict: someone somewhere in the Core Image innards seems to manipulate GLES state incorrectly, and thus reverse engineering what went wrong ends up costlier than using something other than Core Image (with custom CI filters, which work on iOS 8 only anyway). GPUImage seems less buggy and easier to service/debug (no affiliation on my part).

Anton Tropashko