
I’m using CISourceOverCompositing to overlay text on top of an image and I’m getting unexpected results when the text image is not fully opaque. Dark colors are not dark enough and light colors are too light in the output image.

I recreated the issue in a simple Xcode project. It creates an image with orange, white, and black text drawn at 0.3 alpha, and that looks correct. I even threw that image into Sketch, placing it on top of the background image, and it looks great. The image at the bottom of the screen shows how that looks in Sketch. The problem is that after overlaying the text on the background using CISourceOverCompositing, the white text is too opaque, as if its alpha were 0.5, and the black text is barely visible, as if its alpha were 0.1. The top image shows the programmatically created image. You can drag the slider to adjust the alpha (defaulted to 0.3), which will recreate the result image.

[Screenshot: the programmatically composited image (top) vs. the Sketch reference (bottom)]

The code is included in the project, of course, but also here. This creates the text overlay with 0.3 alpha, which appears as expected.

let colorSpace = CGColorSpaceCreateDeviceRGB()
let alphaInfo = CGImageAlphaInfo.premultipliedLast.rawValue

let bitmapContext = CGContext(data: nil, width: Int(imageRect.width), height: Int(imageRect.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: alphaInfo)!
bitmapContext.setAlpha(0.3)
bitmapContext.setTextDrawingMode(CGTextDrawingMode.fill)
bitmapContext.textPosition = CGPoint(x: 20, y: 20)

let displayLineTextWhite = CTLineCreateWithAttributedString(NSAttributedString(string: "hello world", attributes: [.foregroundColor: UIColor.white, .font: UIFont.systemFont(ofSize: 50)]))
CTLineDraw(displayLineTextWhite, bitmapContext)

let textCGImage = bitmapContext.makeImage()!
let textImage = CIImage(cgImage: textCGImage)

Next that text image is overlaid on top of the background image, which does not appear as expected.

let combinedFilter = CIFilter(name: "CISourceOverCompositing")!
combinedFilter.setValue(textImage, forKey: "inputImage")
combinedFilter.setValue(backgroundImage, forKey: "inputBackgroundImage")
let outputImage = combinedFilter.outputImage!
  • Wish I could help you. This question *did* receive my attention. The Apple doc suggested checking out the formula it uses ( http://keithp.com/~keithp/porterduff/p253-porter.pdf ). Have you checked it? Specifically page 4, section 4.3? It's a bit "Greek" for my current usage of CI, but maybe it'll help you? Seems like *multiplying* alpha may be happening unexpectedly? – dfd Feb 28 '18 at 16:01
  • Thanks @dfd that's a good thought, but I don't see how. I've updated the question with more details and a sample project if anything stands out to you! – Jordan H Mar 04 '18 at 20:49
  • @Joey you won't believe it, but I'm having the same issue. I tried a lot of things but didn't find a proper solution, so I used one trick: I just put a white UIView behind the image and it works perfectly. Give it a try, maybe it will help :) – Ravi Panchal Mar 05 '18 at 13:59

3 Answers


FOR BLACK-AND-WHITE TEXT

If you use the .normal compositing operation you'll definitely get a different result than with .hardLight. Your picture shows the result of a .hardLight operation.

The .normal operation is the classical OVER op with the formula: (Image1 * A1) + (Image2 * (1 – A1)).

The text here is premultiplied (RGB * A), so in this particular case the RGB values depend on A's opacity. The RGB of the text image can contain any color, including black. If A = 0 (black alpha) and RGB = 0 (black color) and your image is premultiplied, the whole image is totally transparent; if A = 1 (white alpha) and RGB = 0 (black color), the image is opaque black.

If your text had no alpha, the .normal operation would give you an ADD op: Image1 + Image2.
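
To make the OVER formula concrete, here is a minimal sketch (my own, not from the answer) of the op on a single premultiplied RGBA pixel:

import CoreGraphics

//one premultiplied RGBA pixel, components in 0...1
struct Pixel { var r, g, b, a: CGFloat }

//classical OVER op on premultiplied pixels:
//result = source + destination * (1 - sourceAlpha).
//the source RGB is not multiplied by A1 again here,
//because premultiplication has already done that
func over(_ src: Pixel, _ dst: Pixel) -> Pixel {
    let k = 1 - src.a
    return Pixel(r: src.r + dst.r * k,
                 g: src.g + dst.g * k,
                 b: src.b + dst.b * k,
                 a: src.a + dst.a * k)
}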


To get what you want, you need to set the compositing operation to .hardLight.

The .hardLight compositing operation works as .multiply when the alpha of the text image is less than 50 percent (A < 0.5, the image is almost transparent).

Formula for .multiply: Image1 * Image2

It works as .screen when the alpha of the text image is greater than or equal to 50 percent (A >= 0.5, the image is semi-opaque).

Formula 1 for .screen: (Image1 + Image2) – (Image1 * Image2)

Formula 2 for .screen: 1 – (1 – Image1) * (1 – Image2)

The .screen operation gives a much softer result than .plus, and it keeps alpha no greater than 1 (the .plus operation adds the alphas of Image1 and Image2, so you might get alpha = 2 if both are 1). The .screen compositing operation is good for making reflections.


func editImage() {

    print("Drawing image with \(selectedOpacity) alpha")

    let text = "hello world"
    let backgroundCGImage = #imageLiteral(resourceName: "background").cgImage!
    let backgroundImage = CIImage(cgImage: backgroundCGImage)
    let imageRect = backgroundImage.extent

    //set up transparent context and draw text on top
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let alphaInfo = CGImageAlphaInfo.premultipliedLast.rawValue

    let bitmapContext = CGContext(data: nil, width: Int(imageRect.width), height: Int(imageRect.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: alphaInfo)!
    bitmapContext.draw(backgroundCGImage, in: imageRect)

    bitmapContext.setAlpha(CGFloat(selectedOpacity))
    bitmapContext.setTextDrawingMode(.fill)

    //TRY THREE COMPOSITING OPERATIONS HERE 
    bitmapContext.setBlendMode(.hardLight)
    //bitmapContext.setBlendMode(.multiply)
    //bitmapContext.setBlendMode(.screen)

    //white text
    bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: (20 + 60) * UIScreen.main.scale)
    let displayLineTextWhite = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.white, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
    CTLineDraw(displayLineTextWhite, bitmapContext)

    //black text
    bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: 20 * UIScreen.main.scale)
    let displayLineTextBlack = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.black, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
    CTLineDraw(displayLineTextBlack, bitmapContext)

    let outputImage = bitmapContext.makeImage()!

    topImageView.image = UIImage(cgImage: outputImage)
}

So, to recreate this compositing operation, you need the following logic:

//rgb1 – text image 
//rgb2 - background
//a1   - alpha of text image

if a1 >= 0.5 { 
    //use this formula for compositing: 1–(1–rgb1)*(1–rgb2) 
} else { 
    //use this formula for compositing: rgb1*rgb2 
}
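
Here is that branch as a runnable sketch (my naming; straight 0...1 channel values assumed). Note that the comment thread below concludes that the standard .hardLight actually branches on the color values rather than alpha, so treat this as the answer's simplified model:

//rgb1 and rgb2 are single channels of text and background in 0...1
func hardLightSketch(rgb1: Double, rgb2: Double, a1: Double) -> Double {
    if a1 >= 0.5 {
        //.screen branch: 1 – (1 – rgb1) * (1 – rgb2)
        return 1 - (1 - rgb1) * (1 - rgb2)
    } else {
        //.multiply branch: rgb1 * rgb2
        return rgb1 * rgb2
    }
}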

I recreated the image in the compositing app The Foundry NUKE 11. Offset = 0.5 here is Add = 0.5.

I used the property Offset = 0.5 because transparency = 0.5 is the pivot point of the .hardLight compositing operation.


FOR COLOR TEXT

You need to use the .sourceAtop compositing operation in case you have ORANGE (or any other color) text in addition to B&W text. Applying the .sourceAtop case of the setBlendMode method makes Core Graphics use the luminance of the background image to determine what to show. Alternatively, you can employ the CISourceAtopCompositing Core Image filter instead of CISourceOverCompositing.

bitmapContext.setBlendMode(.sourceAtop)

or

let compositingFilter = CIFilter(name: "CISourceAtopCompositing")

The .sourceAtop operation has the following formula: (Image1 * A2) + (Image2 * (1 – A1)). As you can see, you need two alpha channels: A1 is the alpha of the text and A2 is the alpha of the background image.

bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: (20 + 60) * UIScreen.main.scale)
let displayLineTextOrange = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.orange, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
CTLineDraw(displayLineTextOrange, bitmapContext)
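
As a quick sketch of the Atop formula above on a single straight-alpha channel (names and the straight-alpha assumption are mine, not from the answer):

//source-atop on one channel: result = src * dstAlpha + dst * (1 - srcAlpha).
//the output keeps the background's alpha, so the text shows
//only where the background is opaque
func sourceAtopSketch(src: Double, srcAlpha: Double,
                      dst: Double, dstAlpha: Double) -> (color: Double, alpha: Double) {
    let color = src * dstAlpha + dst * (1 - srcAlpha)
    return (color, dstAlpha)
}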


  • Almost correct, but which blend mode is used depends on the color of the image, not the alpha. – Juraj Antas Mar 06 '18 at 15:01
  • It depends on both in this case: RGB and A. The math is applied to RGB; alpha controls opacity, because this is an RGB*A pattern (a premultiplied image). – Andy Fedoroff Mar 06 '18 at 15:03
  • Do you know how A is computed in your math? – Juraj Antas Mar 06 '18 at 15:08
  • Would you mind writing it here? I am testing in a custom CIFilter. – Juraj Antas Mar 06 '18 at 15:13
  • What exactly should I write? – Andy Fedoroff Mar 06 '18 at 15:14
  • How to get a float from a vec4 – in other words, from the input RGBA you need to get a single number to compare to 0.5 to choose the blending formula. – Juraj Antas Mar 06 '18 at 15:17
  • For instance: you've got a premultiplied RGBA image. When you use the slider, you must use only alpha, because lowering A lowers RGB. Use a variable for alpha in the vec4. – Andy Fedoroff Mar 06 '18 at 15:22
  • The image that is generated is not using premultiplied alpha – that would be only RGB, but it is RGBA. What I am trying to do now is exactly recreate .hardLight in a custom CIFilter. – Juraj Antas Mar 06 '18 at 15:25
  • https://developer.apple.com/documentation/coregraphics/cgblendmode/1455901-hardlight – Andy Fedoroff Mar 06 '18 at 15:33
  • This is definitely controlled by alpha. – Andy Fedoroff Mar 06 '18 at 15:33
  • The whole image has alpha == 0.3 – Juraj Antas Mar 06 '18 at 15:35
  • @IBAction func sliderValueChanged(_ sender: UISlider) { selectedOpacity = sender.value editImage() } – Andy Fedoroff Mar 06 '18 at 15:38
  • Yes, alpha == 0.3, slider controls Alpha, and Alpha controls RGB. – Andy Fedoroff Mar 06 '18 at 15:38
  • So, A*RGB, and if R=1, B=1, G=1, RGB == 0.3 – Andy Fedoroff Mar 06 '18 at 15:39
  • That's what I said. – Andy Fedoroff Mar 06 '18 at 15:40
  • And for R=0, G=0, B=0, A=0.3 -> A*RGB == 0.0? That would mean that in both cases it is lower than 0.5 and multiply is always used... not true – Juraj Antas Mar 06 '18 at 15:43
  • The logic is wrong. A controls RGB, not RGB controls A. – Andy Fedoroff Mar 06 '18 at 15:45
  • A can be 0.5 or 0.1. So A*RGB. And RGB=0.5 or RGB=0.1 – Andy Fedoroff Mar 06 '18 at 15:46
  • The RGB image can contain any color, including black. If A=0 and RGB=0 and the image is premultiplied, the whole image is transparent; if A=1 and RGB=0, the image is opaque black. – Andy Fedoroff Mar 06 '18 at 15:51
  • That I know, but it does not help me recreate the right blend mode. – Juraj Antas Mar 06 '18 at 15:53
  • Which compositing operation are you recreating? .hardLight? – Andy Fedoroff Mar 06 '18 at 15:53
  • if a1 >= 0.5 { // 1 – (1 – rgb1) * (1 – rgb2) } else { // rgb1 * rgb2 } – Andy Fedoroff Mar 06 '18 at 15:59
  • The problem is with A: if it is alpha, then it is a constant 0.3; if it is RGB*A, then again it is 0.3 or 0, depending on whether the text is white or black... so there is no case where it is bigger than 0.5 ;) This is why I think it is computed from color, not alpha. – Juraj Antas Mar 06 '18 at 16:30
  • You were right – alpha has no impact on the hardLight operation, only on transparency. – Andy Fedoroff Mar 06 '18 at 17:11
  • Hey Andy, thanks for your answer. Hard light does look correct for purely black and white text, but unfortunately for colored text like `UIColor.orange`, for example, it does not appear as expected: it is inappropriately blended with the background, so it can appear red rather than uniformly orange. I'm really looking to achieve the normal blend mode, not hard light, multiply, or screen. – Jordan H Mar 06 '18 at 19:08
  • @Joey, if it's recreated with preliminary color correction (what you can see in NUKE interface) it looks OK. – Andy Fedoroff Mar 06 '18 at 19:14
  • @Joey, if you're looking to achieve the .normal blend mode (this is classical OVER compositing operation), the formula is: RGB1*A1 + RGB2*(1–A1) – Andy Fedoroff Mar 06 '18 at 19:17
  • The problem is the color space: Generic RGB on iOS, sRGB in graphics programs. Without conversion between the spaces the results will never be the same. – Juraj Antas Mar 06 '18 at 20:09
  • A gamma property can compensate for the sRGB curve. – Andy Fedoroff Mar 06 '18 at 20:11
  • For orange/black/white colors use the ".sourceAtop" operation. – Andy Fedoroff Mar 06 '18 at 20:53
  • Yep, `sourceAtop` is all that's needed! Interestingly, I also tried `CISourceAtopCompositing` but it appears exactly the same as `CISourceOverCompositing` from what I can tell. Odd. Anyways, if you'll update the answer I'll accept it! Thank you! – Jordan H Mar 06 '18 at 21:46
  • I did try the `CISourceAtopCompositing` CIFilter, but it did not result in the same outcome as using `sourceAtop` blend mode. It appears the same as `CISourceOverCompositing`. But your answer says it would obtain the same results. I'd think it would too, but it doesn't, strangely. – Jordan H Mar 07 '18 at 16:35
  • You need to add an alpha channel to the background image first. – Andy Fedoroff Mar 07 '18 at 16:36
  • Hey @andy, can you expand upon what you mean by adding an alpha channel to the background image first to make this work using `CISourceAtopCompositing`? I made part of the background image transparent but I continue to get the same result. – Jordan H Mar 17 '18 at 22:48
  • The formula for the Atop operation is A*b + B*(1 - a). The resulting image shows the shape of image B, with A covering B where the images overlap. – Andy Fedoroff Mar 17 '18 at 22:50
  • a is the alpha channel of the foreground image, b is the alpha channel of the background image. – Andy Fedoroff Mar 17 '18 at 22:51
  • A is the RGB channels of the foreground image, B is the RGB channels of the background image. – Andy Fedoroff Mar 17 '18 at 22:52
  • As the formula says, you definitely need alpha for the background. – Andy Fedoroff Mar 17 '18 at 22:53
  • Ok, hm, with alpha on the background image I'm getting the same unexpected outcome. Can you try it with the sample project see if you can get it working with `CISourceAtopCompositing`? – Jordan H Mar 17 '18 at 23:02
  • At the moment I can't reach Xcode. Only tomorrow))) – Andy Fedoroff Mar 17 '18 at 23:06
  • You're definitely right about the problem being caused by the background image. If I create a new image by drawing the background into the context and getting it out via `makeImage()` to use that instead, it appears as expected. But that's really inefficient and uses a lot of RAM for large photos. I'm wondering what's an efficient way to get that background image in the right format. – Jordan H Mar 18 '18 at 18:30
  • I've tried it today. It seems OK with the compositing op. – Andy Fedoroff Mar 18 '18 at 18:54
  • I suppose the only efficient way to get the background image in the right format is to prepare it in a compositing app like Nuke or Fusion. – Andy Fedoroff Mar 18 '18 at 18:55
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/167051/discussion-between-andy-and-joey). – Andy Fedoroff Mar 18 '18 at 18:59

After a lot of back and forth trying different things (thanks @andy and @Juraj Antas for pushing me in the right direction), I finally have the answer.

Drawing into a Core Graphics context produces the correct appearance, but it is more costly to draw images that way. The problem seemed to be with CISourceOverCompositing, but it actually lies in the fact that, by default, Core Image filters work in linear color space whereas Core Graphics works in perceptual color space, which explains the different results. You can, however, create a Core Graphics image from the Core Image filter using a Core Image context that performs no color management, thus matching the output of the Core Graphics approach. So the original code was fine; the image just had to be output a bit differently.

let ciContext = CIContext(options: [kCIContextWorkingColorSpace: NSNull()])
let outputImage = ciContext.createCGImage(outputCIImage, from: outputCIImage.extent) 
//this image appears as expected
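
For context, a minimal end-to-end sketch (assuming the textImage and backgroundImage built in the question):

import CoreImage
import UIKit

//composite exactly as in the question
let combinedFilter = CIFilter(name: "CISourceOverCompositing")!
combinedFilter.setValue(textImage, forKey: kCIInputImageKey)
combinedFilter.setValue(backgroundImage, forKey: kCIInputBackgroundImageKey)
let outputCIImage = combinedFilter.outputImage!

//render with color management disabled, so Core Image composites
//in the same space Core Graphics does
let ciContext = CIContext(options: [kCIContextWorkingColorSpace: NSNull()])
let outputCGImage = ciContext.createCGImage(outputCIImage, from: outputCIImage.extent)!
let result = UIImage(cgImage: outputCGImage)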
  • I also added `outputColorSpace`. For Swift 4: `let ciContext = CIContext(options: [CIContextOption.outputColorSpace: NSNull(), CIContextOption.workingColorSpace: NSNull()])` – lenooh Oct 28 '18 at 22:28

Final answer: the formula in CISourceOverCompositing is a good one. It is the right thing to do.

BUT

It is working in the wrong color space. In graphics programs you most likely have an sRGB color space. On iOS, the Generic RGB color space is used. This is why the results don't match.

Using a custom CIFilter I recreated the CISourceOverCompositing filter.
s1 is the text image.
s2 is the background image.

The kernel for it is this:

kernel vec4 opacity(__sample s1, __sample s2) {
    vec3 text = s1.rgb;
    float textAlpha = s1.a;
    vec3 background = s2.rgb;

    vec3 res = background * (1.0 - textAlpha) + text;
    return vec4(res, 1.0);
}
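
To actually run that kernel, one possible way (my sketch, not part of the original answer) is to wrap it in a CIColorKernel:

import CoreImage

//the kernel source from above, stored in a Swift multiline string
let kernelSource = """
kernel vec4 opacity(__sample s1, __sample s2) {
    vec3 text = s1.rgb;
    float textAlpha = s1.a;
    vec3 background = s2.rgb;
    vec3 res = background * (1.0 - textAlpha) + text;
    return vec4(res, 1.0);
}
"""

let kernel = CIColorKernel(source: kernelSource)!

//textImage and backgroundImage are the CIImages from the question
let composited = kernel.apply(extent: backgroundImage.extent,
                              arguments: [textImage, backgroundImage])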

So to fix this color 'issue' you must convert the text image from RGB to sRGB. I guess your next question will be how to do that ;)
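
One hedged sketch of such a conversion (my own; it simply redraws the CGImage into a bitmap context tagged with sRGB and lets Quartz do the matching):

import CoreGraphics

//redraw an image into an sRGB-tagged context; Quartz converts
//from the image's tagged color space to the context's space
func convertedToSRGB(_ image: CGImage) -> CGImage? {
    guard let srgb = CGColorSpace(name: CGColorSpace.sRGB),
          let ctx = CGContext(data: nil,
                              width: image.width,
                              height: image.height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,
                              space: srgb,
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return ctx.makeImage()
}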

Important: iOS does not support device-independent or generic color spaces. iOS applications must use device color spaces instead. (Apple doc about color spaces)

[Test image with RGB and sRGB color spaces]

  • Thanks for your answer! Unfortunately, while this appears as expected using the 50% gray test I had, it doesn't look correct on real photos due to the overlay blend mode. I updated the project to use a real photo that more clearly demonstrates the issue, and I added your proposed changes that you can uncomment to try it with the real background image. – Jordan H Mar 06 '18 at 04:21
  • Try exchanging .overlay with .hardLight. Is the result what you're after? If not, the only other way is a custom CIFilter. – Juraj Antas Mar 06 '18 at 08:37
  • Hm hard light works well for black and white text, but trying orange text, it appears red. – Jordan H Mar 06 '18 at 15:44
  • I am testing a custom CIFilter; I need to find a good math formula for that "normal" blend mode, using formulas from here: http://www.simplefilter.de/en/basics/mixmods.html – Juraj Antas Mar 06 '18 at 15:51
  • With a custom CIFilter I got the same result as with CISourceOverCompositing. The CISourceOverCompositing formula is: vec3 res = background * (1.0 - textAlpha) + text; (vec3 is an RGB color vector). Now the question is still what the right formula is for the composition graphics programs use for the "normal" blend mode. I'm having a hard time finding it. – Juraj Antas Mar 06 '18 at 19:10
  • I am starting to think that the problem is not in the formula but in the color space. Graphics programs are using sRGB... – Juraj Antas Mar 06 '18 at 19:40
  • Hmm I tried replacing `CGColorSpaceCreateDeviceRGB()` with `CGColorSpace(name: CGColorSpace.sRGB)` but I'm getting the same result. – Jordan H Mar 06 '18 at 20:45
  • Yeah, I know. There is a catch: "Important: iOS does not support device-independent or generic color spaces. iOS applications must use device color spaces instead." https://developer.apple.com/library/content/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_color/dq_color.html – Juraj Antas Mar 06 '18 at 20:57
  • In other words, Apple is saying that you need to do it yourself. It is not that hard (but not exactly easy – color spaces are tricky): http://www.ryanjuckett.com/programming/rgb-color-space-conversion/?start=2 – Juraj Antas Mar 06 '18 at 20:59
  • Thanks for your answer! I was able to obtain the desired results simply by using the `sourceAtop` blend mode instead of `hardLight`, if you hadn't seen that in the other comments. – Jordan H Mar 07 '18 at 16:36
  • Yes, got it. The simple solution is always the best. Andy's answer is good, except that A in his equations is not alpha – alpha in your test image is never bigger than 0.3. It is lightness, and can be computed as: float lightness = 0.2126*b.r + 0.7152*b.g + 0.0722*b.b. He should fix that. HardLight is only good for grey or near-grey images, as you get a color shift if you use colors. – Juraj Antas Mar 08 '18 at 10:16