
I have an assignment where I need to convert an RGB image to 8-bit and 16-bit greyscale. I found this formula on Google, but it doesn't say whether it is for 8-bit or 16-bit. Can someone explain the difference between 8-bit greyscale and 16-bit greyscale?

if (File != null)
{
    File2 = new Bitmap(File);
    for (int i = 0; i < File2.Width; i++)
    {
        for (int j = 0; j < File2.Height; j++)
        {
            // Weighted average of the three channels (luminosity method).
            Color originalColor = File2.GetPixel(i, j);
            int grayScale = (int)((originalColor.R * .3) + (originalColor.G * .59) + (originalColor.B * .11));

            // Write the same value back to R, G and B to get a grey pixel.
            Color newColor = Color.FromArgb(grayScale, grayScale, grayScale);
            File2.SetPixel(i, j, newColor);
        }
    }

    // Show the converted image in the result PictureBox.
    hasilBox.Width = File2.Width;
    hasilBox.Height = File2.Height;
    hasilBox.Image = File2;
}

1 Answer


This will work for 8-bit, 16-bit and 24-bit images.

An 8-bit image has RGB channel values between 0 and 255. A 16-bit image has RGB channel values between 0 and 65535.

Let's suppose you have an 8-bit white value with RGB all set to 255.

The calculation will result in 255 (full white) which is correct.

Similarly, 16-bit white with RGB all set to 65535 will result in 65535 (full white), which is also correct.

This scaling works because the weights sum to one: 0.3 + 0.59 + 0.11 = 1.0, so, for example, 255 × 0.3 + 255 × 0.59 + 255 × 0.11 = 255 × 1.0 = 255.

For values that aren't white, the scaling works the same way. And of course black always has all three channels at zero, so the result is always 0.
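To make this concrete, here is a minimal, self-contained C# sketch of the same weighted sum applied at both bit depths. The ToGray helper is illustrative, not part of the original code; it uses Math.Round rather than a plain (int) cast so that a tiny floating-point error (e.g. a sum of 254.999…) doesn't truncate full white down by one.

using System;

class GrayscaleDemo
{
    // Illustrative helper (not from the original post): applies the
    // luminosity weights to one pixel's channel values.
    static int ToGray(int r, int g, int b)
    {
        return (int)Math.Round(r * 0.3 + g * 0.59 + b * 0.11);
    }

    static void Main()
    {
        Console.WriteLine(ToGray(255, 255, 255));       // 8-bit white  -> 255
        Console.WriteLine(ToGray(65535, 65535, 65535)); // 16-bit white -> 65535
        Console.WriteLine(ToGray(100, 150, 50));        // mid colour   -> 124
        Console.WriteLine(ToGray(0, 0, 0));             // black        -> 0
    }
}

The same helper works for both depths because the formula never references the maximum channel value; only the range of the inputs changes.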

Incidentally, the reason for these numbers is that they approximate the human eye's response to those colours. Our eyes are more sensitive to green, so the green component contributes more to the final luminosity than red or blue.
