Dithering a 24-bit Image To 16-bit
Question submitted by (16 July 1999)
 I have a program that loads any image, and then displays it on the screen. Most of these images are in 24-bit format, and I would like to know how to dither them down to 16-bit nicely so I don't get that ugly ramp effect on gradients.

 Load-time and runtime palette reduction and dithering is really a topic better suited to a tutorial than to this question-and-answer format, but I'll try to cram some notes into a few paragraphs. Search the web for articles and samples on this topic.

Let me start with some caveats: Dithering, like all algorithms, takes CPU cycles. Depending on your needs, you may want to use a paint program to dither the image before runtime, or dither the image as you load it. Also, everything I'm writing here is BS. I've never actually written my own dithering algorithms and I haven't tested these concepts -- this note is just the result of a little web-based research. The graphics APIs that I use do dithering for me (along with other lower-level tasks).

There are several different methods used to dither an image. What follows is an overview of dithering in general. I'm using a 24-bit to 15-bit conversion to illustrate this because it's easier to explain if the primary colors are all reduced by the same number of bits. I'm also assuming a top-to-bottom, left-to-right render.

Let's review color reduction before jumping into dithering. To represent a color with 24 bits, you use 8 bits for each of the components: red, green, and blue (8+8+8=24). When you reduce that color to 15 bits, you just keep the most significant 5 bits of each component (5+5+5=15). When dithering, you don't just trash the insignificant bits -- you use them to create a pattern that helps reduce banding in the image. In this case, we're discarding 3 bits from each color component. These extra bits hold the difference between the intensity that we wanted and the intensity that we got -- the error. When you simply ignore the error, you get banding. What we really want to do is preserve the overall intensity of the image by distributing this error to the surrounding pixels.
Essentially, we're adding noise to the image so that series of similarly-shaded, adjacent pixels don't form noticeable borders. Treat red, green, and blue separately. Once you've selected the best matching color for a pixel, add its error proportionately to the surrounding pixels: to the right (7/16), bottom-left (3/16), bottom (5/16), and bottom-right (1/16). [These proportions are suggested in the Foley / Van Dam book.] The result should be an "error diffusion" dithered rendering of the original image.

Another posting I read in the comp.graphics.algorithms newsgroup suggested that instead of selecting the closest match in the target, you could randomly select between the 3 closest matches. I'm pretty sure that this is NOT the most efficient way to dither and it might look horrible when implemented, but it could be a simple kludge.

Here's a somewhat related topic. When reducing an image from RGB to indexed color, the process changes. You'll want to scan the color table for the closest match to the source color. But how do you determine the closest match? You need to create some primitive value (most likely a long) that represents the RGB value of the source color and can be compared to similar values using simple subtraction to find the closest match. If we represent the bits of a 9-bit (or N-bit) color as r2r1r0g2g1g0b2b1b0, then the generated number will be r2g2b2r1g1b1r0g0b0. By interleaving the bits in this manner, we get a number with each component's most significant bits where they belong -- in the most significant bits of our generated value. Now we can just subtract the target from the source to find the smallest difference -- that's our match.

Hope this helps!

Response provided by Joseph Hall
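The error-diffusion pass with those 7/16, 3/16, 5/16, 1/16 weights can be sketched roughly as follows. To keep it short, this dithers a single 8-bit channel down to its top 5 bits; a real implementation would run the same loop once per red, green, and blue plane. The `int` working buffer and the function name are my assumptions, not part of the answer above.

```c
/* Error-diffusion dither of one w x h channel, in place, using the
 * weights given above: right 7/16, bottom-left 3/16, bottom 5/16,
 * bottom-right 1/16.  Rendering is top-to-bottom, left-to-right. */
void dither_channel(int *px, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int old = px[y * w + x];
            if (old < 0)   old = 0;       /* clamp accumulated error */
            if (old > 255) old = 255;
            int q   = old & ~0x07;        /* keep the top 5 bits */
            int err = old - q;            /* the discarded low 3 bits */
            px[y * w + x] = q;
            /* distribute the error to the not-yet-visited neighbors */
            if (x + 1 < w)     px[y * w + x + 1]       += err * 7 / 16;
            if (y + 1 < h) {
                if (x > 0)     px[(y + 1) * w + x - 1] += err * 3 / 16;
                               px[(y + 1) * w + x]     += err * 5 / 16;
                if (x + 1 < w) px[(y + 1) * w + x + 1] += err * 1 / 16;
            }
        }
    }
}
```

Note that neighbors accumulate error before they are quantized, which is what occasionally pushes a pixel up to the next 5-bit level and breaks up the bands.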
 This article was originally an entry in flipCode's Fountain of Knowledge, an open Question and Answer column that no longer exists.