Colour quantisation is a very old and common computer graphics problem. Many methods exist to perform the task, and their efficiency depends on several parameters:

Since a grayscale pixel has a value between 0 and 1, a fast method to convert the image to black and white is to set all pixels below 0.5 to black and all pixels above 0.5 to white. This method is called thresholding and, in our case, results in the following image:

[image: the result of 0.5 thresholding]
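The thresholding step can be sketched in a few lines of Python; the image is assumed to be a nested list of grayscale values, and the function name is ours:

```python
def threshold(image, level=0.5):
    """Binarise a grayscale image: pixels below `level` become 0.0
    (black), all others become 1.0 (white)."""
    return [[0.0 if pixel < level else 1.0 for pixel in row]
            for row in image]

# A tiny 2x2 example image.
print(threshold([[0.2, 0.7], [0.5, 0.9]]))  # [[0.0, 1.0], [1.0, 1.0]]
```

Passing `level=0.4` or `level=0.6` gives the brighter and darker variants discussed below.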
Not that bad, but we were pretty lucky: the original image’s brightness was rather well balanced. A lot of detail is lost, though. Different results can be obtained by choosing “threshold values” other than 0.5, for instance 0.4 or 0.6, resulting in a much brighter or darker image:

[images: thresholding at 0.4 and at 0.6]
Choosing the best thresholding value for a given image is called average dithering. But even with the best value, the results will not improve tremendously.
Better results can be achieved with a slightly bigger palette. Here is thresholding applied to a 3-colour and to a 5-colour palette:

[images: 3-colour and 5-colour thresholding]
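Thresholding to an n-colour grayscale palette amounts to rounding each pixel to the nearest of n evenly spaced gray values; a possible sketch (function name ours):

```python
def quantise(image, levels=3):
    """Round each pixel to the nearest of `levels` evenly spaced
    gray values between 0.0 and 1.0."""
    n = levels - 1
    return [[round(pixel * n) / n for pixel in row] for row in image]

# With 3 levels the palette is {0.0, 0.5, 1.0}.
print(quantise([[0.1, 0.4, 0.8]]))  # [[0.0, 0.5, 1.0]]
```

With `levels=5` the palette becomes {0.0, 0.25, 0.5, 0.75, 1.0}, which is the 5-colour case used in the next section.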
Observe the following patterns. From a certain distance or assuming small enough pixels, they look like shades of gray despite being made of only black and white pixels:

[patterns: black-and-white tiles approximating shades of gray]
We can do even better using additional patterns such as these 25% and 75% halftone patterns:

[patterns: 25% and 75% halftone tiles]
This looks promising. Let’s try immediately on Lenna: we will use the 5-colour thresholding picture and replace the 0.25, 0.5 and 0.75 gray values with the above patterns:

[image: pattern dithering applied to Lenna]
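The replacement step can be sketched as follows. Each quantised pixel is expanded into a 2×2 black/white tile, doubling the image’s resolution; the exact tile layouts here are our own assumption, since the original patterns are not reproduced in this copy:

```python
# Hypothetical 2x2 halftone tiles for a 5-level grayscale palette.
TILES = {
    0.0:  [[0, 0], [0, 0]],
    0.25: [[0, 0], [0, 1]],
    0.5:  [[0, 1], [1, 0]],
    0.75: [[1, 1], [1, 0]],
    1.0:  [[1, 1], [1, 1]],
}

def pattern_dither(image):
    """Replace each quantised gray pixel with its 2x2 black/white
    tile, doubling the image's resolution."""
    out = []
    for row in image:
        top, bottom = [], []
        for pixel in row:
            tile = TILES[pixel]
            top += tile[0]
            bottom += tile[1]
        out += [top, bottom]
    return out
```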
Not bad for a start. But there is a lot to improve. By the way, this technique is covered by Apple’s U.S. patent 5761347.
If your screen’s quality is not very good, you might experience slightly different shades of gray for the following patterns, even though they are all made of 50% black and 50% white pixels:

[patterns: 50% black-and-white tiles]
Obviously the middle pattern looks far better to the human eye on a computer screen. Optimising patterns so that they look good to the human eye and do not create artifacts is a crucial element of a dithering algorithm. Here is another example of two patterns that approximate the same shade of gray but may look slightly different from a distance:

[patterns: two tiles approximating the same gray]
A generalisation of the dithering technique we just saw that uses a certain family of patterns is called ordered dithering. It is based on a dither matrix such as the following one:

[dither matrix]
This matrix is then repeated all over the image, and the cells are used as threshold values. In this case, pixels (0,0), (0,2), (0,4) etc. will be thresholded with a value of 0.2. Pixels (0,1), (0,3), (0,5) etc. will be thresholded with a value of 0.8, and so on, resulting in the image seen in 2.1. Here is a sample of what happens to groups of 4 uniformly-coloured pixels of various gray values:

[pattern samples for various gray values]
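Ordered dithering itself is only a few lines: the matrix is tiled over the image with modulo arithmetic and each cell serves as a per-pixel threshold. The 2×2 matrix below is a hypothetical example built from the 0.2 and 0.8 values mentioned above; the original matrix is not reproduced in this copy:

```python
def ordered_dither(image, matrix):
    """Tile `matrix` over the image and use its cells as per-pixel
    threshold values."""
    h, w = len(matrix), len(matrix[0])
    return [[0.0 if pixel < matrix[y % h][x % w] else 1.0
             for x, pixel in enumerate(row)]
            for y, row in enumerate(image)]

# Hypothetical 2x2 threshold matrix.
MATRIX = [[0.2, 0.8],
          [0.6, 0.4]]
```

A uniform 0.5 gray run through this matrix comes out as a checkerboard, which is exactly the 50% pattern seen earlier.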
Different matrices can give very different results. This is a 4×4 Bayer ordered dithering matrix:

[matrix and image: 4×4 Bayer ordered dithering]
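Bayer matrices need not be written out by hand: they follow a well-known recursive construction, where each matrix is built from four scaled copies of the next smaller one. A sketch:

```python
def bayer(n):
    """Recursively build the n x n Bayer index matrix (n a power of
    two), with entries 0 .. n*n-1; divide by n*n to get thresholds."""
    if n == 1:
        return [[0]]
    half = bayer(n // 2)
    return ([[4 * v for v in row] + [4 * v + 2 for v in row] for row in half]
          + [[4 * v + 3 for v in row] + [4 * v + 1 for v in row] for row in half])

print(bayer(4))
# [[0, 8, 2, 10], [12, 4, 14, 6], [3, 11, 1, 9], [15, 7, 13, 5]]
```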
This 4×4 cluster dot matrix creates dot patterns that mimic the halftoning techniques used by newspapers:

[matrix and image: 4×4 cluster dot dithering]
This unusual 5×3 matrix creates artistic vertical line artifacts:

[matrix and image: 5×3 line dithering]
There are two major issues with ordered dithering. First, noticeable visual artifacts may appear: even Bayer ordered dithering causes weird cross-hatch pattern artifacts on some images. Second, dithering matrices do not depend on the original image and thus do not take input data into account: high-frequency features in the image are often missed and, in some cases, cause even worse artifacts.
Instead of using a deterministic threshold matrix, one can use a different random value for each pixel in the image. This technique is simply called random dithering. Here is an example with values uniformly chosen between 0 and 1:

[image: uniform random dithering]
This is random dithering with threshold values chosen with a Gaussian distribution (mean 0.5, standard deviation 0.15):

[image: Gaussian random dithering]
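Both variants fit one small sketch. Clamping the Gaussian draw to [0, 1] is our own assumption, made so that pure black and pure white pixels always survive thresholding:

```python
import random

def random_dither(image, gaussian=False):
    """Threshold each pixel against a fresh random value: uniform in
    [0, 1), or Gaussian (mean 0.5, standard deviation 0.15) clamped
    to [0, 1]."""
    def draw():
        if gaussian:
            return min(1.0, max(0.0, random.gauss(0.5, 0.15)))
        return random.random()
    return [[0.0 if pixel < draw() else 1.0 for pixel in row]
            for row in image]
```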
The idea behind error diffusion is to compute the error caused by thresholding a given pixel and propagate it to neighbouring pixels to compensate for the average intensity loss or gain. It is based upon the assumption that a slightly out-of-place pixel causes little visual harm.
The most famous error diffusion method is the Floyd-Steinberg algorithm. It handles each pixel of the image one after the other and propagates the error to adjacent pixels:

[matrix: Floyd-Steinberg error diffusion coefficients]
The error is computed by simply subtracting the destination value from the source value. The destination value can be chosen in many ways, but its choice impacts the image far less than the choice of error distribution coefficients. The image on the left is the result using a 0.5 threshold. The image on the right is a variant called serpentine Floyd-Steinberg that processes every odd line in reverse order (right to left). The results are very close to the original Floyd-Steinberg, but the method avoids artifacts in some corner cases:

[images: standard and serpentine Floyd-Steinberg]
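A sketch of the standard (non-serpentine) algorithm, using the classic Floyd-Steinberg weights of 7/16 to the right neighbour and 3/16, 5/16, 1/16 to the three neighbours below:

```python
def floyd_steinberg(image, threshold=0.5):
    """Floyd-Steinberg error diffusion: threshold each pixel in turn
    and push the error onto its right and lower neighbours with
    weights 7/16, 3/16, 5/16 and 1/16."""
    h, w = len(image), len(image[0])
    px = [row[:] for row in image]  # working copy
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 0.0 if old < threshold else 1.0
            px[y][x] = new
            error = old - new
            if x + 1 < w:
                px[y][x + 1] += error * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += error * 3 / 16
                px[y + 1][x] += error * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += error * 1 / 16
    return px

# A uniform 0.5 gray comes out as a checkerboard.
print(floyd_steinberg([[0.5, 0.5], [0.5, 0.5]]))
```

The serpentine variant would simply iterate `x` from right to left on odd lines, mirroring the kernel horizontally.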
Fan dithering is a slight modification of Floyd-Steinberg with a very similar matrix:

[matrix and image: Fan dithering]
Jarvis, Judice and Ninke dithering uses a much more complex error diffusion matrix than Floyd-Steinberg:

[matrix and image: Jarvis-Judice-Ninke dithering]
Stucki dithering is a slight variation of Jarvis-Judice-Ninke dithering:

[matrix and image: Stucki dithering]
Burkes dithering is yet another variation:

[matrix and image: Burkes dithering]
Frankie Sierra came up with a few error diffusion matrices: Sierra dithering is a variation of Jarvis that is slightly faster because it propagates the error to fewer pixels, Two-row Sierra is a simplified version thereof, and Filter Lite is one of the simplest Floyd-Steinberg derivatives:

[matrix and image: Sierra dithering]

[matrix and image: Two-row Sierra dithering]

[matrix and image: Filter Lite dithering]
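All of these matrices fit one generic scheme: a list of (dx, dy, weight) taps over a common denominator. A sketch, using Sierra’s Filter Lite kernel (2/4 to the right, 1/4 below-left, 1/4 below) as the example:

```python
# Each tap is (dx, dy, weight); weights share a common denominator.
FILTER_LITE = ([(1, 0, 2), (-1, 1, 1), (0, 1, 1)], 4)

def diffuse(image, kernel, threshold=0.5):
    """Generic error diffusion: threshold each pixel and distribute
    the error to its neighbours according to `kernel`."""
    taps, denom = kernel
    h, w = len(image), len(image[0])
    px = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 0.0 if old < threshold else 1.0
            px[y][x] = new
            error = old - new
            for dx, dy, weight in taps:
                if 0 <= x + dx < w and 0 <= y + dy < h:
                    px[y + dy][x + dx] += error * weight / denom
    return px
```

Swapping in the Floyd-Steinberg, Jarvis, Stucki or Burkes taps changes only the `kernel` argument.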
Atkinson dithering only propagates 75% of the error, leading to a loss of contrast around black and white areas, but better contrast in the midtones:

[matrix and image: Atkinson dithering]
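Atkinson’s kernel sends 1/8 of the error to each of six neighbours, so 6/8 = 75% is propagated and the remaining quarter is deliberately dropped. A self-contained sketch:

```python
def atkinson(image, threshold=0.5):
    """Atkinson error diffusion: push 1/8 of the error to each of
    six neighbours, deliberately losing the remaining 2/8."""
    offsets = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]
    h, w = len(image), len(image[0])
    px = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 0.0 if old < threshold else 1.0
            px[y][x] = new
            for dx, dy in offsets:
                if 0 <= x + dx < w and 0 <= y + dy < h:
                    px[y + dy][x + dx] += (old - new) / 8
    return px
```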
Generalising dithering to more than two colours is straightforward with a grayscale palette. Here are the results with 4×4 Bayer ordered dithering and with Floyd-Steinberg error diffusion:

[images: Bayer ordered dithering and Floyd-Steinberg with a larger grayscale palette]
Unfortunately the result is not as good as expected. The white pattern on Lenna’s cheeks is visually disturbing, and the whole image looks darker than with pure black-and-white dithering. But then, the previous dithering results looked a lot brighter than the original image. This is due to the output medium’s gamma.
If you are reading this document on a computer screen, you may have noticed that the black and white 50% pattern was closer to a 0.73 grayscale (left) than to the intuitively expected 0.5 value (right). If you are reading a printed copy, it might be a different matter.

[patterns: 50% pattern compared with 0.73 and 0.5 grays]
The mapping linking grayscale steps to intensities is called gamma correction. An approximate law for gamma correction is given as I = v^{γ}, where v is the coded colour value (between 0 and 1), I is the perceived colour intensity (between 0 and 1) and γ is the gamma. Most modern computer systems use the sRGB gamma model, which is close to this law with γ = 2.2. As can be seen, it is highly non-linear:

[graph: intensity versus coded value for γ = 2.2]
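The 0.73 figure follows directly from inverting this law: a 50% black-and-white pattern emits half the maximum intensity, so the coded gray value that matches it is v = I^{1/γ} = 0.5^{1/2.2}:

```python
# Inverting I = v ** gamma: the coded value matching a given
# perceived intensity is v = I ** (1 / gamma).
gamma = 2.2
v = 0.5 ** (1 / gamma)
print(round(v, 2))  # 0.73
```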
Éric Brasseur wrote a pretty comprehensive essay about why, on a computer screen, a 50% black and white pattern should be scaled down to a gray value of 0.73 instead of 0.5, and how major computer graphics software totally misses the point.

Conversely, it clearly means that a gray value of 0.5 should not be emulated with a 50% dither pattern.