Timestamp: Apr 16, 2008, 12:11:47 AM
Author: Sam Hocevar
Message: More fixes in the paper.
File (1 edited):

  • research/2008-displacement/paper/paper.tex

    r2290 r2291  
    4040\begin{abstract}
    4141In this contribution we introduce a little-known property of error diffusion
    42 halftoning algorithms which we call error diffusion displacement.
     42halftoning algorithms which we call \emph{error diffusion displacement}.
    4343By accounting for the inherent sub-pixel displacement caused by the error
    4444propagation, we correct an important flaw in most metrics used to assess the
     
    6565address the problem of colour reduction. Comparing two algorithms in terms of
    6666speed or memory usage is often straightforward, but how exactly a halftoning
    67 algorithm performs in terms of quality is a far more complex issue, as it
    68 highly depends on the display device and the inner workings of the human eye.
     67algorithm performs quality-wise is a far more complex issue, as it depends
     68heavily on the display device and the inner workings of the human eye.
    6969
    7070Though this document focuses on the particular case of bilevel halftoning,
     
    8282yielding visually pleasing results.
    8383
    84 Error diffusion dithering, introduced in 1976 by Floyd and Steinberg
     84\medskip Error diffusion dithering, introduced in 1976 by Floyd and Steinberg
    8585\cite{fstein}, tries to compensate for the thresholding error through the use
    8686of feedback. Typically applied in raster scan order, it uses an error diffusion
    87 matrix such as the following one:
     87matrix such as the following one, where $x$ denotes the pixel being processed:
    8888
    8989\[ \frac{1}{16} \left| \begin{array}{ccc}
     
    9393Though efforts have been made to make error diffusion parallelisable
    9494\cite{parfstein}, it is generally considered more computationally expensive
    95 than screening.
    96 
    97 Model-based halftoning is the third important algorithm category. It relies
    98 on a model of the human visual system (HVS) and attempts to minimise an error
    99 value based on that model. One such algorithm is direct binary seach (DBS)
    100 \cite{allebach}, also referred to as least-squares model-based halftoning
     95than screening, but carefully chosen coefficients yield good visual results
     96\cite{kite}.
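
By way of illustration, here is a minimal sketch of raster-scan error diffusion with the standard Floyd-Steinberg kernel (not code from the paper; the input is assumed to be a greyscale NumPy array with values in [0, 1]):

  import numpy as np

  def floyd_steinberg(image):
      """Raster-scan error diffusion with the classic 1/16 {7,3,5,1} kernel."""
      pixels = image.astype(float).copy()
      height, width = pixels.shape
      output = np.zeros_like(pixels)
      for y in range(height):
          for x in range(width):
              old = pixels[y, x]
              new = 1.0 if old >= 0.5 else 0.0      # threshold to black or white
              output[y, x] = new
              error = old - new
              # diffuse the quantisation error to the unprocessed neighbours
              if x + 1 < width:
                  pixels[y, x + 1] += error * 7 / 16
              if y + 1 < height:
                  if x > 0:
                      pixels[y + 1, x - 1] += error * 3 / 16
                  pixels[y + 1, x] += error * 5 / 16
                  if x + 1 < width:
                      pixels[y + 1, x + 1] += error * 1 / 16
      return output

A serpentine scan, used by the Ostromoukhov and optimum-kernel algorithms discussed later, processes every other row right to left with a mirrored kernel.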
     97
     98\medskip Model-based halftoning is the third important algorithm category. It
     99relies on a model of the human visual system (HVS) and attempts to minimise
     100an error value based on that model. One such algorithm is direct binary search
     101(DBS) \cite{allebach}, also referred to as least-squares model-based halftoning
    101102(LSMB) \cite{lsmb}.
    102103
     
    108109DBS yields halftones of impressive quality. However, despite efforts to make
    109110it more efficient \cite{bhatt}, it suffers from its large computational
    110 requirements and error diffusion remains a widely used technique.
     111requirements, and error diffusion remains a more widely used technique.
    111112
    112113\section{Error diffusion displacement}
     
    121122bottom-right of each pixel (Fig. \ref{fig:direction}), one may expect the
    122123resulting image to be slightly translated. This expectation is confirmed
    123 when alternatively viewing an error diffused image and the corresponding DBS
    124 halftone.
     124visually when rapidly switching between an error diffused image and the
     125corresponding DBS halftone.
    125126
    126127\begin{figure}
     
    134135This small translation is visually innocuous, but we found that it has a
    135136significant effect on error computation. A common way to compute the error between an
    136 image $h_{i,j}$ and the corresponding halftone $b_{i,j}$ is to compute the
    137 mean square error between modified versions of the images, in the form:
     137image $h_{i,j}$ and the corresponding binary halftone $b_{i,j}$ is to compute
     138the mean square error between modified versions of the images, in the form:
    138139
    139140\begin{equation}
     
    144145convolution and $v$ is a model for the human visual system.
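
As an illustration only, this metric can be evaluated along the following lines when $v$ is taken to be an isotropic Gaussian of standard deviation $\sigma$ pixels; SciPy's Gaussian filter stands in for the convolution with $v$ (a sketch, not the authors' implementation):

  import numpy as np
  from scipy.ndimage import gaussian_filter

  def hvs_error(source, halftone, sigma=1.2):
      """Per-pixel mean square error between the source and the halftone,
      both low-pass filtered by an isotropic Gaussian standing in for v."""
      filtered_source = gaussian_filter(source.astype(float), sigma)
      filtered_halftone = gaussian_filter(halftone.astype(float), sigma)
      return np.mean((filtered_source - filtered_halftone) ** 2)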
    145146
    146 To compensate for the slight translation experienced in the halftone, we
    147 use the following error metric instead:
     147To compensate for the slight translation observed in the halftone, we use the
     148following error metric instead:
    148149
    149150\begin{equation}
     
    152153
    153154\noindent where $t_{dx,dy}$ is an operator which translates the image along the
    154 $(dx,dy)$ vector.
     155$(dx,dy)$ vector. By design, $E_{0,0} = E$.
    155156
    156157A simple example can be given using a Gaussian HVS model:
     
    185186\end{figure}
    186187
    187 For instance, a Floyd-Steinberg dither of \textit{Lena}, with $\sigma = 1.2$
    188 yields a per-pixel mean square error of $8.51\times10^{-4}$. However, when
    189 taking the displacement into account, the error becomes $7.77\times10^{-4}$
    190 for $(dx,dy) = (0.167709,0.299347)$. The new, corrected error is significantly
    191 smaller, with the exact same input and output images.
     188For instance, a Floyd-Steinberg dither of \textit{Lena} with $\sigma = 1.2$
     189yields a per-pixel mean square error of $3.67\times10^{-4}$. However, when
     190taking the displacement into account, the error becomes $3.06\times10^{-4}$ for
     191$(dx,dy) = (0.165,0.293)$. The new, corrected error is significantly smaller,
     192with the exact same input and output images.
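
A possible way to evaluate $E_{dx,dy}$ and to search for the displacement that minimises it is sketched below; the sub-pixel translation by spline interpolation, the simplex search and the sign convention of the shift are assumptions of this sketch rather than details taken from the paper:

  import numpy as np
  from scipy.ndimage import gaussian_filter, shift
  from scipy.optimize import minimize

  def displaced_error(source, halftone, dx, dy, sigma=1.2):
      """E_{dx,dy}: mean square error once the filtered halftone has been
      translated by the sub-pixel vector (dx, dy) (sign convention assumed)."""
      filtered_source = gaussian_filter(source.astype(float), sigma)
      filtered_halftone = gaussian_filter(halftone.astype(float), sigma)
      moved = shift(filtered_halftone, (dy, dx), order=3, mode='nearest')
      return np.mean((filtered_source - moved) ** 2)

  def minimum_displaced_error(source, halftone, sigma=1.2):
      """Search for the (dx, dy) that minimises E_{dx,dy}, starting from (0, 0)."""
      result = minimize(lambda v: displaced_error(source, halftone, v[0], v[1], sigma),
                        x0=[0.0, 0.0], method='Nelder-Mead')
      return result.x[0], result.x[1], result.fun   # dx, dy, E_min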
    192193
    193194Experiments show that the corrected error is always noticeably smaller except
     
    196197vision sets and from the image board \textit{4chan}, providing a representative
    197198sampling of the photographs, digital art and business graphics widely exchanged
    198 on the Internet.
    199 
    200 In addition to the classical Floyd-Steinberg and Jarvis, Judice and Ninke
    201 kernels, we tested two serpentine error diffusion algorithms: Ostromoukhov's
    202 simple error diffusion \cite{ostromoukhov}, which uses a variable coefficient
    203 kernel, and Wong and Allebach's optimum error diffusion kernel \cite{wong}.
     199on the Internet nowadays \cite{4chan}.
     200
     201In addition to the classical Floyd-Steinberg and Jarvis-Judice-Ninke kernels,
     202we tested two serpentine error diffusion algorithms: Ostromoukhov's simple
     203error diffusion \cite{ostromoukhov}, which uses a variable coefficient kernel,
     204and Wong and Allebach's optimum error diffusion kernel \cite{wong}.
    204205
    205206\begin{center}
    206   \begin{tabular}{|l|l|l|}
    207   \hline
    208   & $E$ & $E_{min}$ \\ \hline
    209   raster Floyd-Steinberg & 0.00089705 & 0.000346514 \\ \hline
    210   raster Ja-Ju-Ni & 0.0020309 & 0.000692003 \\ \hline
    211   Ostromoukhov & 0.00189721 & 0.00186343 \\ \hline
    212 %    raster optimum kernel & 0.00442951 & 0.00135092 \\ \hline
    213   optimum kernel & 0.00146338 & 0.00136522 \\
     207  \begin{tabular}{|l|c|c|}
     208  \hline
     209  &~ $E\times10^4$ ~&~ $E_{min}\times10^4$ ~\\ \hline
     210  ~raster Floyd-Steinberg ~&~ 3.7902 ~&~ 3.1914 ~\\ \hline
     211  ~raster Ja-Ju-Ni        ~&~ 9.7013 ~&~ 6.6349 ~\\ \hline
     212  ~Ostromoukhov           ~&~ 4.6892 ~&~ 4.4783 ~\\ \hline
     213  ~optimum kernel         ~&~ 7.5209 ~&~ 6.5772 ~\\
    214214  \hline
    215215  \end{tabular}
     
    227227visual error measurement than $E(h,b)$. However, its major drawback is that it
    228228is highly computationally expensive: for each image, the new $(dx,dy)$ values
    229 need to be calculated to minimise the energy value.
     229need to be calculated to minimise the error value.
    230230
    231231Fortunately, we found that for a given raster or serpentine scan
     
    271271
    272272\begin{center}
    273   \begin{tabular}{|l|l|l|l|l|}
    274   \hline
    275   & $E$ & $dx$ & $dy$ & $E_{fast}$ \\ \hline
    276   raster Floyd-Steinberg & 0.00089705 & 0.16 & 0.28 & 0.00083502 \\ \hline
    277   raster Ja-Ju-Ni & 0.0020309 & 0.26 & 0.76 & 0.00192991 \\ \hline
    278   Ostromoukhov & 0.00189721 & 0.00 & 0.19 & 0.00186839 \\ \hline
    279   optimum kernel & 0.00146338 & 0.00 & 0.34 & 0.00138165 \\
     273  \begin{tabular}{|l|c|c|c|c|}
     274  \hline
     275  &~ $E\times10^4$ ~&~ $dx$ ~&~ $dy$ ~&~ $E_{fast}\times10^4$ ~\\ \hline
     276  ~raster Floyd-Steinberg ~&~ 3.7902 ~&~ 0.16 ~&~ 0.28 ~&~ 3.3447 ~\\ \hline
     277  ~raster Ja-Ju-Ni        ~&~ 9.7013 ~&~ 0.26 ~&~ 0.76 ~&~ 7.5891 ~\\ \hline
     278  ~Ostromoukhov           ~&~ 4.6892 ~&~ 0.00 ~&~ 0.19 ~&~ 4.6117 ~\\ \hline
     279  ~optimum kernel         ~&~ 7.5209 ~&~ 0.00 ~&~ 0.34 ~&~ 6.8233 ~\\
    280280  \hline
    281281  \end{tabular}
     
    291291diffusion kernels. According to the original authors, the coefficients were
    292292found "mostly by trial and error" \cite{fstein}. With our improved metric, we
    293 now have the tools to confirm or infirm Floyd and Steinberg's initial proposal.
     293now have the tools to confirm or refute Floyd and Steinberg's initial choice.
    294294
    295295We performed an exhaustive study of every $\frac{1}{16}\{a,b,c,d\}$ integer
    296 combination. We deliberately chose positive integers whose sum is 16. Error
     296combination. We deliberately chose non-negative integers whose sum was 16: error
    297297diffusion coefficients smaller than zero or adding up to more than 1 are known
    298 to be unstable \cite{stability}, and diffusing less than 100\% of the error is
    299 known to cause important error in shadow and highlight areas of the image.
    300 
    301 First we studied all possible coefficients on a pool of 250 images with an
    302 error metric $E$ based on a standard Gaussian HVS model. Since we are studying
    303 algorithms on different images but error values are only meaningful for a given
    304 image, we chose a Condorcet voting scheme to determine winners. $E_{min}$ is
    305 only given here as an indication and had no role in the computation:
     298to be unstable \cite{stability}, and diffusing less than 100\% of the error
     299causes a significant loss of detail in the shadow and highlight areas of the image.
     300
     301We studied all possible coefficients on a pool of 3,000 images with an error
     302metric $E$ based on a standard Gaussian HVS model. $E_{min}$ is given here
     303only for reference; $E$ alone was used to select the best coefficients:
    306304
    307305\begin{center}
    308306  \begin{tabular}{|c|c|c|c|}
    309307  \hline
    310   rank & coefficients & $E$ & $E_{min}$ \\ \hline
    311   1 & 8 3 5 0 & 0.00129563 & 0.000309993 \\ \hline
    312   2 & 7 3 6 0 & 0.00131781 & 0.000313941 \\ \hline
    313   3 & 9 3 4 0 & 0.00131115 & 0.000310815 \\ \hline
    314   4 & 9 2 5 0 & 0.00132785 & 0.000322754 \\ \hline
    315   5 & 8 4 4 0 & 0.0013117 & 0.00031749 \\ \hline
    316   \dots & \dots & \dots & \dots \\
     308  ~ rank ~&~ coefficients ~&~ $E$ ~&~ $E_{min}$ ~\\ \hline
     309  ~ 1 ~&~ 8 3 5 0 ~&~ 0.00129563 ~&~ 0.000309993 ~\\ \hline
     310  ~ 2 ~&~ 7 3 6 0 ~&~ 0.00131781 ~&~ 0.000313941 ~\\ \hline
     311  ~ 3 ~&~ 9 3 4 0 ~&~ 0.00131115 ~&~ 0.000310815 ~\\ \hline
     312  ~ 4 ~&~ 9 2 5 0 ~&~ 0.00132785 ~&~ 0.000322754 ~\\ \hline
     313  ~ 5 ~&~ 8 4 4 0 ~&~ 0.00131170 ~&~ 0.000317490 ~\\ \hline
     314  ~ \dots ~&~ \dots ~&~ \dots ~&~ \dots ~\\
    317315  \hline
    318316  \end{tabular}
     
    320318
    321319The exact same operation using $E_{min}$ as the decision variable yields very
    322 different results. Again, $E$ is only given here as an indication:
     320different results. Similarly, $E$ is given here only for reference:
    323321
    324322\begin{center}
    325323  \begin{tabular}{|c|c|c|c|}
    326324  \hline
    327   rank & coefficients & $E_{min}$ & $E$ \\ \hline
    328   1 & 7 3 5 1 & 0.000306251 & 0.00141414 \\ \hline
    329   2 & 6 3 6 1 & 0.000325488 & 0.00145197 \\ \hline
    330   3 & 8 3 4 1 & 0.000313537 & 0.00141632 \\ \hline
    331   4 & 7 3 4 2 & 0.000336239 & 0.00156376 \\ \hline
    332   5 & 6 4 5 1 & 0.000333702 & 0.00147671 \\ \hline
    333   \dots & \dots & \dots & \dots \\
     325  ~ rank ~&~ coefficients ~&~ $E_{min}$ ~&~ $E$ ~\\ \hline
     326  ~ 1 ~&~ 7 3 5 1 ~&~ 0.000306251 ~&~ 0.00141414 ~\\ \hline
     327  ~ 2 ~&~ 6 3 6 1 ~&~ 0.000325488 ~&~ 0.00145197 ~\\ \hline
     328  ~ 3 ~&~ 8 3 4 1 ~&~ 0.000313537 ~&~ 0.00141632 ~\\ \hline
     329  ~ 4 ~&~ 7 3 4 2 ~&~ 0.000336239 ~&~ 0.00156376 ~\\ \hline
     330  ~ 5 ~&~ 6 4 5 1 ~&~ 0.000333702 ~&~ 0.00147671 ~\\ \hline
     331  ~ \dots ~&~ \dots ~&~ \dots ~&~ \dots ~\\
    334332  \hline
    335333  \end{tabular}
    336334\end{center}
    337335
    338 Our improved metric was able to confirm that the original Floyd-Steinberg
    339 coefficients were indeed the best possible for raster scan.
     336Our improved metric allowed us to confirm that the original Floyd-Steinberg
     337coefficients were indeed amongst the best possible for raster scan.
     338More importantly, using $E$ as the decision variable could have selected
     339$\frac{1}{16}\{8,4,4,0\}$, which is in fact a poor choice.
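
The enumeration of candidate kernels described above could be sketched as follows; evaluate_kernel and the plain averaging over the image pool are placeholders for whichever per-image metric and aggregation scheme is actually used:

  from itertools import product

  def candidate_kernels(total=16):
      """All non-negative integer quadruplets {a, b, c, d} summing to 16,
      i.e. kernels that diffuse exactly 100% of the error."""
      for a, b, c in product(range(total + 1), repeat=3):
          d = total - a - b - c
          if d >= 0:
              yield (a, b, c, d)

  def best_kernel(images, evaluate_kernel):
      """Rank every candidate by its average score over the image pool;
      evaluate_kernel(kernel, image) is a placeholder for the chosen metric."""
      scores = {kernel: sum(evaluate_kernel(kernel, image) for image in images) / len(images)
                for kernel in candidate_kernels()}
      return min(scores, key=scores.get)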
    340340
    341341For serpentine scan, however, our experiment suggests that
     
    350350             the optimum coefficients $\frac{1}{16}\{7,4,5,0\}$ that improve
    351351             on the standard Floyd-Steinberg coefficients in terms of visual
    352              quality for a given HVS model}
     352             quality for the HVS model studied in section 3.}
    353353    \label{fig:lena7450}
    354354  \end{center}
     
    365365work is only the beginning: future work may cover more complex HVS models,
    366366for instance by taking into account the angular dependence of the human eye
    367 \cite{sullivan}. And now that we have a proper metric, we plan to improve all
    368 error diffusion methods that may require fine-tuning of their propagation
    369 coefficients.
     367\cite{sullivan}. We plan to use our new metric to improve all error diffusion
     368methods that may require fine-tuning of their propagation coefficients.
    370369
    371370%
     
    401400SPIE 3648, 485--494 (1999)
    402401
    403 \bibitem[6]{quality}
     402\bibitem[6]{kite}
    404403T. D. Kite,
    405404\textit{Design and Quality Assessment of Forward and Inverse Error-Diffusion
     
    410409\bibitem[7]{halftoning}
    411410R. Ulichney,
    412 \textit{Digital Halftoning}
     411\textit{Digital Halftoning}.
    413412MIT Press, 1987
    414413
    415414\bibitem[8]{spacefilling}
    416415L. Velho and J. Gomes,
    417 \textit{Digital halftoning with space-filling curves}
     416\textit{Digital halftoning with space-filling curves}.
    418417Computer Graphics (Proceedings of SIGGRAPH 91), 25(4):81--90, 1991
    419418
    420419\bibitem[9]{peano}
    421420I.~H. Witten and R.~M. Neal,
    422 \textit{Using peano curves for bilevel display of continuous-tone images}
     421\textit{Using Peano curves for bilevel display of continuous-tone images}.
    423422IEEE Computer Graphics \& Appl., 2:47--52, 1982
    424423
    425424\bibitem[10]{nasanen}
    426425R. Nasanen,
    427 \textit{Visibility of halftone dot textures}
     426\textit{Visibility of halftone dot textures}.
    428427IEEE Trans. Syst. Man. Cyb., vol. 14, no. 6, pp. 920--924, 1984
    429428
    430429\bibitem[11]{allebach}
    431430M. Analoui and J.~P. Allebach,
    432 \textit{Model-based halftoning using direct binary search}
     431\textit{Model-based halftoning using direct binary search}.
    433432Proc. of SPIE/IS\&T Symp. on Electronic Imaging Science and Tech.,
    434433February 1992, San Jose, CA, pp. 96--108
     
    436435\bibitem[12]{mcnamara}
    437436Ann McNamara,
    438 \textit{Visual Perception in Realistic Image Synthesis}
     437\textit{Visual Perception in Realistic Image Synthesis}.
    439438Computer Graphics Forum, vol. 20, no. 4, pp. 211--224, 2001
    440439
    441440\bibitem[13]{bhatt}
    442441Bhatt \textit{et al.},
    443 \textit{Direct Binary Search with Adaptive Search and Swap}
     442\textit{Direct Binary Search with Adaptive Search and Swap}.
    444443\url{http://www.ima.umn.edu/2004-2005/MM8.1-10.05/activities/Wu-Chai/halftone.pdf}
    445444
     
    450449\bibitem[15]{wong}
    451450P.~W. Wong and J.~P. Allebach,
    452 \textit{Optimum error-diffusion kernel design}
     451\textit{Optimum error-diffusion kernel design}.
    453452Proc. SPIE Vol. 3018, p. 236--242, 1997
    454453
    455454\bibitem[16]{ostromoukhov}
    456455Victor Ostromoukhov,
    457 \textit{A Simple and Efficient Error-Diffusion Algorithm}
     456\textit{A Simple and Efficient Error-Diffusion Algorithm}.
    458457in Proceedings of SIGGRAPH 2001, in ACM Computer Graphics, Annual Conference
    459458Series, pp. 567--572, 2001
     
    461460\bibitem[17]{lsmb}
    462461T.~N. Pappas and D.~L. Neuhoff,
    463 \textit{Least-squares model-baed halftoning}
     462\textit{Least-squares model-based halftoning}.
    464463in Proc. SPIE, Human Vision, Visual Proc., and Digital Display III, San Jose,
    465464CA, Feb. 1992, vol. 1666, pp. 165--176
     
    467466\bibitem[18]{stability}
    468467R. Eschbach, Z. Fan, K.~T. Knox and G. Marcu,
    469 \textit{Threshold Modulation and Stability in Error Diffusion}
     468\textit{Threshold Modulation and Stability in Error Diffusion}.
    470469in Signal Processing Magazine, IEEE, July 2003, vol. 20, issue 4, pp. 39--50
    471470
    472 \bibitem[19]{kite}
    473 T.~D. Kite,
    474 \textit{Design and quality assessment of forward and inverse error diffusion
    475 halftoning algorithms}
    476 Ph.~D. in Electrical Engineering (Image Processing), August 1998
    477 
    478 \bibitem[20]{sullivan}
     471\bibitem[19]{sullivan}
    479472J. Sullivan, R. Miller and G. Pios,
    480 \textit{Image halftoning using a visual model in error diffusion}
     473\textit{Image halftoning using a visual model in error diffusion}.
    481474J. Opt. Soc. Am. A, vol. 10, pp. 1714--1724, Aug. 1993
    482475