# Changeset 2293 for research/2008-displacement/paper/paper.tex

Timestamp:
Apr 16, 2008, 12:12:05 AM
Message:
• Final version of the paper. Uploaded at the last minute, of course.
(LSMB) \cite{lsmb}. HVS models are usually low-pass filters. Nasanen \cite{nasanen}, Analoui and Allebach \cite{allebach} found that using Gaussian models gave visually pleasing results, an observation confirmed by independent visual perception studies \cite{mcnamara}. DBS yields halftones of impressive quality. However, despite efforts to make

Boustrophedonic (serpentine) scanning has been shown to cause fewer visual artifacts \cite{halftoning}, but other, more complex processing paths such as Hilbert curves \cite{spacefilling}, \cite{peano} are seldom used as they do not improve the image quality significantly.

Intuitively, as the error is always propagated to the bottom-left or

Experiments show that for a given image and a given corresponding halftone, $E_{dx,dy}$ has a local minimum almost always away from $(dx,dy) = (0,0)$ (Fig.~\ref{fig:lena-values}). Let $E$ be an error metric where this remains true. We call the local minimum $E_{min}$:

\begin{figure}
\begin{center}
\begin{minipage}[c]{0.8\textwidth}
\input{lena-values}
\end{minipage}
\caption{Mean square error for the \textit{Lena} image ($\times10^4$).
$v$ is a simple $11\times11$ Gaussian convolution kernel with $\sigma = 1.2$
and $(dx,dy)$ vary in $[-1,1]\times[-1,1]$.}
\label{fig:lena-values}
\end{center}
\end{figure}

we tested two serpentine error diffusion algorithms: Ostromoukhov's simple error diffusion \cite{ostromoukhov}, which uses a variable coefficient kernel, and Wong and Allebach's optimum error diffusion kernel \cite{wong}.

the error computed at $(dx,dy)$. As $E_{fast}$ does not depend on the image, it is a lot faster to compute than $E_{min}$, and as it is statistically closer to $E_{min}$, we can expect it to be a better error estimation than $E$:

\begin{center}
\begin{tabular}{|l|c|c|c|c|c|} \hline
&~ $E\times10^4$ ~&~ $E_{min}\times10^4$ ~&~ $dx$ ~&~ $dy$ ~&~ $E_{fast}\times10^4$ ~\\ \hline
~raster Floyd-Steinberg ~&~ 3.7902 ~&~ 3.1914 ~&~ 0.16 ~&~ 0.28 ~&~ 3.3447 ~\\ \hline
~raster Ja-Ju-Ni        ~&~ 9.7013 ~&~ 6.6349 ~&~ 0.26 ~&~ 0.76 ~&~ 7.5891 ~\\ \hline
~Ostromoukhov           ~&~ 4.6892 ~&~ 4.4783 ~&~ 0.00 ~&~ 0.19 ~&~ 4.6117 ~\\ \hline
~optimum kernel         ~&~ 7.5209 ~&~ 6.5772 ~&~ 0.00 ~&~ 0.34 ~&~ 6.8233 ~\\ \hline
\end{tabular}
\end{center}
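To make these metrics concrete, here is a toy Python sketch of the displaced error $E_{dx,dy}$ and of a grid-search estimate of $E_{min}$. This is an illustration only, not the paper's implementation: images are plain 2-D lists of grey levels in $[0,1]$, the HVS model is the same $11\times11$ Gaussian with $\sigma = 1.2$, and a coarse grid search stands in for a real minimiser.

```python
import math

def gaussian_kernel(size=11, sigma=1.2, dx=0.0, dy=0.0):
    """Normalized size x size Gaussian whose centre is shifted by (dx, dy)."""
    half = size // 2
    k = [[math.exp(-((i - dy) ** 2 + (j - dx) ** 2) / (2 * sigma ** 2))
          for j in range(-half, half + 1)]
         for i in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def convolve(img, kernel):
    """Same-size convolution with zero padding outside the image."""
    h, w = len(img), len(img[0])
    n = len(kernel)
    half = n // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(n):
                for j in range(n):
                    yy, xx = y + i - half, x + j - half
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += kernel[i][j] * img[yy][xx]
            out[y][x] = acc
    return out

def mse(a, b):
    """Mean square error between two same-size 2-D arrays."""
    h, w = len(a), len(a[0])
    return sum((a[y][x] - b[y][x]) ** 2
               for y in range(h) for x in range(w)) / (h * w)

def displaced_error(image, halftone, dx, dy, size=11, sigma=1.2):
    """E_{dx,dy}: compare the Gaussian-filtered image with the halftone
    seen through the same Gaussian displaced by (dx, dy)."""
    ref = convolve(image, gaussian_kernel(size, sigma))
    ht = convolve(halftone, gaussian_kernel(size, sigma, dx, dy))
    return mse(ref, ht)

def e_min(image, halftone, steps=8, size=11, sigma=1.2):
    """Approximate E_min by brute force over (dx, dy) in [-1,1]^2."""
    best = None
    for i in range(steps + 1):
        for j in range(steps + 1):
            dx = -1.0 + 2.0 * i / steps
            dy = -1.0 + 2.0 * j / steps
            e = displaced_error(image, halftone, dx, dy, size, sigma)
            if best is None or e < best[0]:
                best = (e, dx, dy)
    return best  # (E_min, dx, dy)
```

A real implementation would convolve in Fourier space and refine $(dx,dy)$ with a descent method; the brute-force loops here are only meant to make the definitions of $E_{min}$ and of the reported displacement $(dx,dy)$ unambiguous.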
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
~ rank ~&~ coefficients ~&~ $E_{min}\times10^4$ ~&~ $E\times10^4$ ~\\ \hline
~ 1 ~&~ 6 3 5 2 ~&~ 3.70465 ~&~ 4.91020 ~\\ \hline
~ 2 ~&~ 7 3 5 1 ~&~ 3.79556 ~&~ 4.68588 ~\\ \hline
~ \dots ~&~ \dots ~&~ \dots ~&~ \dots ~\\ \hline
~ 15 ~&~ 7 3 6 0 ~&~ 3.94217 ~&~ 4.65512 ~\\ \hline
~ \dots ~&~ \dots ~&~ \dots ~&~ \dots ~\\ \hline
~ 22 ~&~ 8 3 5 0 ~&~ 4.03699 ~&~ 4.65834 ~\\ \hline
~ \dots ~&~ \dots ~&~ \dots ~&~ \dots ~\\ \hline
\end{tabular}
\end{center}

coefficients were indeed amongst the best possible for raster scan. More importantly, using $E$ as the decision variable may have elected $\frac{1}{16}\{7,3,6,0\}$ or $\frac{1}{16}\{8,3,5,0\}$, which are in fact poor choices. For serpentine scan, however, our experiment suggests that

\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{output-7-3-5-1-serp.eps} ~
\includegraphics[width=0.4\textwidth]{output-7-4-5-0-serp.eps}
\end{center}
\begin{center}
\includegraphics[width=0.4\textwidth]{crop-7-3-5-1-serp.eps} ~
\includegraphics[width=0.4\textwidth]{crop-7-4-5-0-serp.eps}
\caption{Halftone of \textit{Lena} using serpentine error diffusion (\textit{left}) and the optimum coefficients $\frac{1}{16}\{7,4,5,0\}$ (\textit{right}) that improve on the standard Floyd-Steinberg coefficients in terms of visual quality for the HVS model used in section 3. The detailed area (\textit{bottom}) shows fewer structure artifacts in the regions with low contrast.}
\label{fig:lena7450}
\end{center}
\end{figure}

\bibitem[8]{spacefilling} L. Velho and J. Gomes, \textit{Digital halftoning with space filling curves}. Computer Graphics (Proceedings of SIGGRAPH 91), 25(4):81--90, 1991
\bibitem[9]{peano} I.~H. Witten and R.~M. Neal, \textit{Using peano curves for bilevel display of continuous-tone images}. IEEE Computer Graphics \& Appl., 2:47--52, 1982
\bibitem[10]{nasanen} R. Nasanen, \textit{Visibility of halftone dot textures}. IEEE Trans. Syst. Man. Cyb., vol. 14, no. 6, pp. 920--924, 1984
\bibitem[11]{allebach} M. Analoui and J.~P. Allebach, \textit{Model-based halftoning using direct binary search}. Proc. SPIE, San Jose, CA, February 1992, vol. 1666, pp. 96--108
\bibitem[12]{mcnamara} Ann McNamara, \textit{Visual Perception in Realistic Image Synthesis}. Computer Graphics Forum, vol. 20, no. 4, pp. 211--224, 2001
\bibitem[13]{bhatt} Bhatt \textit{et al.}, \textit{Direct Binary Search with Adaptive Search and Swap}. \url{http://www.ima.umn.edu/2004-2005/MM8.1-10.05/activities/Wu-Chai/halftone.pdf}
\bibitem[14]{4chan} moot, \url{http://www.4chan.org/}
\bibitem[15]{wong} P.~W. Wong and J.~P. Allebach, \textit{Optimum error-diffusion kernel design}. Proc. SPIE, vol. 3018, pp. 236--242, 1997
\bibitem[16]{ostromoukhov} Victor Ostromoukhov, \textit{A Simple and Efficient Error-Diffusion Algorithm}. Proceedings of SIGGRAPH 2001, Annual Conference Series, pp. 567--572, 2001
\bibitem[17]{lsmb} T.~N. Pappas and D.~L. Neuhoff, \textit{Least-squares model-based halftoning}. Proc. SPIE, San Jose, CA, February 1992, vol. 1666, pp. 165--176
\bibitem[18]{stability} R. Eschbach, Z. Fan, K.~T. Knox and G. Marcu, \textit{Threshold Modulation and Stability in Error Diffusion}. IEEE Signal Processing Magazine, vol. 20, no. 4, pp. 39--50, July 2003
\bibitem[19]{sullivan} J. Sullivan, R. Miller and G. Pios, \textit{Image halftoning using a visual model in error diffusion}. J. Opt. Soc. Am. A, vol. 10, no. 8, pp. 1714--1724, 1993

\end{document}