From: u914097@student.canberra.edu.au (Feldman / Mark Jeffrey (ISE))
Subject: Re: Floyd-Steinberg Dithering Alg.
Organization: University of Canberra
Date: Tue, 8 Mar 94 12:28:05 GMT
b00802@cfi.waseda.ac.jp ("Toshiko Tani") writes:
> I would like to write a program using the Floyd-Steinberg dithering
> technique to enhance images after I enlarge them.
I'm posting this so anyone else who wants it can get it as well (I'm
also e-mailing him to let him know it's here, so don't flame me).
The following info is from "Bitmapped Graphics" by Steve Rimmer,
published by Windcrest/McGraw-Hill, ISBN 0-8306-3558-0. A second edition
has come out since, but I don't have the ISBN for it. It's a great little
book and covers numerous popular file formats, dithering, screen and
printer drivers, PostScript and so on.
Ok, now to business. The Floyd-Steinberg filter looks like this:
        x   7
    3   5   1
In this case all the numbers add up to 16. You look at the pixel at
position x and threshold it, i.e. if your grey values run from 0 to 255
you compare it to 127: if it's lower than or equal to 127 you set the
pixel to 0, if it's higher you set it to 255 (or 1 if you're working in a
monochrome mode). Then you calculate the error as the difference between
the original grey scale value and the value you changed it to. You add
7/16ths of this error to the pixel to the right, 3/16ths to the pixel
below and to the left, 5/16ths to the pixel directly underneath, and
1/16th to the pixel below and to the right. Then you move to the next
pixel and do the same thing, and so on until you've done every pixel in
the image. Note that the total amount you spread around equals the error
value, so the overall image intensity stays the same.
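To make this concrete, here's a minimal C sketch of the procedure just
described. The buffer layout, the helper names and the edge handling are
my own choices, not Rimmer's:

static void spread(unsigned char *img, int w, int h, int x, int y, int amount)
{
    int v;
    if (x < 0 || x >= w || y >= h)
        return;                         /* off the edge: drop the error */
    v = img[y * w + x] + amount;
    if (v < 0)   v = 0;                 /* clamp to the 0..255 range    */
    if (v > 255) v = 255;
    img[y * w + x] = (unsigned char)v;
}

void fs_dither(unsigned char *img, int w, int h)
{
    int x, y;
    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            int old = img[y * w + x];
            int q   = (old <= 127) ? 0 : 255;   /* threshold at 127 */
            int err = old - q;                  /* error to diffuse */
            img[y * w + x] = (unsigned char)q;
            spread(img, w, h, x + 1, y,     err * 7 / 16);  /* right       */
            spread(img, w, h, x - 1, y + 1, err * 3 / 16);  /* below left  */
            spread(img, w, h, x,     y + 1, err * 5 / 16);  /* below       */
            spread(img, w, h, x + 1, y + 1, err * 1 / 16);  /* below right */
        }
    }
}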
Now Rimmer says that you can get better results by using filters which
"communicate" with more of the surrounding pixels. Here are a few more
filters he gives:
Stucki
        x   8   4
2   4   8   4   2
1   2   4   2   1
This one is slower than Floyd-Steinberg: the coefficients sum to 42, so
the divisions can't be done with bit shifts and subtraction.
Burkes
        x   8   4
2   4   8   4   2
This one is a compromise between Floyd-Steinberg and Stucki: it spreads
the error over more pixels, but the coefficients sum to 32, so it can
still be done with shifts and subtractions (see the sketch after the
filter list).
Sierra
        x   5   3
2   4   5   4   2
    2   3   2
Jarvis, Judice and Ninke filter:
        x   7   5
3   5   7   5   3
1   3   5   3   1
Stevenson and Arce
            x       32
12      26      30      16
    12      26      12
5       12      12      5
Usually gives the best results but it's slow as all hell.
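Back to the "shifts and subtractions" point for Burkes: its weights are
8/32, 4/32 and 2/32, so each share of the error reduces to a shift. This
fragment is my illustration of the remark, not code from the book:

/* inside the dither loop, after computing err: */
int e8 = err >> 2;      /* err * 8/32 */
int e4 = err >> 3;      /* err * 4/32 */
int e2 = err >> 4;      /* err * 2/32 */
/* (>> on a negative err is an arithmetic shift on most compilers;
   write err * 8 / 32 etc. if you need strict portability) */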
Rimmer also says that using a serpentine scan (i.e. left to right on the
first line, right to left on the next, left to right again and so on)
produces "different" results. He also recommends some good reading on the
subject.
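Here's a sketch of the serpentine variant, reusing the spread() helper
from the code above; mirroring the kernel on the right-to-left rows, so
the 7/16 weight always leads the scan, is my reading of the usual
practice:

void fs_dither_serpentine(unsigned char *img, int w, int h)
{
    int x, y;
    for (y = 0; y < h; y++) {
        int step  = (y & 1) ? -1 : 1;   /* odd rows run right-to-left */
        int start = (step == 1) ? 0 : w - 1;
        int end   = (step == 1) ? w : -1;
        for (x = start; x != end; x += step) {
            int old = img[y * w + x];
            int q   = (old <= 127) ? 0 : 255;
            int err = old - q;
            img[y * w + x] = (unsigned char)q;
            /* the kernel is mirrored via step */
            spread(img, w, h, x + step, y,     err * 7 / 16);
            spread(img, w, h, x - step, y + 1, err * 3 / 16);
            spread(img, w, h, x,        y + 1, err * 5 / 16);
            spread(img, w, h, x + step, y + 1, err * 1 / 16);
        }
    }
}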
Oh yeah, the book also discusses how enlarging images before dithering
them often makes them look heaps better.
Unfortunately in this edition he doesn't discuss things like 256 color ->
16 color conversion, but I believe the technique is the same: you just
have to match each pixel to the "closest" 4-bit color available and
spread the error for each RGB component separately. If you get a color
version working I wouldn't mind seeing the results.
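Here's a rough C sketch of what I mean; the palette array and the naive
linear search are placeholders, so treat it as a starting point only:

typedef struct { int r, g, b; } RGB;

/* closest palette entry by squared distance in RGB space */
static int nearest(const RGB *pal, int n, int r, int g, int b)
{
    int i, best = 0;
    long bestd = -1;
    for (i = 0; i < n; i++) {
        long dr = r - pal[i].r, dg = g - pal[i].g, db = b - pal[i].b;
        long d  = dr * dr + dg * dg + db * db;
        if (bestd < 0 || d < bestd) { bestd = d; best = i; }
    }
    return best;
}

/* inside the scan loop, instead of a grey threshold: */
int idx = nearest(pal, 16, r, g, b);    /* pal[] is your 16-color palette */
int er  = r - pal[idx].r;               /* one error term per channel     */
int eg  = g - pal[idx].g;
int eb  = b - pal[idx].b;
/* then spread er, eg and eb to the neighbours with the same
   7/16, 3/16, 5/16, 1/16 weights as in the grey scale case */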
From: sloan@cis.uab.edu (Kenneth Sloan)
Subject: Re: Floyd-Steinberg Dithering Alg.
Organization: CIS, University of Alabama at Birmingham
Date: Sat, 12 Mar 1994 06:24:27 GMT
mcphee@gaul.csd.uwo.ca (LESLIE MCPHEE) writes:
> I have implemented the F-S dithering algorithm along with a variation on
> Xiaolin Wu's quantizer as described in Graphics Gems II. It works
> great BUT is VERY slow! (about 8-9 min. for a 640x640 image on an RS6000)
> The problem lies in finding the nearest colour in the LUT to the colour
> with the diffusion error added to it (640x640 pixels each doing 256
> comparisons).
>
> I've heard Image Alchemy also uses F-S dither, but does it literally
> hundreds of times faster than mine. The image results are ~equal.
>
> Does anyone know what tricks they or anyone else uses to speed this search?
> I figure they must have some kind of tree structure to find the nearest
> LUT colour faster, but I don't know.
You have half of the answer, above. Given your 256 color palette, you
can preprocess it so that each probe (for the "nearest" color) is faster
than doing 256 distance computations.
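One common preprocessing scheme, sketched here as one possibility rather
than as whatever Image Alchemy actually does, is an inverse colormap:
quantize RGB down to 5 bits per channel and precompute the nearest
palette index for every one of the 32768 cells, so each probe becomes a
single table lookup. This assumes an RGB struct and a nearest() search
like the ones sketched earlier in the thread:

static unsigned char invmap[32 * 32 * 32];   /* 32K entries, 5 bits/channel */

void build_invmap(const RGB *pal, int n)
{
    int r, g, b;
    for (r = 0; r < 32; r++)
        for (g = 0; g < 32; g++)
            for (b = 0; b < 32; b++)
                invmap[(r << 10) | (g << 5) | b] =
                    (unsigned char)nearest(pal, n, r << 3, g << 3, b << 3);
}

/* each probe is then one lookup; clamp the channels to 0..255 first,
   since diffused error can push them out of range:
       idx = invmap[((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)];   */

The table costs 32768 nearest-color searches to build, but that's paid
once per palette instead of once per pixel.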
Now for some heresy: you probably do *not* need a custom palette for
each image (you mention a very good quantizer - my claim is that you
don't need *any* quantizer; simply use the same palette for every
image). Now, if you do this, you may be able to come up with a palette
which is simultaneously "good enough" *and* admits a very fast
"best-color" computation.
More heresy: if you are doing F-S error propagation, you may discover
that you do not have to find the absolute *best* color for each pixel.
The error-propagation *expects* errors to be made, and is designed to
make them "wash out" over small areas. So - you can use a fast "best
color" finder that is not necessarily strictly correct. If you get the
second-best color - no big deal.
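In that spirit, here's a sketch of a "good enough" finder: it's the same
linear search, but it stops as soon as some candidate is within an
arbitrary tolerance (the cutoff below is made up; tune it to taste):

static int nearest_fast(const RGB *pal, int n, int r, int g, int b)
{
    int i, best = 0;
    long bestd = -1;
    for (i = 0; i < n; i++) {
        long dr = r - pal[i].r, dg = g - pal[i].g, db = b - pal[i].b;
        long d  = dr * dr + dg * dg + db * db;
        if (bestd < 0 || d < bestd) { bestd = d; best = i; }
        if (bestd <= 3 * 8 * 8)     /* "close enough": F-S hides the rest */
            return best;
    }
    return best;
}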