From: (David Gorgen)
Subject: Need help: Z-buffering lines & areas together
Date: Tue, 27 Apr 1993 22:37:51 GMT

I'm asking for help on a sticky problem involving unreasonably low apparent precision in Z-buffering, which I've encountered in two different PEX implementations. I can't find any discussion of this problem in any resources I can lay hands on (e.g. the FAQ, Gaskins's _PEXlib_Programming_Manual_, vendors' documentation).

I'm posting this article by itself, and virtually the same article with a test program demonstrating the problem, as separate postings. The problem is hard to describe without pictures, hence this article is longish. If you can run PEXlib 5.x programs and are interested, I encourage you to build and run the test program to see the effect yourself and play with my approach to dealing with it. (It depends on the utility code from the above Gaskins book; instructions for fetching it via anonymous FTP are given.)

The problem to be solved is to eliminate or minimize "stitching" artifacts resulting from the use of Z-buffering with polylines that are coplanar with filled areas. The interpolated Z values along a line will differ slightly, due to roundoff error, from the interpolated Z values across an area, even when the endpoints of the line are coincident with vertices of the area. Because of this, it's a tossup whether the Z-buffer will allow the line pixels or the area pixels to be displayed. Visually, the result tends to be a dashed-line effect even though the line is supposed to be solid.

Using the PEXlib API, my approach to a solution is to use two slightly different PEX view mapping transforms, in two view table entries, one for the areas and one for the lines. The PEX structures or immediate-mode output must be organized so that one view table index is always in effect for areas, and the other is always in effect for lines. The result is a slight shift in NPC Z coordinates for the lines, so as to attempt to bias the tossup situations in favor of the lines.

This shift is effected by moving the front and back clipping planes used in the PEXlib view table entry for lines just a hair "backwards" (i.e. smaller VRC Z coordinates), compared to their positions in the view table entry used for areas. This means that when a point is transformed to NPC, its Z value will be slightly bigger if it comes from a line than if it comes from an area, thus accomplishing the desired bias.

I would expect the Z roundoff errors which cause the problem to amount to a few units at most, out of the entire dynamic range of the Z-buffer, typically 0 to 65535 or 0 to 16777215 (i.e. 16- or 24-bit Z-buffers). Therefore, it seems that a tiny fraction of the range of Z in VRC between the front and back clip planes ought to suffice to reliably fix the stitching.

But in fact, experience shows that the shift has to be as much as 0.003 to 0.006 of the range. (Empirically, it's worst when the NPC Z component of the slope of the surface is high, i.e. when it appears more or less edge-on to the viewer.) It's as if only 8 or 9 bits of the Z-buffer have any dependable meaning! This amount is so great that one problem is replaced by another: sometimes the polylines "show through" areas which they are supposed to lie behind.

I've observed the problem on both Hewlett-Packard and Digital workstation PEX servers, to approximately the same degree. The test program demonstrates the problem on an MIT PEXlib 5.x implementation; this version is known to compile and run on an HP-UX system with PEX 5.1.

Open questions:

  1. Why does this happen?

  2. What to do about it?

Any help would be immensely appreciated!

From: (Mark Einkauf)
Subject: Re: Need help: Z-buffering lines & areas together
Date: Wed, 28 Apr 1993 15:57:00 GMT

We here at IBM have the same problem with our workstations. I was also shocked when I first realized that you have to offset lines from fills by about 16 bits (assuming a 24-bit Z-buffer). This seems huge, but is only 1/256 of the dynamic range. In those terms it doesn't seem so bad. What is happening is that the interpolation in Z is not totally linear, due mainly to roundoff, I believe. So the polygon is not planar in Z, but is more like a Ruffles potato chip. Ditto with lines. When you start/end at different x/y values, the "ridges" are out of phase, resulting in the stitch effect. You have the same problem if you try to draw one polygon right on top of another, but with different vertices. You will likely see a smeared effect where they overlap.


  Try Polygon 1: (100,100,100) (100,200,100) (200,200,100) (200,100,100)
      Polygon 2: (125,125,100) (125,175,100) (175,175,100) (175,125,100)

Your implementation is correct. In fact, we do a similar trick when rendering primitives that have lines and polygons - such as NURBS surfaces with isoparametric lines. Without the trick, the lines appear stitched, as you say. When the application draws lines/polygons independently, the system does not have the smarts to automatically do the z shifting, so the application must do it. This is what you have discovered and are doing. Bravo!

(Note to IBM'ers: The information given here has been previously disclosed through proper channels so I'm not giving away any new unpublished info.)

From: (David Gorgen)
Subject: Summary: Need help: Z-buffering lines & areas together
Date: Fri, 7 May 1993 20:36:32 GMT

A couple weeks ago I posted a message (and PEX test program) asking for help with the classic "stitching" artifacts in Z-buffered pictures combining filled areas and lines coplanar with them. Thanks to everyone who responded.

Several people asked for a summary, so this is a summary of what I now know about the problem. Unfortunately, I still don't know what I can *do* about it in my particular circumstances; many of the responses had suggestions, but they don't really apply in my case.

The problem as exhibited in my original test program is now known to appear in PEX servers on DECstations, HP9000/7xx workstations, and SHOgraphics PEX terminals. Lines drawn across filled areas may appear dashed where they are really solid, because Z buffer calculations sometimes put the area pixels in front of the line pixels, and other times vice-versa. My attempt to fix this was to use two PEX View Indices, one for lines and one for areas, and to bias their View Mapping matrices so that areas fell infinitesimally behind lines coplanar with them, in view space. This failed because the bias had to be so large that the "infinitesimal" was sufficient to cause serious poke-throughs, where lines that were supposed to lie well behind areas showed through them instead.

The problem stems from differences in Z values computed for those pixels touched in vectors and in polygons. Suggested causes included:

Suggested solutions were:

Unfortunately, PEX cannot express any of these techniques. If a PEX implementation deigns to support Z buffering at all, then all drawing operations either both read and write the Z buffer, or do neither. And you can't update the Z buffer without also updating the screen.

POINTED REMARK: OpenGL reportedly has NO PROBLEM with these kinds of techniques!!!

In fact, I have a still more serious problem, in that all the above suggestions (except maybe the stencil buffer one, I don't know) are based on the premise that my code "knows" what lines lie on what faces. This was true regarding the test program, of course; but the actual circumstances are that this code will live inside a library, with an already-defined API, which does not have any way of expressing the association between particular lines and surfaces.

That was why I had such high hopes for a purely view-space solution to the problem. Oh, well. If I augment the library API I may then be able to adopt one or more of the above partial solutions.

Any further ideas will be gratefully accepted.

From: srnelson@speedsail.Eng.Sun.COM (Scott R. Nelson)
Subject: Re: Summary: Need help: Z-buffering lines & areas together
Date: 10 May 1993 15:51:46 GMT

(David Gorgen) writes:
> A couple weeks ago I posted a message (and PEX test program) asking for help with the classic "stitching" artifacts in Z-buffered pictures combining filled areas and lines coplanar with them.

[See the original article for the summary of responses]

The summary comes close to identifying the actual problem, but misses it slightly. If you try to combine lines and surfaces and you are doing correct per-pixel sampling for each using a DDA algorithm based on the line equation and the plane equation, it doesn't matter how much precision you have in Z, you will still get rendering artifacts.

I'll try to explain why this problem occurs. To begin with, all of the following diagrams assume a right-handed coordinate system where X is positive to the right, Y is positive upwards and Z is positive towards the viewer. All of the (very poor) ASCII drawings take a horizontal cross-section of the displayed region as viewed from the -Y axis.

Assume you have a surface that slopes away from you towards the right and you are trying to place a line right on the surface. If lines were really infinitely thin and correctly sampled, you would not have a problem. But, lines are typically one pixel wide and sampled up to half a pixel to either side of the true line center (antialiased lines are, of course, wider and exaggerate the problem). When you mix lines and surfaces, the problem arises because a line has slope only in two dimensions while a surface has slope in three dimensions:

    Z       \
              \   <-- The surface
    ^           \
    |            ---   <-- The line, sampled as 1-pixel wide
    |               \
    +-----> X         \
Figure 1

It doesn't matter if you are using an infinitely precise Z-buffer or some other hidden-surface algorithm. If you are sampling on a per-pixel basis, some line samples will fall behind the surface.

The simplest trick for making the line visible is to move it forward in Z just a little bit:

    Z       \
    ^           \---    <-- The line, sampled as 1-pixel wide
    |             \
    |               \   <-- The surface
    +-----> X         \
Figure 2

Now the line will always be sampled in front of the surface. However, as the slope gets steeper, you have to bring the line out further to avoid rendering artifacts.

    Z          \
    ^            ---    <-- The line, sampled as 1-pixel wide
    |             \
    |              \    <-- The surface, now steeper
    +-----> X       \
Figure 3

The example in Figure 3 has the same line offset as that in Figure 2, but because the slope is now steeper, some of the line sample points will fall behind the surface again.

The other problem is that as you bring the line further out in Z, lines for back surfaces will begin to poke through wherever the object is quite thin. Good examples of public-domain 3D objects that easily show this problem are the X29 airplane and the Dart.

Depending on the graphics library you are using, you might have the option to adjust the Z value after perspective division takes place during the viewport calculation. This at least guarantees that the lines get offset by the same amount everywhere in the picture. To mix lines with surfaces you are forced to compromise between how many pixels you are willing to lose because you didn't have a large enough offset for all of the steep areas and how many pixels you are willing to have show through in the thin areas.

Unless you want to perform your own hidden line calculations before drawing the lines, you're stuck with a compromise that will never be completely free of artifacts. One possible solution might be to specify a quadrilateral to represent the line, making sure that it is exactly one pixel wide (across the minor axis) in screen space and has the same plane equation as the surface. This would work with a much smaller offset than the line, although some offset is still required to avoid Z-buffering artifacts due to precision errors. This solution doesn't work well with perspective or if you intend to scale the object.

Turning off the Z-buffer update but leaving the Z comparison enabled helps a small amount for the case of a surface being drawn in front of another one containing an edge (the thin surface example mentioned above), but it only helps in an order-dependent manner. That is, if you draw the back one first the edge won't show through, but if you draw it after the front surface it will.

Short of using texture mapping techniques to put the line on the surface, there is no completely correct solution. You are forced to live with a compromise. Hopefully you can pick a Z offset based on your overall Z range that will produce satisfactory results nearly all of the time, but I firmly believe that there is no correct solution to your problem as long as your lines and surfaces are drawn separately.