The Art of Demomaking - Issue 13 - Polygon Engines
by Alex Champandard (15 November 1999)



Introduction


Wow, we're in our 13th week already... Doesn't time fly when you're having fun?

This week's menu consists of various 3D-related topics. First we'll cover the process of rendering polygons in software, then I'll briefly describe how to make your polygons look nicer, and finally you'll find out how a rendering pipeline works.


Rendering Polygons In Software


The most efficient way I know of rendering polygons is by splitting them into horizontal spans. A horizontal span is basically a... what's that?!? You already know about spans? What a coincidence :) That'll save me a lot of trouble.

The first thing I should point out is that this technique is limited to convex polygons, meaning all interior angles must be less than 180 degrees. The implication is that any horizontal line you draw across the polygon intersects at most 2 edges, which is extremely useful when it comes to rendering the actual spans. If you need an engine that can render concave polygons, you must split them into smaller convex polies.

So before we draw our spans, we need the appropriate information. In most cases, for each line Y this consists of the X and Z coordinates and the texture coordinates (TX,TY). All of this is stored in what we call an edge table, which has 2 entries per line: one for the start of the span, and one for the end. Before we can draw the spans, we must get the right information into the edge table. This is simply done by interpolating along each edge: you find the X, Z, TX and TY values along that edge for each line, and store them in whichever slot of the edge table is free. Given the full edge table, we can then easily interpolate our information along the spans while we draw.
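
To make this more concrete, here is a minimal C++ sketch of how one edge might be scanned into the edge table. The Vertex structure, the edgeTable layout and the slot convention are assumptions made for this example, not the demo's actual code.

     // Hypothetical per-line edge table: entry 0 holds the start of the
     // span, entry 1 the end.
     const int RESY = 200;

     struct Vertex    { float x, y, z, tx, ty; };
     struct EdgeEntry { float x, z, tx, ty; };

     EdgeEntry edgeTable[RESY][2];

     // Interpolate x, z, tx and ty along the edge a->b, storing one
     // entry per scanline. Assumes a.y < b.y (swap the vertices first
     // otherwise) and that both y values lie inside the screen.
     void scanEdge( const Vertex &a, const Vertex &b, int slot )
     {
         float dy = b.y - a.y;
         if (dy <= 0.0f) return;                 // skip horizontal edges
         float dx  = (b.x  - a.x ) / dy;         // per-scanline deltas
         float dz  = (b.z  - a.z ) / dy;
         float dtx = (b.tx - a.tx) / dy;
         float dty = (b.ty - a.ty) / dy;
         float x = a.x, z = a.z, tx = a.tx, ty = a.ty;
         for (int y = (int)a.y; y < (int)b.y; y++)
         {
             edgeTable[y][slot] = { x, z, tx, ty };
             x += dx; z += dz; tx += dtx; ty += dty;
         }
     }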

The span renderer we coded last week doesn't take the z component into account when loading the texels, which means the texture we apply to the polygons is not perspective correct. You can notice this when the polygon forms a sharp angle with the camera. The answer is to implement perspective correction: every 8 pixels, for example, you recompute the correct texture coordinates and interpolate between those instead. I'll briefly mention perspective correctness next week, but until then affine texture mapping will have to do. For the demo this works perfectly, since the polygon count is high, so the distortion of the texture is not that obvious.
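
As a teaser for next week, here is roughly what such a corrected inner loop could look like. This is only a sketch: it assumes the span setup already interpolates u/z, v/z and 1/z linearly across the span (those are linear in screen space, unlike u and v themselves), and putTexel() and the parameter names are made up for the example.

     // Assumed pixel writer: fetches the texel at (u,v), plots it at (x,y).
     void putTexel( int x, int y, float u, float v );

     // Perspective correction every 8 pixels. uoz = u/z, voz = v/z,
     // ooz = 1/z; duoz, dvoz, dooz are their per-pixel steps.
     void drawSpanCorrected( int y, int x1, int x2,
                             float uoz, float voz, float ooz,
                             float duoz, float dvoz, float dooz )
     {
         float u0 = uoz / ooz, v0 = voz / ooz;   // exact coords at run start
         for (int x = x1; x < x2; x += 8)
         {
             // exact coordinates at the end of this 8-pixel run...
             float uozEnd = uoz + 8 * duoz;
             float vozEnd = voz + 8 * dvoz;
             float oozEnd = ooz + 8 * dooz;
             float u1 = uozEnd / oozEnd, v1 = vozEnd / oozEnd;
             // ...and cheap affine steps in between
             float du = (u1 - u0) * 0.125f, dv = (v1 - v0) * 0.125f;
             float u = u0, v = v0;
             for (int i = 0; i < 8 && x + i < x2; i++)
             {
                 putTexel( x + i, y, u, v );
                 u += du; v += dv;
             }
             uoz = uozEnd; voz = vozEnd; ooz = oozEnd;
             u0 = u1; v0 = v1;
         }
     }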


Clipping


The technique I described above works perfectly as long as your polygon remains inside the screen. If it doesn't, the program will probably crash. To avoid this we must introduce clipping. In essence, clipping just involves discarding the invisible part of the polygon. As usual, there are different techniques to do this. The first is at span level. If the span is fully off the left or right of the screen, we just exit. If the span is partly inside the screen and goes out the right side, we just exit the rendering loop early. If the span is partly inside and goes out the left side, we must readjust the starting values of our interpolation so we can start at the first pixel on the line. This way we avoid running the interpolation loop unnecessarily until we actually hit the first pixel. Here's what the pseudo-code looks like:


     procedure drawSpan( int x1, c1, x2, c2 )
        assume ( x1 < x2 )
        if ( x2 < 0 )  return                   // fully off the left side
        if ( x1 >= RESX )  return               // fully off the right side
        if ( x1 < 0 )  begin                    // partly off the left side:
           clipVal = -x1 / ( x2 - x1 )          // advance the start values
           c1 = c1 + ( c2 - c1 ) * clipVal      // to the first visible pixel
           x1 = 0
        end
        deltaC = ( c2 - c1 ) / ( x2 - x1 )
        if ( x2 >= RESX )  x2 = RESX - 1        // partly off the right side:
        for x = x1 to x2 begin                  // stop at the last visible pixel
            draw pixel with colour c1
            c1 += deltaC
        end
     end
 


Clipping while interpolating along the edges is pretty much the same technique, so I'll spare you the pseudo code.


Quality Improvement


Making polygons look even better has been the goal of many programmers over the years. And quite unsurprisingly, there is a wide variety of techniques to increase visual satisfaction. Some are easy to implement, some you can't do without once you get used to them. I'll describe only a few of them now.

Edge Antialiasing
This is extremely simple to implement. You only need to perform the antialiasing on the first and last pixels of the span. Based on how much the theoretical span covers the pixel, you blend its colour with the existing colour by interpolating linearly. Just for interest's sake, full-screen antialiasing can be performed with a filter, or by blending a couple of frames together.
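
For example, blending the left-most pixel of a span could look something like this. The Colour type and the getPixel()/putPixel() accessors are hypothetical stand-ins for whatever your framebuffer code provides.

     #include <cmath>

     struct Colour { float r, g, b; };
     Colour getPixel( int x, int y );            // assumed framebuffer read
     void   putPixel( int x, int y, Colour c );  // assumed framebuffer write

     // Antialias the left edge of a span that starts at the exact
     // (floating point) position x1 on scanline y, with colour c.
     void antialiasLeftEdge( float x1, int y, Colour c )
     {
         int   first    = (int)std::ceil( x1 );
         float coverage = first - x1;            // fraction of pixel (first-1)
         if (coverage <= 0.0f) return;           //   covered by the span
         Colour old = getPixel( first - 1, y );
         Colour blended = {                      // linear blend by coverage
             old.r + (c.r - old.r) * coverage,
             old.g + (c.g - old.g) * coverage,
             old.b + (c.b - old.b) * coverage };
         putPixel( first - 1, y, blended );
     }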

Mip-Mapping
Proper mip-mapping is quite expensive. For each pixel you have to compute the area covered by the texel, and select a mipmap given that information. You can then fetch your texel by linearly interpolating between the two closest mipmaps, or just use the closest mipmap for extra speed. I personally use a cheap hack (yes, another!) which is based on the value of the delta used to interpolate through the texture: I divide the step size by two until it's within the range [0..1], selecting a smaller mipmap each time. It is then quite easy to perform bi-directional mipmapping, simply given the magnitudes of dtx and dty.
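
Here is what that hack could look like in C++. It assumes level 0 is the full-resolution texture and each level halves the resolution; for simplicity this sketch collapses the two directions into the larger of the two deltas.

     #include <cmath>

     const int MAX_MIPLEVEL = 8;                 // assumed number of levels - 1

     // Pick a mipmap level from the per-pixel texture steps dtx and dty:
     // halve the larger step until it fits in [0..1], moving to a
     // smaller (coarser) mipmap each time.
     int pickMipLevel( float dtx, float dty )
     {
         float sx = std::fabs( dtx ), sy = std::fabs( dty );
         float step  = (sx > sy) ? sx : sy;
         int   level = 0;
         while (step > 1.0f && level < MAX_MIPLEVEL)
         {
             step *= 0.5f;
             level++;
         }
         return level;
     }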

Bilinear Filtering
This is pretty much standard in modern hardware, but still quite rare in software-rendered engines. The quality improvement is second to none, but you take a massive speed hit, since you have to load 4 times as many texels. Bilinear filtering in 3D works on the same concept as in 2D, so refer to the previous tutorial to find out about it.
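
As a reminder of the idea, a bilinear texel fetch might look like this; the texel() accessor and the Colour type are assumptions for the example.

     struct Colour { float r, g, b; };
     Colour texel( int x, int y );               // assumed texture accessor

     Colour lerp( Colour a, Colour b, float t )
     {
         return { a.r + (b.r - a.r) * t,
                  a.g + (b.g - a.g) * t,
                  a.b + (b.b - a.b) * t };
     }

     // Load the 4 neighbouring texels around (tx, ty) and weight them
     // by the fractional parts of the coordinates: 4 loads and 3 lerps
     // per pixel, hence the speed hit.
     Colour bilinear( float tx, float ty )
     {
         int   ix = (int)tx,   iy = (int)ty;
         float fx = tx - ix,   fy = ty - iy;
         Colour top    = lerp( texel( ix, iy     ), texel( ix + 1, iy     ), fx );
         Colour bottom = lerp( texel( ix, iy + 1 ), texel( ix + 1, iy + 1 ), fx );
         return lerp( top, bottom, fy );
     }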

Subpixel and Subtexel Accuracy
Usually when you project your vertices onto the screen, you round the float value to the closest integer, then perform the rasterisation of your polygon based on that rounded value while still assuming it's correct. Subpixel accuracy means not rounding the coordinates straight away, and doing the interpolation with the exact values. When you then need to round those values to find the right pixels, you do so with the ceil() function, admit that the result is not exact, and adjust all the other interpolated values based on it. In effect we draw a pixel only if its centre is inside the polygon. Practically, this means smoother movement of the polygons on the screen, and fewer cracks between the polies.

Here is the pseudo code to do all this:


         procedure drawSpan
             // the edge table contains the exact (unrounded) x and c values
             ix1 = ceil( x1 )                    // first pixel centre inside the span
             ix2 = ceil( x2 )                    // first pixel centre past the span
             deltaC = ( c2 - c1 ) / ( x2 - x1 )
             adjustValue = ix1 - x1              // subpixel distance we skipped
             c1 += deltaC * adjustValue          // correct c1 accordingly
             draw from ix1 to ix2 - 1
         end
 


Ok, that's the basic idea anyway. I can't really describe all these techniques in detail; it would take a while, and I'm not sure they fit into the context of this column. For those of you who are interested, however, there is lots of information on the web about all this, so you should have no trouble implementing these features.


The Rendering Pipeline


As you probably noticed, the tiny polygon engine I provided this week can't handle much more than an object show. Nowadays good effects cannot be based on that sort of engine... you really need to design a full-blown engine, capable of handling entire worlds, to impress people. But for educational purposes an object show is fine. The code itself is pretty bulky (about 4 times the longest source code so far), even though I only implemented the bare minimum, hacked together quickly. If you started adding more features, it would very quickly get chaotic, hence the need for some structural redesign. I'll guide you through the principles of the pipeline so far, and show you what needs changing to make it more elegant. A few extra C++ classes would also help.

Each object is defined by vertices, vertex normals, and polygons. Before you get confused, I'll explain what a vertex normal is. Strictly speaking the term is mathematically incorrect: a vertex cannot have a normal, it's just the terminology we use. The vertex normal defines the normal of the theoretical surface at that given point, and we use it to compute our shading. Then we have the polygons, each of which contains pointers to the vertices that define it, the centre vertex (usually the intersection of the diagonals), and its face normal. The face normal is used to cull the polygon, i.e. not draw it if it's facing away from the camera.
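
In C++, the structures just described might look something like this; the field names and fixed array sizes are mine, not the demo's.

     struct Vector { float x, y, z; };           // used for points and normals

     struct Polygon
     {
         Vector *vertex[4];                      // pointers into the object's
         int     numVertices;                    //   vertex list
         Vector  centre;                         // e.g. where the diagonals meet
         Vector  faceNormal;                     // used for backface culling
     };

     struct Object
     {
         Vector  *vertices;
         Vector  *vertexNormals;                 // one per vertex, for shading
         Polygon *polygons;
         int      numVertices, numPolygons;
     };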

Now, each frame we must recompute some data relative to the positions of the object and the camera. The camera doesn't usually move, since it's an object show; the objects just spin in front of it. First we rotate all our points and their corresponding normals. We then clear the screen and the zbuffer. For each poly, we check if it's visible by taking the dot product of its normal with the vector from the camera to the centre vertex. If it is visible, we draw it and let the zbuffer sort out any depth problems. That's about all this week's demo can do. It works fine as long as you don't set your expectations too high.
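
With the structures sketched above, the visibility check boils down to a single dot product; the sign convention here assumes the face normals point outwards.

     // A polygon faces the camera when its outward face normal points
     // against the view vector (from the camera to the polygon's
     // centre), i.e. when their dot product is negative.
     bool isFacingCamera( const Polygon &p, const Vector &camera )
     {
         Vector view = { p.centre.x - camera.x,
                         p.centre.y - camera.y,
                         p.centre.z - camera.z };
         float dot = view.x * p.faceNormal.x
                   + view.y * p.faceNormal.y
                   + view.z * p.faceNormal.z;
         return dot < 0.0f;
     }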

Needless to say, that pipeline isn't as efficient as it could be. Insert a couple of interesting data structures and a few good algorithms, and you have a fully featured engine :) First, split your world into sets of polygons. How you create the sets depends almost entirely on what type of world you want to render: an indoor engine would be portal based (split into lots of different rooms), while an outdoor engine would use a quadtree (multiple levels of adjacent squares). Then, given a camera position, you use some visibility scheme to find which sets of polygons are visible; the algorithm also sorts these sets by depth. For each set of polies, starting from the closest, you rotate all the vertices and draw the polies with minimum overdraw (see last week's tutorial). At this point you need an occlusion algorithm to discard any sets of polies that turn out to be invisible. All the sets that are still visible are processed in the same way. Once all polies have been drawn, post-processing can take place; that could include some sort of filter, or layer merging to obtain effects like motion blur. If no post-processing is needed, you can draw straight to video memory and just flip the double buffer to display your frame. A rough outline of such a frame loop is sketched below.
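
To tie the steps together, here is one way such a frame could be structured. Every type and function here is a hypothetical placeholder, purely to show the order of operations; a real engine would flesh each one out.

     struct Camera {};                           // placeholder types
     struct Set    {};                           // one set of polygons

     void clearZBuffer();
     int  findVisibleSetsSortedByDepth( const Camera &cam, Set **out );
     bool isOccluded( const Set &s, const Camera &cam );
     void rotateVertices( Set &s, const Camera &cam );
     void drawPolygons( Set &s );                // spans + zbuffer
     void postProcess();                         // filters, motion blur...
     void flipToScreen();

     void renderFrame( const Camera &camera )
     {
         clearZBuffer();
         Set *visible[64];                       // visible sets, closest first
         int  n = findVisibleSetsSortedByDepth( camera, visible );
         for (int i = 0; i < n; i++)
         {
             if (isOccluded( *visible[i], camera )) continue;
             rotateVertices( *visible[i], camera );
             drawPolygons( *visible[i] );        // minimum overdraw
         }
         postProcess();                          // skipped when drawing
         flipToScreen();                         //   straight to video memory
     }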


Final words


Now that you know how polygons are rasterised, you really need to integrate them into a proper 3D engine. The one I have implemented should give you the basic idea, but if you ever wanted to do anything more interesting, you'd probably have to start from scratch.

This tutorial was meant as a brief introduction to 3D, simply to point you in the right direction. The Art Of Demomaking is not the right place to talk about proper 3D engines anyway. If you are interested, it's up to you to look up more information about these techniques, and how to implement them. The Phantom's Portal Engine Column is a good place to start.

Next week we'll leave polygons behind, and I'll quickly introduce you to raytracing and how to hack your way into making it realtime.

You can download this week's demo and source code package right here (137k)

See you next week,
Alex


Article Series:
  • The Art of Demomaking - Issue 01 - Prologue
  • The Art of Demomaking - Issue 02 - Introduction To Computer Graphics
  • The Art of Demomaking - Issue 03 - Timer Related Issues
  • The Art of Demomaking - Issue 04 - Per Pixel Control
  • The Art of Demomaking - Issue 05 - Filters
  • The Art of Demomaking - Issue 06 - Bitmap Distortion
  • The Art of Demomaking - Issue 07 - Bump Mapping
  • The Art of Demomaking - Issue 08 - Fractal Zooming
  • The Art of Demomaking - Issue 09 - Static Texture Mapping
  • The Art of Demomaking - Issue 10 - Roto-Zooming
  • The Art of Demomaking - Issue 11 - Particle Systems
  • The Art of Demomaking - Issue 12 - Span Based Rendering
  • The Art of Demomaking - Issue 13 - Polygon Engines
  • The Art of Demomaking - Issue 14 - Perspective Correct Texture Mapping
  • The Art of Demomaking - Issue 15 - Music And Synchronization
  • The Art of Demomaking - Issue 16 - The Final Product
