As a rarity these days, I've been doing some of that strange stuff called thinking. My, oh my, how the cobwebs can build up oh so quickly when we are coding, and synapses fire into action to actually think. Of course, I might be being heavily sarcastic… But now I'll leave you to guess which of the above is poetic, which vitriolic and which just plain bastardly. Anyway, past that idle rambling, and on to the goodies.

There are a few things I've been mulling over of late. The major one is the lack of wide support for destination alpha. This just annoys me. I still have my Voodoo3 in my dev machine, and I know that cards like it will still exist in systems down the track for a year or two to come. People have complained before, but the lack of destination alpha in any form, combined with the lack of a stencil buffer, is very irksome. I used to be of the school that any algorithm using either destination alpha or the stencil buffer could probably be achieved by other means, mainly software. But with the huge speed of accelerators now, and the realisation that those other means are conceptually so much slower, I wonder if the cost of this omission is greater than the benefit of the extra colour, depth or texture information. Still, this exists in the past and is unchangeable for now.

As mentioned in John Carmack's .plan update, destination alpha and render-to-texture are two very important features, and will be relied on heavily. The render-to-texture solution already works reasonably (although, from recent information from Brian Sharp, this appears to be about to change), but slowly.

Here is an example of a feature that can be achieved using read-back textures and destination alpha. We take a simple light (point or spot) and we want to cast shadows for it. For this example I will use a spotlight, but point lights are easy to do too.
Now, simple enough: call our spotlight origin the camera, the spotlight direction the view vector, and our FOV, both vertical and horizontal, the spread of the light. The light will be rendered as a simple projected texture. What we want at the same time is a projected shadow map that goes along with it. Logic would tell us we could use either a z-buffer method or the back-to-front rendering method to render polygon IDs into a shadow texture. Then, when a polygon is rendered with the projected texture on it, we simply check whether its ID is the same as that of the shadow map before rendering the pixel. Doesn't sound like it would work well in hardware. But, oddly, with destination alpha, alpha functions and read-back textures, it works fine.

What you do is this: you render all the polygons in the light frustum to the z-buffer (using the same projection as the texture). Then, taking 256 polygons (or non-self-shadowing objects) in the light frustum at a time, you write them to the screen with their alpha as the ID (making sure that for each polygon you store the ID and the texture). Then clear that section and draw the next 256. When finished, you should have a set of shadow ID textures. (Note: in most cases, if you use a simple visibility algorithm and group objects together, you can get under 256 IDs, avoid the clear step, and write to the z-buffer in the same step.)

At render time, you use the same texture coordinates for the shadow projection as you do for the light projection. Using multi-texture, you should be able to use the alpha function "equals", with the polygon ID as the reference value, and that will block out the shadowed areas. If you are using a multiplication or addition blend to the frame buffer, you should mask out the alpha writes there, or they may interfere.

Beyond rambling about hardware stuff, as I am prone to do, I think that I may start doing a few high-quality rendering experiments.
Some of these experiments will be real-time, some not, but all with a focus on high quality (there again goes that high-priced commodity, free time). If I do, I am tossing up whether to throw them together in a simple open source project and shove it on the net to see what happens. POV-Ray has a very large following (despite the fact that it is fairly old technology internally), and it would be nice to see an open source initiative gaining on it. There would probably be a high degree of modularity, which would make for some funky user plug-ins, but I wouldn't have time to do an actual editor. The whole system would be very scalable, and I would write in hardware API support (which would be a reasonable thing for polygonal/micro-triangle rendering, and maybe later for ray casting and voxelisation), both via direct support for a built-in API and via other APIs like OpenGL. Still, it is just a pipe dream on my part, so if anything actually eventuates, it will be interesting.

Conor "DirtyPunk" Stokes