
Submitted by Loic Baumann, posted on November 24, 2004

Image Description, by Loic Baumann

This is a series of screenshots of the Deferred Shading Renderer I'm working on. For each row:
  • Two renderings from two close points of view, with a different depth of field.
  • A rendering with four spot lights and shadows. A parallax-mapped wall. A face rendered with ambient occlusion.
  • An indoor scene where the whole scene has an ambient occlusion map, and another multi-light scene rendered with soft shadows.

  • About the renderer:
  • Currently Shader Model 3.0 only; there's not much work left to make it run on the ATI chips. :)
  • Four render targets are used, each one 32 bits deep:
    • Albedo: stores the albedo computed by the graphical object's shader.
    • Z: stores the 32-bit depth of the pixel. It's a shame we can't use the Z-buffer for that... oh well.
    • Normal: stores the X and Y components of the pixel's normal; Z is computed on the fly.
    • Material settings: four 8-bit components for specular intensity, specular power, and ambient occlusion factor; the last one is free.
  • Some random features: gamma correction, optimized light rendering using projected bounding volumes, HDR (still in progress), soft or hard shadow mapping on point/spot lights (directional still in progress, damn PSM...), normal map/parallax mapping, ambient occlusion mapping, standard Lambert lighting (I'll write a new set of custom lights, more suitable and faster, later), depth of field, projected textures.
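As a hedged sketch (not the author's code; names and encodings are invented for illustration), the two-component normal storage above can work like this: pack X and Y into the render target, then reconstruct Z on the fly from the unit-length constraint. This assumes view-space normals face the camera, so Z ≥ 0:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Pack a [-1, 1] normal component into an unsigned 16-bit channel.
uint16_t packUnorm16(float v) {
    return static_cast<uint16_t>((v * 0.5f + 0.5f) * 65535.0f + 0.5f);
}

// Inverse of packUnorm16: back to [-1, 1].
float unpackUnorm16(uint16_t v) {
    return (v / 65535.0f) * 2.0f - 1.0f;
}

// Since |n| = 1, reconstruct z = sqrt(1 - x^2 - y^2); clamp against
// quantization error pushing the radicand slightly negative.
float reconstructZ(float x, float y) {
    float zz = 1.0f - x * x - y * y;
    return zz > 0.0f ? std::sqrt(zz) : 0.0f;
}
```

For example, a normal (0.6, 0.0, 0.8) round-trips through the two stored channels and comes back with Z ≈ 0.8, at the cost of a whole G-buffer channel saved.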

  • About the deferred shading:

    Many people think the main benefits of Deferred Shading are early pixel culling and doing the lighting in screen space. Of course, these are good things compared to the standard way, where you have to do a first pass to fill the Z-buffer and then, ideally, just one more pass to light everything.

    My point of view is slightly different: I like these benefits, of course, but what I enjoy the most about Deferred Shading is the simplicity it enables in many aspects of real-time rendering.

    You have three separate stages of rendering: MRT creation, lighting, and post-process/tone-mapping.

    This is so clear, and so... right (similar to broadcast renderers). You can write complex shaders for the albedo computation, use different lighting models, and have fun with post-processing. Each stage has its own set of HLSL shaders, and it's highly pluggable/scalable (which is a priority for the technology I'm working on). The whole architecture can be strong and stable.
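The three-stage split can be sketched as a toy CPU-side mock (this is purely an illustration under my own assumptions, not the author's renderer; a 2x2 "screen", one directional light, Lambert lighting, and Reinhard tone mapping):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

constexpr int kPixels = 2 * 2;  // tiny 2x2 screen

struct GBuffer {                       // one entry per screen pixel
    std::array<Vec3, kPixels> albedo;  // MRT 0
    std::array<Vec3, kPixels> normal;  // MRT 1 (unit normals)
};

struct Light { Vec3 dir; float intensity; };  // directional light

// Stage 1: the geometry pass fills the MRTs (constant test values here).
GBuffer geometryPass() {
    GBuffer g;
    for (int i = 0; i < kPixels; ++i) {
        g.albedo[i] = {0.5f, 0.5f, 0.5f};
        g.normal[i] = {0.0f, 0.0f, 1.0f};
    }
    return g;
}

// Stage 2: lighting runs purely in screen space, reading only the G-buffer.
std::array<Vec3, kPixels> lightingPass(const GBuffer& g,
                                       const std::array<Light, 1>& lights) {
    std::array<Vec3, kPixels> hdr{};
    for (int i = 0; i < kPixels; ++i)
        for (const Light& l : lights) {
            float nl = std::max(0.0f, dot(g.normal[i], l.dir)) * l.intensity;
            hdr[i].x += g.albedo[i].x * nl;
            hdr[i].y += g.albedo[i].y * nl;
            hdr[i].z += g.albedo[i].z * nl;
        }
    return hdr;
}

// Stage 3: post-process / tone-map (simple Reinhard, c / (1 + c)).
std::array<Vec3, kPixels> tonemapPass(const std::array<Vec3, kPixels>& hdr) {
    std::array<Vec3, kPixels> ldr{};
    for (int i = 0; i < kPixels; ++i) {
        ldr[i].x = hdr[i].x / (1.0f + hdr[i].x);
        ldr[i].y = hdr[i].y / (1.0f + hdr[i].y);
        ldr[i].z = hdr[i].z / (1.0f + hdr[i].z);
    }
    return ldr;
}
```

The point of the structure is that each stage only depends on the buffers produced by the previous one, which is what makes the stages independently pluggable.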

    Concerning rendering speed, it relies mainly on pixel shading power, and we know that is going to improve a lot. You get some effects, like depth of field, almost for free (done during tone mapping). Look at the timings on my log page to get a better idea.

    There's a lot more to say about the architecture of this technique and its real benefits, but I don't want to make this description too long! (I know, it already is! :) )

    Some links:

    My log page with more screenshots:

    Description of the whole development environment:

    Image of the Day Gallery


    Message Center / Reader Comments: ( To Participate in the Discussion, Join the Community )
    Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.

    November 24, 2004, 03:46 AM

    hey looks very cool.

    I'll plug my own technique for infinite directional shadow mapping since you say you're having problems with PSM:


    November 24, 2004, 07:47 AM

    Generalized Trapezoidal Shadow Mapping - patent pending - well, if that doesn't encourage people to use your method, I don't know what will! :)


    November 24, 2004, 09:22 AM

    It also seems rather a lot like the perspective shadow mapping technique written up and demonstrated at Siggraph 2003.

    Victor Widell

    November 24, 2004, 11:19 AM

    The Depth Of Field effect looks a bit... approximated. (OK, I think it is ugly. Sue me. ;-)

    How is it computed?

    Jean-Francois Marquis

    November 24, 2004, 02:05 PM

    How do you handle (if you do) translucent objects?

    I am very interested in deferred shading but handling translucency scares me. Would I have to have a more "normal" rendering pipeline to display translucent objects? Maybe there is something to do with a separate render for translucent objects only.

    And what about additive blending? That's also very common.


    November 24, 2004, 04:55 PM

    For the DoF: I interpolate between the blur buffer and the "lighting buffer" (the one that contains the result of the whole lighting + FX).
    In that picture the DoF power was high, and maybe the blur buffer is too blurry for your taste.
    I thought of that too; it's easy to keep an intermediate "blur buffer" when doing the multi-pass blurring, in order to use a sharper version.
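A hedged sketch of the composite described here (the blur-factor formula and all parameter names are my assumptions, not the author's exact math): each pixel linearly interpolates between the sharp lighting buffer and its blurred copy, driven by how far the pixel's depth is from the focal plane.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// 0 = perfectly in focus, 1 = fully blurred.
float dofBlurFactor(float pixelDepth, float focusDepth,
                    float focusRange, float dofPower) {
    float t = std::fabs(pixelDepth - focusDepth) / focusRange;
    return std::min(1.0f, t * dofPower);
}

// Per-channel lerp between the sharp and blurred buffers.
float dofComposite(float sharp, float blurred, float factor) {
    return sharp + (blurred - sharp) * factor;
}
```

With a high "DoF power", pixels reach the fully blurred buffer very quickly, which would produce exactly the over-blurred look being discussed; keeping a sharper intermediate blur buffer softens that.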

    For shadows: yes, I do care about the patent pending on TSM.
    For Codeg, I'd love to use something that already exists if it's good! :)
    I've looked at most of the papers about shadow mapping; Kozlov's article in GPU Gems is definitely the most advanced/concrete I've read. And nVidia's Gary King sample is a good starting point too.

    Yes, this is a good point...
    For me, transparency is most of the time used for particle effects and glass materials (cars, windows, ...).
    For the first one, I prefer doing it in post-production (as it's not really dependent on the lighting). Using deferred shading, since you have access to the pixels' Z, you can fix the graphic artifact that occurs when a particle crosses a solid surface, which is great.
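A sketch of that Z-based particle fix, under my own assumptions (this is the "soft particles" idea; the author doesn't spell out his formula): read the scene depth from the deferred Z buffer and fade the particle's alpha by its view-space distance to the surface behind it, so the hard intersection edge disappears.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// sceneDepth comes from the deferred Z buffer, particleDepth from the
// particle fragment; both in view space. Returns an alpha multiplier:
// 0 where the particle touches the surface, ramping to 1 once it is
// fadeDistance in front of it.
float softParticleFade(float sceneDepth, float particleDepth, float fadeDistance) {
    return std::clamp((sceneDepth - particleDepth) / fadeDistance, 0.0f, 1.0f);
}
```

Multiplying the particle's alpha by this factor replaces the hard clip against solid geometry with a smooth fade.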

    For glass and the other cases (that I didn't think about or account for), I have two options:

    1) Use a "standard rendering path", done directly on the lighting buffer. It's your call whether to support per-pixel lighting with one or many lights.
    2) Create a second set of MRTs and have fun doing each "transparent part" one by one. Of course, the cost in terms of memory and speed is high... too high for the result, in my opinion.

    And of course there's a secret third option: doing a mix of the two previous.

    When programming a DS renderer, really keep in mind that you CAN and HAVE TO do a lot of post-production, because it's possible, and that's the way to produce great graphics (just look at the importance of post-production in broadcast rendering).

    I'll write later on my log page about the system I made, which "shares" all the intermediate buffers through a post-prod plugin interface, computing a given buffer only when at least one plugin needs it. You quickly find yourself using a given buffer (like the blur one) for many things, and of course the MRT database is very useful at that stage too.

    One last thing about DS, I may be wrong, but for me this is the technique to use on future consoles, mainly because the screen resolution is smaller than on PC.


    November 26, 2004, 01:49 PM

    The problem with DOF using a single blur buffer is that the amount of blur actually varies based on how out-of-focus something is. Thus, to get a good depth of field, you really need to distribute the color and occlusion of a pixel further, the more out of focus it is. Unfortunately, most DOF approaches start with the destination pixel, rather than the source pixel -- this is the same problem that Parallax Mapping has.
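The commenter's point can be made concrete with the thin-lens circle-of-confusion formula (an illustrative standard-optics sketch, not anything from the thread; parameter names are mine, and distances and focal length share the same unit): the blur diameter grows with distance from the focal plane, so a single fixed-radius blur buffer cannot match it.

```cpp
#include <cassert>
#include <cmath>

// Thin-lens circle-of-confusion diameter for a point at objectDist, with
// the lens focused at focusDist. aperture is the aperture diameter.
float circleOfConfusion(float objectDist, float focusDist,
                        float focalLen, float aperture) {
    return std::fabs(aperture * focalLen * (objectDist - focusDist)
                     / (objectDist * (focusDist - focalLen)));
}
```

A "gather" DoF filter reads a fixed neighborhood into each destination pixel, while this formula describes how far each *source* pixel should scatter its color - which is why destination-driven approaches (like the blur-buffer lerp) only approximate the effect.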

    I think translucent objects are the real bane of most modern rendering methods. Iterating the scene once per light for opaque things, and then once per light per translucent object, is really the only way to get it "right," but that's really expensive (and shadowing is hard).

    This thread contains 7 messages.