Render-to-depth-texture on Radeon 9600?
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
Reedbeta

March 09, 2005, 05:59 PM

So I've been trying to implement shadow mapping. However, my Radeon does not support WGL_NV_render_depth_texture (or at least it doesn't advertise it). Curiously enough, I can't seem to find an ARB or ATI extension that corresponds to NV_render_depth_texture - and there's nothing mentioned about rendering to depth textures in the specs for other related extensions. Is there some poorly-documented trick that allows binding a pbuffer to a depth texture on ATI cards? Or am I stuck with CopyTexSubImage?

The Radeon 9600 supports rendering to RGB(A) textures and floating-point textures, so the omission of depth textures seems very strange. My drivers and such are all up to date.

Also, a side question: my Radeon advertises WGL_ATI_render_texture_rectangle, but this extension is not listed in the SGI registry. I thought it might just be a renamed implementation of WGL_NV_render_texture_rectangle. Does anyone know if this is the case?
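
For reference, the CopyTexSubImage fallback mentioned above can be as small as the C sketch below; a minimal sketch assuming the ARB_depth_texture tokens from glext.h, with SHADOW_SIZE and shadow_tex as placeholder names:

#include <GL/gl.h>
#include <GL/glext.h>

#define SHADOW_SIZE 512          /* placeholder shadow map resolution */

static GLuint shadow_tex;

void create_shadow_texture(void)
{
    glGenTextures(1, &shadow_tex);
    glBindTexture(GL_TEXTURE_2D, shadow_tex);
    /* Allocate a 24-bit depth-component texture (ARB_depth_texture). */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB,
                 SHADOW_SIZE, SHADOW_SIZE, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}

/* Call right after rendering the scene from the light's point of view. */
void grab_depth(void)
{
    glBindTexture(GL_TEXTURE_2D, shadow_tex);
    /* Copy the current depth buffer straight into the texture. */
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0,
                        SHADOW_SIZE, SHADOW_SIZE);
}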

 
Reedbeta

March 10, 2005, 08:00 PM

*bump*

Any takers? No one has implemented shadow mapping on a Radeon 9600?

 
XycsoscyX

March 11, 2005, 01:25 AM

Sadly, I don't have experience in OpenGL, but I do have a Radeon 9600 and have shadow mapping working in my engine. I hit the same problem in Direct3D, because render-target depth textures aren't supported there either. My solution was to use a regular floating-point texture as the render target and, with pixel shaders, store the depth value in a component of that texture, then retrieve the value later. Ultimately it didn't matter that it wasn't a depth texture, because it didn't need to be at that point; it was just a floating-point texture that I happened to store the depth in.
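
A minimal sketch of that idea, written here in GLSL rather than the Direct3D pixel shaders XycsoscyX describes; the shader strings and names (lightSpacePos, shadowMap, the 0.005 bias) are illustrative assumptions, not code from his engine:

/* Pass 1: write light-space depth into the red channel of a
   floating-point color render target. */
static const char *store_depth_fs =
    "varying vec4 lightSpacePos;  /* set by the vertex shader */  \n"
    "void main()                                                  \n"
    "{                                                            \n"
    "    float depth = lightSpacePos.z / lightSpacePos.w;         \n"
    "    gl_FragColor = vec4(depth * 0.5 + 0.5, 0.0, 0.0, 1.0);   \n"
    "}                                                            \n";

/* Pass 2: compare this fragment's light-space depth against the
   stored value; a small bias avoids self-shadowing. */
static const char *shadow_test_fs =
    "uniform sampler2D shadowMap;                                 \n"
    "varying vec4 lightSpacePos;                                  \n"
    "void main()                                                  \n"
    "{                                                            \n"
    "    vec3 p = lightSpacePos.xyz / lightSpacePos.w;            \n"
    "    float stored  = texture2D(shadowMap, p.xy * 0.5 + 0.5).r;\n"
    "    float current = p.z * 0.5 + 0.5;                         \n"
    "    float lit = (current - 0.005 > stored) ? 0.0 : 1.0;      \n"
    "    gl_FragColor = vec4(lit, lit, lit, 1.0);                 \n"
    "}                                                            \n";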

As for a working, practical example, have you checked out Paul's Projects at www.paulsprojects.com? His OpenGL shadow mapping tutorial just uses ARB_depth_texture and ARB_shadow, without needing the XX_render_depth_texture extension.
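
For reference, the fixed-function depth comparison those two extensions provide is enabled with a few texture parameters; a sketch based on the extension specs, where shadow_tex is a placeholder depth texture:

#include <GL/gl.h>
#include <GL/glext.h>

void enable_shadow_compare(GLuint shadow_tex)
{
    glBindTexture(GL_TEXTURE_2D, shadow_tex);
    /* ARB_shadow: sampling now compares the interpolated R texture
       coordinate against the stored depth instead of returning it. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                    GL_COMPARE_R_TO_TEXTURE_ARB);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB,
                    GL_LEQUAL);
    /* ARB_depth_texture: return the 0/1 comparison result as intensity. */
    glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB,
                    GL_INTENSITY);
}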

 
Reedbeta

March 11, 2005, 02:29 AM

Thanks XycsoscyX (damn, your name is hard to spell - what does it mean, besides being a palindrome?). I have already gotten it partially working using CopyTexSubImage2D, but I will check out the floating-point texture option sometime, as it should be faster. I suspect Paul's demo also uses CopyTexSubImage2D.

 
fox

March 11, 2005, 06:12 AM

Like XycsoscyX, I don't have that much experience with OpenGL, but I can say that in Direct3D you should use a floating-point render target for shadow maps.

Nvidia cards also support rendering to depth-format maps, which is their "Ultra Shadow" (or whatever it's called) shadow optimization. You get a big speed boost when rendering to real depth maps (I saw something about 32 pixels per clock on a 6800 GT/Ultra).

I guess ATI has no shadow speedup like that, and I guess the render-to-depth-format thing is WGL_NV_render_depth_texture in OpenGL.

So I'm sorry to say I don't know how to do that in OpenGL; all I can add is to wonder whether copying the pixels, in whatever way, is extremely slow.

 
Axel

March 11, 2005, 06:25 AM

"Nvidia cards also support rendering to depth-format maps, which is their 'Ultra Shadow' (or whatever it's called) shadow optimization."

AFAIK Ultra Shadow is a stencil shadow thing only.

 
XycsoscyX

March 12, 2005, 05:05 PM

First off, it's www.paulsprojects.net, not .com. Sadly, looking over it, it does use CopyTexSubImage to copy from the frame buffer to the texture. It seems ATI chose to support floating-point textures rather than depth textures at that point with the Radeon 9600 (probably to be different from NVidia). It is even possible to do it without floating-point textures (they actually slow down shadow mapping in my engine even more); you just lose precision on the depth check when using 32-bit fixed-point textures instead of floating-point ones. The main thing is to use pixel shaders (or rather fragment programs, in OpenGL) to calculate and store the depth in a component of the texture, rather than using a depth buffer directly.
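
One way to claw back precision from a plain 32-bit RGBA target is to spread the depth value across all four 8-bit channels; a common packing trick, sketched below in GLSL, and not necessarily what XycsoscyX's engine does:

/* Pack a depth value in [0,1) into four 8-bit channels, plus the
   matching unpack for the lighting pass. */
static const char *pack_depth_glsl =
    "vec4 packDepth(float d)                                          \n"
    "{                                                                \n"
    "    vec4 enc = fract(d * vec4(1.0, 255.0, 65025.0, 16581375.0)); \n"
    "    /* Subtract what the next channel already stores. */         \n"
    "    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);\n"
    "    return enc;                                                  \n"
    "}                                                                \n"
    "float unpackDepth(vec4 rgba)                                     \n"
    "{                                                                \n"
    "    return dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0,           \n"
    "                          1.0/16581375.0));                      \n"
    "}                                                                \n";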

 
Morgan

March 12, 2005, 07:48 PM

The Pixel Buffer extension is supposed to come out this month. I think it does what you want on all cards.

-m
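
If the extension Morgan is referring to is what shipped in early 2005 as EXT_framebuffer_object, it does finally allow binding a depth texture directly as a render target on any supporting card. A rough sketch under that assumption, with the entry points presumed already loaded and depth_tex a pre-created depth texture:

#include <GL/gl.h>
#include <GL/glext.h>

static GLuint fbo;

void begin_depth_only_pass(GLuint depth_tex)
{
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    /* Attach the depth texture; no color buffer is needed at all. */
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                              GL_TEXTURE_2D, depth_tex, 0);
    glDrawBuffer(GL_NONE);   /* depth-only rendering */
    glReadBuffer(GL_NONE);
    /* Render the light's view here, then bind framebuffer 0 again. */
}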

 
This thread contains 8 messages.
 
 