Fast depth-of-field effect in OpenGL - How to do it?

Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
Danny Holten

March 26, 2005, 03:04 AM

Hello,


I was wondering what would be the best way to implement a decent-quality, real-time depth-of-field effect in OpenGL in a way that does not require pixel shader support (ARB_fragment_program).

At first I thought about using the accumulation buffer to render multiple passes of a scene from a number of jittered camera eye positions and a common camera target position (the point of focus), but the accumulation buffer seems too slow for this task.
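
For concreteness, the accumulation-buffer version I mean would look something like the minimal sketch below; drawScene(), the jitter table, and the 0.05 offset scale are illustrative only, and a visual with accumulation bits is assumed:

#include <GL/gl.h>
#include <GL/glu.h>

#define PASSES 8

void drawScene(void);  /* hypothetical scene callback */

void renderDofFrame(const float eye[3], const float focus[3])
{
    /* Small offsets perpendicular to the view direction; larger
       offsets give a shallower depth of field. */
    static const float jitter[PASSES][2] = {
        { 1.0f,  1.0f}, {-1.0f, -1.0f}, { 1.0f, -1.0f}, {-1.0f,  1.0f},
        { 0.5f,  0.0f}, {-0.5f,  0.0f}, { 0.0f,  0.5f}, { 0.0f, -0.5f}
    };
    int i;

    glClear(GL_ACCUM_BUFFER_BIT);
    for (i = 0; i < PASSES; ++i) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        /* Offset the eye, keep the target: geometry at the focus
           distance stays registered across passes, the rest smears. */
        gluLookAt(eye[0] + jitter[i][0] * 0.05f,
                  eye[1] + jitter[i][1] * 0.05f,
                  eye[2],
                  focus[0], focus[1], focus[2],
                  0.0, 1.0, 0.0);
        drawScene();
        glAccum(GL_ACCUM, 1.0f / PASSES);  /* accumulate pass / N */
    }
    glAccum(GL_RETURN, 1.0f);              /* write the average back */
}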

I am now thinking about using multiple render-to-texture passes (pbuffers) and blending these together to see if this will achieve decent quality and real-time performance, but there might very well be another way that I'm not aware of.

Does anyone know of a way to achieve a decent quality, real-time depth-of-field effect in OpenGL without using pixel shaders? Any help would be greatly appreciated. Thanks in advance.


Regards,

Danny Holten.

 
Scali

March 26, 2005, 07:02 AM

What I do on DX7-class hardware is the following:

1) Render the scene normally, and store the distance to the focal point in alpha (basically a scaled and biased z-coordinate; you can do this with texgen and a simple 1D texture, so it runs completely on the GPU). A rough sketch of all three steps follows the list.

2) Copy the scene to a texture (it could be lower-res than the screen, to improve performance) and apply a filter to that texture. I use a box filter, because it's the fastest to implement on an old non-shader card; a Gaussian filter would be a bit more accurate, but slightly slower.

3) Render the filtered texture to the screen as a fullscreen quad with alpha blending, using the alpha written in step 1). Set it up so that the result is 100% the original scene at the focal point and 100% the blurred scene at the maximum distance from the focal point.
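
Here's a rough fixed-function GL sketch of the three steps, for anyone who wants to try it. It assumes a framebuffer with destination-alpha bits, multitexture, and ARB_texture_env_combine (DX7-class features); blurRamp, sceneTex, texW/texH and the helper names are illustrative, not from any particular codebase:

#include <GL/gl.h>
#include <GL/glext.h>
#include <math.h>

GLuint blurRamp, sceneTex;  /* 1D blur-factor ramp; screen-copy texture     */
int    texW, texH;          /* size of the copy, assumed allocated already  */

/* Build a 1D GL_ALPHA ramp: 0 at the focal plane (s = 0.5), 1 at the ends. */
void initBlurRamp(void)
{
    GLubyte a[256];
    int i;
    for (i = 0; i < 256; ++i)
        a[i] = (GLubyte)(255.0 * fabs((i + 0.5) / 256.0 - 0.5) * 2.0);
    glGenTextures(1, &blurRamp);
    glBindTexture(GL_TEXTURE_1D, blurRamp);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_ALPHA8, 256, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, a);
}

/* Step 1: eye-linear texgen maps eye-space depth into the ramp, so the
   framebuffer alpha ends up holding the blur factor. Call this with an
   identity modelview (GL_EYE_PLANE is transformed by the inverse of the
   modelview matrix current at the time of the glTexGenfv call). */
void enableDepthToAlpha(float focalDist, float focalRange)
{
    /* s = c*z_eye + d; z_eye is negative in front of the camera, so this
       puts the focal plane at s = 0.5 and +-focalRange at s = 0 and 1. */
    GLfloat plane[4] = { 0.0f, 0.0f, -0.5f / focalRange,
                         0.5f - 0.5f * focalDist / focalRange };
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_1D, blurRamp);
    glEnable(GL_TEXTURE_1D);
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGenfv(GL_S, GL_EYE_PLANE, plane);
    glEnable(GL_TEXTURE_GEN_S);
    /* Pass RGB through from unit 0, replace alpha with the ramp value. */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_TEXTURE);
    glActiveTexture(GL_TEXTURE0);
}

/* Steps 2 and 3, once per frame after the scene has been drawn. */
void applyDof(void)
{
    /* Step 2: copy the frame into sceneTex... */
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, texW, texH);
    /* ...then box-filter it, e.g. by re-drawing it a few times with
       one-texel offsets and additive weights (omitted for brevity). */

    /* Step 3: blend the blurred copy over the sharp frame; destination
       alpha (written in step 1) picks sharp vs. blurred per pixel. */
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    /* drawFullscreenQuad(sceneTex);  -- hypothetical helper */
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
}

The key trick is the blend in step 3: GL_DST_ALPHA picks up the blur factor written during the scene pass, so in-focus pixels keep the sharp image and out-of-focus pixels take the blurred copy.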

It's fast and reasonably okay quality (although it 'bleeds' a bit around the edges, of course... but if you don't apply too much blur, it's not that bad).
You could also repeat these steps as many times as you like, increasing the amount of blur in step 2) each time and using a different texture for each level, e.g. one for 0-50% blur and one for 50-100% blur... or even more. This gives an even better quality of blur, of course at a slight extra cost.

 
Danny Holten

March 27, 2005, 07:24 AM

Hello,


Thanks for the reply, I'll see if I can get it to work. As you've already stated, some bleeding will probably occur, which is of course typical for 2(.5)D depth-of-field effects. Still, a 2(.5)D approach seems to be the best option at the moment, since rendering the scene from multiple jittered camera positions and blending the individual results into a final image seems too expensive for now. After implementing your suggestion, I'll give the latter method one more try: I've just read at http://www.opengl.org/documentation/extensions/EXT_framebuffer_object.txt that OpenGL's EXT_framebuffer_object extension is ready for use, which might serve as part of a faster alternative to the accumulation-buffer approach from my original post.
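
For reference, the render-to-texture setup with that extension would look roughly like the sketch below (assuming the EXT entry points are loaded via an extension loader; makeRenderTarget is just an illustrative name). Each jittered pass could then be rendered into such a target and the resulting textures blended with weight 1/N, avoiding glAccum entirely:

#include <GL/gl.h>
#include <GL/glext.h>

GLuint makeRenderTarget(int w, int h, GLuint *texOut)
{
    GLuint fbo, tex, depthRb;

    /* Color attachment: the texture we'll later blend to the screen. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    /* Depth attachment, so each pass can depth-test normally. */
    glGenRenderbuffersEXT(1, &depthRb);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, w, h);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depthRb);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
        GL_FRAMEBUFFER_COMPLETE_EXT) {
        /* handle incomplete framebuffer */
    }

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  /* back to the window */
    *texOut = tex;
    return fbo;
}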


Regards,

Danny.

 
This thread contains 3 messages.
 
 