 

 Home / 3D Theory & Graphics / Volumetric Fog
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
Phantom

March 01, 1999, 09:17 AM

Hello,

Just a little kick-off question: There's a discussion going
on in comp.graphics.algorithms about volumetric fog in Quake III.
So far, no-one seems to have a good idea about how Q3A implemented
it. I'm not really interested in how Carmack did it, but fog IS
an interesting subject. Does anyone know of some good ways to
do it?

Greets,
The Phantom

 
Jaap Suter

March 01, 1999, 02:29 PM


What does volumetric fog mean, and what's the difference from the normal fog every 3D API has implemented? Does it mean that fog is relative to the world you are in, and not the same every frame?

Jaap Suter

p.s. [in Dutch] We could also speak Dutch, but let's not do that, because then the rest wouldn't be able to follow (what rest?). By the way, thanks for your awesome columns, and I hope many more will follow.

 
Phantom

March 01, 1999, 04:01 PM

Jaap Suter wrote:
>> What does volumetric fog mean, and what's the difference
>> from the normal fog every 3D API has implemented? Does
>> it mean that fog is relative to the world you are in, and
>> not the same every frame?

That's correct. With volumetric fog, you can have passages
that are partially 'foggy' (for example, because there's
a steam leak somewhere) and partially perfectly clear.
So objects behind the fog are 'shaded' based on the
total thickness of the fog volume(s) between the object
and the camera. The problem is how to do this fast. Some
people seem to be using Gouraud shading, which they apply
after determining the 'fogged color' for each vertex of
a polygon. Others (Crystal Space?) project a
transparent cloud over the already drawn object.

>>Jaap Suter
>>p.s. We kunnen ook wel Nederlands praten maar laten we dat maar niet doen want dan kan de rest het niet meer volgen (welke rest?). Trouwens bedankt voor je vette columns en ik hoop dat er nog vele volgen.

We could do that... :) But that wouldn't be nice.
Thanks for your compliment. Shall I translate your
PS for everyone? :)

(He says: "We could talk in Dutch, but let's not do
that, because 'the rest' wouldn't be able to
understand it..." and then some nice things about
my portal columns :)

 
Jaap Suter

March 02, 1999, 02:30 AM

I think the Gouraud shading method is indeed not a bad one. The
only problem is that one can't have polygons partially in fog and partially out. In
your example (the one with the steam) the seam between no fog and fog is really sharp, and that
can't be done with Gouraud shading. I think a combination of rendering a new polygon over the area
that should be fogged, using that part of the Z buffer as some sort of texture, could work.
Or couldn't it?

Jaap Suter

 
Phantom

March 02, 1999, 04:29 AM

Jaap Suter wrote:
>> I think the Gouraud shading method is indeed not a bad one.
>> The only problem is that one can't have polygons partially
>> in fog and partially out. In your example (the one with the
>> steam) the seam between no fog and fog is really sharp, and
>> that can't be done with Gouraud shading. I think a combination
>> of rendering a new polygon over the area that should be fogged,
>> using that part of the Z buffer as some sort of texture, could work.
>> Or couldn't it?

Well, volumetric fog is usually not used for very small areas like
a steam cloud. The example of the steam leak was intended to give
a reason for a partially fogged corridor. With that kind of fog,
boundaries are really fuzzy, so it's not a problem when
a polygon is Gouraud-shaded.
The other approach (with overlapping transparent polygons) has two
disadvantages:
1. The boundaries of the volume must be fuzzy. That means that you
have to adjust the textures dynamically for polygons that are at
the boundaries of the fog area.
2. Software rendering is very slow for alpha blending, especially
when you don't want to use 50%/50% mixing all the time.
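The 50%/50% remark is about cost: an even mix of two packed pixels can be done with a mask and a shift, while an arbitrary alpha needs per-channel multiplies. A sketch (the 0xRRGGBB packing and function names are illustrative assumptions):

```c
/* Why 50%/50% mixing is the cheap case in software rendering: averaging
   two 0xRRGGBB pixels needs only masks and shifts, while an arbitrary
   alpha needs a multiply per channel. */
#include <stdint.h>

/* Fast 50/50 blend: drop each channel's low bit, then add the halves.
   All three channels are averaged in parallel in one 32-bit register. */
static uint32_t blend_half(uint32_t a, uint32_t b)
{
    return ((a & 0xFEFEFEu) >> 1) + ((b & 0xFEFEFEu) >> 1);
}

/* General blend with alpha in [0, 256]: three channel multiplies. */
static uint32_t blend_alpha(uint32_t a, uint32_t b, uint32_t alpha)
{
    uint32_t r = (((a >> 16) & 0xFFu) * alpha + ((b >> 16) & 0xFFu) * (256 - alpha)) >> 8;
    uint32_t g = (((a >> 8)  & 0xFFu) * alpha + ((b >> 8)  & 0xFFu) * (256 - alpha)) >> 8;
    uint32_t bc = ((a & 0xFFu) * alpha + (b & 0xFFu) * (256 - alpha)) >> 8;
    return (r << 16) | (g << 8) | bc;
}
```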

 
Jaap Suter

March 02, 1999, 05:02 AM

"Software rendering is very slow for alpha blending"
Actually, I think software rendering is passé. We aren't using the PC speaker for MIDI either, are we? The only thing we need is cards that support up to eight texture blending stages. Then we can make some really cool effects. And volumetric fog would be easy.

I've seen Unreal do volumetric fog in one of the early levels. Green steam is coming out of the walls somewhere (I don't remember where). How did they do that?


Jaap Suter



 
Colas

March 02, 1999, 05:08 AM


Just a word about fog.

I have read in the cool docs available on the opengl.org site
that fake volumetric fogging can be done by mapping some transparent
objects which float slowly through the world.

The most important part is the map: procedural texture generation.

I have worked with Perlin noise functions to generate some 3D textures (for a realtime
raytracer).
Look into these; it's difficult to obtain a good-looking texture, but it is possible with
a little time :-)

Here is the trick: if you can generate a 3D texture, then for each frame
you can generate a 2D texture which is one slice of the 3D texture (z = time).

You will see that the animation produced by the successive slices gives the smooth movement
of the fog.

(sorry for my terrible English)

Colas

 
Phantom

March 02, 1999, 06:12 AM

Jaap Suter wrote:
>> "Software rendering is very slow for alpha blending"
>> Actually i think software rendering is passe, We arent
>> using the pcspeaker for midi either arent we? The only
>> thing we need is cards that support up to eight texture
>> blending stages. Then we can make some real cool effects.
>> And volumetric fog would be easy.

Software rendering is still cool, I think, and not just
because not everyone owns a 3D accelerator. A coder capable
of software rendering is able to use an accelerator more
effectively, because he knows the process involved. For
example, if the nVidia guys knew about software rendering,
they wouldn't release drivers that cause gaps between
polygons, because they would know about things like sub-pixel
accuracy.
I hope that some day accelerators will have programmable
processors, so that you can do software rendering tricks
like true bumpmapping using the hardware. Some things just
can't be solved using a generic approach, and a generic
approach is the only thing accelerators are good at.

>> I've seen unreal do volumetric fog in one of the early
>> levels. Green steam is coming out of the walls somewhere
>> (I dont remember) How did they do that?

That's not volumetric fog, it's 'procedural texture
mapping'. Basically, you place polygons with a texture
that constantly changes in an algorithmic way, and you
'draw' steam on these polygons. The polygons are then
rendered with an alpha map.
If you want to know more about that, please create a
new subject, this conversation is getting rather long :)
Looks bad in my browser. :))

 
cschrett

March 02, 1999, 07:17 AM



Phantom wrote:

>>Software rendering is still cool, I think, and not just
>>because not everyone owns a 3D accelerator.

:-)


>>I hope that some day accelerators will have programmable
>>processors, so that you can do software rendering tricks
>>like true bumpmapping using the hardware. Some things just
>>can't be solved using a generic approach, and that's just
>>the only things that accelerators are good at.
>>

Yes, about bumpmapping: I know of a bumpmapping technique which fits especially well on 3D cards,
called displacement maps, maybe.
See the donut example of 3dfx2.
I know nothing about it and I'm looking for information.
What I want to do is implement a *free direction* bump mapper in software.



>>That's not volumetric fog, it's 'procedural texture
>>mapping'.

I have just posted a reply on procedural texturing somewhere in the articles
(the third entry), so it's not visible; sorry for this mistake :-)


Colas

 
Jorrit Tyberghein

March 04, 1999, 09:39 AM


Phantom wrote:
>>Hello,
>>
>>Just a little kick-off question: There's a discussion going
>>on in comp.graphics.algorithms about volumetric fog in Quake III.
>>So far, no-one seems to have a good idea about how Q3A implemented
>>it. I'm not really interested in how Carmack did it, but fog IS
>>an interesting subject. Does anyone know of some good ways to
>>do it?

In Crystal Space we will use two methods to implement volumetric fog.
The first one is already implemented, but it is not easy to do
in hardware. With this method we implement volumetric fog as follows:
- First, volumetric fog in CS is restricted to convex volumes.
- You draw all objects in the sector.
- Then you draw all back faces of the volumetric fog object by
just updating the Z buffer (so nothing is actually drawn).
- Then you draw all front faces of the volumetric fog object
with a special routine which calculates the Z distance between
the value in the Z buffer and the Z value of the front-facing
polygon. From this difference, together with the color and density of
the fog, you can calculate how much to fog.

This method is not very fast but on a fast computer it works very well.
Also we have some ideas about optimizing this (use more interpolation).

But with hardware acceleration you can't easily update/query the
Z buffer like this (at least not efficiently). So we're going to do
it differently there. We take the volumetric fog object in camera
space and subdivide it into several layers parallel to the view plane.
We intersect every layer with the volumetric object. This results in
a convex polygon which can be drawn with alpha blending. The amount
of blending depends on the number of layers chosen (i.e. the distance
between the layers) and the density/color of the fog.
The number of layers chosen will affect the quality of the
fog but also the speed (more layers to draw).

This last method has not actually been implemented yet so I can't say
if it will work ok or not.

Greetings,

 
Mac / Viruta Team

March 09, 1999, 10:39 AM



What about making the volumetric fog out of 'slabs' forming a convex hull?

To obtain the fog value of a vertex, you clip the ray going from the camera to the vertex against the convex hull. The length of the part of the ray that's in the fog is the fog intensity you give to the vertex. Only one ray per vertex, and the possibility of fast discarding (clip the fog against the frustum, or the world hierarchy against the fog). This means you almost only calculate fog for vertices that actually have fog.

I think unreal does this but with spheres, instead of convex hulls....

Mac / Viruta Team
David.Notario@pyrostudios.com

 
This thread contains 11 messages.
 
 