 

Generating height maps using z-buffer. Or...
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
radarsat1

May 17, 2005, 07:24 PM

Hi,

I would like to generate a simple height-map of a 3D scene in real-time. I was hoping to use OpenGL to take advantage of hardware acceleration. I figured that what I want is a per-pixel view of the scene's distance from the camera, which is essentially what the z-buffer contains after a fully-rendered scene. I haven't tried it yet, but after doing some research I saw that everyone claims reading the z-buffer back is awfully slow, and not feasible for real-time applications. (Not to mention that its format is not portable across different graphics cards.)

So. Does anyone know an efficient way I might do this? I basically want to render a scene, but instead of getting lighting and shadows I simply want "distance to camera" in the form of a gray-scale image.

Is it possible to take advantage of OpenGL for this?

Thanks.

 
Tyrian

May 17, 2005, 08:18 PM

I use the following method in Maya to produce heightmaps; maybe you can use something similar in OpenGL.

1. Set the target object's material to pure white ambient.
2. Point an orthographic camera at the target.
3. Create a directional light pointing from the camera to the target mesh, with linear attenuation that starts just before the target and stops just after it.
4. Take a pic.

P.S. I have heard that the z-buffer is not linear but logarithmic.

/Tyrian

 
Jordan Isaak

May 17, 2005, 08:22 PM

If you don't mind having only 256 heightfield levels, you can just render your scene into the colour buffer with the colour being dependent on distance from camera. You can do this by either:

a) Setting vertex colour based on distance to camera.
b) Texturing using a colour ramp texture with OpenGL's automatic texture coordinate generation on.

If you need more levels for your heightfield, recent video cards can render to higher-precision framebuffers, which you can take advantage of to get more levels. Either that, or you could mess around with pixel shaders to encode the height across multiple colour channels.

j

 
PixelClear

May 17, 2005, 11:02 PM

Hi,

You might also want to render the depth into the color buffer by computing the distance to the camera per vertex in a vertex program.

For example, in GLSL:

Vertex shader:

  uniform float distanceScale;
  varying float distance;

  void main(void)
  {
      gl_Position = ftransform();
      // We multiply by distanceScale in the vertex shader
      // instead of the fragment shader to improve performance.
      vec3 vViewVec = -vec3(distanceScale * gl_ModelViewMatrix * gl_Vertex);
      distance = length(vViewVec);
  }

Pixel shader:

  varying float distance;

  void main()
  {
      gl_FragColor = vec4(distance, distance, distance, 1.0);
  }


This will be very efficient and will run on most hardware. To gain some extra precision you may want to use a floating-point render target.

Phil

 
radarsat1

May 18, 2005, 11:31 AM

Thanks! These are all very good suggestions!

 
This thread contains 5 messages.
 
 