
Submitted by Ville Miettinen, posted on March 26, 2001

Image Description, by Ville Miettinen

We've been pretty busy recently writing new demo material for Umbra, our visibility determination system for dynamic environments. Above are shots from two of the demos; the top row displaying an urban scene with 16,000 buildings and 4,000 moving cars (and a rather hacked traffic light system). The bottom row contains shots of Grand Canyon -- we wanted to see how well generic occlusion culling algorithms work with terrains. The terrain is by no means static; one can pick up pieces of it with a mouse and see how the changed geometry affects the visible set of the terrain.

More images, these and other demos, the new 500-page manual of the system, API headers etc. can all be downloaded from .




Message Center / Reader Comments:
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
Ville Miettinen

March 27, 2001, 11:51 AM

>As you say, perhaps it would be acceptable if the posters were
>willing to discuss their programs in detail.

>IMO if you post an IOTD you are prepared to discuss it with fellow
>coders in detail, else you're just showing off :-)

>This vis stuff is really very advanced and so cool that it's just
>amazing. Sadly, it's just being advertised and not explained.

The last 150 pages of our book (downloadable from the address above)
explain the most intimate details of all of the algorithms used in
the system. The binaries and source code for the demos can also be
downloaded from that address.

Our most sincere apologies for not having answered the remaining
questions any earlier - we just came back from GDC where we gave
over a hundred presentations of the system. If any of you are
coming to E3, we'd be most happy to give you a demo and answer
further questions you might have.


David Olsson

March 27, 2001, 12:12 PM

Is it really worth bothering with occlusion culling?
I mean, LOD and frustum culling are enough for me. LOD is even much more important than frustum culling.
With today's and future TnL cards, is it really worth bothering not to render things behind a big box in front of you? With aggressive LOD, the stuff behind the box will often be less than 10% of the total rendered geometry. And if you stand in front of a wall, does it really matter if you only have to render 10 polygons? Today's GPUs work in parallel with the CPU, so you don't have to waste that much CPU power. (Maybe you waste more on the occlusion calculations.) Besides, I'm more interested in the lowest framerate of a game, not the average, since framerates that jump around are even more annoying than low framerates.

Looking forward to a serious answer.


March 27, 2001, 12:37 PM

It all depends on the balance between the time it takes to determine whether or not an object is occluded and the time it takes to draw it.

Just the fact that it's fast to determine that an object is occluded doesn't make it better - if the test still takes a lot of time when an object isn't occluded, you have to add that to the draw time for that object - so it might still slow things down.
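The trade-off described here can be made concrete with a tiny expected-cost model. This is an illustrative sketch only - the uniform occlusion probability and the cost numbers are assumptions, not measurements from the thread:

```python
def expected_frame_cost(n_objects, p_occluded, test_cost, draw_cost):
    """Expected per-frame cost with and without an occlusion test.

    Without testing we always pay draw_cost per object. With testing we
    always pay test_cost, plus draw_cost for the fraction of objects
    that turn out to be visible anyway.
    """
    without_test = n_objects * draw_cost
    with_test = n_objects * (test_cost + (1.0 - p_occluded) * draw_cost)
    return without_test, with_test

# Testing pays off exactly when test_cost < p_occluded * draw_cost.
plain, culled = expected_frame_cost(10_000, p_occluded=0.8,
                                    test_cost=0.2, draw_cost=1.0)
print(plain, culled)  # culling wins here because 0.2 < 0.8 * 1.0
```

With mostly-visible scenes (low `p_occluded`) the inequality flips and the test is pure overhead, which is exactly the "add that to the draw-time" point above.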

Ville Miettinen

March 27, 2001, 12:37 PM

> QRock, umbra has nothing to do with a platform-independent
> rendering layer like Renderware.

Actually there is a hint of truth in what QRock said.. Criterion Software (the makers of RenderWare) are the exclusive distributors of Umbra =)

> the exception of the reflections demo which ran at something like
> 1fps for me! (GeForce2 GTS AGP 32Mb, PIII-933, 256Mb RAM) I
> don't know what happened there.

The reflection demo uses two OGL hardware features that might have caused your driver to switch into software emulation mode: stencil buffers and a user clip plane. IIRC Visualizer uses the same bitdepth as the desktop; if your desktop happens to be in 16bpp mode, try changing it from the control panel into 32bpp.

> Hmmm, i'm sceptical... i'd go even further! This will be integrated
> straight into the hardware and drivers. Your app will say "Here's
> the whole world divided in chunks of polies, and the
> camera is here." and everything will be handled transparently. Just
> like manual texture management is
> becoming a thing of the past (virtual texture memory, mmmm :)

I just spent the last week talking with head architects of all of the major 3D HW manufacturers and I can safely assure you that that is not going to happen in the near future. Current and upcoming hardware are providing some _support_ for occlusion culling; however the world traversal and associated spatial database management are way too complex (and thus expensive) to be implemented in hardware.
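The division of labour described here - software owns the spatial database and traversal, hardware only contributes limited occlusion support - can be sketched as a front-to-back walk of a bounding-volume hierarchy. This is a hypothetical illustration, not Umbra's actual API; `Node`, `frustum_test`, and `hw_occlusion_query` are invented names:

```python
class Node:
    """A node in a bounding-volume hierarchy (hypothetical structure)."""
    def __init__(self, bounds, children=(), payload=None):
        self.bounds = bounds
        self.children = tuple(children)
        self.payload = payload

def traverse(node, frustum_test, hw_occlusion_query, draw):
    """Front-to-back traversal: the CPU walks the spatial database,
    while the hardware only answers point queries of the form
    "would this bounding box produce any visible pixels?"."""
    if not frustum_test(node.bounds):
        return                        # outside the view frustum
    if hw_occlusion_query(node.bounds):
        return                        # bounding box occluded: skip whole subtree
    if node.children:                 # assume children pre-sorted front to back
        for child in node.children:
            traverse(child, frustum_test, hw_occlusion_query, draw)
    else:
        draw(node)
```

The expensive part - maintaining and sorting the hierarchy as objects move - stays on the CPU, which is the argument for why full occlusion culling was not expected to move into hardware.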



March 27, 2001, 01:35 PM

Looks good, but I bet you need a beefy system to run it ^_^.


Joachim Hofer

March 27, 2001, 02:08 PM

I think you didn't think about overdraw. Everything you send to the graphics card will probably at least have to be tested for rendering (a Z-buffer test), and that takes a lot of time. So if you have an overdraw of 8 or more (which sounds low for large scenes with no occlusion culling at all) at a resolution of 1024x768, you will need at least 190 MPixels/sec to get 30fps (if I calculated it correctly:)


March 27, 2001, 02:19 PM


I bet everyone is going to respond to this with "HEY! WHERE'S THE NEXT ARTICLE OF YOUR COLUMN!" :) But anyway, I want to say something about the topic of the necessity of culling:
Of course, nowadays 3D cards can easily draw a Quake level without culling anything. And yes, if you're looking at a blind wall, and it has 10,000 polygons behind it, you can just render the lot and still get 60fps. The problem is:
1. Maybe it's 10,000 objects, not 10,000 polygons.
2. Maybe you wanted to draw the wall itself with 10,000 polygons for bumps.
In both cases, it would be worthwhile to cull the 10,000 'things'. But only if culling is faster than drawing (boy, I hate to agree with LED! I'll have my bat ready for you! This is going to be a game on my territory! HAHAHAHA!)
And the good thing about Umbra is, as you could have known if you had read the manual, that Umbra finds a good balance between culling what matters and leaving alone what's not worth culling. In short: these guys know what they are doing, and you can be sure that they will never spend a second on culling if drawing would have taken 0.9 seconds.
And this kind of culling will ALWAYS be worthwhile. Even if your card can render a zillion polygons, because you'll want to draw ten times a zillion polygons. Don't ask me why. :)

And oh yes, I got all sucked down into my iPaq. Still love it. Check out for a demo. Runs on Win9x too. In the meantime I'm also reading about readtime raytracing. I'll finish that article one day. :)

- Jacco.

Marco Al

March 27, 2001, 02:33 PM

"Current and upcoming hardware are providing some _support_ for occlusion culling; however the world traversal and associated spatial database management are way too complex (and thus expensive) to be implemented in hardware."

3D cards are becoming too complex to implement entirely in fixed-function hardware, period; we already have little programs running on them... that just needs to be scaled up "a bit" ;)

Without an appropriate API the discussion is entirely academic anyway; no one company can manage to make such a major change to D3D on its own IMO... so we are stuck with incremental API changes. Of course, if D3D won't support something there is not much point in putting much effort into it. If M$ had stuck to the plan and introduced their scenegraph API + new driver model, I imagine the hardware companies might have been coming from a different direction; now it's awfully easy to blow it off.

IMO rasterization and VSD need to be able to interact; the inability to do so has created some wonderful algorithms... but a lot of redundant work is being done at the moment.

Marco Al

March 27, 2001, 02:35 PM

"readtime raytracing" is that where you go read a book and when you are finished the frame is ready too? :)

Arne Rosenfeldt

March 28, 2001, 06:24 PM

People wondering about the point of occlusion culling can try the Umbra demos with culling turned on/off (GF2, P700?).
In the scout demo I think it's very impressive.

Or compare Descent 1's normal view (culling) vs. map view (no culling)
on a P60...


March 29, 2001, 09:42 AM

L.e.Denninger & Numero27:
I am agreeing with Numero27 on this one. With graphics cards getting so fast, and scenes getting so complex, the work to do visibility determination is getting less and less important. Take a card like the Kyro/Kyro2: it has zero overdraw, so why would you write all the stuff in software to take out surfaces that are covered, when the hardware will do it for you? NVidia and ATI are both starting to implement their own derivatives too. I think it will come to a point where it's faster just to compute simple potentially visible sets and send it all to the card. All this extra stuff only bogs down the CPU, which is becoming more and more of the bottleneck as graphics cards get faster and faster. It's just a matter of time until it's a waste of time to even try to compute anything remotely close to a perfect visibility set. I see where he was going with this, and I completely agree. :)

Just my $.02


March 29, 2001, 09:50 AM

Jacco - fat luck, I won't be there tonight at Davilex :)

Marco Al

March 29, 2001, 04:10 PM

is what you need, transforming everything is not an option. The game engine usually has more information to do that efficiently than a 3D card which at best could try to infer structure. Building a scenegraph from standard D3D/OpenGL command streams to me seems hard. M$ was going to introduce a scenegraph API with a driver model which allowed hardware to use that information directly, but it died.


March 29, 2001, 05:31 PM

"With graphics cards getting so fast, and scenes getting so complex.. the work to do visibility determination is getting to be less and less important."

I don't understand this argument. Even if the card can do fast occlusion culling, the data still needs to get to the card. The bus to the graphics card is becoming more and more of a bottleneck as scenes get more complex (which, as you point out, is the trend). People tend to argue that the need for occlusion culling is insignificant compared to just frustum culling, but I don't agree. It's not hard to think of scenes which result in a huge amount of geometry without some form of occlusion culling.
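The bus argument can be put in rough numbers. This is a back-of-the-envelope sketch; the scene size, vertex format, and the AGP 4x peak figure are illustrative assumptions, not data from the post:

```python
# Rough upper bound on geometry traffic if everything in the frustum
# is sent to the card every frame, with no occlusion culling.
vertices = 2_000_000        # hypothetical in-frustum vertex count
bytes_per_vertex = 32       # e.g. float position + normal + one texcoord
fps = 60

traffic = vertices * bytes_per_vertex * fps   # bytes per second
agp4x_peak = 1.066e9                          # AGP 4x theoretical peak, ~1.07 GB/s
print(traffic / 1e9, traffic / agp4x_peak)
```

Under these assumptions the raw stream is several times the theoretical bus peak, so cutting occluded geometry before it crosses the bus matters even when the card itself could absorb the fill.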



March 30, 2001, 10:19 PM

Fahrenheit? It didn't die, it was murdered. It was Microsoft's rather shallow attempt at subverting SGI and OpenGL, the same way Microsoft "partnered" with IBM on OS/2. Fortunately, SGI didn't let MS pull the same trick to them. Unfortunately, SGI decided to get out of the graphics market soon after that anyway. :(

Jaap Suter

March 31, 2001, 03:21 AM


Didn't notice this joke until now.


This thread contains 46 messages.