Building a 3D Portal Engine - Issue 16 - More On Portals
by (16 April 1999)
Let's go back to the Portal stuff again. After all, this is Phantom's guide to writing a Portal engine, isn't it? Despite that, almost half of the articles are about completely different topics... The reason for that is simple: no algorithm is perfect. Using portals for polygon culling definitely isn't. But I believe that it is one of the best ways to get into visibility determination and rendering quickly, as the technique is rather straightforward. The problems that you'll encounter when you build your own portal engine will make you look for better things, and that's where you should get creative. And when you get creative, you need to know what your tools are. OK, I feel comfortable again about tossing random topics at you. :) You too? Let me know, use that messageboard!
OK, so Harmless told you that portals are slow for complex scenes. Thanks man! :) But hey, he's right. I had some long discussions with him by e-mail (you can have a look at it at his site), and he actually proved in a mathematical way that portal rendering is not THE way. OK. But, a portal engine allows you to do real-time shadow casting, real-time visibility determination, easy collision detection and a lot more. So, it might be worth the effort to explore some more ways to get it fast.
I'm going to explain a way to get 'pure portal rendering' faster, but the described method will work for variants too.
When I was programming a portal engine at Lost Boys (a multimedia company in Amsterdam), I was working with a dude named Jurgen Kluft. He spent most of his time working on a 'portalizer', which proved to be a rather complex task. In the end he decided to use a BSP compiler instead. In the meantime, we discussed many ways to improve our initial portal rendering algorithm. One of the best ideas that Jurgen came up with was the concept of 'active edges'. Active edges are portal edges that actually matter. Let me explain: When you render a scene, you start in the sector (or cell) that the camera is in. If this cell contains portals, you render the sectors beyond these portals by generating a new frustum. You do this by generating planes from the viewpoint to the edges of the portal. All polygons in the sector beyond the portal are then clipped against these planes. Problem (or rather, opportunity) is, not all of these planes will be used to clip polygons. Have a look at this scene:
In case you didn't notice, it's my famous room with a pillar again. :) Now, we prepare this room for pure portal rendering. To split this scene into convex polyhedra, we add portals from the corners of the pillar to the corners of the room.
Now, imagine we are rendering from a viewpoint left of the pillar, and we are looking to the north. The portal that we can see will cause the frustum to be altered, and the space north of the pillar will be clipped against this new frustum. One of the edges of the portal polygon is the edge that starts and ends in the corner of the room. This is an interesting edge: If we construct a plane from the viewpoint through it, none of the polygons in the sector beyond the portal will ever be clipped against it. It is an 'inactive' edge.
But there are more inactive edges: The diagonal edge on the floor, and the one on the ceiling. Actually, there's just one active edge: The one at the corner of the pillar. That's quite interesting, as we have just reduced the number of planes to be generated by 75%.
So, when is an edge active, and when not? That's rather easy. A portal edge is always shared by two polygons. If the angle between these polygons is smaller than or equal to 180 degrees, the edge is inactive. Otherwise, it's active. Or, to put it differently: A portal is always used to solve concavity in a scene. Only edges that cause this concavity can cause clips in polygons behind them. Thus, only these edges need to produce clipping planes.
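The angle test reduces to a single plane-side check. Here's a hedged sketch of one way to do it, assuming (my assumption, not stated in the article) that polygon normals face into the empty space of the sector: the edge is concave exactly when a vertex of the second polygon (one not on the shared edge) lies behind the first polygon's plane. All names are illustrative:

```cpp
struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A plane: all points x with dot(n, x) == d; 'n' faces the empty space.
struct Plane { Vec3 n; float d; };

// The portal edge is shared by two scene polygons A and B. 'vertexB' is
// any vertex of B that is not on the shared edge. If it lies behind A's
// plane, the dihedral angle through the empty space exceeds 180 degrees,
// so the edge is active; otherwise it is inactive.
bool EdgeIsActive(const Plane& planeA, const Vec3& vertexB)
{
    const float EPSILON = 1e-5f; // tolerance for coplanar (exactly 180 degree) cases
    return dot(planeA.n, vertexB) - planeA.d < -EPSILON;
}
```

For the room-with-a-pillar example: at a corner of the room the second wall folds toward the first one (it lies in front of its plane), so the edge is inactive; at a corner of the pillar it folds away (behind the plane), so the edge is active.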
Now, let's return to the 'portalization' problem. A typical Quake level consists of a lot of convex solid polyhedra, so-called 'brushes'. The space between these volumes is empty space, and it's usually not convex. Splitting this space with portals is easy to do by hand, at first sight. Doing this splitting automatically is a disaster, as Jurgen found out. A BSP compiler can do this automatically, but it produces many portals, and many splits of polygons. But how do these portals look? That's interesting: A typical scene 'portalized' by a BSP compiler contains many more '180-degree edges' than an 'ideal' portal set would contain. Every split polygon, for example, adds only inactive portal edges to the set. Applying the concept of active edges thus helps to improve the performance of a BSP set, so that it gets closer to the speed of an ideal portal set.
There's one thing I need to mention. While I was writing this text, I had a case in mind where a portal polygon contained only a single active edge (the portal attached to the pillar, actually). Now, if we would encounter this polygon while rendering the scene, our frustum would thus consist of only a single plane. But what about our screen boundaries? Wouldn't this potentially allow polygons beyond the portal to be partially off-screen?
The answer is: Yes and no. Polygons beyond the portal can only be partially off-screen when either an active edge is (partially) off-screen, or when we encounter another portal in the sector beyond the portal. The second case is no problem at all, since we would generate new planes in that case anyway. The first case is easy to solve: Whenever an inactive edge is clipped, part of it becomes active. The part that becomes active is the part that is parallel to the plane that causes the clip. And in the hypothetical problem case I just mentioned, that would be the screen.
This will not make your portal engine lightning fast. It does not prove that Edward Kmett is wrong. But it can help to get a portal engine a lot faster, and that's always a good thing.
I have one more thing on my mind. This is a column, after all, so in my humble opinion, I can say whatever I like. :) Currently, there's a lot of research going on on the subject of 'visibility determination'. Most games use rather old techniques, like BSPs, potential visibility sets, and so on. Academic researchers seem to be unable to come up with something better; they are playing with visibility skeletons and other mighty complex stuff. I myself am experimenting with some odd algorithms to determine visibility, preferably as fast as possible of course. After all, it's ridiculous that no single game these days can produce more than a few hundred polygons, while cards are perfectly able to display thousands. In case you're interested in my latest attempts: It involves a weird mix of one of the 'fine occlusion culling' techniques that Edward mentions, namely a variant of the beamtree, with an octree or BSP structure. Mail me if you want details.
Problem is, there are TWO game companies developing much better algorithms. The first is TRI, with their KAGE technology; the second is the company that is developing Decay. Both engines seem to be able to determine a visible set in realtime, without sacrificing polygon count or dynamic worlds. The Decay engine doesn't even require ANY preprocessing of the dataset. How do they do that? How is it possible that apparently two companies discovered algorithms that are FAR better than anything that is being used right now, either in games or by academic dudes? I find this frustrating: Knowing that there's something so much better and possibly simpler, and not being able to investigate. Where are the days that people shared ideas, to improve them even further? Carmack proved that sharing ideas doesn't hurt a good developer...
Anyway. Let's hope that some day these awesome algorithms will be described by their inventors, or that someone that doesn't need to make a living out of it invents something really cool... It must be there, it just needs to be discovered.