Submitted by Rob James, posted on September 08, 2000




Image Description, by Rob James



My name's Rob James and I'm working on a new demo which, amongst other things, attempts some terrain rendering (oh no, not another terrain!). Hopefully this one's a bit different because it's ray-traced in realtime and doesn't need a hardware 3D card :)

The terrain is based on the HybridMultiFractal code described in the excellent book 'Texturing and Modeling: A Procedural Approach' by Perlin, Ebert and Musgrave. The core of this terrain algorithm is a sequence of weighted calls to the standard Perlin noise function, which traditionally can be too slow for real-time use. This tiny demo renders a practically infinite terrain via a 128x256 texture which is rendered into a 640x380 window. Both the Perlin noise function (Noise2) and the HybridMultiFractal terrain function have been implemented using Intel SIMD (for 4-at-a-time power!) so it will ONLY RUN ON Pentium III class CPUs (Celeron II is cool too). Using Intel Compiler 4.5 and hefty use of VTune (both demos from the Intel site) I have managed to get the code up to 16fps on a PIII-650 and 19fps on a Celeron-750. If there are any SIMD gurus out there who can cast an eye across my initial efforts I would be happy to post the code.

ta ta

RobJames@europe.com


Image of the Day Gallery
www.flipcode.com
Message Center / Reader Comments:
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
Axeman

September 08, 2000, 12:38 AM

Dang.. I'm impressed!

 
Jaap Suter

September 08, 2000, 01:43 AM

Realtime raytracing. Very cool!
Jaap Suter

 
Kim Pallister

September 08, 2000, 02:21 AM

Looks pretty sweet. Send me the SSE code and I'll have a look at it (though probably not for the next two weeks or so - very busy at work).

I'm curious about the performance you are seeing. I have a terrain demo (I should post an IOTD tomorrow) in which our height map is 240x240 verts. On a high end system we can go up to 320x320, where each sample is interpolated from over four samples, over eight octaves(!). The terrain is added in increments of 24x240 verts as the viewer gets a certain distance over the landscape, so it's not every frame.

We're doing a couple thousand trees as well, of which the closest ones are very high detail, and we're doing some per-frame texture generation, so not all our cycles go to the terrain.

Anyhow, it's an apples to oranges comparison since we're using HW to rasterize the thing, but I am curious to know how expensive you find the noise generation compared to the rest of your app.

Thanks,

KP



 
MadKeithV

September 08, 2000, 03:40 AM

Looks like a really roundabout way to do a voxel landscape to me though...

Do you have all three degrees of freedom?

 
Rob James

September 08, 2000, 08:04 AM

Voxel? I'm not sure - this term seems to be used for any ray-cast heightfield terrain. Anyone got a definition for us?

Limited DOF, but that's because I wanted speeeeeed rather than flexibility. You can alter x, y, z and angle of incline (look up or down) but I really need to start recalculating the far plane if I do this.


The Intel tools are really great. The last big FPS increase I got was from the Intel compiler 4.5, using the -Qrcp (?) switch to turn off the CPU rounding-mode switching which happens when you do something like

myint = (int)myfloat

With this switch the CPU rounds to the nearest value rather than rounding down, but if myfloat > 0 you can just do

myint = (int)(myfloat - 0.5)

to ensure you get the same results.


The real crux of this demo is that the landscape is 100% generated using layers of Perlin noise. Each layer is weighted by the noise value of the previous layer, so high-up bits of terrain are lumpy (more octaves) while low-down bits are smooth (fewer octaves). I jump out of the octave loop for a particular vector if the weighting becomes less than 'a small number', since at that point additional octaves have little effect on the height...

The app spends about 48% of its time in the Noise2 function and 35% in the GetTerrainHeight function (including SIMD normal calcs).

The loop which generates the heightfield array values is cool because it uses deferred results from the SIMD GetHeight function, which only starts doing stuff once its 'pipe' is full (4 vectors). Each vector passed in is 'tagged' with an id which tells the function which element of the heightfield array goes with which vector.

The GetHeight function then does the noise octave passes across all four vectors until AT LEAST ONE (up to all 4) has completed, i.e. all 8 octaves have been calc'd or the current noise weight < small value. It then returns between 1 and 4 results back to the terrain height array loop and again waits until its pipe is full before doing any more passes. This ensures that even though consecutive points may need different numbers of octaves calculated, the SIMD noise function is never doing wasted octaves.
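A scalar sketch of that tagged-pipe scheme, with a plain struct standing in for an SSE lane (all names, the stand-in octave step, and the thresholds are hypothetical, not the demo's code):

```c
#include <stddef.h>

#define LANES   4
#define OCTAVES 8

/* One in-flight sample: which heightfield cell it belongs to, which
   octave it is on, and its running height/weight. */
typedef struct {
    size_t tag;      /* destination index in the height array */
    int    octave;   /* next octave to evaluate               */
    double height;
    double weight;
    int    active;
} Lane;

/* Stand-in per-octave step; returns nonzero when the lane is done
   (all octaves evaluated, or the weight fell below threshold). */
static int step_octave(Lane *ln)
{
    double signal = 0.5 / (double)(1 << ln->octave);
    ln->height += ln->weight * signal;
    ln->weight *= signal;
    ln->octave++;
    return ln->octave >= OCTAVES || ln->weight < 1e-3;
}

/* Fill every cell of out[0..n-1].  Finished lanes are retired by tag
   and refilled before the next pass, so no pass wastes an octave on a
   sample that has already converged. */
void fill_heights(double *out, size_t n)
{
    Lane lane[LANES];
    size_t next = 0, done = 0;
    for (int i = 0; i < LANES; i++) lane[i].active = 0;

    while (done < n) {
        for (int i = 0; i < LANES; i++) {           /* refill the pipe  */
            if (!lane[i].active && next < n)
                lane[i] = (Lane){ next++, 0, 0.0, 1.0, 1 };
        }
        for (int i = 0; i < LANES; i++) {           /* one octave pass  */
            if (lane[i].active && step_octave(&lane[i])) {
                out[lane[i].tag] = lane[i].height;  /* retire by tag    */
                lane[i].active = 0;
                done++;
            }
        }
    }
}
```

In the real SIMD version the four lanes live in one set of SSE registers and step together, but the retire-by-tag/refill bookkeeping is the same.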

I hope this makes sense - perhaps I should post some code!! :)

Rob James

 
Elixir

September 08, 2000, 08:51 AM

I think I would be a lot more impressed if it would run on my Athlon system. ;)

 
Ingenu

September 08, 2000, 09:03 AM

So I can do something like that with this famous book I read :o)

Nice shots, I hope we'll be able to do realtime raytracing very soon.

BTW voxels are defined as height fields. They are altitude values...
Voxels are a special case of raytracing I think...

Can I email you to get some help on random terrain generation ?
(don't be afraid I own and already read the book too ;o) )

 
Alex J. Champandard

September 08, 2000, 09:31 AM

Hey,

The algorithm you use seems very similar to the QAEB (is this the right acronym?) described in the same book. As you pointed out, it is very efficient simply because of the lazy layered approach to the rendering. Precision is of course reduced in the distance, which speeds things up very nicely.

Voxels are volume elements, which can (among other things) represent landscapes, and this can be done (among other ways) with a height map. How you render them is not really an issue (well, as far as the concept/name is concerned ;). In this case I'd say they are not voxels, simply a parametric surface (just like in terraVox, which used a parametric surface defined by a height map... hehe, yes I made the same mistake ;)

Regards,
Alex

 
NovaCoder

September 08, 2000, 09:31 AM

Realtime ray-tracing, it's the future.....

It's a pity we don't have any video cards to help the poor old CPU out!

 
Rob James

September 08, 2000, 11:21 AM

Nice to see another York Univ Alumni here :) Hi Alex.

Quasi-Analytical Error Bounded somethin-or-another :)
I started out doing that but decided for speed that a quick ray-cast through pre-defined depths would do the trick. The heightfield array (256x128) isn't square in world-space: the two near corners are a view-window width apart and the far corners are at the far-plane view frustum intersection. The distance between successive z-slices through the terrain height array isn't linear either; it's weighted so that there are more samples close to the viewer than far away.

It's then very quick to cast a ray from each view pixel across the 128-element height column and do a height vs. ray-height test.

I suppose I could use this routine as a speed-up pre-step for the QAEB algorithm - sort of a quick rough intersection search to get within the approximate region of intersection with the landscape.


If I have time I'll grab the AMD libs and see if any of their SIMD stuff can help me port to Athlon, but I have no way of testing :(

ttfn

Rob James

 
Max

September 08, 2000, 12:16 PM

"Realtime ray-tracing, it's the future....."

I'm not so sure about that. It certainly would be cool, but there's a couple of reasons why I don't expect to see it anytime soon. The first is that it's hard to create hardware to do ray-tracing. I'm not a hardware expert, but I think part of this is to do ray-tracing you need to have the entire scene in memory (or at least some form of it). The second reason is Pixar. They've shown quite effectively with their movies that ray-tracing isn't necessary to make awesome looking visuals, and the SIGGRAPH 2000 paper on a RenderMan shader compiler shows that pretty soon we can expect to do all kinds of awesome effects that we typically associate with ray-tracing with triangle rasterizers like OpenGL.

Max

 
fluffy

September 08, 2000, 12:44 PM

Er, no, 'voxel' meaning 'heightfield' is a very bad, imprecise, bastardization (courtesy of the demoscene). A "voxel" is a volumetric pixel. A voxel-rendered heightfield is what people usually mean when they say "voxel" these days. Technically it should just be called a heightfield, since it doesn't really involve voxels anyway. :P

 
malkia/eastern.analog

September 08, 2000, 01:18 PM

Huh... where is theee demo?

ahum?

Anybody tried to use "The Directional Parameter Plane Transform of a Height Field" - http://www.acm.org/pubs/citations/journals/tog/1998-17-1/p50-paglieroni/

The idea is very cool, though it needs quite a lot of preprocessing, especially finding a good distance transform algorithm (but on www.magic-software.com - Eberly's site - there was a good one).

And the bad thing is - Paglieroni PATENTED this - but you can download a PDF describing it from the IBM patents database for only $3, instead of buying it from ACM.

The idea is simple - you precalculate an inverted cone at every height cell, and then you use it to accelerate raytracing.... blaaah cool....

My idea for using it would be something like this: I raytrace 128x128x2 triangles on a 512x512 screen, receive some U,V coordinates inside the height field, and then use this U,V as texture coordinates in this 128x128x2 fixed mesh.

And sorry for my English (still learning it)...

malkia
http://EasternAnalog.hypermart.net/glvox.html
http://EasternAnalog.hypermart.net/vox4.html
some blah-blah voxel-space engines.

 
fluffy

September 08, 2000, 09:32 PM

You don't need to pay any money to get patents. IBM makes the complete patent text available. You have to click on their "view figures" (or whatever) link on the patent summary. You can also do the search at the US Patent and Trademark Office.

 
malkia/eastern.analog

September 08, 2000, 09:36 PM

Yeah, you are right, but the pictures on the screen are rendered at fixed resolutions, and you can buy the PDF file (with good resolution) - anyway, this was the only thing I bought from there - just $3.

Well, I bought that when I was back in Bulgaria - where for $3 you can buy 3 bottles of Uzzzzo (our Mastika) - 750ml.... But anyway...

 
fluffy

September 09, 2000, 01:02 AM

Welcome to America (I'm assuming), land of overpriced products to satisfy a rampant consumerism mentality. :)

 
Mark Friedenbach

September 09, 2000, 01:12 AM

Geez.. And to think that everyone here laughed at me when I suggested realtime raytracers as the "next big thing" about a year back on this site..


A voxel ("volumetric pixel") scene or model, as described in Alan Watt's book "3D Computer Graphics, 3rd Edition" (c) 2000, is a 3d array of boolean values representing a volumetric surface.

In simpler terms, a voxel representation involves breaking down the model or scene into cubic sections, much as you would a (very) simple octree, and assigning each cubic area a "filled" or "unfilled" boolean value. In this way, you would be representing a true volumetric space, rather than a triangle covered illusion.

Because of its nature as a true volumetric representation, ray-casting is not only easy but fast when operating on voxel data. Thus, absolutely incredible speeds in raytracing engines (read: realtime) may be achieved when working with voxel data. However, the massive number of drawbacks involved with using voxel data (too numerous to post here in this message) greatly outweigh the benefits, and it is not often that one finds a true voxel renderer nowadays.

Its only real use is in producing images from medical data, where the input is already in a voxel-like form. Even then, the voxel data is usually analyzed and then converted into a triangle mesh to be rendered in traditional ways.

 
malkia/eastern.analog

September 09, 2000, 01:47 AM

yeah, you gotta be right :)))

expensive!!!!
very expensive!!!!

america is still not optimized....
it was always in demo version.....
and when you come here... you gotta pay....

but it's kool anyway...

 
malkia/eastern.analog

September 09, 2000, 01:55 AM

Ahum....

At least raycasting, or level-two or at most level-three raytracing, can be optimized in some tricky ways. But you can only really optimize well if you do a lot of preprocessing (though this will stop you from modifying the data). Think about a 3D volume cube which is DISTANCE TRANSFORMED, so that every EMPTY voxel contains the biggest SPHERE RADIUS which can be opened from there without touching any other voxels - this can speed up rendering a lot!!!!

But why do you think voxels can be rendered only with triangles? There were some links on www.OpenGL.org and on www.FlipCode.com about ways to render voxels using SPLATS or points. Of course marching cubes are cool, 'cause they are XBOX-compatible, but... if Microsoft decides to use a RADEON instead of a GeForce (I hope!!!) in their XBOX - that would be cooler - and also puts in 128MB of RAM - then we could fully use 3D textures, so instead of precalculating voxel data for SPLATS through three different views, you'd have only one. Also, 3D voxel data could be compressed well, like 1:64, 'cause it's natural data (not noise) and there could be a lot of things suitable for compression.
Also better illumination models could be made (blah-blah, just talking - never tried it - just read about it :))). Fog can be made more real (is that orl korrekt in English?).

 
Marco Al

September 09, 2000, 03:22 AM

I don't know much about raytracing, but I am curious... what tricks are you thinking about for "level 3" raytracing? For level 2 I can readily recognise an easy trick, the equivalent of shadow mapping... only you need a perfect visible set from the light's POV instead of the shadow map.

As for NVIDIA's X-Box chipset, for one I doubt they will use anything resembling the GeForce too closely in the X-Box... the Ultra is definitely as far as they can push that design; unless they have something better in the wings for next year they are finished. And I doubt NVIDIA would let it come to that. Secondly, ATI is already supplying Nintendo; how do you think they would react to them getting together with m$?

I don't see the attraction in 3D textures for generating slices for volumetric textures anyway; the overdraw is huge. And compression is nice, but when exactly do you intend to decompress it? A single polygon can be an excellent way to compress a whole boatload of voxels, BTW :) Directly accelerated displacement maps would be very nice though; m$ says it's coming in one of the next DXs, and displacement maps are in a way heightmaps... so if you want to you can say voxel rendering is going to be the next big thing and still be right :)

The more classic voxels/surfels are nice, but I don't see the advantage over surface representations with approaches like hierarchical levels of detail (http://www.cs.unc.edu/~walk/hlod/). You get most of the advantages and you can use available hardware efficiently. Once scan-conversion hardware accelerates scenegraphs with hierarchical LOD directly, you will even get the last advantage of raycasting octree-stored voxels: zero overdraw (think of Ned Greene's hierarchical Z-buffer approach).

And finally voxels' last claim to fame, volumetric effects. The high overdraw of volumetric textures is still there. Other approaches like deep shadow maps seem more appropriate. And for volumes with constant density, or volumes which can be easily approximated by them, effects could be realised with scan-conversion. All you need is the distance the ray travels through the volume for a given pixel - something which can be easily computed from the Z-values of the front/back facing delimiting surfaces of the volume. All it needs is some hardware support. To me that seems a superior approach, and suitable for most volumetric effects.

Marco

 
DirtyPunk

September 10, 2000, 01:24 AM

Raytracing has a few advantages: it's generic (many different things can be ray traced, and you can put them all through the same pipeline), many effects (reflections, shadows, refractions, CSG) are easily expressed as ray tracing, and ray tracing works well with parallelism.

And hardware raytracing isn't that far-fetched; doing a ray/triangle intersection is not that much more work than perspective-correct bilerp'd texturing. And raytracing other primitives is also doable.

For instance, Mitsubishi Electric (and its subsidiary RTViz) makes the VolumePro hardware devices, which trace voxels in realtime.

But the question comes in the form of, which is the way to go? Triangles? Splats? Ray tracing (and if so, in what form).

Personally, I think there is a stage where triangles become too much calculation, once you can do things like splatting at the subpixel level.

And what if we had a generic representation like ADFs that allowed us to represent many different shapes and do easy CSG etc.? That would be good too.

 
This thread contains 21 messages.
 
 