Camera matrix from basis?
 
Archive Notice: This thread is old and no longer active. It is here for reference purposes. This thread was created on an older version of the flipcode forums, before the site closed in 2005. Please keep that in mind as you view this thread, as many of the topics and opinions may be outdated.
 
Parashar Krishnamachari

July 24, 1999, 04:54 PM

Is it viable to define a camera as a point and an orthonormal basis? i.e. The initial state of the camera is at the origin (0,0,0), and its vectors are (1,0,0), (0,1,0), and (0,0,1). When you transform the camera, you transform those three basis vectors, thereby producing another orthonormal basis, which gives you a complete 6DOF camera. The idea here is that the matrix generation needn't require any work at all. The basis is already set up.
Even more so, couldn't that be extended to lightsources? Useful in the cases where we render from the light's view for shadow algorithms -- a frustum for some spotlight can freely rotate, even about its own "Z-axis." Or even mirror effects... A mirror could fall and rotate as it does... And the flying shrapnel from a broken mirror could still reflect.
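A minimal sketch of the idea in C++ (the struct and names here are purely illustrative, not from any particular engine):

    // A camera stored as a point plus an orthonormal basis.
    struct Vec3 { float x, y, z; };

    struct BasisCamera {
        Vec3 origin;              // camera position in world space
        Vec3 right, up, forward;  // orthonormal basis vectors in world space

        // Initial state: at the world origin, aligned with the standard basis.
        BasisCamera()
            : origin{0, 0, 0},
              right{1, 0, 0}, up{0, 1, 0}, forward{0, 0, 1} {}
    };

    // Transforming the camera means transforming the three basis vectors
    // (and the origin); the "camera matrix" is implicit in the basis itself.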

- C.P.I. / ASMiNC

 
Raven

July 24, 1999, 05:27 PM

>> Is it viable to define a camera as a point and an orthonormal basis? i.e. The initial state of the camera is at the origin (0,0,0), and its vectors are (1,0,0), (0,1,0), and (0,0,1). When you transform the camera, you transform those three basis vectors, thereby producing another orthonormal basis, which gives you a complete 6DOF camera. The idea here is that the matrix generation needn't require any work at all. The basis is already set up.

Even though the basis is already set up, you still have to find the basis transformation to transform to camera space. If you want to talk linear algebra, then you are suggesting that (as long as we are in 3D -- this obviously doesn't apply to projections) the camera is the base coordinate system and that the world coordinate system is orthonormally isomorphic to it. True, it will work, but this is equivalent to saying that we use matrices, because in order to transform from an orthonormally isomorphic space to its parent space we will need to translate the basis vectors to the origin of the parent and rotate them to coincide with the parent basis vectors. This is equivalent to a camera matrix.
As to lightsources and mirrors and shadows, you are saying more of the same thing. If an omni-lightsource, for example, is to be considered as a 360-degree-field-of-view frustum, I don't see any reason why we should consider the light to be an isomorphic basis to camera space. And in order to find the basis you will have an ambiguity, since a point doesn't define orientation. Mirrors don't even give you the point. If you are talking about building the frustums, then we don't need to transform to light or mirror coordinates. It's just as simple to use the light as the apex for the frustum, without building an isomorphic space on it, IMHO.

- Raven

 
Parashar Krishnamachari

July 24, 1999, 10:18 PM



Raven wrote:
>> Even though the basis is already setup you still have to find the basis transformation to transform to camera space.
In other words... matrix multiplication of the inverse to the vector and the result to the camera transform... (or vice versa... can't quite remember offhand) Yep. I understand that much. My basic question is -- is that worse or better than generating the matrix from a point-target setup? Noting that having a basis already set up means it can innately handle all aspects of orientation, as opposed to what, like you said, a point or even a lone vector can give you.

>> As to lightsources and mirrors and shadows you are saying more of the same thing. If an omni-lightsource for example is to be considered as a 360-Field-of-view frusrum, I don't see any reason why we should consider the light to be an isomorphic basis to camera space.
I didn't necessarily speak of omni-lights... Orientation is meaningless to an omni-light. Take, for instance, one of those rectangular-shaped flashlights -- you know, the ones you recharge on the wall. In such a case, orientation in all 3 axes makes a difference. If it's a normal round spotlight, then 2 of the 3 axes matter... etc.

>>Mirrors don't even give you the point. If you are talking about building the frusrums then we dont need to transform to light or mirror coordinates. It's just as simple to use the light as the apex for the frusrum, without building an isomorphic space on it, IMHO.

For the mirrors, it can be a simple matter of casting rays that converge at a point on the axis which lines up with the centroid. It's a one-time thing and can be transformed with the mirror itself if need be. But why wouldn't we need to render the scene from the light's view for a shadow algorithm? Shadow volumes and such aside... The idea with a lot of shadow algorithms is to render the view from the lightsource and test occlusion from that view. Speaking of which, how WOULD one do that for an omni-light?

- C.P.I. / ASMiNC

 
Raven

July 25, 1999, 01:25 AM

>> In other words... matrix multiplication of the inverse to the vector and the result to the camera transform... (or vice versa... can't quite remember offhand) Yep. I understand that much. My basic question is -- is that worse or better than generating the matrix from a point-target setup? Noting that having a basis already set up means it can innately handle all aspects of orientation, as opposed to what, like you said, a point or even a lone vector can give you.
It's meaningless to compare this to a point-target setup, since the target information is not included in the camera basis. If you have a target, you have yet to find the basis vectors. From there it is a "simple" (though long :) linear algebra equation of:

Vwc*||Aij||=Vbc
where Vwc is the world coordinate system basis vectors, Vbc is the camera coordinate system basis vectors, and ||Aij|| is the 4x4 transformation matrix from one isomorphic space to the parent. Side note: isomorphic means a variation of the "parent" basis that can be transformed into the original with a matrix.
Then your camera matrix would be the inverse of ||Aij||. This is longer to calculate (inverting 4x4 matrices is long as HELL, and you have to do that twice here for the equation). Side note: inverse matrix means such a matrix that:

||Aij||*||Bkl||=||I||
where ||Aij|| is our matrix in question and ||Bkl|| is such a matrix that will transform ||Aij|| into ||I|| -> the identity matrix. There are "quicker" ways to do this without a system of equations, such as Cramer's rule (determinants). But it's still long as hell.
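For illustration, here is the cofactor/Cramer approach for the 3x3 case; the 4x4 case works the same way, just with correspondingly more terms. This is only a sketch, not optimized code:

    #include <cmath>

    // Invert a 3x3 matrix m (row-major) via the adjugate / Cramer's rule.
    // Returns false if the determinant is (near) zero.
    bool invert3x3(const float m[3][3], float out[3][3]) {
        // Cofactors of the first row give the determinant by expansion.
        float c00 =   m[1][1]*m[2][2] - m[1][2]*m[2][1];
        float c01 = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]);
        float c02 =   m[1][0]*m[2][1] - m[1][1]*m[2][0];
        float det = m[0][0]*c00 + m[0][1]*c01 + m[0][2]*c02;
        if (std::fabs(det) < 1e-8f) return false;
        float inv = 1.0f / det;
        // Inverse = adjugate / det, where adjugate = transpose of the cofactor matrix.
        out[0][0] =  c00 * inv;
        out[1][0] =  c01 * inv;
        out[2][0] =  c02 * inv;
        out[0][1] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]) * inv;
        out[1][1] =  (m[0][0]*m[2][2] - m[0][2]*m[2][0]) * inv;
        out[2][1] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]) * inv;
        out[0][2] =  (m[0][1]*m[1][2] - m[0][2]*m[1][1]) * inv;
        out[1][2] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]) * inv;
        out[2][2] =  (m[0][0]*m[1][1] - m[0][1]*m[1][0]) * inv;
        return true;
    }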

Since there isn't a real plus I can see in defining the vectors instead of the yaw, pitch and roll directly, I wouldn't say this is a better approach. In some applications this could be better though. It's up to you.

>> I didn't necessarily speak of omni-lights... Orientation is meaningless to an omni-
>> light. Take for instance, one of those rectangular-shaped flashlights -- you know, the
>> ones you recharge on the wall -- In such a case, orientation in all 3 axes makes a
>> difference. If it's a normal round spotlight. Then 2 out of the 3 axes matter... etc.
Yes, true, but what you are talking about is a non-standard way of doing lighting. I want to take this approach in my engine, i.e. a light is defined by a texture of light distribution. This texture is then projected onto the environment around the lightsource. There are numerous difficulties, but if this works then lightsources can be fully animated, nothing stops you from removing static lightsources altogether and your world compile time is cut in half, and you can even put a picture of your girlfriend on the wall as from a projector. That is COOL! For a light like that orientation can be important, yes, but it's a question of how you define it. If it's the orientation of the light frustum then OK, but what if you project on the vertex level? You don't know anything about frustums anymore and you have to define the axis of rotation of the texture for the orientation of the light. I have to think about this...
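A rough sketch of the planar version of this idea (project a world-space vertex into light space and divide by z to get texture coordinates); the spherical tangent-plane variant discussed further down is more involved. All the type and function names here are made up for illustration:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Assumed light description: position plus an orthonormal basis and a
    // field-of-view for the projected "light texture". Purely illustrative.
    struct ProjectorLight {
        Vec3 pos;
        Vec3 right, up, dir;   // orthonormal, dir points into the scene
        float tanHalfFov;      // tan(fov/2) of the projection cone
    };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

    // Compute (u,v) in [0,1] for a world-space vertex, or return false if the
    // vertex is behind the light or outside the projection cone.
    bool lightTexCoord(const ProjectorLight& L, const Vec3& world,
                       float& u, float& v) {
        Vec3 d = { world.x - L.pos.x, world.y - L.pos.y, world.z - L.pos.z };
        float z = dot(d, L.dir);
        if (z <= 0.0f) return false;                 // behind the light
        float x = dot(d, L.right) / (z * L.tanHalfFov);
        float y = dot(d, L.up)    / (z * L.tanHalfFov);
        if (std::fabs(x) > 1.0f || std::fabs(y) > 1.0f) return false;
        u = 0.5f * (x + 1.0f);                       // map [-1,1] -> [0,1]
        v = 0.5f * (y + 1.0f);
        return true;
    }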

>> For the mirrors, it can be a simple matter of casting rays that converge at a point on the axis which lines up with the centroid. It's a one-time thing and can be transformed with the mirror itself if need be.
Yes, true enough. This is the portal-based implementation of mirrors you are talking about. The point of convergence would be the apex of the field-of-view frustum for the mirror, which can then be adjusted for the camera position (since it's faster than inverting it about the mirror plane).

>> But why wouldn't we need to render the scene from the light's view for a shadow algorithm? Shadow volumes and such aside... The idea with a lot of shadow algorithms is to render the view from the lightsource and test occlusion from that view. Speaking of which, how WOULD one do that for an omni-light?
Depends on your occlusion algorithm. For an omni-light I would use a beamtree with two base nodes. Look it up in harmless algos, he talks about 360 FOV. With z-buffering you'll need 6 z-buffers. Can't think of anything better. Why would you use that when shadow volumes are so much better, though? :)

 
Parashar Krishnamachari

July 26, 1999, 09:17 PM



Raven wrote:
>>Vwc*||Aij||=Vbc
>>where Vwc is the world coordinate system basis vectors, Vbc is the camera coordinate system basis vectors, and ||Aij|| is the 4x4 transformation matrix from one isomorphic space to the parent.

Yeah, but since the basis is orthonormal, the inverse is the same as the transpose. Of course, that means that the camera transformation cannot include translation or perspective projection, but those can be accounted for later. Translation is really just inverted by negating the translation factors and applying them in reverse order.
I don't think one would WANT to invert the matrix if it included projection. The whole point of this is to render the view from a certain point and angle, so projection should remain the same... It's the matrices that operate in 3D space that we're really concerned with.
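In code, that shortcut could look something like this -- a sketch assuming the point-plus-basis camera from above; the rotation part of the inverse is just the transpose, so no general 4x4 inversion is needed:

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

    // Camera defined as a point plus an orthonormal basis (world-space vectors).
    struct BasisCamera { Vec3 origin, right, up, forward; };

    // World -> camera: subtract the origin, then apply the transposed basis.
    // Because the basis is orthonormal, the transpose IS the inverse rotation.
    Vec3 worldToCamera(const BasisCamera& cam, const Vec3& p) {
        Vec3 d = { p.x - cam.origin.x, p.y - cam.origin.y, p.z - cam.origin.z };
        return { dot(d, cam.right), dot(d, cam.up), dot(d, cam.forward) };
    }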

>> I want to take this approach in my engine, ie: a light is defined by a texture of light distribution. This texture is then projected onto the envirnoment around the lightsource.

Sounds a lot like particle tracing, except for the fact that you're treating the whole FOV as a single entity rather than as a field in which rays can be traced. Well, that, and you kinda have other things in mind than just global illumination.

>> he talks about 360FOV. With z-buffering you'll need 6 z-buffers. Can't think of anything better. Why would you use that when shadow-volumes are so much better, though?:)

Too bad things aren't so simple as to just plug 360 in as the FOV angle in the perspective projection matrix... But even if, by some miracle, it DID work, we'd end up with a sort of fisheye-lens warped look to everything -- hardly a suitably accurate 2D space in which to test occlusion.
How are shadow volumes better? You've got a billion planes to clip against -- you can cut down by merging volumes, but that in itself is slow. And the last doc I saw on the subject, it's O(n^2) in the polygon count. What I'm using is C-buffer based, and I have ways around the span-alignment problem that require no extra computations at all.

- C.P.I. / ASMiNC

 
Raven

July 27, 1999, 12:08 AM

>> I don't think one would WANT to invert the matrix if it included projection. The whole point of this is to render the view from a certain point and angle, so projection should remain the same... It's the matrices that operate in 3D space that we're really concerned with.
Inverse projections are sometimes called backprojections. The simplest thing they are useful for is 3D frustum construction. Just take 3 points on the 2D edge of your screen, backproject, and you get a triangle which defines your edge plane.
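A sketch of that, assuming the point-plus-basis camera and a symmetric perspective projection (all names illustrative): backproject two adjacent screen corners into world-space directions, and together with the camera position they define an edge plane.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Plane { Vec3 n; float d; };   // points p with dot(n, p) + d = 0

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }
    static float dot(const Vec3& a, const Vec3& b) {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

    struct BasisCamera { Vec3 origin, right, up, forward; };

    // "Backproject" a screen point in [-1,1]^2 to a world-space ray direction,
    // given the camera basis, vertical FOV and aspect ratio.
    Vec3 backproject(const BasisCamera& c, float sx, float sy,
                     float tanHalfFov, float aspect) {
        float x = sx * tanHalfFov * aspect;
        float y = sy * tanHalfFov;
        return { c.right.x * x + c.up.x * y + c.forward.x,
                 c.right.y * x + c.up.y * y + c.forward.y,
                 c.right.z * x + c.up.z * y + c.forward.z };
    }

    // A frustum side plane through the camera origin and two adjacent
    // backprojected corner directions (e.g. top-left and top-right).
    Plane edgePlane(const BasisCamera& c, const Vec3& cornerA, const Vec3& cornerB) {
        Vec3 n = cross(cornerA, cornerB);  // normal of the plane through the apex
        return { n, -dot(n, c.origin) };
    }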

>> Sounds a lot like particle tracing, except for the fact that you're treating the whole FOV as a single entity rather than as a field in which rays can be traced. Well, that, and you kinda have other things in mind than just global illumination.

Global illumination is just part of the problem you can solve with this. Spherical volume fog (Unreal) can be done in the same way, with the eye as the apex and special cases for stuff inside the fog. Shadows can be done by sorting everything by depth from the light when applying a lightsource, copying the lightmap to the poly and then stenciling the poly in the lightmap so that the rest of the polys will have been shadowed by it (this also has a problem with omni-lights, see below). Of course, as you see, it's not quite as simple as that :( Notice that this would be much faster than calculating a whole bunch of light samples, so all lightsources can be dynamic (not sure about this), with shadows. And imagine Unreal with the torches animated, flickering. Now imagine that with no performance penalty whatsoever. That's what I'm talking about...

>> Too bad things aren't so simple as to just plug in 360 as the FOV angle in the
>> perspective projection matrix... But even if, by some miracle, it DID work, we'd end up
>> with a sort of fisheye-lens warped look to everything -- hardly seems like a suitably
>> accurate 2d space in which to test occlusion.

I think the answer to your question would be something like this:

Build a sphere around the lightsource; that is your 360 FOV. A perspective projection matrix is just a hack around projecting on a plane without intersecting anything with it. Now you're gonna have to intersect (maybe there's a hack here as well; if there is, I don't know it). Cast rays from the lightsource (if it's a point; it gets differential with a bunch of integrals if it's an area or a volume) to each point on the poly to be projected and intersect them with the sphere. That's only an approximation, since the paths between the projected points are straight and the surface is curved, of course, but it should be accurate enough. If you want more accuracy you will need to use a cube instead of a sphere. For optimal speed use a cube aligned with the coordinate axees (english??? somebody tell me how to spell this word!). Notice any similarities? 6 z-buffering planes and 6 projection planes. We are back to where we started. Note that there is no way to project a 360 FOV onto a plane, since some of the stuff will be behind the plane and rays cast from it to the apex will intersect the plane on the negative side and will not lie on the screen! You at least need 2 planes to have some sort of mechanism. Then, if stuff is high up it will get warped and aliased, so we need up and down planes. Same goes for z. We are back to the cube again.
What I would do is use the sphere, and hack around the math for that. That is what I will use for lights in the texture projection method. The plane onto which we project the polygon to get the texture coordinates (you didn't really think you project the texture on the poly? you project the poly on the texture!) is the tangent plane to the sphere at the intersection point between the ray from the apex to the point, and the sphere.
I digressed :) The point is that there is no solution for this with a z-buffer or a c-buffer or any other depth or screen coverage buffer. You need some other fine occlusion algorithm (like a beam-tree, very slow though). Don't forget that the point of your c-buffer is still just to determine visibility. This can be done in many different ways; some of them allow for 360 FOV, some don't even care about the FOV. Small note on the speed of your method: what you are doing amounts to rendering AND rasterizing a polygon, just not to the screen but to the light buffer, and then reconstructing the data to a 3D shadow map, unless you tweaked the original. That should be pretty slow. Never tried it though.
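A sketch of the cube variant: pick the face by the dominant axis of the light-to-point vector, then do an ordinary 1/z style projection onto that face. This assumes the point is already in light coordinates (light at the origin, point not at the origin itself), and the exact (u,v) convention per face doesn't matter for a pure occlusion buffer as long as it's consistent:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Which of the 6 cube faces (+X,-X,+Y,-Y,+Z,-Z -> 0..5) a light-space point
    // falls on, plus a face-local (u,v) in [-1,1]. The depth stored per face
    // can simply be the dominant component's magnitude.
    int cubeFaceProject(const Vec3& p, float& u, float& v, float& depth) {
        float ax = std::fabs(p.x), ay = std::fabs(p.y), az = std::fabs(p.z);
        if (ax >= ay && ax >= az) {           // dominant X
            u = p.y / ax;  v = p.z / ax;  depth = ax;
            return p.x > 0.0f ? 0 : 1;
        } else if (ay >= az) {                // dominant Y
            u = p.x / ay;  v = p.z / ay;  depth = ay;
            return p.y > 0.0f ? 2 : 3;
        } else {                              // dominant Z
            u = p.x / az;  v = p.y / az;  depth = az;
            return p.z > 0.0f ? 4 : 5;
        }
    }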

>> How are shadow volumes better? You've got a billion planes to clip against -- you can cut down by merging volumes, but that in itself is slow. And the last doc I saw on the subject, it's O(n^2) in the polygon count.

Shadow volumes are bad, evil! I hate shadow volumes! They generate such crappy pictures. They are slow for large polycounts but are usually used for game engines that have LOD on models. Environments there are usually low on polys, and models at the lowest LOD have no more than 100 polys (and that is too much, I think). Projecting them to the environment doesn't have to involve only clipping. You can raycast and resort to clipping only when absolutely necessary, when you already know the polys from the raycast. This can get aliased, like everything else. Merging volumes is more trouble than it's worth, usually. But all this amounts to very low quality shadows. Did you see the shadows characters cast in LithTech2? Rectangles where arms should be! Sick... I'm gonna try the method described above with the texture projections, and resort to shadow volumes only if necessary.

- Raven

Small note: post your reply not as a reply to this message but to the original. The depth of the message tree is too big.

 
Parashar Krishnamachari

July 27, 1999, 02:11 PM

Raven wrote:
>> Build a sphere around the lightsource, that is your 360FOV. A perspective projection matrix is just a
>> hack around projecting on a plane without intersecting anything with it. Now you're gonna have to
>> intersect(maybe there's a hack here as well, if there is i don't know it). Cast rays from the
>> lightsource(if it's a point, it gets differential with a bunch of integrals if it's an area or a
>> volume) to each point on the poly to be projected and intersect them with the sphere.

You'd have to base the divisor on total distance rather than distance along some specific axis. Six projections are probably faster than having to calculate all those distances. To prevent FOV intersection, though, the resulting "images" would have to be square, would they not? 256x256 or something?
Since it's a space only meant for occlusion testing, it shouldn't really matter that the resolution is not the same as the screen res.

>> Don't forget that the point of your c-buffer is still just to determine visibility.
>> This can be done in many different ways, some of them allow for 360FOV, some don't even care about the
>> FOV. Small note on the speed of your method. What you are doing amounts to rendering AND rasterizing a
>> polygon, just not to the screen but to the light buffer, and then reconstructing the data to a 3D
>> shadow map, unless you tweaked the original. That should be pretty slow. Never tried it though

That's not really what I'm doing at all... I can't say too much because of some NDAs, but remember I said that this is for a radiosity renderer. In any given situation -- indoor or outdoor -- a large percentage of the scene can have precalculated lighting and shadows.
The other questions regarding mirrors and such were just for reference. The real concern is rendering from the lights' views.

Yep, earlier part of the tree...
- C.P.I. / ASMiNC

Note: the word is spelled "axes", though it IS pronounced axees. Let's go chop down some trees with our axes... I may have a little trouble in that my axis is kinda blunt. :)

 
Raven

July 27, 1999, 04:57 PM

Looks like we reached an agreement. Though the discussion seems to have digressed a little since the camera basis setups...

>> To prevent FOV intersection, though, the resulting "images" would have to be square, would they not? 256x256 or something?
>> Since it's a space only meant for occlusion testing, it shouldn't really matter that the resolution is not the same as the screen res.

Yes, it doesn't matter what your resolution is. I think the best results would be adaptive, depending on the amount of polys in the scene around the lightsource. If you get a lot of small polys to project, the resolution has to grow or you'll get aliasing. And if it's a cube, then yes, a simple 1/z projection will work for it. Just don't forget to transform everything to light coordinates (light at the origin).

>> That's not really what I'm doing at all... I can't say too much because of some
>> NDA's, but remember I said that this is for a radiosity renderer. With any given
>> situtation -- indoor or outdoor, a large percentage of the scene can have precalculated
>> lighting and shadows.

Since it's for a radiosity renderer, the performance of the buffer/tree/whatever is not going to be critical, so I would just use 6 z-buffers. Just don't use shadow volumes. For a high quality (not speed) rendering it's HORRIBLE. And the buffer method will allow you to calculate the form factors as fractions of the visible area of the polygon, not just a yes/no 0/1. That will look much better. If I were you I would do directional lights with a z-buffer, then stencil out the buffer where the cone doesn't exist and use that for the form factors. Omni-lights are split into 6 directional lights (without any cones?) before rendering and just passed in. Well, as they say, NDAs are NDAs :)

>> Note: the word is spelled "axes", though it IS pronounced axees. Let's go chop down some trees with our axes... I may have a little trouble in that my axis is kinda blunt. :)
LOL. That's the best part of the entire discussion :)

- Raven

 
Parashar Krishnamachari

July 28, 1999, 10:23 AM

Raven wrote:
>>Looks like we reached an agreement. Though the discussion seems to have digressed a little since the camera basis setups...

Well, but if it were done, would it not be fairly simple... Say you've defined a camera or a light or a mirror's view point as a point and some arbitrary orthonormal 3-space basis.
The point is (a,b,c) and the basis is the (l), (m), (n) vectors. Now the transformation for some point (x,y,z) should be subtraction of (a,b,c), resulting in some (x',y',z'), and then the basis should be the transition matrix from its own space to the standard basis.
So you'd take x'*(l) + y'*(m) + z'*(n).

>>>> Note : The word is spelled "axes" Though it IS pronounced axees. Let's go chop down some >> trees with our axes... I may have a little trouble in that my axis kinda blunt. :)
>>LOL. Thats the best part of the entire discussion:)

You just KNOW we've been doing 3d coding too long when we can laugh at stuff like that. But I'm not complaining ... are you?

- C.P.I. / ASMiNC

 
Raven

July 28, 1999, 12:28 PM

>> Well, but if it were done, would it not be fairly simple... Say you've defined a
>> camera or a light or a mirror's view point as a point and some arbitrary orthonormal 3-
>> space basis.
>> The point is (a,b,c) and the basis is the (l), (m), (n) vectors. Now the transformation for some point (x,y,z) should be subtraction of (a,b,c), resulting in some (x',y',z'), and then the basis should be the transition matrix from its own space to the standard basis.
>> So you'd take x'*(l) + y'*(m) + z'*(n).

This transform is equivalent to applying a camera transform matrix. It's just that defining the vectors is more difficult than defining pitch, yaw and roll. If you want to go like that, don't use matrices at all, just use rotations. Actually, it might be worth a try. I'm just afraid that to specify these vectors procedurally (such as moving the camera in an animation) could get more unpleasant than using an up-vector or pitch, yaw and roll. Otherwise, it could be a good idea...
As to using a point instead of a transform matrix, it really is much better. In fact, that's how I do it in my code :)

>> You just KNOW we've been doing 3d coding too long when we can laugh at stuff like
>> that. But I'm not complaining ... are you?
I know I've been doing this stuff too much. As long as the thing works...

 
Parashar Krishnamachari

July 28, 1999, 04:01 PM

>> This transform is equivalent to applying a camera transform matrix. Just that defining the vectors
>> is more difficult than defining pitch yaw and roll. If you want to go like that don't use matrices
>> at all, just use rotations. Actually, it might be worth a try. I'm just afraid that to specify
>> these vectors procedurally(such as moving the camera in an animation) could get more unpleasant
>> than using an upvector of pitch yaw and roll. Otherwise, it could be a good idea...
>> As to using a point instead of a transform matrix it really is much better. In fact, thats how i do
>> it in my code:)

Of course, with pitch, yaw, and roll, you have to derive a matrix, as opposed to essentially having it already. I think if you just set an initial basis equal to the standard basis and transform the camera basis as you go along, it shouldn't be too bad. Would the gimbal lock problem show up here, or is it solely dependent on how I transform the basis?
Besides, if it's more complicated, that means it'll work out really well. At least, that's what my track record says... :)

>> I know i've been doing this stuff to much. As long as the thing works...

But it's not like we hate it. If we hated it, we wouldn't be here.

- C.P.I. / ASMiNC

 
Raven

July 29, 1999, 03:11 PM

>> Of course, with pitch, yaw, and roll, you have to derive a matrix, as opposed to essentially having it already. I think if you just set an initial basis equal to the standard basis and transform the camera basis as you go along, it shouldn't be too bad. Would the gimbal lock problem show up here, or is it solely dependent on how I transform the basis?
So you are saying to keep track of the transformations applied to the basis and then apply the inverses? That's EXACTLY the same thing as pitch, yaw and roll, just with vectors... And gimbal lock might show up here, but that depends on how you apply your transforms. As long as you stick to the usual way of transforming stuff, I don't think you should get gimbal lock. But I can't be sure till I actually try. What I don't like about this is that your method seems to be very similar to Euler angles, and those do exhibit gimbal lock. But they work in a slightly different way. Your method can also be turned into Euler angles, and then you will get gimbal lock for sure, so you have to be careful. If I were you I would not apply all 3 rotations as a single transform but separately. That will prevent this from ever appearing, though it might be considerably slower with matrices.
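One way to sketch "transform the basis as you go along": rotate the three basis vectors about an axis each frame, then re-orthonormalize so floating-point drift doesn't slowly shear the basis. The rotation helper is just the standard axis-angle (Rodrigues) rotation, nothing engine-specific:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }
    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    // Rodrigues rotation of v about a unit axis by angle (radians).
    static Vec3 rotateAxis(const Vec3& v, const Vec3& axis, float angle) {
        float c = std::cos(angle), s = std::sin(angle);
        Vec3 k = cross(axis, v);
        float d = dot(axis, v);
        return { v.x*c + k.x*s + axis.x*d*(1 - c),
                 v.y*c + k.y*s + axis.y*d*(1 - c),
                 v.z*c + k.z*s + axis.z*d*(1 - c) };
    }

    struct BasisCamera { Vec3 origin, right, up, forward; };

    // Apply one incremental rotation to the whole basis, then re-orthonormalize
    // (Gram-Schmidt style) so repeated small rotations don't accumulate drift.
    void rotateCamera(BasisCamera& cam, const Vec3& axis, float angle) {
        cam.right   = rotateAxis(cam.right,   axis, angle);
        cam.up      = rotateAxis(cam.up,      axis, angle);
        cam.forward = rotateAxis(cam.forward, axis, angle);

        cam.forward = normalize(cam.forward);
        cam.right   = normalize(cross(cam.up, cam.forward));
        cam.up      = cross(cam.forward, cam.right);
    }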

>> Besides, if it's more complicated, that means it'll work out really well. At least, that's what my track record says... :)
Yeah, you know, I've been working on my 2D polygon containment test, and I thought it would be simple since it's 2D. But there was such a mess with projections and correct signs on the normals that I almost broke my head... Maybe your track record is right.

>> But it's not like we hate it. If we hated it, we wouldn't be here.
My words exactly

- Raven

 
Parashar Krishnamachari

July 30, 1999, 05:05 PM

Raven wrote:
>>So you are saying to keep track of the transformations applied to the basis and then apply the inverses? That's EXACTLY the same thing as pitch yaw and roll, just with vectors... And gimbal lock might show up here, but that depends on how you apply your transforms. As long as you stick to the usual way of transforming stuff i don't think you should get a gimbal lock. But i cant be sure till i actually try.
The final goal here is really to set up finalized transformations very quickly and logically. In practice, the transformation of the basis may be in some pseudo-spherical coordinates to get the first two axes of rotation. Or possibly do the transformation of the bases by a quaternion -- dunno how to weigh the two... Both can be pretty nicely interpolated. Neither exhibits gimbal lock -- in the pseudo-spherical case, there's no lock for the very reason you mentioned about separated transformations.

>>>> But it's not like we hate it. If we hated it, we wouldn't be here.
>>My words exactly

I'm also moving over to a new platform and language and everything. So far, I've been in DOS Pmode in TMT with my own graphics lib. Now I'm moving to X11 Linux in pgcc with OpenPTC and all... Sounds like a pain -- I'll pretty much have to write everything all over again from Pascal to C++. But ya' know -- I don't mind at all. Would you?

- C.P.I. / ASMiNC

 
Raven

July 30, 1999, 10:08 PM

>> The final goal here is really to set up finalized transformations very quickly and logically. In practice, the transformation of the basis may be in some pseudo-spherical coordinates to get the first two axes of rotation. Or possibly do the transformation of the bases by a quaternion -- dunno how to weigh the two... Both can be pretty nicely interpolated. Neither exhibits gimbal lock -- in the pseudo-spherical case, there's no lock for the very reason you mentioned about separated transformations.

Frankly, I'd use quaternions because they are simpler than pseudo-spherical coordinates, and they are defined. What you mean by pseudo-spherical coords is up to you to decide, but quaternions are well defined, documented, full of examples, and overall researched and understandable by other people. Actually, a quaternion is a form of spherical interpolation, just in 4D. I think that quaternions would be faster; they are more defined and less prone to errors. Of course, the final implementation is up to you.

>> But ya' know -- I don't mind at all. Would you?
That depends entirely on how much I will have to convert :)

- Raven

 
Parashar Krishnamachari

July 31, 1999, 03:17 PM

Raven wrote :
>> Frankly I'd use quaternions because they are simpler then pseudo-spherical
>> coordinates, and they are defined. What is it that you mean by
>> pseudo-spherical coords is up to you to decide, but quaternions are well
>> defined, documented, exampled and overall researched and understandable by
>> other people. Actually a quaternion is a form of spherical interpolation,
>> just in 4D. I think that quaternions would be faster, they are more defined
>> and less prone to errors. Of course the final implementation is up to you

Being under NDAs about this engine, I can't give out source anyway, so it doesn't matter what people can understand. And I'll be on the job alone, so no one else has to do anything with it. The intended pseudo-spherical definition is one where the rotations go in X-Y order, so that I can later apply the Z-rotation.
Although, I'd like to know how quaternions are simpler than that. You can represent rotations as angles with spherical coordinates, pretty much. The big problem I have with the quaternion documents out there is that they only tell you the procedure -- nothing in the way of actual representation.

>>>> But ya' know -- I don't mind at all. Would you?
>> That depends entirely on how much i will have to convert:)

For me, it's pretty much the whole engine. Down to the primitives. The originals are for TMT -- Pascal code... now I'm going on in pgcc -- C++ code. So that entails pretty much everything. The only thing I don't have to rewrite is graphics support. But I still have to get down to polygon drawing and everything... So how'd you be in that situation?

- C.P.I. / ASMiNC

 
Raven

July 31, 1999, 08:02 PM

>> Being under NDA's about this engine, I can't give out source anyway, so it doesn't
>> matter what people can understand. And I'll be on the job alone, so no one has to to do
>> anything with it.
Well, thats one way to look at it...

>> The intended pseudo-spherical definition is one where the rotations go in X-Y order.
>> So that I can later apply Z-rotation.
So it's just a way of defining the angles for the rotation. I see.

>> Although, I'd like to know how quaternions are simpler than that. You can represent rotations as angles with spherical coordinates, pretty much. The big problem I have with the quaternion documents out there is that they only tell you the procedure -- nothing in the way of actual representation.
OK, I'll give you a brief overhaul on quaternions. Consider a 1D point. All the possible rotations of that point are described by a circle in 2D. A unit (normalized) quaternion has the equation of a 4D hypersphere, which is all the possible rotations of a 3D space about a given unique axis. Multiplication of quaternions is equivalent to following the shortest path along that 4D sphere, i.e. interpolating. There is a good explanation of the relevant theory and practice in "Advanced Animation and Rendering Techniques" (if you don't have this book yet, BUY IT! It's the best after "Computer Graphics: Principles and Practice"). The details of the representation should be in any good quaternion doc. Try going to faqsys (www.neutralzone.org/home/faqsys); there are nice big expositions on quaternions there. Once you grasp the idea, it's pretty much the simplest way to do rotations except for the brute-force matrix approach.

>> For me, it's pretty much the whole engine. Down to the primitives. The originals are for TMT -- Pascal code... now I'm going on in pgcc -- C++ code. So that entails pretty much everything. The only thing I don't have to rewrite is graphics support. But I still have to get down to polygon drawing and everything... So how'd you be in that situation?
If you get paid by the hour, the longer it takes the better :)

- Raven

 
Parashar Krishnamachari

August 01, 1999, 07:20 PM

Raven wrote:
>>So its just a way of defining the angles for the rotation. I see

Yep, and you can set up 2 axes of rotation in 4 fmuls, too.

And as for the quaternion thing... I understand all that, but what I was talking about was usage. I understand the matrix setup and everything you said. But what I wondered is, say you want to rotate about X 34 degrees, Y 47 degrees, and Z 22 degrees... what is the representative quaternion number itself? That's something I've yet to see in ANY quaternion doc.

>>If you get paid by the hour, the longer it takes the better:)

Eh... It's not for work or anything... Research ... And that's on grant money

- C.P.I. / ASMiNC

 
Raven

August 02, 1999, 12:12 AM

>> But what I wondered is, say you want to rotate about X 34 degrees, Y 47 degrees, and Z 22 degrees... what is the representative quaternion number itself? That's something I've yet to see in ANY quaternion doc.
The quaternion is made up of four numbers. The first three can be thought of as the axis and the last as the angle of rotation. The axis needs clarification: it is a normalized hyperspherical (I think) coordinate, which is a point on the sphere (duh). The axis is the vector from that point to the center of the sphere. The transform you are talking about is 3 rotations. A quaternion to represent that would be the shortest rotation around an unknown arbitrary axis which will transform a point in the same way as those three transforms applied consecutively. I don't remember the exact formulation of defining a rotation through a quaternion, but I can look it up, since I always keep CGPP and AART close by for reference.

>> Eh... It's not for work or anything... Research ... And that's on grant money
You're researching new radiosity methods? Cool. I don't think I should ask any more -- top secret. If you tell me, you'll have to kill me :)

- Raven

 
Parashar Krishnamachari

August 03, 1999, 12:46 PM

Raven wrote:
>> The first three can be thought of as the axis and the last as the angle of rotation. I don't remember the exact formulation of defining a rotation through a quaternion, but I can look it up, since I always keep CGPP and AART close by for reference.

Really, that's the thing I need to know... how to define some rotation in quaternions. The rest makes sense... I suppose it's possible to work it out, but like you said, it's supposed to be well-documented, and why would I work out something so many other people already have?

>>You're researching new radiosity methods? cool. I dont think i should ask any more, top secret. If you'll tell me you'll have to kill me:)

Yeah -- well, those NDAs are on MY part... The idea is going through a patent attorney and all. And it's all independent -- no university or anything, but I still managed to pull some grant money. But the whole goal is realtime dynamic radiosity -- soft shadows and all... on low-end machines, while forgoing 3D hardware. My test machines even happen to be a 486-75, a P133, and a PPro-200. Now put your eyes back in their sockets... And get your jaw off the floor!!

- C.P.I. / ASMiNC

 
Raven

August 03, 1999, 11:24 PM

>> Really, that's the thing I need to know... how to define some rotation in quaternions. The rest makes sense... I suppose it's possible to work it out, but like you said, it's supposed to be well-documented, and why would I work out something so many other people already have?
OK. This is from "Advanced Animation and Rendering Techniques". It's kinda long and a lot of ASCII-based math equations, but the truth is out there :) at the end of this text, I mean.

Meet mister quaternion:

q=(S,V) where (S,V) = S + Vx*i + Vy*j + Vz*k, and i->j->k->i have cyclic permutation (i.e. ij=k, jk=i, ki=j). S is usually called the quaternion scalar and V is called the quaternion vector. Multiplication rules for the imaginary coefficients are: ij=k, ji=-k. Multiplication of two quats: Q1Q2 = (S1S2 - V1.V2, S1V2 + S2V1 + V1xV2). Note that S is a scalar and V is a vector. Also note the cross product V1xV2; this makes quat multiplication non-commutative. The conjugate is defined as:

q=(s,v) qc=(s,-v)

We also define:
q*qc=s^2+|v|^2=|q|^2
i.e. the product of the quaternion with its conjugate gives its magnitude squared, so |q|=sqrt(q*qc). Normalization is the same as for everything else: qn = q/|q| = q/sqrt(q*qc) = q/sqrt(s^2+|v|^2). But you already knew that.

Now to the meat:

Define the following:
p=(0,r) q=(s,v) where q*qc=1(normalized quaternion) and Rq(p)=q*p*q^-1

Rq(p) with the multiplication expanded comes to: (0, (s^2 - v.v)r + 2v(v.r) + 2s(vXr)). Using the fact that it's normalized and substituting cosines via the dot products we have:
q=(cos THETA, (sin THETA)*n), |n|=1. Substituting into Rq(p) above we get:
(0, (cos 2THETA)r + (1 - cos 2THETA)n(n.r) + (sin 2THETA)(nXr)) (where nXr is the cross product of n and r)

That's pretty bad, but it's nearly identical to the formula for applying an angular displacement of
(THETA,n) to p -- just take my word for it. Now, ladies and gentlemen, to parametrize quaternion rotation we do:

q=(cos(THETA/2), sin(THETA/2)*Nx, sin(THETA/2)*Ny, sin(THETA/2)*Nz) where N is an axis vector -- i.e. EXACTLY the axis of rotation, just normalized -- and THETA is the angle by which we want to rotate. It's completely arbitrary. The cos(THETA/2) term is the quaternion scalar; the rest are the quaternion vector.
To actually rotate, you need to solve Rq(p) with our parametrized quaternion q as defined and expanded above. There's your answer. Whew :)
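Put into code, the whole thing could look roughly like this (a sketch, no particular library assumed): build q from an axis and angle with the THETA/2 parametrization above, then rotate a point with q*p*conjugate(q), which equals q*p*q^-1 for a unit quaternion:

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Quat { float s; Vec3 v; };     // q = (S, V) as in the post above

    // Quaternion product: Q1Q2 = (S1S2 - V1.V2, S1V2 + S2V1 + V1xV2)
    Quat mul(const Quat& a, const Quat& b) {
        Quat r;
        r.s = a.s*b.s - (a.v.x*b.v.x + a.v.y*b.v.y + a.v.z*b.v.z);
        r.v.x = a.s*b.v.x + b.s*a.v.x + (a.v.y*b.v.z - a.v.z*b.v.y);
        r.v.y = a.s*b.v.y + b.s*a.v.y + (a.v.z*b.v.x - a.v.x*b.v.z);
        r.v.z = a.s*b.v.z + b.s*a.v.z + (a.v.x*b.v.y - a.v.y*b.v.x);
        return r;
    }

    Quat conjugate(const Quat& q) { return { q.s, { -q.v.x, -q.v.y, -q.v.z } }; }

    // q = (cos(theta/2), sin(theta/2)*N), N a unit axis, theta the rotation angle.
    Quat fromAxisAngle(const Vec3& n, float theta) {
        float h = 0.5f * theta;
        float s = std::sin(h);
        return { std::cos(h), { n.x*s, n.y*s, n.z*s } };
    }

    // Rotate p by a unit quaternion q: Rq(p) = q * (0, p) * conj(q).
    Vec3 rotate(const Quat& q, const Vec3& p) {
        Quat pq = { 0.0f, p };
        Quat r = mul(mul(q, pq), conjugate(q));
        return r.v;
    }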

>> Yeah -- well, those NDAs are on MY part... The idea is going through a patent attorney and all. And it's all independent -- no university or anything, but I still managed to pull some grant money. But the whole goal is realtime dynamic radiosity -- soft shadows and all... on low-end machines, while forgoing 3D hardware. My test machines even happen to be a 486-75, a P133, and a PPro-200. Now put your eyes back in their sockets... And get your jaw off the floor!!
HOLY !@#$)! THAT'S IMPOSSIBLE!!! Realtime??? Dynamic??? RADIOSITY??? How the heck is that possible? OK, I'm sure I'll read about it when it's released, since it's that important.

 
Parashar Krishnamachari

August 04, 1999, 07:36 PM

Raven wrote:

>>>> managed to pull some grant money. But the whole goal is realtime dynamic radiosity -- soft
>>>> shadows and all... On low-end machines while forgoing 3d hardware. My test machines even
>>>> happen to be -- 486-75, P133, & PPro-200. Now put your eyes back in their sockets... And
>>>> get your jaw off the floor!!
>> HOLY !@#$)! THATS IMPOSSIBLE!!! Realtime??? dynamic??? RADIOSITY??? How the heck is that possible?
>> Ok, i'm sure i'll read about it when its released, since its that important.

Ummm... It's not THAT bad... It IS meant for a game situation, and in any game world only a small percentage of polys need dynamic calculations, and I'm allowing for that. The bigger thing that's weighing on my mind right now is somehow getting college credit for this research. I'm just now transferring to a new school, and because they don't have to answer to some idiotic bureaucrats on an accreditation board, they'll make a few more allowances.
Right now my older Pascal version runs at around 4.5 fps on that 486-75. Note that I have a shareware version of that compiler, which does no code optimization. It also has size limits, so the test worlds are small, but 100% dynamic.

- C.P.I. / ASMiNC

 
Raven

August 04, 1999, 10:13 PM

>> Ummm... It's not THAT bad... It IS meant for a game situation, and in any game world only a small percentage of polys need dynamic calculations, and I'm allowing for that. The bigger thing that's weighing on my mind right now is somehow getting college credit for this research. I'm just now transferring to a new school, and because they don't have to answer to some idiotic bureaucrats on an accreditation board, they'll make a few more allowances.
I think I see, but if you're trying to patent it, it must be pretty original. Anyway, any dynamic radiosity is cool, especially fast ones :)

>> Right now my older Pascal version runs at around 4.5 fps on that 486-75. Note that I have a shareware version of that compiler, which does no code optimization. It also has size limits, so the test worlds are small, but 100% dynamic.
That's a pretty good framerate for a worst-case system with a bad compiler. All this sounds pretty impressive to me.

BTW, hope the quat stuff helped

- Raven

 
Parashar Krishnamachari

August 07, 1999, 10:25 AM

Raven wrote:
>>I think i see but if you're trying to patent it it must be pretty original. Anyway any dynamic radiosity is cool, especially fast ones:)

The main reason, though, is that the graphics industry just seems to move faster than many other parts of the tech industry. If I don't do something about this algo NOW, someone else will do the same. And how do I know that someone else ISN'T already doing the same things that I am?
I mean, I was only able to come up with this idea because I looked at the way radiosity and pseudo-radiosity algos worked and just broke them down to the simplest things. I don't think many people look at radiosity that way -- the same thing Midnight was talking about in "Radiosity in English." But then there ARE so many people who understand the concept inside and out, so it's not impossible.

>>Thats a pretty good framerate for a worstcase system with a bad compiler. All this sounds pretty impressive to me

Thanks, but my tests also showed that the framerate didn't change all that much with more or fewer polygons... I don't know if it's the lack of optimization or something internal like the "frustum culling."

>>BTW, hope the quat stuff helped

Absolutely... Just one thing, though... Which do you advise: Euler angles -> quaternions -> matrix, or 3-spherical -> quaternions -> matrix? The setup is far less expensive with 3-spherical (4 muls to get the quaternion), but Euler angles are the commonly used representation for rotation -- will it come out all that different?
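For reference, the Euler route can be sketched like this -- compose three axis-angle quaternions; the X-then-Y-then-Z order here is just a convention you'd have to pin down, the angles are in radians, and a flat (w,x,y,z) layout is used for brevity (purely illustrative code):

    #include <cmath>

    struct Quat { float w, x, y, z; };

    // Hamilton product of two quaternions.
    static Quat qmul(const Quat& a, const Quat& b) {
        return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                 a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                 a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                 a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
    }

    // Build a quaternion from Euler angles (radians), applied in X, then Y,
    // then Z order -- i.e. q = qz * qy * qx. The order is a convention.
    Quat eulerToQuat(float ax, float ay, float az) {
        Quat qx = { std::cos(ax * 0.5f), std::sin(ax * 0.5f), 0, 0 };
        Quat qy = { std::cos(ay * 0.5f), 0, std::sin(ay * 0.5f), 0 };
        Quat qz = { std::cos(az * 0.5f), 0, 0, std::sin(az * 0.5f) };
        return qmul(qz, qmul(qy, qx));
    }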

- C.P.I. / ASMiNC

 
This thread contains 23 messages.
 
 