Confusion about transforming model coordinates to world coordinates

As a beginner, I have two questions about how model coordinates are transformed to world coordinates.

1) After doing some experiments, I think that when applying model transformations using functions like glTranslate, glScale, etc., these functions work in model coordinates.
They first translate or scale the vertex we specified to the proper position, and then multiply by the transition matrix between the two coordinate systems (I mean, the change-of-basis matrix relating the basis vectors of the two systems). Is that right?

2) As many books point out, after calling transformation functions like glTranslate, glScale, etc., we get from model coordinates to world coordinates.
Is it because, when OpenGL starts up, the model coordinates, world coordinates, and camera coordinates
coincide, so the transition matrix between model coordinates and world coordinates is an identity matrix? Is that why the book authors say so, or have I just misunderstood?

Please help!

There are only two transformation matrices that OpenGL maintains for this in the compatibility profile: modelview and projection. The transformation is done by multiplying the incoming vertex coordinates by the modelview matrix, then by the projection matrix. Since the projection matrix is used only for the perspective magic, you are essentially dealing with just a single modelview matrix. That matrix transforms a vertex from object space into view space (not world space). View space is bound to the observer, which may change its position and orientation in the world, just as your model or its parts may.
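For illustration, a minimal fixed-function setup of both matrices might look like the sketch below; the gluPerspective parameters and the -5 translation are arbitrary, and aspect stands for your window's width/height ratio:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, aspect, 0.1, 100.0); /* projection: the perspective part only */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();                /* object, world, and view space coincide at this point */
glTranslatef(0.0f, 0.0f, -5.0f); /* move the model 5 units in front of the observer */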

To transform a point from any space to any other space, a single matrix is required. The start is your model, the end is the observer, and that is exactly what the modelview matrix covers.

The functions glTranslate, glScale, glRotate, etc. that you call modify your modelview matrix (assuming that is the matrix mode you are working with). The exact modification each one performs is described in the OpenGL 2.1 reference pages (look up each of the functions you use).

In general, the order of the transformations you want to perform on the model must be reversed in the order of the function calls you make to achieve it (see the example below). Alternatively, you can build the modelview matrix on your own and simply load it using glLoadMatrixf.
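For example, to scale a model first and then translate the result, the calls are issued in the opposite order; drawModel() here is a hypothetical routine that issues the object's vertices:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(3.0f, 0.0f, 0.0f); /* issued first, applied to the vertices last */
glScalef(2.0f, 2.0f, 2.0f);     /* issued second, applied to the vertices first */
drawModel();                    /* vertices end up scaled by 2, then moved by (3, 0, 0) */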

You may want to maintain a few transformation matrices in your own code: one that transforms vertices from object space to world space (mat4 ModelWorldMtx), and another that transforms from world space into observer space (mat4 WorldViewMtx). You can maintain them individually and upload their product as the modelview matrix before drawing. To compose both into a single modelview matrix, multiply them like this:
mat4 modelview = WorldViewMtx * ModelWorldMtx;
and then upload it using glLoadMatrixf. The multiplication follows the usual rule: you multiply the rows of the first matrix by the columns of the second matrix to calculate the element at each intersection.
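In plain C that could look like the sketch below, assuming column-major float[16] matrices (the layout glLoadMatrixf expects); matMul() and uploadModelview() are made-up helper names:

void matMul(float out[16], const float a[16], const float b[16])
{
    /* column-major 4x4 multiply: out = a * b */
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = s;
        }
}

void uploadModelview(const float WorldViewMtx[16], const float ModelWorldMtx[16])
{
    float modelview[16];
    matMul(modelview, WorldViewMtx, ModelWorldMtx); /* WorldView * ModelWorld */
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(modelview);
}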

Thank you for your reply.

Well, what I don't understand is how the ModelWorldMtx is calculated, given the formula you put above:
mat4 modelview = WorldViewMtx * ModelWorldMtx;

For example, suppose we have model coordinates (MC) and world coordinates (WC) in 2D, as the picture below shows:

The point in MC, a = [1; 1; 1] (a column vector in homogeneous coordinates; I write it Matlab-style), is mapped to WC as b = M*a = [-1 0 4; 0 1 3; 0 0 1]*[1; 1; 1] = [3; 4; 1].
If we apply glTranslate(1.0, 0.0, 0.0), the translation matrix is T = [1 0 1.0; 0 1 0.0; 0 0 1].
So the point a = [1; 1; 1] will be mapped to WC this way:
b' = M*T*a = [-1 0 4; 0 1 3; 0 0 1]*[1 0 1.0; 0 1 0.0; 0 0 1]*[1; 1; 1] = [2; 4; 1].
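(To double-check my numbers I wrote a tiny C program; mulMatVec3 is just a helper name I made up:)

#include <stdio.h>

/* row-major 3x3 matrix times a homogeneous 2D column vector */
void mulMatVec3(const float m[9], const float v[3], float out[3])
{
    for (int r = 0; r < 3; ++r)
        out[r] = m[r*3+0]*v[0] + m[r*3+1]*v[1] + m[r*3+2]*v[2];
}

int main(void)
{
    const float M[9] = { -1, 0, 4,   0, 1, 3,   0, 0, 1 }; /* MC -> WC */
    const float T[9] = {  1, 0, 1,   0, 1, 0,   0, 0, 1 }; /* glTranslate(1, 0, 0) in 2D */
    const float a[3] = { 1, 1, 1 };
    float Ta[3], b[3];
    mulMatVec3(T, a, Ta); /* model transform first */
    mulMatVec3(M, Ta, b); /* then map into world coordinates */
    printf("b' = [%g; %g; %g]\n", b[0], b[1], b[2]); /* prints b' = [2; 4; 1] */
    return 0;
}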

So in this example, what exactly is the ModelWorldMtx?

In my view, because MC and WC do not coincide in this case,
ModelWorldMtx = M*T = [-1 0 4; 0 1 3; 0 0 1]*[1 0 1.0; 0 1 0.0; 0 0 1] = [-1 0 3; 0 1 3; 0 0 1].
If MC and WC coincide, as they do when OpenGL starts up, then M is an identity matrix, so ModelWorldMtx = T; that is to say, the translation matrix T itself is the ModelWorldMtx.
Am I right?

Um… I got lost in your numbers. :)

See, the vertices of the models drawn in a scene are stored in object space (model space): their coordinates relate to the internal coordinate system of the object (the coordinates you see when you read the .obj file). The same model is usually drawn in multiple locations in the scene, and the scene has its own origin and coordinate system (world space); therefore each instance of the model has to be transformed by its ModelWorldMtx so its vertices are converted from model space into world space.

The observer, however, moves across the scene. It has its own coordinate system, so every object in the scene (which, at that point, is expressed in world space) needs to be transformed from world space into observer space in order to be drawn into the framebuffer (after the projection transformation). That transformation is handled by the WorldViewMtx.

But we do not want to transform vertices from model to world and then from world to view space, because we can transform the model's vertices directly from model space into the observer's space in one single step, using one single matrix (the modelview matrix), which can be calculated as the product of the other two in the way I described in the post above. However, maintaining two separate matrices for the model-to-world and world-to-view transformations comes in handy, because the WorldViewMtx changes only when the observer moves and is constant for all objects in the scene, while the ModelWorldMtx is individual to each instance of an object in your scene.

So the WorldViewMtx needs to be recalculated whenever the observer moves, and each ModelWorldMtx needs to be recalculated whenever its object instance is repositioned or reoriented. In other words, you maintain one WorldViewMtx per camera and one ModelWorldMtx per object in your scene. All these matrices stay independent until rendering time, when they are combined to produce a modelview matrix for each model-camera pair being rendered, as in the sketch below.
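As a sketch of that bookkeeping (Object, g_worldViewMtx, and drawObject are hypothetical application-side names; matMul is the column-major helper sketched earlier):

typedef struct {
    float modelWorldMtx[16]; /* object space -> world space, per instance */
    /* vertex data, etc. */
} Object;

float g_worldViewMtx[16]; /* world space -> view space; updated when the observer moves */

void drawObject(const Object *obj)
{
    float modelview[16];
    matMul(modelview, g_worldViewMtx, obj->modelWorldMtx);
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(modelview);
    /* ... issue the object's draw calls here ... */
}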

The world coordinate system usually relates to the static environment model of your scene (the map model, for example). Therefore, multiplying the coordinates of the static scene model directly by the WorldViewMtx is the way to draw it (it is as if its ModelWorldMtx were an identity matrix). Each instanced object in the scene, however, has its own ModelWorldMtx, defining its orientation and translation in world space. When an object's coordinates are multiplied by its own ModelWorldMtx, you get the coordinates of the object as if it were part of the static environment. Understand?
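In terms of the same sketch, drawing the static environment would then be just (again, hypothetical names):

/* static environment: its ModelWorldMtx is effectively identity */
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(g_worldViewMtx);
/* ... issue the map's draw calls ... */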