Part of the Khronos Group
OpenGL.org


Thread: Doubt in Graphics pipeline

  1. #1
    Junior Member Regular Contributor (joined Jun 2012; 190 posts)

    Doubt in Graphics pipeline

    In the graphics pipeline, after the vertex shader come primitive assembly -> clipping to the view frustum -> normalized device coordinates -> viewport transformation.

    Now, in the vertex shader we multiply the object coordinates by the modelview and projection matrices. "The Projection Matrix transforms the vertices in view coordinates into the canonical view volume (a cube of sides 2 × 2 × 2, centered at the origin, and aligned with the 3 coordinate axes). Typically, this will be either an orthographic projection or a perspective projection. This transform includes multiplication by the projection transformation matrix followed by a normalization of each vertex, calculated by dividing each vertex by its own w coordinate."

    Now, if this is all done in the vertex shader, why does it come after the vertex shader in the pipeline? Shouldn't it just be part of the vertex shader? If not, what is the output of the projection matrix multiplied by the vertex coordinates?

  2. #2
    Senior Member OpenGL Pro Aleksandar (joined Jul 2009; 1,072 posts)
    So-called "clip coordinates" are the result of multiplying the "eye coordinates" vector by the projection matrix. Clip coordinates are in the range [-w, w]. Dividing by w yields NDC (normalized device coordinates).
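    For illustration, here is a minimal numeric sketch of that relationship. The matrix is the classic gluPerspective-style perspective matrix; the field-of-view, near/far planes, and the eye-space point are all invented for the example:

    ```python
    # Sketch: eye-space -> clip-space (projection multiply) -> NDC (divide by w).
    import math

    def perspective(fovy_deg, aspect, near, far):
        # gluPerspective-style projection matrix (row-major).
        f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
        return [
            [f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0],
        ]

    def mat_vec(m, v):
        return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

    proj = perspective(90.0, 1.0, 1.0, 100.0)
    eye = [0.5, 0.5, -2.0, 1.0]       # a point 2 units in front of the camera

    clip = mat_vec(proj, eye)          # clip coordinates: each of x,y,z lies in [-w, w]
    w = clip[3]                        # for this matrix, w_clip = -z_eye = 2.0
    ndc = [clip[0] / w, clip[1] / w, clip[2] / w]  # NDC: each component in [-1, 1]

    print(clip)
    print(ndc)
    ```

    Note that the projection multiply alone does not land you in [-1, 1]; only the divide by w does.
    
    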

    Instead of using italics, it would be more useful to cite the source of the statement. The statement is not quite correct.

  3. #3
    Junior Member Regular Contributor (joined Jun 2012; 190 posts)
    Quote Originally Posted by Aleksandar View Post
    So-called "clip coordinates" are the result of multiplying the "eye coordinates" vector by the projection matrix. Clip coordinates are in the range [-w, w]. Dividing by w yields NDC (normalized device coordinates).
    Are you saying that the canonical volume [-1, 1] is not the result of multiplication by the projection matrix?

  4. #4
    Senior Member OpenGL Guru Dark Photon (Druidia; joined Oct 2004; 3,124 posts)
    Quote Originally Posted by debonair View Post
    " The Projection Matrix transforms the vertices in view coordinates into the
    canonical view volume (a cube of sides 2 2 2, centered at the origin, and aligned with the 3 coordinate axes).
    No. I don't know where you got this, but this statement is wrong in several ways.

    First, the coordinate space you feed into the projection transform is "EYE-SPACE" (sometimes confusingly called view space).

    The projection transform transforms positions/vectors from that space into CLIP-SPACE. This is NOT a 3D space where -1 <= X,Y,Z <= 1. It is a 4D space where -W <= X,Y,Z <= W. Clipping is performed in this space.

    After clipping is done, then the perspective divide is applied (divide by W), which gives you NDC-SPACE (NDC = Normalized Device Coordinates). This is the -1 <= X,Y,Z <= 1 "cube" you're referring to.

    In summary:

    EYE-SPACE -> (PROJECTION TRANSFORM) -> CLIP-SPACE -> (CLIPPING) -> (PERSPECTIVE DIVIDE) -> NDC-SPACE

    The vertex shader takes care of applying the projection transform, but the GPU takes care of the clipping and perspective-divide pieces behind the scenes after the vertex shader runs.
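    That division of labor can be sketched as two separate functions. The function names here are invented for the sketch; vertex_shader() stands in for your GLSL main(), and perspective_divide() models the fixed-function stage the hardware runs afterwards:

    ```python
    # Sketch: the vertex shader's job ends at CLIP-SPACE; the divide is hardware's.

    def mat_vec(m, v):
        # 4x4 row-major matrix times 4-component vector.
        return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

    def vertex_shader(projection, modelview, position_obj):
        # Equivalent of: gl_Position = projection * modelview * vec4(position, 1.0);
        return mat_vec(projection, mat_vec(modelview, position_obj))  # CLIP-SPACE

    def perspective_divide(clip):
        # Done by the GPU after clipping, never in the shader.
        w = clip[3]
        return [clip[0] / w, clip[1] / w, clip[2] / w]  # NDC-SPACE

    # Identity matrices and a made-up position, just to show the data flow.
    identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
    clip = vertex_shader(identity, identity, [0.25, -0.5, 0.5, 2.0])
    print(perspective_divide(clip))  # [0.125, -0.25, 0.25]
    ```
    
    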

    Now, if this is all done in the vertex shader, why does it come after the vertex shader in the pipeline? Shouldn't it just be part of the vertex shader?
    No. Think about triangle clipping. Consider a triangle that is partially-in and partially-out of the view frustum. Suppose you did all the above in the vertex shader for each vertex, so you know that 1 vertex of your triangle is in but 2 are out. What does that get you? Not much.

    Clipping needs to be applied on the whole triangle, not just on a single vertex, which is part of why it happens after the vertex shader (which only operates on a single vertex). The GPU, operating on the whole triangle, can then rasterize all of the fragments (think pixels) that lie within your triangle which are inside the view frustum.
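    To see why this is a per-triangle decision, here is a toy classification of a triangle's clip-space vertices against the view volume. The vertex values are invented; real hardware also intersects the triangle with each frustum plane and emits new vertices on the boundary, which this sketch does not do:

    ```python
    # Sketch: per-vertex in/out tests against -w <= x,y,z <= w are not enough.

    def inside(v):
        x, y, z, w = v
        return -w <= x <= w and -w <= y <= w and -w <= z <= w

    # A triangle that straddles the frustum: one vertex in, two out.
    tri = [
        (0.0, 0.0, 0.0, 1.0),   # inside
        (3.0, 0.0, 0.0, 1.0),   # outside (x > w)
        (0.0, 3.0, 0.0, 1.0),   # outside (y > w)
    ]

    flags = [inside(v) for v in tri]
    print(flags)  # [True, False, False]

    # Per-vertex knowledge alone can't produce the clipped polygon: part of
    # this triangle is still visible, and only a stage that sees the whole
    # triangle can compute the intersection and rasterize the visible part.
    ```
    
    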
    Last edited by Dark Photon; 11-16-2013 at 06:31 PM.
