
Thread: How to send vertex color to vertex shader?

  1. #1
    Junior Member Newbie · Join Date: Jul 2014 · Posts: 14

    How to send vertex color to vertex shader?

    Dear All:
    I use GLSL 1.20, and I draw a triangle mesh in OpenGL like this:
    void RenderScene()
    {
        ....
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < faceNum; i++)
        {
            // Draw the first vertex
            glNormal3f(nx1, ny1, nz1);
            glColor4f(r1, g1, b1, a1);
            glVertex3f(x1, y1, z1);
            // Draw the second vertex
            glNormal3f(nx2, ny2, nz2);
            glColor4f(r2, g2, b2, a2);
            glVertex3f(x2, y2, z2);
            // Draw the third vertex
            glNormal3f(nx3, ny3, nz3);
            glColor4f(r3, g3, b3, a3);
            glVertex3f(x3, y3, z3);
        }
        glEnd();
    }

    Now, how can I send the RGBA, normal, and XYZ values of each vertex to the vertex shader? I need both the GLSL code and the C++ OpenGL code.
    Are gl_Vertex and gl_Normal built-in variables in GLSL, and do they represent the XYZ and normal values of each vertex?


    Thanks a lot!

  2. #2
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    3,220
    Quote Originally Posted by ddguan View Post
    Now, how can I send the RGBA, normal, and XYZ values of each vertex to the vertex shader? I need both the GLSL code and the C++ OpenGL code.
    If you use a compatibility profile (the default), you can send your data to the shaders using that same C++ code. You just need to compile/link/bind/set up your shader program first. Mind you, the way you're providing triangles to the GPU isn't very efficient (it's very CPU-heavy), but it'll work.
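    For reference, the compile/link/bind step looks roughly like this (just a sketch, with error checking omitted; it assumes a current GL context, and "vsSource" is a placeholder name for your shader source string, not something from the original post):

    ```cpp
    // One-time setup, before the first RenderScene() call.
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSource, NULL);  // vsSource: const GLchar*
    glCompileShader(vs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glLinkProgram(prog);

    glUseProgram(prog);  // bind before the glBegin()/glEnd() block
    ```

    In a real program you'd also check the compile and link status (glGetShaderiv/glGetProgramiv) and attach a fragment shader the same way.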

    As to the GLSL source, this isn't a code writing service. You can find this all over the net. Just websearch gl_Vertex, gl_Normal, etc. and you'll come up with many copy/paste examples.

    Quote Originally Posted by ddguan View Post
    Are gl_Vertex and gl_Normal built-in variables in GLSL, and do they represent the XYZ and normal values of each vertex?
    Yes. These contain exactly what you pass in for vertex attributes, with whatever conversions you request when you provide them to the shader.
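    For example, a minimal GLSL 1.20 vertex shader using those built-ins might look like this (just a sketch of the usual copy/paste pattern you'll find in those examples):

    ```glsl
    #version 120
    // gl_Vertex, gl_Normal and gl_Color are built-in attributes that receive
    // what you pass to glVertex*, glNormal* and glColor* respectively.
    varying vec3 eyeNormal;  // passed on to the fragment shader

    void main()
    {
        eyeNormal     = gl_NormalMatrix * gl_Normal;
        gl_FrontColor = gl_Color;  // read back as gl_Color in the fragment shader
        gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
    ```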

  3. #3
    Junior Member Newbie · Join Date: Jul 2014 · Posts: 14
    Quote Originally Posted by Dark Photon View Post
    Mind you, the way you're providing triangles to the GPU isn't very efficient (it's very CPU-heavy), but it'll work.
    Thank you! The GLSL worked. Would you tell me an efficient way to provide triangles to the GPU?

  4. #4
    Senior Member OpenGL Guru Dark Photon · Join Date: Oct 2004 · Location: Druidia · Posts: 3,220
    Quote Originally Posted by ddguan View Post
    Thank you! The GLSL worked. Would you tell me an efficient way to provide triangles to the GPU?
    The main thing is to set a performance target. Once you're fast enough, you can stop. And profiling is key: you need to identify your primary bottleneck first. The CPU + GPU are a deeply pipelined system, and if you're optimizing something you're not bound on, you could get absolutely no speed-up to show for your efforts.

    Are your batches largely static (same values to glVertex/glNormal/glColor/etc. for each glBegin/glEnd pair) or dynamic? How much total data do you have in batch data altogether -- KB, MB, GB, unbounded? Do you have a lot of repeated drawing of the same objects in different places (instancing)? And how much have you done with GLSL shaders?

    Re static vs. dynamic, it can pay dividends to handle these differently. For the static case, you just want to pre-upload the data to the GPU and then repeatedly launch batches from there as efficiently as possible (more below). For the dynamic case (or an unbounded amount of batch data), you need a method by which you can stream data to the GPU efficiently and reuse it from there when possible (until it needs to be evicted for some reason).

    Just to give you a starter list, here are some things you might look into:

    - Using vertex arrays instead of immediate mode
    (I'd start with client arrays i.e. app-side vertex attributes; you can go VBOs later)
    - Indexed triangle batches (glDrawElements)
    - Triangle order optimization for vertex cache efficiency
    - Generic vertex attributes (e.g. glVertexAttribPointer/gl{Enable,Disable}VertexAttribArray)
    - Vertex buffer objects (VBOs; i.e. server-side vertex attributes)
    - Bindless vertex attribute and index lists (if on an NVidia GPU) or Vertex Array Objects (if not)

    You can move on from there later if/when needed (geometry instancing, batch streaming, etc.). Also these optimizations focus on batch data; there are lots of others for other aspects of GPU rendering.
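    As a concrete sketch of the first couple of items (client arrays + glDrawElements), one common pattern is an interleaved vertex struct; the layout below is just an illustration, and the GL calls are shown as comments since they need a live context:

    ```cpp
    #include <cstddef>
    #include <cstdio>

    // Illustrative interleaved layout: position, normal, RGBA color per vertex.
    struct Vertex {
        float position[3];
        float normal[3];
        float color[4];
    };

    int main() {
        // One stride covers a whole vertex; each attribute has a fixed offset.
        std::printf("stride = %zu, pos = %zu, nrm = %zu, col = %zu\n",
                    sizeof(Vertex),
                    offsetof(Vertex, position),
                    offsetof(Vertex, normal),
                    offsetof(Vertex, color));

        // With a Vertex array 'verts' and an index array 'indices', the
        // client-array setup replaces the whole glBegin/glEnd loop:
        //   glEnableClientState(GL_VERTEX_ARRAY);
        //   glEnableClientState(GL_NORMAL_ARRAY);
        //   glEnableClientState(GL_COLOR_ARRAY);
        //   glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts[0].position);
        //   glNormalPointer(   GL_FLOAT, sizeof(Vertex), verts[0].normal);
        //   glColorPointer (4, GL_FLOAT, sizeof(Vertex), verts[0].color);
        //   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);
        return 0;
    }
    ```

    One upload of the whole array plus one draw call per batch, instead of several function calls per vertex.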

    A warning: VBOs can be tricky. If applied naively (one VBO per batch, many small batches), you can kill efficiency, and your performance will be worse than with client arrays. The trick, in my experience, is to use NVidia bindless and/or to group your batches into shared VBOs, whether static or streaming.

  5. #5
    Junior Member Newbie · Join Date: Jul 2014 · Posts: 14
    Quote Originally Posted by Dark Photon View Post
    Main thing is to set a performance target. Once you're fast enough, you can stop. And profiling is the key.
    Thank you! I will try!
