OSX GL3.2 and VAO

Hello all,

I’m having problems with VAOs under OS X Lion and GL 3.2. I was able to get a window with GL 3.2 support, as you can see from the info reported back by glGetString( … ) below:

3.2 ATI-7.12.9
1.50
ATI Radeon HD 6750M OpenGL Engine

I have tried using OS X’s own GL headers and also tried GLee. Both give me errors.

A simple call to glGenVertexArraysAPPLE( 1, &vboId ); gives me a 0x500 (GL_INVALID_ENUM) error.
Every function related to VAOs gives me the same error.
glEnableVertexAttribArray also gives me a 0x502 (GL_INVALID_OPERATION).

Has anyone experienced errors like these on the OS X side using GL 3.2?

The code is pretty basic:

    // Create VAO
    glGenVertexArrays( 1, &mesh.vaoId );
    CHECK_ERROR_STR( "glGenVertexArraysAPPLE" );
    
    glBindVertexArray( mesh.vaoId );
    CHECK_ERROR_STR( "glBindVertexArrayAPPLE" );

   //  Do something.

    glBindVertexArrayAPPLE( 0 );
    CHECK_ERROR_STR( "glBindVertexArrayAPPLE" );

Does anyone have any idea why this could be happening?

Thanks in advance

[quote]glBindVertexArrayAPPLE( 0 );[/quote]

Why does this one use the APPLE extension when the others don’t? In any case, 3.2 on OS X doesn’t support the APPLE extension; they dumped the vast majority of the redundant extensions. So you should be using the core functions anyway.

When using OpenGL/glext.h, only the APPLE extensions are available; I can’t find glGenVertexArrays() or any of the other core functions.
In the code sample above I was just trying both ways (with and without GLee/GLEW), core and APPLE; still, none of them work. Both give me 0x500 errors.

Now I think I’ve got it. OS X 10.7 has gl3.h and gl3ext.h headers.
I think I need to work with those, not the gl.h and glext.h header files. Let’s try.

To get OpenGL 3.2 core support, you need to include the gl3.h header and specify the NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core attributes when initialising the view. There is an Apple DevCenter article which explains the process.
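
Roughly, the setup looks like this (a minimal sketch; the depth and double-buffer attributes are just illustrative):

    #include <OpenGL/gl3.h>   // core profile header; don't mix with gl.h

    // Attribute list for a 3.2 Core Profile pixel format; pass it to
    // -[NSOpenGLPixelFormat initWithAttributes:] when creating the view.
    NSOpenGLPixelFormatAttribute attrs[] = {
        NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFADepthSize, 24,
        0   // terminator
    };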

Any further conclusions on this? I am working with 3.2 on a Mac now, trying to figure out how I am supposed to work with it. One surprising effect was errors from glEnableVertexAttribArray, for no apparent reason, that went away once I added a VAO to the code. As if it were an error to use VBOs without a VAO.

I notice that many examples send hard-coded constants as variable locations to glEnableVertexAttribArray and similar calls. AFAIK that is a bad idea unless you use layout(location = x) in the shader, and I don’t know if I like that either. I prefer asking for a location and passing values with that, connecting directly to the variable name from the CPU code, but I suspect that could be somewhat expensive performance-wise.

I am using both gl.h and gl3.h; is that bad? And of course I use NSOpenGLProfileVersion3_2Core.

Actually, you are required to have a VAO bound from OpenGL 3 Core Profile onwards.
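
If nothing else, a single VAO created at startup makes the attribute calls legal again (a minimal sketch; the variable name is mine):

    // One VAO bound for the lifetime of the app satisfies the Core
    // Profile requirement; without it, glVertexAttribPointer and the
    // draw calls generate GL_INVALID_OPERATION.
    GLuint globalVAO;
    glGenVertexArrays( 1, &globalVAO );
    glBindVertexArray( globalVAO );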

Ragnemalm, to your three questions:

  1. as mentioned, Core Profile requires you to create a VAO and all attribute pointers must be set with VBOs bound. Check the spec.

  2. for setting the attribute indices, you have three options (sketched in code after this list):
    a) let the compiler do it. This is the default behavior if you do nothing, and you need to query the locations with glGetAttribLocation().
    b) explicitly set the location prior to link with glBindAttribLocation(). In this case, using constants is fine.
    c) explicitly set the location in the shader text with a layout, per ARB_explicit_attrib_location.

a) and b) have been around since 2003 (GL 1.5 + ARB_vertex_shader). c) is more recent and is not supported on the Mac as of 10.7.2.

  3. The compiler will warn if you include both gl.h and gl3.h. If you want to use only Core Profile functionality, include only gl3.h/gl3ext.h.
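
In code, the three options look roughly like this (the program and attribute names are hypothetical):

    // a) do nothing and query the compiler-assigned location after linking:
    GLint posLoc = glGetAttribLocation( program, "inPosition" );

    // b) bind a chosen index before linking; hard constants are fine here:
    glBindAttribLocation( program, 0, "inPosition" );
    glLinkProgram( program );

    // c) in the shader text (GLSL 3.30 or ARB_explicit_attrib_location):
    //    layout(location = 0) in vec3 inPosition;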

Which makes it even more odd that one of the few demos I could find did not use a VAO. I suppose there are implementations where that works, but Apple’s does not, and that is perfectly legal then.

[quote]
  2. for setting the attribute indices, you have three options:
    a) let the compiler do it. This is the default behavior if you do nothing, and you need to query the locations with glGetAttribLocation().
    b) explicitly set the location prior to link with glBindAttribLocation(). In this case, using constants is fine.
    c) explicitly set the location in the shader text with a layout, per ARB_explicit_attrib_location.

a) and b) have been around since 2003 (GL 1.5 + ARB_vertex_shader). c) is more recent and is not supported on the Mac as of 10.7.2.
[/quote]

Well, c) works for me under 10.7. You are right, there are three ways, but hard-coding assumed constant numbers is not one of them; it is rather a fourth, unwise way. I go for querying locations; that feels cleanest to me since it doesn’t require me to break in with application-specific code in my shader loader, and it keeps the shaders clean.

[quote]  3. The compiler will warn if you include both gl.h and gl3.h. If you want to use only Core Profile functionality, include only gl3.h/gl3ext.h.[/quote]

No, I don’t think it did. I include both and it compiles and runs without warnings. Maybe it depends on compiler options.

Thanks for the comments!

[quote]Well, c) works for me under 10.7.[/quote]

That seems unlikely, especially if you’re able to use #version 330, and even more so if it works with just #version 150 (layout(location) isn’t part of that language version without the extension).

[quote]I go for querying locations; that feels cleanest to me since it doesn’t require me to break in with application-specific code in my shader loader, and it keeps the shaders clean.[/quote]

It also means that you can’t share VAOs and attribute bindings among different shaders. So if you have a mesh that gets rendered with two or more shaders (very possible; multi-pass techniques often do it, and a shadow-map pass and a lighting pass are two separate programs), you have to have two separate sets of attribute bindings, even though they may use the exact same vertex inputs.

The inability to mix and match shaders with meshes is not what I would call “clean”. So I would suggest either explicitly binding attribute locations to known values with glBindAttribLocation, or using explicit attribute locations within shaders. The latter is what I do, and ever since that extension came out, I have never wanted to do anything else again.

I only tested with version 150. Seemed to work just fine.

[quote]I go for querying locations; that feels cleanest to me since it doesn’t require me to break in with application-specific code in my shader loader, and it keeps the shaders clean.

It also means that you can’t share VAOs and attribute bindings among different shaders. So if you have a mesh that gets rendered with two or more shaders (very possible; multi-pass techniques often do it, and a shadow-map pass and a lighting pass are two separate programs), you have to have two separate sets of attribute bindings, even though they may use the exact same vertex inputs.

The inability to mix and match shaders with meshes is not what I would call “clean”. So I would suggest either explicitly binding attribute locations to known values with glBindAttribLocation, or using explicit attribute locations within shaders. The latter is what I do, and ever since that extension came out, I have never wanted to do anything else again.[/quote]
Good, I needed some arguments for the other two. With three totally different ways, it isn’t so easy. I don’t like matching arbitrary numbers between the main program and the shaders; variable names are more descriptive. But I consider all options.

Do you mean that calls like glEnableVertexAttribArray or glVertexAttribPointer will require me to bind the attributes to different values? But can’t I just switch shader and rebind? I haven’t tried multi-pass rendering under 3.2 yet, so maybe the problems will surface when I do. (I must port my shadow mapping examples anyway.)

[quote]Do you mean that calls like glEnableVertexAttribArray or glVertexAttribPointer will require me to bind the attributes to different values? But can’t I just switch shader and rebind?[/quote]

Yes, you could. But that would require work. The whole point of VAOs is that you don’t have to do that work. If you have to constantly be changing what attribute indices are used, you may as well just have a single global VAO and pretend VAOs don’t exist.
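
For instance, binding the same indices in every program before linking lets a single VAO serve them all (a sketch; the program and attribute names here are hypothetical):

    enum { ATTR_POSITION = 0, ATTR_NORMAL = 1 };

    // Same indices in both programs, so one VAO's attribute setup matches both.
    glBindAttribLocation( shadowProgram,   ATTR_POSITION, "inPosition" );
    glBindAttribLocation( lightingProgram, ATTR_POSITION, "inPosition" );
    glBindAttribLocation( lightingProgram, ATTR_NORMAL,   "inNormal" );
    glLinkProgram( shadowProgram );
    glLinkProgram( lightingProgram );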

Sorry if it’s blatantly obvious, but I can’t find this in the specification. Do you know which section? I also have the Red Book (7th edition, covering 3.1); I’ve looked through both but still can’t find where it mandates using vertex array objects and vertex buffer objects. I’m only now trying to upgrade my code to GL 3.2, and this is currently where I’m at.

(now getting slightly off topic)
For instance, I was doing something like:


    Vec2f vertices[3] = { Vec2f( 0, 0 ), Vec2f( 0.5, 1 ), Vec2f( 1, 0 ) };
    unsigned int indices[3] = { 0, 1, 2 };

    glEnableVertexAttribArray( mAttributes.position );
    // with a VAO bound, a client-memory pointer here generates GL_INVALID_OPERATION
    glVertexAttribPointer( mAttributes.position, 2, GL_FLOAT, GL_FALSE, 0, &vertices[0] );
    glDrawElements( GL_TRIANGLES, 3, GL_UNSIGNED_INT, &indices[0] );

Will this work in GL > 3?

Page 331 of the GL 3.2 core spec, under “E.2. DEPRECATED AND REMOVED FEATURES”:

[quote]Client vertex and index arrays - all vertex array attribute and element array index pointers must refer to buffer objects. The default vertex array object (the name zero) is also deprecated. Calling VertexAttribPointer when no buffer object or no vertex array object is bound will generate an INVALID_OPERATION error, as will calling any array drawing command when no vertex array object is bound.[/quote]

I see, thanks for leading me there. I suppose uploading the data to buffer objects with GL_DYNAMIC_DRAW is the only way to do this now.
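
Something like this, I guess (a sketch of the same triangle done with a VAO and buffer objects):

    GLuint vao, vbo, ibo;
    glGenVertexArrays( 1, &vao );
    glBindVertexArray( vao );

    // Vertex data now lives in a buffer object instead of client memory.
    glGenBuffers( 1, &vbo );
    glBindBuffer( GL_ARRAY_BUFFER, vbo );
    glBufferData( GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW );

    glGenBuffers( 1, &ibo );
    glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ibo );
    glBufferData( GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_DYNAMIC_DRAW );

    glEnableVertexAttribArray( mAttributes.position );
    // The last argument is now an offset into the bound VBO, not a pointer.
    glVertexAttribPointer( mAttributes.position, 2, GL_FLOAT, GL_FALSE, 0, (void*)0 );
    glDrawElements( GL_TRIANGLES, 3, GL_UNSIGNED_INT, (void*)0 );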

Cheers,
Rich