Vertex attribute streams

I am hesitating to switch over my renderer to using vertex attribute streams from its current fixed-semantic VB pointers.

I can (compile-time) build either, but I don’t really want 2 code paths, so I’m leaning toward staying with the fixed-semantic VB pointers in the hope I get more compatibility with non-cutting edge GPU drivers - yes, I know vertex attributes aren’t exactly new!

It's just pretty frustrating when you put out a release using a more modern pattern / code path only to find loads of older cards barfing - and unhappy users.

Any advice? Put another way: for plain vanilla geometry (pos,tex,normal), is there a compelling reason to switch to using vertex attributes?

Adam

Hold on. First of all, by vertex attribute streams do you mean vertex array objects in conjunction with VBOs and index buffers? What are fixed-semantic VB pointers? Client-side vertex arrays? What do you mean by “switch to using vertex attributes”? Position, normal direction, tex coords, color etc. are vertex attributes.

Please revise your question and get the terminology right so others might actually understand what you’re asking.

And a Good Morning to you too.

My question is whether there is any advantage to using glVertexAttribPointer, which is essentially a semantic-free stream, over glVertexPointer / glNormalPointer / glTexCoordPointer, which have fixed semantics.
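To make that concrete, here's a rough sketch of the two styles against the same interleaved pos/normal/tex layout (the Vertex struct, the vbo handle and the attribute locations 0/1/2 are just placeholder assumptions, and the GL 2.0 entry points need an extension loader such as GLEW):

[CODE]
#include <stddef.h>   /* offsetof */

typedef struct { float pos[3]; float normal[3]; float tex[2]; } Vertex;

/* Fixed-semantic path: each array has a built-in meaning. */
void setup_fixed_semantic(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void *)offsetof(Vertex, pos));
    glNormalPointer(GL_FLOAT, sizeof(Vertex), (void *)offsetof(Vertex, normal));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void *)offsetof(Vertex, tex));
}

/* Generic-attribute path: same data, but the meaning comes from whatever
   the shader binds to locations 0, 1 and 2 (arbitrary choices here). */
void setup_generic_attribs(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void *)offsetof(Vertex, pos));
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void *)offsetof(Vertex, normal));
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void *)offsetof(Vertex, tex));
}
[/CODE]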

My guess is glVertexPointer is likely to be more widely supported than glVertexAttribPointer. Is this other developers' experience?

Adam

Every single vendor supports glVertexAttribPointer. Actually, it is more widely supported than glVertexPointer, as the latter is not in the core profile and e.g. on Mac OS X you get an OpenGL 3.x or newer context only if you stick to the core profile. Not to mention that only glVertexAttribPointer is present in OpenGL ES 2.0 and newer as well, so you're very wrong about which is more widely supported.

The more important question is whether you use client-side data or VBOs to store your attributes.

Btw, since OpenGL 4.3 both of these options are considered “legacy”, as the new glVertexAttribFormat, glBindVertexBuffer and glVertexAttribBinding functions should be used if available.
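A minimal sketch of that newer separated style, assuming the same kind of interleaved pos/normal/tex layout and attribute locations 0/1/2 as placeholders:

[CODE]
/* GL 4.3 style: describe the vertex format once, then only rebind buffers. */
void setup_attrib_binding(GLuint vbo)
{
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);

    /* Formats: relative offsets within one vertex. */
    glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);                 /* position */
    glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float)); /* normal   */
    glVertexAttribFormat(2, 2, GL_FLOAT, GL_FALSE, 6 * sizeof(float)); /* texcoord */

    /* All three attributes read from binding point 0. */
    glVertexAttribBinding(0, 0);
    glVertexAttribBinding(1, 0);
    glVertexAttribBinding(2, 0);

    /* Per draw, only the buffer binding has to change. */
    glBindVertexBuffer(0, vbo, 0, 8 * sizeof(float)); /* stride = one vertex */
}
[/CODE]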

[QUOTE=aqnuep;1249247]Every single vendor supports glVertexAttribPointer. Actually, it is more widely supported than glVertexPointer, as the latter is not in the core profile and e.g. on Mac OS X you get an OpenGL 3.x or newer context only if you stick to the core profile. Not to mention that only glVertexAttribPointer is present in OpenGL ES 2.0 and newer as well, so you're very wrong about which is more widely supported.

The more important question is whether you use client-side data or VBOs to store your attributes.

Btw, since OpenGL 4.3 both of these options are considered “legacy”, as the new glVertexAttribFormat, glBindVertexBuffer and glVertexAttribBinding functions should be used if available.[/QUOTE]

OK, great. Do you have any data to support the claim that glVertexAttribPointer is more widely supported? Or is it just your guess? (Which is fine, I'd just like to know.)

The issue here is the huge gap between the latest and greatest OpenGL and what users actually have on their desktop - which in my sector is often an Intel GMA965 or a NVidia/ATI card from a few years back. So while I'd love to leverage new features, unless they're actually supported (rather than should be supported but are not), I have to avoid them.

Adam

Here is the data:

Desktop OpenGL 2.x (legacy, out-of-date) - Both glVertexAttribPointer and glVertexPointer are supported
Desktop OpenGL 3.x, 4.x Compatibility Profile - Both glVertexAttribPointer and glVertexPointer are supported
Desktop OpenGL 3.x, 4.x Core Profile - Only glVertexAttribPointer is supported
OpenGL ES 1.1 (legacy, out-of-date) - Only glVertexPointer is supported
OpenGL ES 2.0 - Only glVertexAttribPointer is supported
OpenGL ES 3.0 - Only glVertexAttribPointer is supported

So unless you target OpenGL ES 1.1 devices, or ancient desktop GPUs which don't have OpenGL 2.x support (like GeForce 4 or earlier from 2002 and before, or ATI Radeon 9200 or earlier from 2003 and before), you should use glVertexAttribPointer.

unless they’re actually supported (rather than should be supported but are not), I have to avoid them.

If you want to talk about what is “actually supported” on the “Intel GMA965”, you shouldn’t even be using shaders, let alone generic attributes. If you really want to support that ancient hardware through OpenGL, you should abandon shaders and embrace fixed-function GL.

a NVidia/ATI card from a few years back

What does “a few years back” mean? If you’re talking about hardware that was being supported in the last 5 years, then yes, those work.

Again, of all of the things you should be concerned about surrounding shaders, support for generic attributes should be the least of your concerns. That stuff usually works; it’s things like accessing textures, dealing with uniforms, or just random compilation glitches that are far more likely to sideline your code than glVertexAttribPointer.
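For what it's worth, checking the compile status and info log of every shader is the bare minimum on those drivers; a rough sketch (the logging strategy is just an example):

[CODE]
#include <stdio.h>

/* Compile a shader and dump the driver's info log on failure. */
GLuint compile_shader(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}
[/CODE]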

[QUOTE=Alfonse Reinheart;1249252]If you want to talk about what is “actually supported” on the “Intel GMA965”, you shouldn’t even be using shaders, let alone generic attributes. If you really want to support that ancient hardware through OpenGL, you should abandon shaders and embrace fixed-function GL.

What does “a few years back” mean? If you’re talking about hardware that was being supported in the last 5 years, then yes, those work.

Again, of all of the things you should be concerned about surrounding shaders, support for generic attributes should be the least of your concerns. That stuff usually works; it’s things like accessing textures, dealing with uniforms, or just random compilation glitches that are far more likely to sideline your code than glVertexAttribPointer.[/QUOTE]

Alfonse, sure, for really old/basic cards I fall back to a fixed-function codepath. As in, they're missing basic entrypoints.

And for cards from the last 5 years - Radeon 5570 or newer, GT120 or newer - yep, everything generally does work.

The problem is there are a massive number of crappy Intel devices out there used in business laptops and PCs. They generally claim in their extension strings to support XYZ feature - for example shaders, MSAA, BlitFramebuffer etc. - yet code that runs fine on NVidia, ATI/AMD - even Intel HD 4000 (which is finally actually OK) - silently dies in glDrawElements(). Hence I was wondering - and you're probably right, it's clutching at straws - whether it's switching to slightly more modern features that triggers the problems. BTW it's a plugin I develop for Sketchup called LightUp (http://light-up.co.uk if you want to play).

I think I have little choice but to remain very conservative in my use of OpenGL features… and explicitly force Intel GMA devices down a fixed-function path.
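Something like this crude renderer-string sniff is probably what I'll end up shipping (the non-"HD" test is only a heuristic I'm assuming - real renderer strings vary by driver and OS):

[CODE]
#include <string.h>

/* Heuristic: treat any Intel part that doesn't advertise itself as "HD"
   as GMA-class and route it down the fixed-function path. */
int force_fixed_function_path(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    if (renderer == NULL)
        return 1; /* no context or broken driver: play it safe */
    return strstr(renderer, "Intel") != NULL && strstr(renderer, "HD") == NULL;
}
[/CODE]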

Just doesn’t work as a solution for my engineering soul!

Thanks for your help.

The way I would do it is quite simple: you have a 3.3 codepath and a codepath for everything below that. The lesser path is pure fixed-function and can run on anything; assume GL 1.4 maximum. The other is a proper 3.3+ codepath.
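A rough sketch of that top-level decision (the enum, the function name and the version parsing are placeholders; you could equally key it off an extension loader):

[CODE]
#include <stdio.h>

typedef enum { PATH_FIXED_FUNCTION, PATH_GL33 } RenderPath;

/* Pick the codepath once at startup from the context's version string. */
RenderPath choose_render_path(void)
{
    int major = 0, minor = 0;
    const char *version = (const char *)glGetString(GL_VERSION);
    if (version == NULL || sscanf(version, "%d.%d", &major, &minor) != 2)
        return PATH_FIXED_FUNCTION;      /* can't tell: play it safe */
    if (major > 3 || (major == 3 && minor >= 3))
        return PATH_GL33;                /* proper 3.3+ codepath */
    return PATH_FIXED_FUNCTION;          /* treat everything else as GL 1.4-level */
}
[/CODE]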

Generally speaking, most gamers who are serious enough to have more recent hardware are serious enough to have NVIDIA/AMD hardware. And if they’re not that serious, then you can’t trust them to have up-to-date drivers at all, so best not stress them.

You should never trust non-HD Intel hardware with anything past GL 1.4. Granted, I don’t trust any Intel hardware with GLSL, even their HD stuff. At least, not on Windows.

[QUOTE=aqnuep;1249251]Here is the data:

Desktop OpenGL 2.x (legacy, out-of-date) - Both glVertexAttribPointer and glVertexPointer are supported
Desktop OpenGL 3.x, 4.x Compatibility Profile - Both glVertexAttribPointer and glVertexPointer are supported
Desktop OpenGL 3.x, 4.x Core Profile - Only glVertexAttribPointer is supported
OpenGL ES 1.1 (legacy, out-of-date) - Only glVertexPointer is supported
OpenGL ES 2.0 - Only glVertexAttribPointer is supported
OpenGL ES 3.0 - Only glVertexAttribPointer is supported[/QUOTE]

(Missed your reply.) You're confusing 'aspirational' documents with reality. I know what it's meant to be - it's just not always like that in the real world!

From an engineering perspective, that makes perfect sense. From a product point of view, it's probably too sharp-edged, in that a working GL 2.1 card can produce some good results - much prettier than a FF codepath.

Just FYI, this is not gamers. It's architects, set designers, previz people - and they often have older graphics hardware.

Too true! I’d love to know the lowdown on Intel drivers - they have smart people, but apparently they’re busy on other stuff!

From a product point of view, it's completely ridiculous to support pre-GL 2.1 hardware, which actually requires a FF codepath anyway. GL 2.1-capable hardware dates back to 2006 (maybe even further). Anyone arguing that they cannot invest in a tiny hardware upgrade to fulfill your requirements simply doesn't deserve any attention - paying customer or not.

Just FYI, this is not gamers. It's architects, set designers, previz people - and they often have older graphics hardware.

The fact is: if said audience really wants your tool, they will spend the 20-30 bucks on capable hardware. All you gain by investing time in ultra-legacy technology is headaches. If you can gather some user data and really establish that your key audience uses hardware that falls into this category (i.e. incapable of using shaders), then go ahead. Otherwise, please don't try catering to everyone out there.

Too true! I’d love to know the lowdown on Intel drivers - they have smart people, but apparently they’re busy on other stuff!

They have, and as I've stated many times, Intel kicks ass when it comes to involvement in Linux/MESA development. Personal experience shows that at least the Linux devs react very quickly after a bug report is filed, so you can't accuse them of laziness or anything similar. I think the fact that Intel neglected their graphics drivers for years and years is simply a hard thing to overcome.

From an engineering perspective, that makes perfect sense. From a product point of view, it's probably too sharp-edged, in that a working GL 2.1 card can produce some good results - much prettier than a FF codepath.

Then you need to make some choices: do you want your code to work, or do you want your code to look good? Because you’re not getting both from outdated hardware running ancient drivers that were crappy even when they were new.

GL 3.3 makes your code exclusive to those drivers that actually have a passing familiarity with the OpenGL spec, so you can write code against the spec without being too surprised (testing is still mandatory; you just don't have to test against as much). GL 1.4 is the stuff most likely to work. Everything in between is a minefield of incompatibilities and nonsensical limitations.

That’s the reality of developing against OpenGL 2.1. The only way to get both is to test everything you try to do thoroughly on every GPU and driver combination that exists. If you want to truly serve OpenGL 2.1 hardware without dealing with massive incompatibilities and driver bugs, you’ll need to use Direct3D.

Just FYI, this is not gamers. It's architects, set designers, previz people - and they often have older graphics hardware.

So let me get this straight. People whose jobs depend on building graphical objects through the use of CAD and other 3D visualization tools are running Intel GPUs. The companies that pay them can't afford to give them a $100 video card that would make their employees much more productive, if for no other reason than that the programs they use daily would run faster.

So let me get this straight. People whose jobs depend on building graphical objects through the use of CAD and other 3D visualization tools are running Intel GPUs. The companies that pay them can't afford to give them a $100 video card that would make their employees much more productive, if for no other reason than that the programs they use daily would run faster.

Sadly, in some cases, yes.

I know of one major studio that was complaining of performance issues with our GL3 renderer, and it turned out all the animators were running Nvidia GT210 cards, which were simply unable to handle models of several million polys with any sort of fluidity. And it's not terribly uncommon to have people install 3D software on an Ultrabook for small tasks outside of work. Totally crazy in my mind, but it happens. We just updated our System Requirements and pointed those people to them.

[QUOTE=Alfonse Reinheart;1249289]
So let me get this straight. People whose jobs depend on building graphical objects through the use of CAD and other 3D visualization tools are running Intel GPUs. The companies that pay them can't afford to give them a $100 video card that would make their employees much more productive, if for no other reason than that the programs they use daily would run faster.[/QUOTE]

We're generally talking about laptops here with integrated Intel GPUs, so a $100 video card upgrade is not an option. I can completely understand Adam's situation; our target audience is pretty similar. We receive support tickets on a regular basis for all sorts of 1.x and 2.x hardware.

If you’re talking about laptops, then people cannot reasonably expect the same graphical fidelity that they get out of their desktops. So just use a 1.4 rendering mode on them.

[UPDATE: Ended up buying a second-hand netbook with a GMA 965 to track this down.]

In case it helps others, a big part of the problem is that GMA 965 drivers die horribly if you don't explicitly specify both a vertex and a fragment shader for your programs. i.e. if you're doing some fullscreen effect, in the past I've not bound a vertex shader and got plain vanilla fixed-function vertex processing automagically on all the graphics cards I've tested on.

You must set both shaders for GMA 965 programs.
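i.e. for the fullscreen-effect programs I now do something along these lines (sketch only; compile/link error checking omitted, and the function name is just a placeholder):

[CODE]
/* Trivial pass-through vertex shader, GLSL 1.10-era syntax since that's
   all GMA-class parts handle. */
static const char *passthrough_vs_src =
    "void main()\n"
    "{\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_Position = ftransform();\n"
    "}\n";

GLuint build_effect_program(GLuint fragment_shader)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &passthrough_vs_src, NULL);
    glCompileShader(vs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);              /* the part GMA 965 insists on */
    glAttachShader(prog, fragment_shader);
    glLinkProgram(prog);
    return prog;
}
[/CODE]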