Texture Switching

The ability to switch textures (i.e. sampler state) while rendering primitives could be very useful in a number of situations.

For instance, you are rendering a triangle strip or a list of, say, 10 triangles, and in the middle, at triangle 5, you want to start using a different texture.

I only see this happening if we are able to specify fragment shader uniform parameters during primitive rendering.

glEnable(GL_PER_PRIMITIVE_FRAGMENT_CHANGE)

glPrimitiveFragmentUniform(primitiveIndex, uniformVar, value)

glDrawArrays(…)

Primitive index is a zero-based index that denotes the primitive (triangle) rendered in the draw call.

So at the specified primitive index, the fragment shader will be fed a different uniform value, as specified by glPrimitiveFragmentUniform.

Welcome back indirect mode! …

gl_PrimitiveID + a 2D texture array would do it, and efficiently.

It’s not possible to dynamically index a sampler array in GLSL according to the OpenGL 4.1 spec. However, it does work on nVidia but renders garbage on AMD. Being able to index a sampler array with uniform values is really needed, but it wouldn’t make sense to do it on a per-primitive basis.

Yeah, but gl_PrimitiveID is not available without activating a geometry shader.

Then use a geometry shader; that’s what it’s meant for.

So at the specified primitive index, the fragment shader will be fed a different uniform value, as specified by glPrimitiveFragmentUniform.

Uniforms are so named because they are uniform; they do not change over the course of a primitive. Uniforms that do change would not be “uniform” and would therefore need to be something else.

Furthermore, if such a thing could be implemented with few if any performance issues, uniforms wouldn’t need to be uniform. And if the performance impact is no different from simply issuing multiple draw commands with glUniform/etc. calls in between, what’s the point?

I don’t think this feature is a good idea. One - IMO - serious weakness of OpenGL in the past is that it has abstracted the hardware a little too much, with the end result being a tendency to see suboptimal formats used in a lot of code, and the dreaded fall back to software emulation being something you need to watch out for. (Of course an advantage of this approach is that you don’t need to sweat over details of the hardware, but I think the downside outweighs the upside here.)

A feature like this is going in the wrong direction of abstracting the hardware even more, whereas what we really need is less.

If you need to deal with textures that are not the same size, build a texture atlas of the image data. For textures of the same size, GL_TEXTURE_2D_ARRAY is exactly that: a third texture coordinate selects which “layer”.
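Something along these lines, for instance (just a sketch, assuming GLSL 1.50; uTexArray, vTexCoord and fragColor are illustrative names). The layer simply rides along as the third texture coordinate:

#version 150

uniform sampler2DArray uTexArray;

in vec3 vTexCoord;   // s, t and the array layer, supplied with the vertex data
out vec4 fragColor;

void main()
{
  // texture() on a sampler2DArray treats the third coordinate as the layer
  fragColor = texture(uTexArray, vTexCoord);
}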

I don’t see the point in this suggestion either. We have texture arrays, and that on its own solves the problem (in most situations) of breaking batches because of texture switches.

The simplest way is to pass the texture layer as part of the texture coordinates in the vertex array. Of course, this wastes some memory, but you can also use gl_PrimitiveID to source the texture layer numbers from a texture buffer in the geometry shader. That is much more flexible than your proposal.
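A rough sketch of that geometry shader variant (GLSL 1.50 assumed; uLayerLookup, vTexCoord, gTexCoord and gLayer are just illustrative names): fetch the layer for the current triangle from a buffer texture using gl_PrimitiveIDIn, pass it down as a flat output, and then sample the array texture in the fragment shader with vec3(gTexCoord, gLayer):

#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

uniform samplerBuffer uLayerLookup;   // one layer number per input triangle

in vec2 vTexCoord[];                  // from the vertex shader
out vec2 gTexCoord;
flat out float gLayer;                // array layer for the fragment shader

void main()
{
  // One fetch per primitive: the texture-array layer for this triangle.
  float layer = texelFetch(uLayerLookup, gl_PrimitiveIDIn).r;

  for (int i = 0; i < 3; ++i)
  {
    gl_Position = gl_in[i].gl_Position;
    gTexCoord   = vTexCoord[i];
    gLayer      = layer;
    EmitVertex();
  }
  EndPrimitive();
}

This way a single draw call can pull from a different layer per triangle without any client-side state change.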

I see your point. A texture array seems sufficient.

Why the hell does it work on nVidia? Isn’t it a spec violation?


I don’t think it is a violation.
If I remember correctly, the spec says, as it often does, that the result is undefined, so anything is valid. A warning from the GLSL compiler would be great, but it just works…

Yes, it is. NVIDIA tends to violate the spec in order to provide additional functionality. I don’t quite agree with the way they expose these additional functionalities.

They should rather create an extension for this kind of thing (e.g. NV_shader_indexed_sampler_array or whatever), but they should not allow it just like that. It makes developers’ lives much more difficult, as they cannot be sure their shaders are really cross-platform/cross-vendor. However, there is not much we can do about it.

Ermmm… the OpenGL community doesn’t deserve baseless allegations, so if you want to assert something, you should check it first!

“Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with a dynamically uniform integral expression, otherwise results are undefined”

nVidia is free to make it work and still stay within the spec. I’m not saying that is always true in other cases, but the specification has a lot of loose ends like that.

Yes, you are right. I didn’t remember what the spec exactly says about it.

Of course, if OpenGL says that results are undefined but does not state that it is an error, then NVIDIA is free to give defined results.

Sorry, I was prematurely judging the driver behavior. Anyway, I still stick to my statement that NVIDIA tends to allow things in GLSL that are not supported (I remember that earlier it allowed datatypes like float3, which come from Cg but are not valid in GLSL).

I am not saying the contrary, and there are genuine examples of violations in the nVidia drivers, so there is no need to blame them when it isn’t true, especially when the specification itself is to blame for implementation variations.

That’s a good point, the spec does need to nail things down more solidly in a lot of cases. There are already too many implementation-dependent or undefined behaviours in there and adding room for more doesn’t help things at all.

Except that the specifications and the drivers are made by more or less the same people. You may even think of it as a kind of specification backdoor that allows implementers to silently add their own behavior, wait until people start using it (people don’t usually read specifications, so they have no way of knowing it’s undefined behavior), and then let everyone swear at the competitors’ implementations when it doesn’t work there.

Instead, the responsible standardization process of the ARB should be:

  • if possible, forbid any undefined behavior in specification version X (e.g. by throwing a compiler error)
  • consider making some of the forbidden behaviors allowed and well-defined in version X+1

I guess this process is already used, but apparently only sometimes.

Undefined behavior is usually undefined because it is too difficult/performance consuming to catch at runtime.

Take the prohibition of reading and writing to the same image at the same time. There’s no real way to test this, because a shader could read from any mipmap layer of the texture, not just the one(s) bound to the FBO. The simple answer of checking texture object names doesn’t work, because it’s possible to bind different images of the same texture for writing and reading. As long as you ensure that you don’t read and write from the same image, you’re fine.

Also:

It’s not possible to dynamically index a sampler array in GLSL according to the OpenGL 4.1 spec. However, it does work on nVidia

Not true. Entirely.

In the GLSL 3.3 spec, section 4.1.7 states that, “Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with integral constant expressions.” However, in the GLSL 4.0 spec, this corresponding section states, “Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with a dynamically uniform integral expression, otherwise results are undefined.”

In case you’re wondering, a “dynamically uniform” expression is an expression such that all invocations of the shader, with the same uniform values, will result in the same value. This means that the index can now depend on uniforms and constants, rather than just constants.

So rather than a compile-time constant, it is a glDraw*-time constant. Which is better.
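To make the distinction concrete, here is a small fragment shader sketch (GLSL 4.00 assumed; uTextures, uIndex and vPerPrimitiveIndex are illustrative names):

#version 400

uniform sampler2D uTextures[4];
uniform int uIndex;              // the same value for every invocation in a draw call

in vec2 vTexCoord;
flat in int vPerPrimitiveIndex;  // varies across primitives: not dynamically uniform

out vec4 fragColor;

void main()
{
  vec4 a = texture(uTextures[2], vTexCoord);                  // constant index: fine even under the GLSL 3.30 rule
  vec4 b = texture(uTextures[uIndex], vTexCoord);             // dynamically uniform: defined behavior in GLSL 4.00
  vec4 c = texture(uTextures[vPerPrimitiveIndex], vTexCoord); // results undefined by the spec
  fragColor = a + b + c;
}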

I guess my vocabulary wasn’t accurate enough on that one; it slipped my mind for a second that they use “dynamically uniform” for sampler array indexing with uniforms… huummm.

On nVidia you can index using any integer value from any source, and it works as far as my experiments went. At least a warning would be nice, because whether the index is a uniform or constant could be checked at compile time.

At least a warning would be nice, because whether the index is a uniform or constant could be checked at compile time.

Could it? Consider this:


uniform int iLoopLen;

uniform sampler2D texArray[5];

void main()
{
  vec4 iAccum = vec4(0.0);

  for(int iLoop = 0; iLoop < iLoopLen; iLoop++)
  {
    iAccum += texture(texArray[iLoop], <someTextureCoord>);
  }
}

Each of these accesses to texArray qualifies perfectly as “dynamically uniform.” But the index itself is not directly a uniform or a constant. That’s why the spec defines “dynamically uniform” instead of saying “uses a uniform or constant.”

And that’s why the compiler can’t just do a simple test to see if it works.