Integer Texture Filtering

Hi all,

This is a problem I encountered several years ago, but since it could be ignored back then, I put it aside.
Yesterday it appeared again, and now it really has to be solved. The problem is that I cannot use GL_LINEAR (minification/magnification) filtering on textures with the GL_R16UI internal format. It looks like GL_NEAREST is used instead. Is this common and expected behavior, or do I have some other problem in my application? Has anyone had a similar issue?

Thanks in advance!

That’s by design. Pure integer formats don’t allow any form of filtering (the texture is treated as “incomplete” if the current sampler state would require filtering).

If you want to filter them, do it manually in a shader (you get to decide what happens with the low bit.)
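
For reference, a minimal GLSL sketch of what that manual filtering can look like for a single-channel integer 2D array texture; the sampler name, the layer parameter, and the choice to interpolate as float are illustrative assumptions, not something from this thread:

// Manual bilinear filtering of a GL_R16UI 2D array texture (GLSL 1.30 or later).
uniform usampler2DArray uTex;

float filterR16UI(vec2 uv, int layer)
{
    ivec2 size = textureSize(uTex, 0).xy;

    // Sample position in texel space, shifted so texel centers sit at integers.
    vec2  pos = uv * vec2(size) - 0.5;
    ivec2 p0  = ivec2(floor(pos));
    vec2  w   = pos - vec2(p0);

    // Clamp so the 2x2 footprint stays inside the texture.
    ivec2 pMax = size - 1;
    ivec2 a = clamp(p0,               ivec2(0), pMax);
    ivec2 b = clamp(p0 + ivec2(1, 0), ivec2(0), pMax);
    ivec2 c = clamp(p0 + ivec2(0, 1), ivec2(0), pMax);
    ivec2 d = clamp(p0 + ivec2(1, 1), ivec2(0), pMax);

    // texelFetch is the way to read a pure integer format; convert to float
    // before interpolating (this is where you decide what happens with the low bit).
    float t00 = float(texelFetch(uTex, ivec3(a, layer), 0).r);
    float t10 = float(texelFetch(uTex, ivec3(b, layer), 0).r);
    float t01 = float(texelFetch(uTex, ivec3(c, layer), 0).r);
    float t11 = float(texelFetch(uTex, ivec3(d, layer), 0).r);

    return mix(mix(t00, t10, w.x), mix(t01, t11, w.x), w.y);
}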

Thank you very much for the answer, arekkusu!

That’s exactly what I was afraid of. I chose the GL_R16UI format because it gives me the best precision/size ratio: GL_R32F is twice the size of GL_R16UI, while GL_R16F doesn’t have enough precision. I’ll try filtering in the shader (VS), but I expect the performance won’t be great: it means multiple texture reads, extra uniforms to precisely define the extent and boundaries of the texels (for each layer of the array), and the math for the offsets and the linear interpolation. Texture filtering is something we normally get for free.
:doh:

With only one component in your texture you could use textureGather (ARB_texture_gather) to fetch all 4 values with a single lookup.
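
A hedged sketch of how that could look, assuming GLSL 4.00 (where textureGather on array samplers is core) and a usampler2DArray; the names and surrounding setup are illustrative only:

// textureGather returns the red components of the same 2x2 footprint that
// GL_LINEAR would use, in a single lookup.
uniform usampler2DArray uTex;

float gatherAndFilter(vec2 uv, float layer)
{
    vec2 size = vec2(textureSize(uTex, 0).xy);
    vec2 w    = fract(uv * size - 0.5);   // sub-texel weights for that footprint

    uvec4 q = textureGather(uTex, vec3(uv, layer));

    // q.w/q.z form the bottom row, q.x/q.y the top row.
    float bottom = mix(float(q.w), float(q.z), w.x);
    float top    = mix(float(q.x), float(q.y), w.x);
    return mix(bottom, top, w.y);
}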

How about simply using GL_R16 (UNORM)?

Thank you for the suggestion, Osbios!

Unfortunately, GL_ARB_texture_gather is not supported in GL 3.x, or, to be more precise, in the NV Cg gp4vp profile.
That significantly limits its usefulness, but I’ll certainly try it.

Thanks for the suggestion, but glTexImage3D() simply refuses to accept GL_R16 as the internal format if I provide integer source data.
Here is the corresponding code segment:


m_nInternalFormat    = GL_R16UI;           // or GL_R16I
m_nDataFormat        = GL_RED_INTEGER;     // pixel-transfer format of the source data
m_nDataType          = GL_UNSIGNED_SHORT;
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, m_nInternalFormat,
             width, height, m_nLevelsNo, 0,
             m_nDataFormat, m_nDataType, NULL);

Maybe there is something I have overlooked.

Thank you very much for the help!

GL_R16 isn’t integer, it’s UNORM. Just like GL_R8, or GL_RGBA8.

So, you’re over-complicating things:
glTexImage3D(…GL_R16… GL_RED, GL_UNSIGNED_SHORT…)

(and use a sampler2DArray, not a usampler2DArray).
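
If you still need the original 16-bit values in the shader, they are easy to recover from the normalized result; a small sketch (the sampler and function names are illustrative, and the 65535.0 scale follows from the UNORM16 encoding):

// GL_R16 is returned as a normalized float in [0, 1], with the hardware doing
// the GL_LINEAR filtering for free; scale by 65535.0 to get back to 0..65535.
uniform sampler2DArray uTex;

float sampleValue(vec2 uv, float layer)
{
    return texture(uTex, vec3(uv, layer)).r * 65535.0;
}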

Thank you, arekkusu, a million times! :slight_smile:
This works excellently!