Question about FBOs and 16-bit RGB half-float textures

Hello,

I’m unable to create a 16-bit float texture with RGB channels for an FBO.

This attempt fails with GL_FRAMEBUFFER_UNSUPPORTED:


 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, dim[0], dim[1], 0, GL_RGB, GL_HALF_FLOAT, NULL);

while this one passes:


 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, dim[0], dim[1], 0, GL_RGBA, GL_HALF_FLOAT, NULL);

GL_ARB_half_float_pixel and GL_ARB_texture_float are supported on my Haswell GPU. I can’t find a reason why I shouldn’t be able to create a half-float texture with three channels.
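For context, this is roughly how I attach the texture (a stripped-down sketch; tex, fbo and dim stand in for my actual variables):

 GLuint tex, fbo;

 glGenTextures(1, &tex);
 glBindTexture(GL_TEXTURE_2D, tex);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
 /* The failing allocation; swapping in GL_RGBA16F/GL_RGBA makes it pass. */
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, dim[0], dim[1], 0,
              GL_RGB, GL_HALF_FLOAT, NULL);

 glGenFramebuffers(1, &fbo);
 glBindFramebuffer(GL_FRAMEBUFFER, fbo);
 glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                        GL_TEXTURE_2D, tex, 0);

 /* Returns GL_FRAMEBUFFER_UNSUPPORTED for the RGB16F variant. */
 GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);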

Any ideas?

Regards
Saski

My guess would be that 16-bit RGB (6 bytes per texel) isn’t 4-byte aligned. Textures used as render targets are more restricted in their storage formats than general textures. I’ve found that Intel GPUs generally require RGBA render targets.

That might be a clue:
16-bit RGB = 6 bytes per texel = not a power of two, while
16-bit RGBA = 8 bytes per texel = a power of two.

If that’s true, then I’d be wasting 2 bytes per texel. That doesn’t sound like much, but when you do deferred shading you fight to save every byte to make the render buffer smaller. :frowning:

But why does GL_RGB with GL_UNSIGNED_BYTE work for glTexImage2D(GL_TEXTURE_2D, …)? Does the driver implicitly store four bytes per texel instead of three to meet the alignment requirement?
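If there is a way to check, maybe querying the component sizes after the upload would reveal it. An untested sketch (note drivers may report the logical internal format rather than the physical layout):

 /* After glTexImage2D(..., GL_RGB, GL_UNSIGNED_BYTE, ...): query the bit
    depth actually chosen per component. A non-zero alpha size would hint
    that RGB was padded to RGBA internally. */
 GLint red_bits, green_bits, blue_bits, alpha_bits;
 glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &red_bits);
 glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &green_bits);
 glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &blue_bits);
 glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &alpha_bits);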

Is there a smarter way to pack a half-float 3D normal vector into a floating-point texture?

Think about additional data you could use the fourth channel for. :wink:
Maybe some shininess parameter or something?
As a very tricky alternative, you could store just two components of your normal, assuming the third one points towards the viewer (back faces are normally hidden anyway), so later on it can be reconstructed as sqrt(1.0 - x*x - y*y). Not sure about that, but you may be the first one to try it out! :slight_smile: (See the sketch after these alternatives.)
Another alternative: store the third component in a different texture (maybe you have another three-channel texture waiting for something to pad its unused channel?).
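If you try the two-component idea, the reconstruction could look roughly like this (untested sketch; assumes unit-length view-space normals with non-negative z):

 #include <math.h>

 /* Reconstruct the z component of a unit normal from its stored x and y,
    assuming z points towards the viewer (front faces only). */
 static float reconstruct_normal_z(float x, float y)
 {
     float zz = 1.0f - x * x - y * y;
     /* Half-float quantization can push zz slightly below zero. */
     return zz > 0.0f ? sqrtf(zz) : 0.0f;
 }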

Since you are not actually giving GL any data to load into the surface, you can create an RGB16F surface in two ways:

  1. Pass in GL_RGBA for the format parameter and GL_UNSIGNED_BYTE for the type. I don’t know whether this violates the spec, but it has always worked for me on Intel, AMD and NVIDIA without any GL error being triggered or any warning coming through the debug callback. The last three parameters of this function (format, type, data) all describe the data you pass in and only tell GL how to read it. It never made sense to me that the spec seems to impose requirements on these parameters when you pass in nullptr, since you are passing no data at all. I feel this view is somewhat validated by the second option.

  2. Use glTexStorage2D (GL 4.2+), which has none of these shenanigans. Both options are sketched below.
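A quick sketch of both options (width and height are placeholders):

 /* Option 1: glTexImage2D with a null payload. format/type describe the
    (absent) client data, so pass a combination the driver accepts. */
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0,
              GL_RGBA, GL_UNSIGNED_BYTE, NULL);

 /* Option 2: glTexStorage2D (GL 4.2+) allocates immutable storage and
    takes no format/type/data parameters at all. */
 glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGB16F, width, height);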

I’ve used RGB16F render targets extensively without any problems specific to this format. However, I’ve observed no performance difference from RGBA16F (eyeballed, no benchmarks), so it’s possible that internally the drivers just allocate RGBA16F regardless.
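If you want to test the padding suspicion, the internalformat query from GL 4.3 / ARB_internalformat_query2 may tell you what the driver prefers (I haven’t compared its answer against actual memory use):

 /* Ask the driver which internal format it would actually prefer when
    asked for GL_RGB16F. Some drivers report GL_RGBA16F here, which would
    support the padding theory. */
 GLint preferred = 0;
 glGetInternalformativ(GL_TEXTURE_2D, GL_RGB16F,
                       GL_INTERNALFORMAT_PREFERRED, 1, &preferred);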