glCompressedTexImage2D seems to cause a segfault.

Hello everyone,

My main program is a VBO based fixed function OpenGL application.

When I use glCompressedTexImage2D, everything works perfectly.

However, I am also working on a tool that uses immediate mode, and its texture loader uses identical code, including the glCompressedTexImage2D call.

-And this code segfaults when it reaches the glCompressedTexImage2D line.

I am totally stumped. There is no difference except that one program uses VBOs/FBOs and this one is a fairly plain immediate-mode program.

To help you help me, I have provided a complete code example. This sample crashes for me and I don't know why.

To compile you will need:

[ul]
[li]GLI headers for the compressed texture loader from g-truc.[/li]
[li]GLFW 2.7 (or thereabouts) for windowing.[/li]
[/ul]

On Mac:

/usr/bin/g++ crash.cpp -o crash -lglfw -framework Cocoa -framework OpenGL -L/usr/lib/libglfw.a -lGLU -lGL -lGLEW -pthread -lboost_thread -lm -L/usr/local/lib -L/usr/lib/ -I/usr/include -g

On Linux:

/usr/bin/g++ crash.cpp -o crash -lglfw -lGL -lX11 -L/usr/lib/libglfw.a -lGLU -lGL -lGLEW -pthread -lboost_thread -lm -L/usr/local/lib -L/usr/lib/ -g

// Example where glCompressedTexImage2D causes a segfault.


#include <iostream>
#include <string>
#include <unistd.h>     // for sleep()
#include <GL/glew.h>
#include <GL/glfw.h>


#include <gli/gli.hpp>
#include <gli/gtx/loader.hpp>


using namespace std;


int xres = 0;
int yres = 0;


GLuint ta = 0;


void refresh()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();


    float ratio = 1.0 * xres / yres;
    gluPerspective( 45, ratio, 0.1, 4000 );


    gluLookAt( 0.0, 0.0, -5.0,        // camera location
               0.0, 0.0, 0.0,         // looking at
               1.0, -1.0, 0.0 );      // up vector


    glMatrixMode(GL_MODELVIEW);


    glViewport( 0, 0, xres, yres );


    glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
    glClearDepth( 1.0f );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );


    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, ta);


    glBegin(GL_QUADS);
        glColor3f ( 0.0, 1.0, 0.0 );
        // Each glTexCoord2f must be issued before the glVertex3f it applies to.
        glTexCoord2f( 0, 0 );        glVertex3f( 0, 0, 0 );
        glTexCoord2f( 1, 0 );        glVertex3f( 2, 0, 0 );
        glTexCoord2f( 1, 1 );        glVertex3f( 2, 2, 0 );
        glTexCoord2f( 0, 1 );        glVertex3f( 0, 2, 0 );
    glEnd();


    glfwSwapBuffers();
}


void window ( string name, int xsize, int ysize )
{
    xres = xsize;
    yres = ysize;


    glfwInit();


    glfwOpenWindowHint( GLFW_WINDOW_NO_RESIZE, 1 );


    int ok = glfwOpenWindow(
        xres, yres,          // Width and height of window
        8, 8, 8,           // Number of red, green, and blue bits for color buffer
        8,                 // Number of bits for alpha buffer
        24,                // Number of bits for depth buffer (Z-buffer)
        0,                 // Number of bits for stencil buffer
        GLFW_WINDOW        // We want a desktop window (could be GLFW_FULLSCREEN)
    );


    glfwSwapInterval( 1 );


    glfwSetWindowTitle( name.c_str() );


    glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
    glClearDepth( 1.0f );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );


    glEnable( GL_DEPTH_TEST );
    glDepthFunc( GL_LEQUAL );


    glDisable(GL_DITHER);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);


    glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST);
    glClear(GL_ACCUM_BUFFER_BIT);
    glEnable(GL_LINE_SMOOTH);        // Enable Antialiased lines


    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}


void loadtexture ( string filename )
{
    glGenTextures(1, &ta);


    glBindTexture(GL_TEXTURE_2D, ta);


    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);


    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );


    cout<<"loading filename: "<<filename<<"
";


    gli::texture2D T = gli::load(filename);


    cout<<"image dimensions = "<<T[0].dimensions().x<<" "<<T[0].dimensions().y<<"
";


    // glCompressedTexImage2D crashes for some reason..


    cout<<"about to submit compressed texture
";


    glCompressedTexImage2D( GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, GLsizei(T[0].dimensions().x), GLsizei(T[0].dimensions().y), 0, GLsizei(T[0].capacity()), T[0].data());


    cout<<"Submitted ok.
";


    glEnable(GL_TEXTURE_2D);
}


int main ( )
{
    window ( "crash example", 1024, 768 );


    loadtexture ( "image.dds" );


    refresh();


    sleep (5);


    cout<<"This did not crash.
";


    glDeleteTextures( 1, &ta );
    glfwTerminate();
    exit( EXIT_SUCCESS );
}

[QUOTE=kingc8;1247797]However, I was working on a tool which uses immediate mode and the texture loader uses identical code, including the glCompressedTexImage2D.

-And this code segfaults when it runs the glCompressedTexImage2D line.[/quote]

Is this your problem?

Use glTexImage2D to allocate compressed textures.

I saw that thread as I was hunting for answers. I dismissed it because I am not allocating NULL data. I am allocating with data retrieved with a GLI DDS image decoder.

As I said, the texture loading portion works perfectly in my other program (I literally copied it across to this example); the only difference there is that I use VBOs to hold geometry, but I don't think that has anything to do with it.

Why is it suddenly segfaulting under this particular setup?

I think there is something wrong with my state setup that is causing the crash. Everything looks legit, at least to me.

I have submitted code for you to execute to see if you find the same result. -If it works for you, then something is broken on my end.

Thanks Dark Photon, I know you know your stuff and I really appreciate your time and input.

To quote the 4.3 specs:

If the data argument of CompressedTexImage1D, CompressedTexImage2D, or CompressedTexImage3D is NULL, and the pixel unpack buffer object is zero, a texel array with unspecified image contents is created, just as when a NULL pointer is passed to TexImage1D, TexImage2D, or TexImage3D.

@kingc8: I don't know what your actual bug is, but it pretty much sounds as if the buffer you provide to GL is not as big as it should be. What are the dimensions of your image?
What does T[0].capacity() return? Can you see the image data at T[0].data() in your debugger?
Is the image file maybe corrupted?
Another source of problems might be the GL_UNPACK_* pixel store settings. The driver ought to ignore them, but maybe it does not?! Try setting them back to their defaults.
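
One quick sanity check you could add (a minimal sketch of my own, not part of GLI beyond the calls you already make): DXT5/BC3 stores 4x4 texel blocks at 16 bytes each, so the byte size the format implies can be compared against what GLI reports before the upload.

// Expected byte size of one DXT5/BC3 mip level: ceil(w/4) * ceil(h/4) * 16.
// For a 256x256 image that is 64 * 64 * 16 = 65536 bytes.
GLsizei expectedDxt5Size( GLsizei w, GLsizei h )
{
    return ((w + 3) / 4) * ((h + 3) / 4) * 16;
}

// Usage inside loadtexture(), just before glCompressedTexImage2D:
GLsizei w = GLsizei(T[0].dimensions().x);
GLsizei h = GLsizei(T[0].dimensions().y);
if ( GLsizei(T[0].capacity()) != expectedDxt5Size(w, h) )
    cout << "warning: capacity " << T[0].capacity()
         << " does not match expected DXT5 size " << expectedDxt5Size(w, h) << "\n";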

I have tested with both a 2048x2048 DXT5 image and a smaller 256x256 DXT5 image.

T[0].dimensions().x = 256
T[0].dimensions().y = 256
T[0].capacity() = 65536
sizeof(T[0].data()) = 8

I know the data is good; it works in the larger VBO version of this code (it's a bigger project).

I commented out the GL_UNPACK pixel store settings, but this made no difference.

Thanks.

That appears to be the first non-OpenGL-1.1 call made (glCompressedTexImage2D requires OpenGL 1.3), so have you (or your framework) retrieved the OpenGL function pointers for the context correctly, if the platform requires it?

Indeed, it does appear to be the first non-OpenGL-1.1 call. But if you start using newer OpenGL calls, do you need to hint or declare that to OpenGL explicitly?

I always assumed that before The Deprecation, all OpenGL calls (barring pre-requisite calls) were equal and you could mix and match them as needed.

I was already thinking that, but scanning through my other listing nothing jumps out at me as specifying that the “context” needs to be newer.

I have no idea what GLFW is doing behind the scenes, but as far as I understand it’s only providing the window calls and inputs and whatnot. The OpenGL calls are direct OpenGL calls.

Before OpenGL 3.x, do you need to tell OpenGL what set of commands you’re going to be working with?

See the Load OpenGL Functions and OpenGL Loading Library pages in the OpenGL wiki. It basically depends on the platform which functions can be linked statically and which must be retrieved dynamically with wglGetProcAddress / glXGetProcAddress etc. Looking at the section on extension support in http://www.glfw.org/GLFWUsersGuide277.pdf, GLFW also provides glfwGetProcAddress.

Windows only exposes OpenGL 1.1, so OpenGL 1.2+ entry points must be retrieved with wglGetProcAddress. wglGetProcAddress doesn't return OpenGL 1.1 functions, which is a pain, so you need to be aware of which core functions were available in OpenGL 1.1 and which were introduced later.

For Linux, according to the OpenGL Application Binary Interface for Linux from Khronos:

"The libraries must export all OpenGL 1.2, GLU 1.3, GLX 1.3, and ARB_multitexture entry points statically."

"Applications should not expect to link statically against any entry points not specified here."

So anything above OpenGL 1.2 on Linux will need to be retrieved with glXGetProcAddress.

No idea about Mac, but it’s probably a similar version of OpenGL that you need to retrieve functions for.
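
Just to make the mechanism concrete, retrieving one of these entry points by hand looks roughly like this (a sketch only; GLEW or another loading library does this for every function, and the typedef name below is my own, not an official one):

// Fetch glCompressedTexImage2D at runtime through GLFW's wrapper around
// wglGetProcAddress / glXGetProcAddress.
typedef void (APIENTRY *CompressedTexImage2DFunc)( GLenum target, GLint level,
    GLenum internalformat, GLsizei width, GLsizei height, GLint border,
    GLsizei imageSize, const GLvoid *data );

CompressedTexImage2DFunc myCompressedTexImage2D =
    (CompressedTexImage2DFunc) glfwGetProcAddress( "glCompressedTexImage2D" );

if ( myCompressedTexImage2D == NULL )
    cout << "glCompressedTexImage2D is not available in this context\n";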

Darn. I guess I need to file a bug with the good people behind GLFW, unless I'm misusing their library, but I guess that must be it.

Thanks to Dan and fellow contributors.

GLFW has nothing to do with loading OpenGL functions; that's the job of an OpenGL loading library, as previously stated. Your problem is that, while you #include <GL/glew.h>, you never initialize GLEW. And until you do, it can't load anything.

glfwInit();


glfwOpenWindowHint( GLFW_WINDOW_NO_RESIZE, 1 );


int ok = glfwOpenWindow(
...
);


glewInit();
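
If you want the failure to be loud rather than a mystery segfault, you can also check the return value (a small sketch; glewGetErrorString is part of GLEW):

GLenum err = glewInit();
if ( err != GLEW_OK )
{
    // GLEW could not set up the function pointers for this context.
    cout << "glewInit failed: " << glewGetErrorString(err) << "\n";
    exit( EXIT_FAILURE );
}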

Yes, this was my fault. Everything is wonderful.

Thanks Alfonse.