Loading a bitmap image to use it as a texture / background on canvas for drawing

Greetings all!

I intended to write some code that loads a 1024x512 bitmap image of the Earth's surface into a texture and displays it in a 1024x512 FreeGLUT window. I would then use this image as a background to draw simple primitives on. I am using the SOIL image-loading library to load the bitmap. Instead of getting a window with the Earth surface image, I get a 1024x512 window with a black background.


#include "SOIL.h"

#include "GL/glew.h" 
#include "GL/glut.h"

GLuint texture; 


void display (void) 
{
    glClearColor (0.0,0.0,0.0,1.0);
    glClear (GL_COLOR_BUFFER_BIT);
    glLoadIdentity();

    glEnable( GL_TEXTURE_2D );

    glBindTexture(GL_TEXTURE_2D, texture);

    glBegin (GL_QUADS);
    glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
    glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
    glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
    glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
    glEnd();

    glutSwapBuffers();
}


void reshape (int w, int h) 
{
    glViewport (0, 0, (GLsizei)w, (GLsizei)h);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    gluPerspective (60, (GLfloat)w / (GLfloat)h, 1.0, 100.0);
    glMatrixMode (GL_MODELVIEW);
}


int main (int argc, char **argv) 
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize (1024, 512);
    glutInitWindowPosition (100, 100);
    glutCreateWindow ("A basic OpenGL Window");
    glutDisplayFunc (display);
    glutIdleFunc (display);
    glutReshapeFunc (reshape);

    glEnable( GL_TEXTURE_2D );

    texture = SOIL_load_OGL_texture
	(
		"earth.bmp",
		SOIL_LOAD_AUTO,
		SOIL_CREATE_NEW_ID,
		SOIL_FLAG_MIPMAPS | SOIL_FLAG_INVERT_Y | SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_COMPRESS_TO_DXT
	);
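    // (note: SOIL_load_OGL_texture returns 0 on failure;
    // SOIL_last_result() then describes what went wrong)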

    glutMainLoop ();

    return 0;
}

What did I do wrong, or what did I forget to include in the program? Also, since you know what I intend to do and use the image for, please let me know if my approach to the problem is wrong, or if there is a more optimal way of doing it. Thank you very much.

Kind regards,
T

Note: I have edited my question to include the new code that I tried. I will leave the old code below to serve as “topic history”.



#include <vector>
#include <fstream>

#include "GL/glew.h" // Include the GLEW header file
#include "GL/glut.h" // Include the GLUT header file

#include <cstdint>


using namespace std;


GLuint texture; // the handle for our texture

GLfloat angle = 0.0;

int width = 0;
int height = 0;
short BitsPerPixel = 0;
std::vector<unsigned char> Pixels;

GLuint LoadTexture( const char * filename ); // definition not included in the original post


void FreeTexture( GLuint texture )
{
  glDeleteTextures( 1, &texture );
}


void display (void) {
    
    glClearColor (0.0,0.0,0.0,1.0);
    glClear (GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glEnable( GL_TEXTURE_2D );

    glBindTexture( GL_TEXTURE_2D, texture );
    
    glBegin (GL_QUADS);
    glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
    glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
    glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
    glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
    glEnd();

    glutSwapBuffers();

}
void reshape (int w, int h) {
    glViewport (0, 0, (GLsizei)w, (GLsizei)h);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    gluPerspective (60, (GLfloat)w / (GLfloat)h, 1.0, 100.0);
    glMatrixMode (GL_MODELVIEW);
}

int main (int argc, char **argv) {
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_DOUBLE);
    glutInitWindowSize (1024, 512);
    glutInitWindowPosition (100, 100);
    glutCreateWindow ("A basic OpenGL Window");
    glutDisplayFunc (display);
    glutIdleFunc (display);
    glutReshapeFunc (reshape);


    //Load our texture
    GLuint texture;
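    // note: this local 'texture' shadows the global one that display() reads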
    texture= LoadTexture( "earth.bmp" );

    glutMainLoop ();

    //Free our texture
    FreeTexture( texture );

    return 0;
}

You need to render your background with an orthographic projection matrix instead of the perspective one you are currently using. glVertex2d has very little to do with two-dimensional rendering: it just implicitly sets the z coordinate to 0 and the w coordinate to 1, and says that you want to pass in double-precision floating-point values. Otherwise the vertex data still goes through the normal transformation steps, and that includes applying the projection matrix.
You also probably want to clear the depth buffer when clearing the color buffer.
As far as doing things more optimally: you may want to read up on ‘modern’ OpenGL (see the tutorials on the OpenGL wiki), where buffer objects are used to avoid the CPU-intensive immediate-mode drawing; a rough sketch follows below. However, for your program this is not really going to matter much, since there is not a lot happening so far :wink:
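To give you an idea what that looks like, here is a rough, untested sketch of drawing the same textured quad through a buffer object (it assumes glewInit() was called after glutCreateWindow, and createQuadVbo/drawQuadVbo are names I just made up; the fixed-function texturing stays as-is):


GLuint quadVbo = 0;

void createQuadVbo()
{
    // interleaved data: x, y, u, v for the four corners of the 1024x512 quad
    const GLfloat quad[] = {
        //    x       y     u     v
           0.0f,   0.0f, 0.0f, 0.0f,
        1024.0f,   0.0f, 1.0f, 0.0f,
        1024.0f, 512.0f, 1.0f, 1.0f,
           0.0f, 512.0f, 0.0f, 1.0f,
    };

    // upload the vertex data to a buffer object once, at startup
    glGenBuffers(1, &quadVbo);
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

void drawQuadVbo()
{
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // 4 floats per vertex: position first, texcoords 2 floats in
    glVertexPointer  (2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)0);
    glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));

    glDrawArrays(GL_QUADS, 0, 4);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}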

Thank you for the reply. Some follow-up questions, since I didn’t manage to make it work :confused:
Where should I put an orthographic projection matrix instead of the perspective one? There is only one perspective matrix, and it is in the reshape() function, which may not be called at all if the window size isn’t changing.
And I have changed “glClear (GL_COLOR_BUFFER_BIT);” to “glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);”.

Can you please write the corrected code for the part with the projection matrix that I am not doing right? Thank you.
Regards, T

Look at it like this: there are some parts of your scene that you want to render using an ortho projection and some that you want rendered with perspective projection. You’ll have to switch from one to the other in your display function:


float aspect = 1.f;

void display()
{
    // set up ortho projection
    // render background

    // set up perspective projection, making use of the stored aspect ratio
    // render foreground
}

void reshape(int w, int h)
{
    aspect = static_cast<float>(w) / h;

    // set viewport
}

Glut calls the reshape function at least once before calling display for the first time.

Thank you. I have managed to make the texture appear on the background properly. Now I would like to render a blue square on it (just as a demonstration, to learn how to draw on the background). I tried to follow the simple example from the primitives-rendering sample code, but it doesn’t function properly. I understand the problem must be in the projections, but I have no idea how to solve it. My display function now looks like this:


void display (void) {

    glClearColor (0.0,0.0,0.0,1.0);
    glClear (GL_COLOR_BUFFER_BIT);

    // background render

    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    glEnable( GL_TEXTURE_2D );

    glBindTexture( GL_TEXTURE_2D, texture );

    glBegin (GL_QUADS);
    glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
    glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
    glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
    glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
    glEnd();

    // foreground render - added code, not working
    
    glMatrixMode(GL_MODELVIEW); 
    glLoadIdentity(); 

    glColor3f(0.0f, 0.0f, 1.0f);

    glBegin (GL_QUADS);
    glVertex2d(500.0,400.0);
    glVertex2d(500.0,500.0);
    glVertex2d(600.0,400.0);
    glVertex2d(600.0,500.0);
    glEnd();

    glutSwapBuffers();
}

I also understand now why you pointed out the reshape function, since reshaping doesn’t work; on a window resize I get a black window.
What did you have in mind when you wrote “static_cast<float>(w) / h;”? Isn’t that equivalent to the “(GLfloat)w / (GLfloat)h” that I have in gluPerspective? I am not sure what you mean by it, and how I should modify reshape =/

You should be more careful to put projection-related matrices on the projection matrix stack, e.g. call glMatrixMode(GL_PROJECTION) before calling glOrtho(). Currently this all just works by accident: you put the ortho projection on the modelview matrix stack and have an identity projection, which in this case happens to result in the same modelview-projection matrix being used for rendering, but it is very fragile.
(And yes, static_cast<float>(w) / h is equivalent to (GLfloat)w / (GLfloat)h; both just ensure the division is done in floating point instead of as integer division.)


int winWidth  = 0;
int winHeight = 0;

void display()
{
    // clear

    // orthographic projection for background
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);

    // draw background

    // perspective projection for foreground
    glLoadIdentity();
    gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);

    glMatrixMode(GL_MODELVIEW);
    
    // draw foreground
}

The variables winWidth and winHeight are assigned in the reshape function, because you need those sizes to calculate the aspect ratio for the perspective projection. Alternatively, you can calculate the aspect ratio in reshape and store that, but usually, as a program grows, there are a few additional places where you need the window size.
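For completeness, the matching reshape could look something like this (just a sketch):


void reshape(int w, int h)
{
    // remember the window size so display() can compute the aspect ratio
    winWidth  = w;
    winHeight = h;

    glViewport(0, 0, (GLsizei)w, (GLsizei)h);
}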
I would suggest you read a tutorial on how the various transformations of the OpenGL pipeline work to get a better understanding what the various steps are good for.

Actually, I forgot one other important aspect: you need to disable depth buffer writes when rendering the background (or clear the depth buffer afterwards), otherwise the background might obscure foreground objects.


void display()
{
    // clear color/depth buffer
 
    // orthographic projection for background
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
 
    // disable depth writes
    glDepthMask(GL_FALSE);

    // draw background
 
    // re-enable depth writes
    glDepthMask(GL_TRUE);

    // perspective projection for foreground
    glLoadIdentity();
    gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
 
    glMatrixMode(GL_MODELVIEW);
 
    // draw foreground
}


Carsten Neumann - yes, I was searching the internet and found out that I should pay attention to clearing the depth buffer. And about what you said earlier, that I should read a text on transformations and the pipeline… you are absolutely right, and I know very well that I should. It just so happens that right now I am in a situation where I need to produce a result. But certainly, if I want to know the subject well and do more work in the future, I will have to read up on it when I get the chance.

The current state of my code is:


void display (void) {

//    if these lines are still here, I get a black screen
//    glClearColor (0.0,0.0,0.0,1.0); 
//    glClear (GL_COLOR_BUFFER_BIT);

    // background render

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);

    glEnable( GL_TEXTURE_2D ); 
    glBindTexture( GL_TEXTURE_2D, texture );

    glBegin (GL_QUADS);
    glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
    glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
    glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
    glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
    glEnd();

    glDisable(GL_TEXTURE_2D);


    // foreground render

    // re-enable depth writes
    glDepthMask(GL_TRUE);

    glLoadIdentity();
    //gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 0.0, 1.0);
    glMatrixMode(GL_MODELVIEW);

    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);

    glColor3f(0.0, 0.0, 1.0);

    glBegin (GL_QUADS);
    glVertex2d(400.0,100.0);
    glVertex2d(400.0,300.0);
    glVertex2d(700.0,100.0);
    glVertex2d(700.0,300.0);
    glEnd();

    glutSwapBuffers();
}

What I get now is a flashing image on the screen. And the quad actually looks more like a ribbon (screenshot attached). The background seems to be displayed in the proper colours, though. Why do I get the flashing screen and the incorrectly drawn quad? Thank you.
Regards, T

Edit: I also tried to add your new code that wasn’t present here before, and the result is the same. =/
Also, if I include this line ( gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 0.0, 1.0); ), the “quad” isn’t drawn at all.

Your “ribbon” quad’s vertices are not listed in order around the quad (e.g. counterclockwise), which is why it displays as a ribbon:


glBegin (GL_QUADS);
glVertex2d(400.0,100.0);
glVertex2d(700.0,100.0);
glVertex2d(700.0,300.0);
glVertex2d(400.0,300.0);
glEnd();

I’m still working on the flashing aspect.

EDIT: You are disabling depth writes for the background, but you are still testing the background against the depth buffer, and the depth buffer is never cleared. That’s why you get a black screen: you are clearing the screen to black and nothing else is passing the depth test. By default the depth test used is GL_LESS, so if you render once, you will not see anything from subsequent draws. (As for your gluPerspective line: its zNear parameter must be strictly positive, so passing 0.0 produces a degenerate projection matrix, which is why nothing is drawn with that line.)

I’d try:


void display (void) {
 
//    keep these lines, but clear the depth buffer as well as the color buffer
    glClearColor (0.0,0.0,0.0,1.0); 
    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
 
    // background render
 
    glDisable( GL_DEPTH_TEST ); // !!! no depth testing for the background
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
 
    glEnable( GL_TEXTURE_2D ); // TODO: figure out where to put this
    glBindTexture( GL_TEXTURE_2D, texture );
 
    glBegin (GL_QUADS);
    glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
    glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
    glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
    glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
    glEnd();
 
    glDisable(GL_TEXTURE_2D);
 
 
    // foreground render
 
    // re-enable depth writes and testing
    glEnable( GL_DEPTH_TEST );
    glDepthMask(GL_TRUE);
 
    glLoadIdentity();
    //gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 0.0, 1.0);
    glMatrixMode(GL_PROJECTION); // changed this here to be consistent w/ conventions
 
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
 
    glColor3f(0.0, 0.0, 1.0);
 
    glBegin (GL_QUADS);
    glVertex2d(400.0,100.0);
    glVertex2d(400.0,300.0);
    glVertex2d(700.0,100.0);
    glVertex2d(700.0,300.0);
    glEnd();
 
    glutSwapBuffers();
}

MtRoad - thank you for your explanations and code. I have tried your code and I still get the “ribbon”. Also, now it’s back to the problem I had in the beginning, where the whole image consists only of black and blue hues. The flashing is gone, however.

My bad. Use the first code snippet to fix the ribbon. I must have forgotten to change it in the second.

Sorry I missed it earlier. A bound texture is multiplied by the current color when texturing is performed (with the default GL_MODULATE texture environment). Add glColor3f(1,1,1) before you draw the texture.


    glEnable( GL_TEXTURE_2D ); 
    glBindTexture( GL_TEXTURE_2D, texture );
 
    glColor3f(1.0f,1.0f,1.0f); // HERE!
    glBegin (GL_QUADS);
    glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
    glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
    glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
    glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
    glEnd();
 
    glDisable(GL_TEXTURE_2D);

MtRoad, now it works properly, thank you very much for your effort. And I should’ve paid closer attention to the “ribbon” vertices as well, before just copying your code :slight_smile: It is maybe a little off-topic now, but while we are at it, I may get another thing cleared up. The OpenGL (x,y) coordinates for drawing are not like the ones we are used to from mathematics lessons. Is there a built-in function, or maybe a simple method that can be written, that transforms the input x-y coordinates so that they can be passed the way we are used to from math lessons and still be drawn properly on the screen?

I’m not sure I understand your question, so I’ll just explain what your code does.

Vertices go through a series of transforms before they are drawn on screen. You are using the older “fixed-function” pipeline of OpenGL here, which performs several of these steps for you. Even with vertex/geometry/fragment shaders, several steps happen regardless.

Fixed-Function Pipeline: Vertices get “transformed” (multiplied) by matrices with 4 columns and 4 rows. Vertices first get multiplied by the ModelView matrix, which combines the model matrix and a viewing (camera) matrix into one. The model matrix transforms from model space to world space. The view matrix transforms from world space to eye (camera) space. The projection matrix projects from eye space into “normalized device coordinates”, which in OpenGL form a cube from (-1,-1,-1) to (1,1,1), also called the “canonical view volume”. Everything outside the canonical view volume gets “clipped”; the z coordinates are then dropped and the rest is mapped to the viewport you provide. So there isn’t a single function to use; it’s the combination of all of these that produces the output.
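In code form, the whole chain boils down to something like this conceptual sketch (not real driver code; Mat4 and Vec4 are stand-in helper types):


// conceptual sketch of the per-vertex fixed-function transform
Vec4 transformVertex(const Mat4& projection, const Mat4& modelview,
                     const Vec4& vertex)       // object-space position, w = 1
{
    Vec4 eye  = modelview  * vertex;           // object space -> eye space
    Vec4 clip = projection * eye;              // eye space    -> clip space

    // perspective divide: clip space -> normalized device coordinates
    Vec4 ndc(clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f);

    // the viewport transform then maps ndc.x/ndc.y to window pixels
    return ndc;
}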

Here are the functions and what they affect:


glMatrixMode( GL_PROJECTION )
  - all further matrix operations affect the projection matrix

glMatrixMode( GL_MODELVIEW )
  - all further matrix operations affect the modelview matrix

glViewport
  - changes screen coordinate mapping, most code for
    full screen windows or mobile apps handles this for you

glOrtho
  - Creates a projection matrix to map a box with the given dimensions
    to the canonical view volume. Note this is a box, so no perspective
    is generated: we are just scaling one box to another and then
    dropping the z coordinate in the screen mapping.

gluPerspective
  - Creates a projection matrix to map a frustum (a pyramid with the tip
    chopped off) to the canonical view volume. Since the smaller end of the
    frustum is closer to the viewer, stretching the frustum into a cube
    enlarges objects closer to the camera, which we perceive as
    "perspective foreshortening" (close objects get bigger!)

This is why I changed your glMatrixMode parameter for glOrtho to GL_PROJECTION.

Now for the answer to the question I think you are asking: to make vertex (x,y) correspond to pixel (x,y), just call glOrtho with the bounds of your screen. This makes pixels get sampled at locations that map closely to physical pixel locations. I’m doing this from memory, so it might not work exactly as written.


glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho(0, screenWidth, 0, screenHeight, 0, 1);
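Note the argument order: passing bottom before top gives you a y axis that grows upward, like in your math lessons; swapping the two (as the earlier code did) puts the origin at the top-left with y growing downward:


// origin at bottom-left, y grows upward ("math class" convention)
glOrtho(0, screenWidth, 0, screenHeight, 0, 1);

// origin at top-left, y grows downward (what the earlier code used)
glOrtho(0, screenWidth, screenHeight, 0, 0, 1);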

For further reading try:
http://www.realtimerendering.com/ or the OpenGL wiki. The iPhone book is decent (not great), but it has a lot of useful math in it.