Rendering grayscale image from array

Hi, I’m using OpenGL via SharpGL for my work.
I have a 2D integer array that I would like to map onto a cube that I drew.
It is a 176×224 grayscale array.
Here is my code:

gl.Enable(OpenGL.GL_TEXTURE_2D);
gl.ShadeModel(OpenGL.GL_SMOOTH);
gl.Enable(OpenGL.GL_DEPTH_TEST);
gl.DepthFunc(OpenGL.GL_LEQUAL);
gl.Hint(OpenGL.GL_PERSPECTIVE_CORRECTION_HINT, OpenGL.GL_NICEST);

gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, OpenGL.GL_LUMINANCE, 176, 224, 0, OpenGL.GL_LUMINANCE, OpenGL.GL_INT, pixle);

Or should I use gl.DrawArrays or gl.DrawPixels to do this instead?

Can anyone help me?

Thanks!

In general, using a texture is the right approach. However, textures are not drawn directly; instead, they are applied to primitives (most often triangles) as those are rasterized. Your call to glTexImage2D will never draw anything by itself - the purpose of that command is to specify the initial size and contents of a texture object. Before that, you need to obtain a name for a texture object with glGenTextures and bind it to the active texture unit with glBindTexture.
The wiki has links to tutorials, and most of them cover textures at some point, explaining how to create and use textures and how to control the way they are mapped onto primitives.

Thanks for your help!
I have a follow-up question. Here is some code I found:

uint[] texture = new uint[1];
gl.GenTextures(1, texture);
gl.BindTexture(OpenGL.GL_TEXTURE_2D, texture[0]);

If I use code like this, does that mean the object “texture” is my array, so that I have to convert my 2D array into a 1D uint array?

The uint values are only used to identify the texture object, so that your application and OpenGL have a way to specify which texture object to use if you have more than one. They have nothing to do with the contents (the actual image data) stored in the texture object. In fact, after the two calls you’ve posted, the texture object does not have a (non-zero) size or any data contents yet; those will only be set once you call glTexImage2D to supply the image data to be stored in the texture object.
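
To answer the array question concretely: the data you pass to glTexImage2D is a flat buffer, so you flatten your 2D array only for that upload call. Below is a minimal sketch, assuming your grayscale values fit in 0..255 and that SharpGL’s TexImage2D overload taking a byte[] is available; the names data, width and height are placeholders, not from your code.

// The texture object from GenTextures/BindTexture is assumed to be bound already.
int width = 176, height = 224;
byte[] pixels = new byte[width * height];
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        pixels[y * width + x] = (byte)data[x, y];   // flatten 2D -> 1D, one byte per pixel

// 176 is a multiple of 4, so the default unpack row alignment of 4 is fine here.
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, OpenGL.GL_LUMINANCE, width, height, 0,
              OpenGL.GL_LUMINANCE, OpenGL.GL_UNSIGNED_BYTE, pixels);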

A typical sequence of OpenGL commands to create and initialize a texture object looks something like this:


GLuint texId;
glGenTextures(1, &texId);   // reserve an id to be used for a texture object

glBindTexture(GL_TEXTURE_2D, texId);  // select the texture object with the given id to be the one the following commands modify

glTexImage2D(GL_TEXTURE_2D, ...);     // set size and contents for the currently bound texture object

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // select linear filtering for minification/magnification
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);  // select what happens with texture coordinates outside [0,1]
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glBindTexture(GL_TEXTURE_2D, 0);  // (optional) unbind texture object - we are done modifying it

The above should be executed just once (for each texture object) not every frame. To actually use the texture when rendering you’d bind it again and then issue your drawing commands.
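
For example, the per-frame part with SharpGL could look something like this (a sketch assuming the fixed-function pipeline you are already using; texture is the uint[] from the one-time setup, and only one face of the cube is shown):

// Per frame: bind the already-created texture, then draw geometry with texture coordinates.
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
gl.BindTexture(OpenGL.GL_TEXTURE_2D, texture[0]);

gl.Begin(OpenGL.GL_QUADS);                          // one face of the cube as an example
gl.TexCoord(0.0f, 0.0f); gl.Vertex(-1.0f, -1.0f, 1.0f);
gl.TexCoord(1.0f, 0.0f); gl.Vertex( 1.0f, -1.0f, 1.0f);
gl.TexCoord(1.0f, 1.0f); gl.Vertex( 1.0f,  1.0f, 1.0f);
gl.TexCoord(0.0f, 1.0f); gl.Vertex(-1.0f,  1.0f, 1.0f);
gl.End();

Without TexCoord calls (or texture coordinates from some other source) there is no way for the texture to be mapped onto the face.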

I tried the simplest example:


double[] pixel = {
    0,0,0,0,0,0,0,
    0,1,1,0,1,1,0,
    0,1,1,0,1,1,0,
    0,1,1,0,1,1,0,
    0,1,1,0,1,1,0,
    0,1,1,0,1,1,0,
    0,0,0,0,0,0,0
};
Bitmap image;
ArrayToImage conv = new ArrayToImage(7, 7);  // Create the converter that builds a Bitmap from the array
conv.Convert(pixel, out image);              // Store the pixel values in the Bitmap

gl.ClearColor(0, 0, 0, 0);
gl.Enable(OpenGL.GL_TEXTURE_2D);
gl.ShadeModel(OpenGL.GL_SMOOTH);
gl.Enable(OpenGL.GL_DEPTH_TEST);
gl.DepthFunc(OpenGL.GL_LEQUAL);
gl.Hint(OpenGL.GL_PERSPECTIVE_CORRECTION_HINT, OpenGL.GL_NICEST);

gl.GenTextures(1, texture);
gl.BindTexture(OpenGL.GL_TEXTURE_2D, texture[0]);

gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, 1, image.Width, image.Height, 0, OpenGL.GL_LUMINANCE, OpenGL.GL_UNSIGNED_BYTE,
                image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb).Scan0);


gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MIN_FILTER, OpenGL.GL_LINEAR);	// Linear Filtering 
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MAG_FILTER, OpenGL.GL_LINEAR);	// Linear Filtering 
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_WRAP_S, OpenGL.GL_CLAMP_TO_EDGE);
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_WRAP_T, OpenGL.GL_CLAMP_TO_EDGE);

There was an error in the TexImage2D call - how should I change my code?

I’m asking because I can map a bitmap texture onto the cube with this code:


gl.ClearColor(0, 0, 0, 0);
gl.Enable(OpenGL.GL_TEXTURE_2D);
gl.ShadeModel(OpenGL.GL_SMOOTH);
gl.Enable(OpenGL.GL_DEPTH_TEST);
gl.DepthFunc(OpenGL.GL_LEQUAL);
gl.Hint(OpenGL.GL_PERSPECTIVE_CORRECTION_HINT, OpenGL.GL_NICEST);

gl.GenTextures(1, texture);
gl.BindTexture(OpenGL.GL_TEXTURE_2D, texture[0]);

textureImage = new Bitmap("C:\\Users\\Shin Tai\\Desktop\\woodbox.jpg");

gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, 3, textureImage.Width, textureImage.Height, 0, OpenGL.GL_RGB, OpenGL.GL_UNSIGNED_BYTE,
    textureImage.LockBits(new Rectangle(0, 0, textureImage.Width, textureImage.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb).Scan0);

gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MIN_FILTER, OpenGL.GL_LINEAR);	// Linear Filtering 
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MAG_FILTER, OpenGL.GL_LINEAR);	// Linear Filtering 
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_WRAP_S, OpenGL.GL_CLAMP_TO_EDGE);
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_WRAP_T, OpenGL.GL_CLAMP_TO_EDGE);


gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, 1, image.Width, image.Height, 0, OpenGL.GL_LUMINANCE, OpenGL.GL_UNSIGNED_BYTE,
                image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb).Scan0);

Looks to me like you tell OpenGL to expect single-channel (GL_LUMINANCE) image data with 1 byte per pixel (GL_UNSIGNED_BYTE), but request three-channel, 3-bytes-per-pixel data (PixelFormat.Format24bppRgb) from image.
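
One way to make the two sides agree is to keep the 24bpp Bitmap data and describe it to OpenGL as three-channel data, the same way your working wood-box code does. A sketch only; note that GDI+ stores 24bpp pixels in BGR order, which does not matter for a grayscale image, and that LockBits should be paired with UnlockBits:

// Request 24bpp data from the Bitmap and tell OpenGL the same layout.
BitmapData bits = image.LockBits(new Rectangle(0, 0, image.Width, image.Height),
                                 ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, OpenGL.GL_RGB, image.Width, image.Height, 0,
              OpenGL.GL_RGB, OpenGL.GL_UNSIGNED_BYTE, bits.Scan0);   // format now matches the locked data
image.UnlockBits(bits);   // release the locked bitmap memory

The alternative is to skip the Bitmap entirely and upload a flat byte[] of luminance values with GL_LUMINANCE, as sketched earlier in the thread.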

It responds with “Object reference not set to an instance of an object”.
I also tried PixelFormat.Format16bppGrayScale, but that did not work either.

Ok, I suggest you run under a debugger and find out which Object reference is not valid and investigate why that is - are you missing a ‘new Bitmap()’ perhaps?

I also tried PixelFormat.Format16bppGrayScale, but that did not work either.

Uhm, but that still does not match what you tell OpenGL to expect, which is 8 bits per pixel (8bpp)… Anyway, I think you should first solve the invalid object reference; anything after that may well be a follow-on problem caused by it.

You’re right!! There was no error after I added this:

public static Bitmap image2 = new Bitmap(10, 10,PixelFormat.Format24bppRgb);

But the cube that I drew has disappeared… orz

Is it gone or perhaps just black like your background? Sorry, I’m not familiar enough with the C# Bitmap and ArrayToImage APIs to make additional suggestions on what could go wrong when using them to pass data to OpenGL.

Thanks for the guidance.
I found the error - I was calling by value.