one mesh, multiple textures

hi all,
I'm trying to finish my Wavefront loader, but it's not clear to me how to draw multiple textures (that's all I need for now) on the same mesh. Can anyone explain the procedure to follow? (I'm working with forward rendering for now, so if possible I'd like to get everything working there before moving to deferred rendering.)
thanks to all for your patience.

I suspect you are not asking about the technical steps to allow you to sample from two textures in your fragment shader, but just in case:

  • bind texture A to texture unit 0, bind texture B to texture unit 1.
  • have two sampler2D variables in your fragment shader and assign values 0, 1 to them respectively
  • sample from both textures and “combine” the result

The last step is what I suspect you are really asking about, because that is the only thing that is substantially different from the single-texture case. And there is no single answer as to what the correct way is: you will have to decide what the textures represent, e.g. one could be the diffuse material contribution and the other the specular contribution. Once you have answered the question “what do the two textures represent?”, it should become clear what your shader should do with the values you’ve sampled from each.
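For illustration only (uDiffuseMap/uSpecularMap, the varying names and the combine formula are placeholders I'm making up here, not anything prescribed), a fragment shader with two samplers could look roughly like this, with the GLSL source kept as a C string:

[code]
/* Sketch only: names and the combine step are placeholders -- what the
 * textures represent decides the actual math. */
static const char *fragment_src =
    "#version 330 core\n"
    "uniform sampler2D uDiffuseMap;   /* expected on texture unit 0 */\n"
    "uniform sampler2D uSpecularMap;  /* expected on texture unit 1 */\n"
    "in vec2 vTexCoord;\n"
    "out vec4 fragColor;\n"
    "void main()\n"
    "{\n"
    "    vec3 diffuse  = texture(uDiffuseMap,  vTexCoord).rgb;\n"
    "    vec3 specular = texture(uSpecularMap, vTexCoord).rgb;\n"
    "    /* 'combine' however your material model dictates, e.g. add them */\n"
    "    fragColor = vec4(diffuse + specular, 1.0);\n"
    "}\n";
[/code]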

Sorry, I think my question wasn't clear:
If I have a mesh, for example a sphere representing a planet, and the sphere has two textures to draw (could be clouds and water or whatever), how can I draw multiple diffuse textures on that single mesh with glDrawElements (using forward rendering)?
I hope it's clearer now :wink:

I’m afraid it isn’t much clearer to me what exactly the issue is. I tried to outline the steps for using multiple textures above:

  • bind texture A to texture unit 0, bind texture B to texture unit 1.
  • have two sampler2D variables in your fragment shader and assign values 0, 1 to them respectively
  • sample from both textures

This works pretty much the same as for one texture, only you switch the active texture unit (glActiveTexture) between calls to glBindTexture.
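In code the bind step could look something like this (the texture handle names are made up for the example):

[code]
/* bind texture A to unit 0 and texture B to unit 1 (handles are placeholders) */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureA);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureB);
[/code]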

If the above does not help or is not detailed enough, can you please say what you are doing and what is not working or where you need more information?

There’s a function called glActiveTexture that will switch the current texture unit. Calling glBindTexture binds a texture to the currently selected texture unit. You bind one texture to unit 0, one to unit 1 and so on.

If you use sampler objects, the glBindSampler function has an index parameter where you pass the texture unit index to bind to.

In your shader code, you need multiple sampler* uniform variables (one for each texture you want to sample from). You use glUniform1i to set the texture uniform variables to the texture unit index (not the texture handle).

Your shader can then sample from the different sampler* variables and combine the colors.
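Just as a sketch (prog, uTex0/uTex1 and the sampler object names are example names, not anything required), the host-side setup could look roughly like:

[code]
/* tell each sampler uniform which texture unit to read from */
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "uTex0"), 0);  /* unit index 0, not a texture handle */
glUniform1i(glGetUniformLocation(prog, "uTex1"), 1);  /* unit index 1 */

/* optional: attach sampler objects to the same units */
glBindSampler(0, samplerObj0);
glBindSampler(1, samplerObj1);
[/code]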

Whether you use forward or deferred shading is completely irrelevant. In fact, sampling your G-Buffer values also requires multiple texture bindings.

Thanks all for the explanation, it was what I needed. If something goes wrong I'll "revive" the thread. Thanks again.

I combine two textures on a sphere using fixed-pipeline GL (i.e. no shaders). The first texture is a world map. The second is global cloud cover expressed as a greyscale image. The texture images have the same pixel dimensions. The textures are read into arrays. A third array is generated by combining the RGB values for each pixel in the first two images. Basically, I want the world map colors to determine the final texture colors (RGB) more as the grey value in the cloud cover image approaches zero. It’s actually easier to code up than to explain. The third array is the one that gets textured onto the sphere.
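To make that concrete, a per-pixel combine along those lines might look roughly like this (the array layout and the exact blend formula, a linear fade toward white as cloud cover increases, are just assumptions for the sketch):

[code]
/* CPU-side compositing sketch: blend world-map RGB toward white (cloud)
 * according to the grey cloud-cover value. */
void combine_textures(const unsigned char *map_rgb,    /* w*h*3, world map   */
                      const unsigned char *cloud_grey, /* w*h,   cloud cover */
                      unsigned char *out_rgb,          /* w*h*3, result      */
                      int w, int h)
{
    for (int i = 0; i < w * h; ++i) {
        float g = cloud_grey[i] / 255.0f;       /* 0 = no cloud, 1 = full cloud */
        for (int c = 0; c < 3; ++c) {
            float ground = map_rgb[3 * i + c];
            out_rgb[3 * i + c] = (unsigned char)(ground * (1.0f - g) + 255.0f * g);
        }
    }
}
[/code]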

I’m guessing that shaders would be a faster way to go to accomplish this effect. Have nothing against shaders. Just haven’t learned them yet. All of my texture calculations take place on the CPU as opposed to the graphics card. For those familiar with shaders, I’d like to ask - would the texture operations be done on the graphics card using Mr. Neumann’s approach?

Thanks.

Yes, but if your textures are static you are perhaps better off combining them just once on the CPU as you are doing now. The shader approach would mean that you upload the ground and cloud textures to the GPU and for each fragment take a sample from each one, giving you an RGB ground color and a float cloud coverage value. You then combine these using the same formula as on the CPU and write them to the fragment shader output. That all means you keep doing the combining every frame - which is a little wasteful if the clouds and ground are static, but of course great if they move relative to each other or otherwise change over time.
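For illustration, the fragment-shader version of that combine could look roughly like this (uniform names and the formula are placeholders, and I'm assuming the cloud cover is sampled as a single grey channel):

[code]
/* Sketch of the per-fragment combine described above; GLSL source kept as a
 * C string literal. Names and formula are assumptions, not the poster's code. */
static const char *planet_fs_src =
    "#version 330 core\n"
    "uniform sampler2D uGroundMap;   /* world map, unit 0             */\n"
    "uniform sampler2D uCloudCover;  /* greyscale cloud cover, unit 1 */\n"
    "in vec2 vTexCoord;\n"
    "out vec4 fragColor;\n"
    "void main()\n"
    "{\n"
    "    vec3  ground = texture(uGroundMap,  vTexCoord).rgb;\n"
    "    float cloud  = texture(uCloudCover, vTexCoord).r;\n"
    "    /* same idea as the CPU version: fade toward white as cloud -> 1 */\n"
    "    fragColor = vec4(mix(ground, vec3(1.0), cloud), 1.0);\n"
    "}\n";
[/code]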

[QUOTE=carsten neumann;1261265]Yes, but if your textures are static you are perhaps better off combining them just once on the CPU as you are doing now. The shader approach would mean that you upload the ground and cloud textures to the GPU and for each fragment take a sample from each one, giving you an RGB ground color and a float cloud coverage value. You then combine these using the same formula as on the CPU and write them to the fragment shader output. That all means you keep doing the combining every frame - which is a little wasteful if the clouds and ground are static, but of course great if they move relative to each other or otherwise change over time.[/QUOTE] Your comments make sense. In reality both of the scenarios you discuss come up in my simulation. Most of the time I’m simply compositing cloud cover over a world map. It’s a static texturing situation. Lots of other things are moving around on the screen, but the earth texture is not changing. For fancier (and slower) graphics, I combine cloud cover with a daylight earth map, a nighttime earth map, and add a gradual transition across the terminator (line between day and night). This is a dynamic texturing situation. I’m guessing that this would be much (?) faster with shaders?

One advantage to my CPU approach (I think) is that once I’ve computed a texture to wrap around the globe, the same texture can be applied directly to my flat map representation. Nothing has to be recomputed.

Thanks for your comments.

I combine cloud cover with a daylight earth map, a nighttime earth map, and add a gradual transition across the terminator (line between day and night). This is a dynamic texturing situation. I’m guessing that this would be much (?) faster with shaders?

I assume that means you have to repeatedly upload an updated texture whenever the terminator moves? In that case you essentially trade bandwidth over the PCIe bus for bandwidth on the graphics card. Since PCIe tends to be slower/narrower it could (should?) be faster - no promises though, you’ll have to measure it :slight_smile: Mostly to me this kind of effect is just way more convenient to implement in a shader.
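Just as a sketch of what "implement it in a shader" could mean here: one common approach is to blend the day and night maps with a smoothstep across the terminator based on the surface normal and sun direction. Every name and the transition width below are assumptions for illustration, not your formula:

[code]
/* Day/night terminator sketch, GLSL kept as a C string literal. */
static const char *earth_fs_src =
    "#version 330 core\n"
    "uniform sampler2D uDayMap;\n"
    "uniform sampler2D uNightMap;\n"
    "uniform sampler2D uCloudCover;\n"
    "uniform vec3 uSunDir;           /* unit vector toward the sun */\n"
    "in vec3 vNormal;\n"
    "in vec2 vTexCoord;\n"
    "out vec4 fragColor;\n"
    "void main()\n"
    "{\n"
    "    float d = dot(normalize(vNormal), normalize(uSunDir));\n"
    "    float dayFactor = smoothstep(-0.1, 0.1, d);   /* soft terminator */\n"
    "    vec3 ground = mix(texture(uNightMap, vTexCoord).rgb,\n"
    "                      texture(uDayMap,   vTexCoord).rgb, dayFactor);\n"
    "    float cloud = texture(uCloudCover, vTexCoord).r;\n"
    "    fragColor   = vec4(mix(ground, vec3(1.0), cloud), 1.0);\n"
    "}\n";
[/code]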

One advantage to my CPU approach (I think) is that once I’ve computed a texture to wrap around the globe, the same texture can be applied directly to my flat map representation. Nothing has to be recomputed.

True, but GPUs are quite good at this sort of thing (it has a nice regular access pattern to the textures, allowing good use of caches) and you might even reuse the same shader for both.