Why is mipmapping needed?

Mipmapping is a minification filter used to remove visual artifacts, so we get a better-quality picture. Apart from that, why is mipmapping needed?

Performance: mipmapping is basically a level-of-detail scheme for texture images. In the same way that you'd use a lower-poly mesh for an object that's far away, you also get to use a lower-resolution texture for far-away objects.

Say that your base texture is some arbitrary resolution, something like 256x256. If the transformed geometry using that texture only occupies a 4x4 pixel region on the screen, does it make sense to have to swap in the entire 256x256 texture? What if you could just swap in a 4x4 version of it, thus saving bandwidth and texture cache performance? That’s mipmapping.

Just to make it clear: drivers do not upload separate mipmap levels to graphics memory, or at least I'm not aware of any doing that, but the whole texture (about 1.33 times bigger than without mipmapping). The performance boost comes from cache coherence, which is higher with smaller objects.
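The "about 1.33 times bigger" figure comes from summing the texel counts of the whole mip chain: each level has a quarter of the texels of the one above it, so the chain converges to 4/3 of the base level. A small C sketch (the function name is mine) that counts the texels:

```c
#include <stddef.h>

/* Total texels in a full mipmap chain: each level halves width and
   height (minimum 1) until the 1x1 level is reached. */
size_t mip_chain_texels(int w, int h)
{
    size_t total = 0;
    for (;;) {
        total += (size_t)w * (size_t)h;
        if (w == 1 && h == 1)
            break;
        if (w > 1) w /= 2;
        if (h > 1) h /= 2;
    }
    return total;
}
```

For a 256x256 base, this gives 87381 texels versus 65536 for the base level alone, a ratio of almost exactly 4/3.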

I am sorry, I am still not clear about it. I did not get the cache-coherence point. What exactly gets cached here, and how does it help for smaller objects?

Actually, that is exactly what the driver/GPU does! You either autogenerate the mipmaps or upload your own with whatever data you like.
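As a sketch, here are the two ways of getting mipmap levels into a texture. This assumes an active GL context and a texture bound to GL_TEXTURE_2D; the `level0_pixels`/`level1_pixels` pointers are hypothetical stand-ins for your own image data, so this fragment is not runnable on its own.

```c
/* Option 1: upload each mip level yourself, with whatever data you like
   (base level is 256x256 here). */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, level0_pixels);
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 128, 128, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, level1_pixels);
/* ... and so on down to the 1x1 level ... */

/* Option 2: upload only the base level and let the driver
   autogenerate the rest (core since OpenGL 3.0). */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, level0_pixels);
glGenerateMipmap(GL_TEXTURE_2D);
```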

More about performance:

Suppose you use a 256x256 texture and stretch it over an object that only covers 2x2 texels on the screen. Without mipmaps, each screen pixel would be a sample of 1 texture texel (or 4, with linear filtering).

Sampling 4 (or 4x4) texels scattered over a 256x256 texture is bad, because GPU memory and caches are bad at random access.

Here’s a mental picture that might help.

Suppose you have a square surface in your scene. Suppose you’ve textured that square surface with a 2048x2048 texture. Now suppose you back your eyepoint away from that square surface far enough that this square’s apparent size exactly fits within a single pixel on your monitor. What “color” should the GPU render for that pixel?

Well, the GPU could, on-the-fly, loop over all 2048x2048 ~= 4.2 million texels in your texture, add up their color values, and divide by 4.2 million to get a good average. It could…

…but wouldn’t that be horribly wasteful of GPU compute resources? Yep.

Now instead, what if we'd already precomputed this average before rendering and stored it in the 1x1 MIPmap level of the texture? Then the GPU could just grab it with a single memory fetch. Sounds like a deal to me!
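That precomputed 1x1 value is just a box average over the whole base level. As a plain-C illustration (single-channel texture; the function name is mine, and real drivers build the chain level by level rather than in one pass):

```c
#include <stddef.h>

/* Average every texel of a single-channel texture down to the one
   value that would be stored in the 1x1 mip level (box filter). */
unsigned char average_to_1x1(const unsigned char *texels, size_t count)
{
    unsigned long long sum = 0;  /* wide accumulator to avoid overflow */
    for (size_t i = 0; i < count; ++i)
        sum += texels[i];
    return (unsigned char)(sum / count);
}
```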

So as you can probably see now, MIPmaps save a lot of useless computation. In doing so, they also reduce the amount of "texture data" that needs to be fetched, which reduces the working set size needed to operate with textures, which caches a lot better and speeds up texturing on the GPU even further.

[QUOTE=Dark Photon;1256336]Here’s a mental picture that might help. […] So as you can probably see now, MIPmaps save a lot of useless computation.[/QUOTE]

Thanks a lot, very nicely explained.

Could you elaborate on that (your personal) opinion, or give a reference or a link? Or maybe we didn't understand each other.
Well, a texture is a whole; mipmap layers are just parts of the same texture. glTexImage*() does specify each level separately, but each call can recreate the texture. That's why OpenGL 4.x introduced glTexStorage*() to allocate the storage for immutable textures up front. The amount of memory required for the storage must include space for all mipmap layers.

The story Dark Photon told is really illustrative, but it is not about the performance I was talking about. That is a story about achieving the mipmap effect without having mipmaps. GPUs have never done it that way and probably never will. Texture filtering is done in hardware, and there is no filtering mode GL_USE_ENTIRE_TEXTURE; it would be useless. When I was talking about a performance boost, I was talking about the advantage of GL_LINEAR_MIPMAP_LINEAR over GL_LINEAR. I really have no time to write about caching and cache coherence; please search the net to learn more about it.