Using the depth buffer

So many of the OpenGL examples I see seem to assume you’re not using a depth buffer or depth comparisons. For 3D applications, is it the norm to use a depth buffer and let OpenGL figure out which objects are in front? I can’t imagine it would be faster to do all the occlusion calculations on the CPU, but, for instance, I can’t get multisampling to work if I also have a depth buffer, so I feel like I’m missing something.

Thanks!

Where have you happened to see such strange examples?! Who draws without depth buffers? Bah, performing occlusion on the CPU instead of using depth buffers - what a perverted religion! Point those heretics out and we will send the OpenGL Holy Inquisition to their den! :smiley:

If you are just getting familiar with the depth buffer, maybe it is too early to play with multisampling?.. :wink:

To use a default depth buffer you need to request it when you initialize your window, then enable the depth test. Take a look at the page describing window initialization:
http://www.opengl.org/wiki/Creating_an_OpenGL_Context
In that example a 24-bit depth buffer is requested, so following that tutorial you will get a window ready for your geometry (just enable the depth test once with a call to glEnable(GL_DEPTH_TEST) at the beginning and you are ready to draw).
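For instance, with freeglut (an assumption here, chosen for brevity; the wiki page above shows the raw WGL route) the whole setup is just a couple of calls:

    #include <GL/glut.h>   /* freeglut or any other GLUT implementation */

    void display(void)
    {
        /* Clear depth along with color each frame, or stale depths
           from the previous frame will occlude the new one. */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... draw geometry in any order; the depth test sorts it out ... */
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        /* GLUT_DEPTH requests a depth buffer from the window system,
           playing the same role as the 24-bit field in the wiki's code. */
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
        glutCreateWindow("depth test");
        glEnable(GL_DEPTH_TEST);   /* enable once, before drawing */
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }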

Well, they don’t necessarily exclude the depth buffer, but the “default” assumption a lot of tutorials seem to make is that you don’t have one (e.g. the Multisampling page on the OpenGL Wiki). But alright, it sounds like making use of a depth buffer is pretty common. I guess the reason it’s not enabled by default is for 2D applications?

Tutorials are just tutorials. Sometimes a teacher withholds the full details to avoid overwhelming the pupil initially, then incrementally adds complexity as the pupil’s understanding grows. Some tutorials purposely omit techniques to make what they are doing expressly clear, especially if the omission won’t change the outcome.

Z-buffers are extremely common (most 3D programs use them), but the Z-buffer is only one technique (albeit the most widely used) for determining fragment visibility. There are also occlusion queries, where instead of testing every possibly visible fragment of a model, you test cheap proxy geometry such as a bounding box or sphere enclosing it. The initial go/no-go test is quick and has little overhead, and rejecting entire groups of geometry before it is ever transformed can lead to big speedups. Keep in mind that properly rendering the final 3D model will most likely still use the Z-buffer.
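A rough sketch with the GL 1.5 query API (drawBoundingBox and drawModel below are hypothetical stand-ins for your own draw calls):

    GLuint query, samples;
    glGenQueries(1, &query);

    /* Cheap go/no-go test: draw only the proxy geometry, with color
       and depth writes disabled so it leaves no trace in the buffers. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox();               /* hypothetical proxy for the model */
    glEndQuery(GL_SAMPLES_PASSED);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    /* Caution: fetching the result immediately stalls the pipeline;
       real code defers the readback or checks availability first. */
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
    if (samples > 0)
        drawModel();                 /* hypothetical full model */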

If you are having problems attaching a depth buffer, check for OpenGL errors and verify that your framebuffer is still complete (e.g. did you run out of memory, or allocate/attach the buffer improperly?).
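A quick sketch of such a sanity check, assuming a framebuffer object (core in GL 3.0; with the EXT extension the names carry an EXT suffix):

    #include <stdio.h>
    /* plus <GL/gl.h> and whatever loader provides the FBO entry points */

    static void check_gl(const char *where)
    {
        GLenum err;
        while ((err = glGetError()) != GL_NO_ERROR)
            fprintf(stderr, "%s: GL error 0x%04X\n", where, err);
    }

    /* ... after creating and attaching the depth renderbuffer/texture ... */
    check_gl("depth attach");
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
        fprintf(stderr, "framebuffer incomplete: 0x%04X\n", status);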

We should step back and realize that not every OpenGL capability applies to every situation. To blend some forms of alpha transparency properly, you need to pre-sort your transparent geometry from furthest to nearest, and once you have done that sorting you don’t necessarily need depth testing for it. The initial pass of the Doom 3 engine wrote only Z-depths, and the subsequent passes did a GL_LEQUAL (less-than-or-equal) comparison with depth writing disabled, so that no hidden fragments were shaded. Every operation performed eats up time, so it is up to you to determine what you do and do not need to achieve your effect.
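That Doom 3 style pre-pass boils down to a few state changes (a sketch; drawScene is a hypothetical stand-in for rendering the opaque geometry):

    /* Pass 1: lay down depth only, with no color writes and no shading. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawScene();                     /* hypothetical: all opaque geometry */

    /* Pass 2: expensive shading; only fragments matching the stored
       depth pass GL_LEQUAL, so nothing hidden gets shaded. */
    glDepthFunc(GL_LEQUAL);
    glDepthMask(GL_FALSE);           /* depth writes off, as in Doom 3 */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    drawScene();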

Remember: the Z-buffer was originally thought to be prohibitively expensive, back when framebuffers cost $500,000 apiece and using a full byte per color channel (requiring three framebuffers, one per channel) was considered insanity, but eventually it came around.

Alright, makes sense, thanks!