What's the idea behind deferred lighting

Hi
I have read several tutorials about deferred lighting. All of them walk through code instead of explaining the algorithm in a few lines.
All of them say we should render the position, normal, diffuse and other useful information to several draw buffers.
Then they explain about drawing a sphere for point lights. But what’s the idea behind the second pass? Could you please explain the “step by step” algorithm (not coding)?
Thanks

I was about to explain it myself, but this article has a pretty good overview already. Also read the comments section!

The idea of deferred shading is to decouple light calculations from geometry rendering. (From your description, I assume you mean deferred shading. Some people use the term “deferred lighting” to refer to what Wolfgang Engel called “light pre-pass”.)

Rendering is split into a pure geometry pass (the first pass) and a pass for lighting calculation (the second pass).

The rough outlines are as follows:
[ol]
[li]In the first pass, render the scene into the G-Buffer[/li][ul]
[li]Attributes like surface normal, material parameters etc. are stored in the buffer [/li]
[li]The shaders responsible for this don’t need to know anything about the light calculations at all [/li]
[li]The shaders responsible for this don’t need to know anything about shadows at all [/li][/ul]

[li]In the second pass, for each light source, a light volume is drawn (see the sketches after this list)[/li][ul]
[li]The rendered light volume covers the area affected by the light [/li]
[li]The shader used for the light volume fetches material properties from the G-Buffer and uses them for the light calculation [/li]
[li]A single shadow map, rendered on the fly for the current light source, can be used for shadow effects [/li]
[li]The resulting colors (accumulated over all light volumes, which can be done via the blend func) are the final colors of the lit scene [/li][/ul]

[/ol]
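To make the outline concrete, here is a minimal host-side sketch of the two passes (C++/OpenGL). The GL state calls are real, but every other name (the `Light` type, the FBO/program handles and the `draw…`/`render…`/`bind…` helpers) is a hypothetical placeholder, so treat this as annotated pseudocode rather than a complete implementation:

```cpp
#include <vector>
#include <GL/gl.h> // assumes an existing GL context; in practice use a loader (GLEW/GLAD) for GL3+ functions

struct Light { /* position, radius, color, ... */ };   // hypothetical type
extern GLuint gBufferFBO;                              // MRT FBO: normal, albedo, depth, ...
extern GLuint geometryProgram, lightVolumeProgram;     // hypothetical shader programs
extern std::vector<Light> lights;
void drawSceneGeometry();                              // hypothetical scene traversal
void renderShadowMap(const Light& l);                  // hypothetical: binds its own FBO/program
void bindLightAccumTarget();                           // hypothetical: binds the accumulation FBO
void bindGBufferTextures();                            // hypothetical: binds G-Buffer as input textures
void drawLightVolume(const Light& l);                  // e.g. a sphere mesh for a point light

void renderFrame()
{
    // Pass 1: geometry pass. Fill the G-Buffer with surface attributes.
    // The shaders used here know nothing about lights or shadows.
    glBindFramebuffer(GL_FRAMEBUFFER, gBufferFBO);
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_BLEND);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(geometryProgram);
    drawSceneGeometry();

    // Pass 2: per light source, draw a light volume and accumulate its
    // contribution additively (the "blend func" mentioned above).
    // Stencil/depth tricks for culling the volumes are omitted for brevity.
    bindLightAccumTarget();
    glClear(GL_COLOR_BUFFER_BIT);
    glDepthMask(GL_FALSE);                 // light volumes must not write depth
    for (const Light& light : lights)
    {
        renderShadowMap(light);            // generated on the fly, per light
        bindLightAccumTarget();            // back to the accumulation target
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);       // additive accumulation
        glUseProgram(lightVolumeProgram);  // reads G-Buffer + shadow map
        bindGBufferTextures();
        drawLightVolume(light);
    }
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```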
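The first-pass fragment shader can then be as simple as this sketch (stored here as a C++ source string; the output layout and normal encoding are just one choice of many, and all names are hypothetical):

```cpp
// Hypothetical geometry-pass fragment shader: it only writes surface
// attributes to multiple render targets and computes no lighting at all.
static const char* gBufferFragSrc = R"glsl(
#version 330 core
in vec3 vNormal;                          // interpolated from the vertex shader
in vec2 vTexCoord;
uniform sampler2D diffuseMap;             // material texture
layout(location = 0) out vec4 outNormal;  // render target 0: surface normal
layout(location = 1) out vec4 outAlbedo;  // render target 1: diffuse albedo

void main()
{
    // Encode the [-1,1] normal into [0,1] so a plain RGBA8 target suffices.
    outNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 0.0);
    outAlbedo = texture(diffuseMap, vTexCoord);
}
)glsl";
```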

This approach has some obvious advantages:

[ul]
[li]Geometry rendering is independent of lighting calculations [/li]
[li]Lighting is only calculated for pixels that are actually visible [/li]
[li]For N objects and M light sources, a multi-pass approach takes N·M rendering passes, while deferred shading needs only N+M (e.g. 100 objects and 8 lights: 800 passes versus 108) [/li]
[li]For N materials and M light sources, you don’t need N·M different shader combinations, but only N+M [/li]
[li]You don’t need to produce shadow maps in advance, you can produce them on the fly for an arbitrary number of lights, no changes in the shader code [/li][/ul]

But also some obvious disadvantages:
[ul]
[li]You are restricted to a single BRDF model for everything [/li]
[li]The G-Buffer eats up a lot of memory (e.g. four RGBA16F render targets at 1920×1080 are already about 66 MB) [/li][/ul]

Agent D: Classical deferred shading and light pre-pass (or deferred lighting), while sharing many similarities, are still two approaches with different memory/bandwidth implications, a different number of geometry passes and in part different advantages and drawbacks. What you describe is classical deferred shading.

You don’t need to produce shadow maps in advance, you can produce them on the fly for an arbitrary number of lights, no changes in the shader code.

Can you please elaborate on that?

[QUOTE=thokra;1259424]Agent D: Classical deferred shading and light pre-pass (or deferred lighting), while sharing many similarities, are still two approaches with different memory/bandwidth implications, a different number of geometry passes and in part different advantages and drawbacks. What you describe is classical deferred shading.
[/QUOTE]
That’s exactly what I wrote. I wrote that light pre-pass rendering (what some people call deferred lighting) is something different and that I assumed from the initial question that classical deferred shading was meant.

Seriously, does nobody actually read my posts or is my writing that ambiguous?

[QUOTE=thokra;1259424]

You don’t need to produce shadow maps in advance, you can produce them on the fly for an arbitrary number of lights, no changes in the shader code.

Can you please elaborate on that?[/QUOTE]
Ok, you could call that ambiguous writing if you want to.

When you use a multi-pass approach with N light sources per pass that are all supposed to cast a shadow, you need N shadow maps in advance before you start rendering the geometry.
A different number of shadow maps per pass means more shader combinations; fewer lights/shadow maps per pass means more geometry passes.

With deferred shading, the shadow maps are of course not needed for the geometry pass. Whenever we need to render a light source, we can generate a shadow map; it doesn’t concern the geometry, as the geometry pass is already over. You could also use a total of one shadow map, but just as in a multi-pass approach, this means switching shaders more often, so a more efficient approach would also render N shadow maps and then N light sources.

So in total it’s pretty much the same: N light sources with shadows -> N shadow maps to generate.
The point is that you don’t need the shadow maps in advance for the geometry pass; both light and shadow calculations are completely decoupled from it. But I’ve already seen forum posts and sample implementations where the latter has led to confusion.
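To illustrate the two orderings in code (a minimal sketch reusing the hypothetical helpers from my earlier sketch; `shadowMap`/`shadowMaps` and the two functions are placeholders as well):

```cpp
// (a) Interleaved: a single shadow map is reused and overwritten for each
//     light, but programs/render targets are switched once per light.
for (const Light& light : lights)
{
    renderShadowMapInto(light, shadowMap);  // hypothetical helper
    drawLightVolumeWith(light, shadowMap);  // hypothetical helper
}

// (b) Batched: render all N shadow maps first, then all N light volumes,
//     so each program/render target stays bound for a whole batch.
for (std::size_t i = 0; i < lights.size(); ++i)
    renderShadowMapInto(lights[i], shadowMaps[i]);
for (std::size_t i = 0; i < lights.size(); ++i)
    drawLightVolumeWith(lights[i], shadowMaps[i]);
```

Either way you generate N shadow maps per frame; the difference is only how often you switch state.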

I assume you mean deferred shading

Yep, admittedly I read that but forgot it pretty quickly and only had the divergence between what you wrote and what the OP asked in mind. My apologies.

Seriously, does nobody actually read my posts or is my writing that ambiguous?

Does confusion with what you write happen frequently?

you could call that ambiguous writing if you want to

I believe it is. On-the-fly suggests dynamic calculation at runtime. In advance or precalculated or similar suggests preprocessing, which is clearly out of the question here. I try to be very careful with terminology because as soon as there are two or more devs talking, there’s almost always potential for confusion. :)

fewer lights/shadow maps per pass means more geometry passes.

How would you get more geometry passes if the number of objects to be rendered is constant and the number of objects rendered during shadow map rendering is reduced? Or do you mean you can push more stuff down the pipeline because reducing shadow mapping overhead frees resources?

Instead of shading the lighting result as you render your depth-buffered scene, you can write the shading parameters to multiple framebuffers.
This leaves things like surface normal and albedo in various buffers for only those surfaces that are visible. These buffers can then be texture mapped into a subsequent shader.

This means you can perform a shading operation on the framebuffer using these results and write the final lighting result to the framebuffer. This final operation does not need to render the full scene; it is driven by a single screen-space quad that invokes a fancy lighting shader on each pixel.
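For illustration, the lighting shader driven by that quad might look like this minimal sketch (again as a C++ source string; a single directional light with a plain Lambert term, and all uniform names are hypothetical):

```cpp
// Hypothetical full-screen lighting shader: fetches the parameters written
// in the geometry pass and computes the final color for each pixel.
static const char* quadLightingFragSrc = R"glsl(
#version 330 core
in vec2 vUV;                        // from the full-screen quad
uniform sampler2D gNormal;          // G-Buffer: encoded surface normal
uniform sampler2D gAlbedo;          // G-Buffer: diffuse albedo
uniform vec3 lightDir;              // direction the light travels (normalized)
uniform vec3 lightColor;
out vec4 fragColor;

void main()
{
    vec3 n      = normalize(texture(gNormal, vUV).xyz * 2.0 - 1.0);
    vec3 albedo = texture(gAlbedo, vUV).rgb;
    float nDotL = max(dot(n, -lightDir), 0.0);   // simple Lambert term
    fragColor   = vec4(albedo * lightColor * nDotL, 1.0);
}
)glsl";
```

For many lights you would either loop over a uniform array of lights in this shader or fall back to the per-light volumes described earlier in the thread.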

There are innumerable variations on this depending on the scenario, from shading light source parameters to surface material parameters, which really means there’s no one right algorithm.