Question on attenuation

Hello.

Attenuation is used to make lighting look right by reducing the brightness of fragments that are farther away: the light color is multiplied by 1 / (c + l*d + q*d²), where d is the distance from the fragment to the light, c is the constant attenuation term (usually 1), and l and q are the linear and quadratic coefficients.
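In GLSL that typically looks something like this (just a minimal sketch of my understanding; all the uniform names here are placeholders I made up):

```glsl
#version 330 core

in vec3 fragPos;   // world-space fragment position
in vec3 normal;    // world-space normal
out vec4 fragColor;

// placeholder names for this sketch
uniform vec3 lightPos;
uniform vec3 lightColor;
uniform vec3 albedo;
uniform float c;   // constant term, usually 1.0
uniform float l;   // linear coefficient
uniform float q;   // quadratic coefficient

void main()
{
    vec3 toLight = lightPos - fragPos;
    float d = length(toLight);

    // the attenuation factor from above: 1 / (c + l*d + q*d*d)
    float attenuation = 1.0 / (c + l * d + q * d * d);

    float ndotl = max(dot(normalize(normal), normalize(toLight)), 0.0);
    fragColor = vec4(albedo * lightColor * ndotl * attenuation, 1.0);
}
```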

Now this makes it look better, but I wondered: light also has to travel from the fragment to the eye/camera, right? How come I see no GLSL implementations where that distance is taken into account? What happens in reality when light is reflected from a surface? Do the attenuation components just become meaninglessly small?

Thanks for any clues!

Most lighting examples you will come across are meant for real-time rendering. These usually make compromises; otherwise you couldn’t do it in real time. If you want to see how lighting is done “realistically”, you need to look at ray-tracing engines, which may use the GPU but do not attempt to render in real time.

I do understand that, and I do real-time rendering as well, but this specific step is not a lot of code. In fact, when I try to do it, it ends up looking kind of weird. It looks more like black fog than anything, even if I use separate, subtle attenuation values for it. Is there any explanation for this?
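To be concrete, this is roughly what I tried (sketch only; viewPos and the eye* coefficients are names I made up for the extra, non-standard fragment-to-eye attenuation step):

```glsl
#version 330 core

in vec3 fragPos;
in vec3 normal;
out vec4 fragColor;

uniform vec3 lightPos;
uniform vec3 viewPos;            // camera position, for the extra attenuation step
uniform vec3 lightColor;
uniform vec3 albedo;
uniform float c, l, q;           // usual light-to-fragment attenuation
uniform float eyeC, eyeL, eyeQ;  // extra fragment-to-eye attenuation (the experiment)

void main()
{
    // normal light-to-fragment attenuation
    vec3 toLight = lightPos - fragPos;
    float d = length(toLight);
    float attenuation = 1.0 / (c + l * d + q * d * d);

    float ndotl = max(dot(normalize(normal), normalize(toLight)), 0.0);
    vec3 lit = albedo * lightColor * ndotl * attenuation;

    // the extra step: attenuate again with the fragment-to-eye distance
    float eyeDist = length(viewPos - fragPos);
    float eyeAttenuation = 1.0 / (eyeC + eyeL * eyeDist + eyeQ * eyeDist * eyeDist);

    // everything just darkens with camera distance, which reads as black fog
    fragColor = vec4(lit * eyeAttenuation, 1.0);
}
```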

I can look at the lamp in my room right now. I can see attenuation happening from the light to the surface/wall/ceiling, all right. But when I walk closer to the wall myself, it does not really appear to get any brighter, and that matches everything I have ever seen in my life. But what is going on, physically?

You might find this interesting

I think I somewhat understand now, but only somewhat. The attenuation formula approximates how light emitted from a source spreads out over distance: it’s not that a single ray loses intensity, but that the rays spread apart, so less accumulated energy arrives as the distance increases. For light reflected diffusely off a surface, a given fragment/pixel mostly receives a bundle of nearly parallel rays, which hardly attenuate over distance. Please tell me if I am on the right track here, because my brain is genuinely fried right now rofl.

I am not an expert on the physics of light, but your understanding is similar to mine. Remember that light behaves as both a particle and a wave, so simulating it gets tricky.

Close enough for jazz. But think of it this way. Imagine a small point light source emitting photons radially outwards. For ease of thought, consider the photons to all have the same energy. At any given instant, there is a spherical shell of photons (of some radius R) being emitted with a specific photon density. A short time later, that same shell has moved further out, so its radius is larger than R and the photon density is lower (i.e. the number of photons per unit area on the shell has gone down). The photon density is how “bright” the light is at that point in space. Since the shell’s area grows with the square of its radius, the density drops with the square of the distance, so purely due to this distance falloff the illumination gets dimmer the further you are from the light.
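In shader terms, that geometric spreading is just the inverse-square law. A more physically-based point light drops the hand-tuned linear term and uses something like this helper you could drop into a fragment shader (a sketch, placeholder names):

```glsl
// sketch: pure inverse-square falloff for a point light
// lightIntensity stands in for the light's radiant intensity (power per solid angle)
uniform vec3 lightPos;
uniform vec3 lightIntensity;

vec3 incomingLight(vec3 fragPos)
{
    float d = length(lightPos - fragPos);
    // the same number of photons is spread over a sphere of area 4*pi*d*d,
    // so the density (and thus the received light) scales with 1/(d*d)
    return lightIntensity / (d * d);
}
```

If you wanted to start from the light’s total emitted power instead, you would divide by 4*pi*d*d, since that is the area the photons are spread over; most implementations just fold that constant into the light’s intensity value.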

However, there is another “attenuation” involved in your scenario which we ground-based lifeforms need to deal with: attenuation due to propagation through a volumetric material. In space (much closer to a vacuum), there’s relatively little of this. But here on Earth, light gets attenuated and scattered by passing through volumetric media such as atmosphere, water, glass, etc. This tends to “scatter” some photons off their original course from the light source (more and more with distance), which defocuses them – diffusing/spreading out the photon density and effectively “dimming” the light. In real-time rendering, this kind of attenuation is approximated with “fog” equations (specifically for the distance the light travels from the rendered surface to the eyepoint). However, that only approximates the “dimming” part, not the “defocusing” part, and it doesn’t deal with the light-to-surface volumetric attenuation/defocusing at all.
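For reference, the classic “fog” approximation mentioned above looks roughly like this in a shader (sketch; fogColor/fogDensity are placeholder uniforms, in the spirit of old fixed-function GL_EXP fog):

```glsl
// sketch: exponential fog applied to an already-lit surface colour
uniform vec3 viewPos;
uniform vec3 fogColor;
uniform float fogDensity;

vec3 applyFog(vec3 litColor, vec3 fragPos)
{
    float dist = length(viewPos - fragPos);
    float fogFactor = clamp(exp(-fogDensity * dist), 0.0, 1.0);

    // fade toward the fog colour (light scattered in along the way),
    // not toward black, which is why this reads as fog rather than darkness
    return mix(fogColor, litColor, fogFactor);
}
```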

There are lots of other effects, such as refraction/defocusing due to media transitions (and gravitational lensing in space), but I think the above two cover what you’re asking about.

Yeah, I thought some more about this and finally, fully understood it. The light-to-surface step is different math from the surface-to-eye step, which is a projection through a point: as I move away from the wall, the light reaching my pupil from any patch of it does drop with the square of the distance, but that patch also covers a correspondingly smaller part of my field of view, so its apparent brightness stays the same. If our eyes were just small flat surfaces on the front of our faces (regardless of how creepy that sounds lol), we would have to deal with attenuation again, because with increasing distance less and less of the diffusely reflected light would arrive at an angle and position that lets it hit the eye.

Thanks for the help!