Small light source aliasing

Hard to find many hits on this with a web search. So thought I’d ask the experts here.

What are the latest in-vogue methods to antialias small (distant) light sources? Specifically, when the area illuminated by a light source subtends only ~1 pixel on screen.

Example: Envision a night scene with a street light shining down onto the ground below. You’re on the ground (or airborne) above the street light, looking toward the horizon, and you back away from the light and the bright patch of ground it’s illuminating until the circular pool of light under the lamp subtends only 1-2 pixels. Now of course we get lots of aliasing (flickering), whether we use light sources or light maps to add in the lighting. It’s bright enough that it aliases/swims badly, but not bright enough to bloom effectively. In fact, the thin sliver of illuminated ground under the light source may not even overlap a pixel/sample.

Of course we can just throw more samples at it, but that’s not free. Is there any work out there on analytically computing a pixel coverage percentage based on depth samples? (This is a deferred renderer.) Any other pointers to papers/presentations?

Thanks.

So thought I’d ask the experts here.

Said the expert. :smiley:

Are you using glow-cards or actual geometry to represent light-sources? Have you thought about deducing the possibly irradiated area by measuring the screen-space projected area of the light source geometry? Is your goal to get some reasonable representation of the irradiated area at all or do you really need to bloom an area of only ~1 pixel?

If not, I’d simply try to measure the screen-space error of the projected area (simply, yeah, right … ) and fade out small lights.
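Just to sketch what I mean, here’s a minimal C version of the fade idea. The projected-radius estimate and the fade thresholds are placeholder assumptions to tune, and all the names are made up:

```c
#include <math.h>

/* Hypothetical sketch: fade a light's contribution to zero as its
 * illuminated pool shrinks below a few pixels on screen.
 * poolRadiusWorld - world-space radius of the lit disc under the lamp
 * distToPool      - distance from the eye to the pool center
 * pxPerRadianY    - viewportHeight / verticalFovRadians (rough pinhole scale) */
static float lightFadeFactor(float poolRadiusWorld, float distToPool,
                             float pxPerRadianY)
{
    /* Approximate projected radius in pixels (ignores foreshortening
     * at grazing view angles, which makes the pool even thinner). */
    float radiusPx = (poolRadiusWorld / distToPool) * pxPerRadianY;

    /* Fade between ~1 px (fully out) and ~4 px (fully in); these
     * thresholds are guesses to tune against the actual aliasing onset. */
    const float fadeLo = 1.0f, fadeHi = 4.0f;
    float t = (radiusPx - fadeLo) / (fadeHi - fadeLo);
    t = fminf(fmaxf(t, 0.0f), 1.0f);
    return t * t * (3.0f - 2.0f * t);   /* smoothstep */
}
```

Multiply the light’s intensity by that factor and it goes away smoothly instead of popping/flickering.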

If your light cone is a projected texture it will not alias but will descend the MIP pyramid, applying a suitably prefiltered image. For your scenario to work well, though, you need anisotropic filtering, and probably a high degree of it if you’re going that thin. 16:1 should not really impact performance too much these days, as it only affects texture reads on a small fraction of screen pixels. Considering what happens in hardware when rendering like this, you are actually using the multiple texture taps of the anisotropic sampling hardware to supersample the thin light pool along its thin axis.
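The sampler setup is just the standard one; something like this, assuming the EXT_texture_filter_anisotropic extension is available and mipmaps have already been generated for the cone texture:

```c
#include <GL/gl.h>
#include <GL/glext.h>   /* for GL_TEXTURE_MAX_ANISOTROPY_EXT */

/* Sketch: sampler state for a projected light-cone texture so the
 * thin pool descends the MIP pyramid instead of aliasing. */
void setupLightConeSampler(GLuint lightTex)
{
    glBindTexture(GL_TEXTURE_2D, lightTex);

    /* Trilinear filtering so the prefiltered MIP chain is actually used. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Clamp so the cone doesn't tile across the ground plane. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* 16:1 anisotropy: the extra taps supersample the pool along
     * its thin axis at grazing angles. */
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);
}
```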

P.S. If your light is shader-math based, you have the means to filter the light cone accordingly, for example by sampling with some form of du and dv. Supersampling is the trivial approach, but it would probably also be possible to do something LUT-based or analytic in the shader; really, though, iterating the light for large du/dv shouldn’t be that expensive. The details depend a lot on the exact scenario.
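To make the supersampling variant concrete, here’s a CPU model of the shader math. dpdu/dpdv stand in for the per-pixel world-space position derivatives (ddx/ddy in the shader); all names here are illustrative:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  v3_add(Vec3 a, Vec3 b)    { return (Vec3){ a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  v3_sub(Vec3 a, Vec3 b)    { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  v3_scale(Vec3 a, float s) { return (Vec3){ a.x * s, a.y * s, a.z * s }; }
static float v3_dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  v3_norm(Vec3 a)           { return v3_scale(a, 1.0f / sqrtf(v3_dot(a, a))); }
static float clamp01(float x)          { return fminf(fmaxf(x, 0.0f), 1.0f); }

/* Inner/outer cone falloff, a la the old D3D spotlight model. */
static float spotFalloff(Vec3 p, Vec3 lightPos, Vec3 lightDir,
                         float cosInner, float cosOuter)
{
    float c = v3_dot(v3_norm(v3_sub(p, lightPos)), lightDir);
    return clamp01((c - cosOuter) / (cosInner - cosOuter));
}

/* Average the falloff over an n x n grid spanning the pixel footprint. */
static float spotFalloffFiltered(Vec3 p, Vec3 dpdu, Vec3 dpdv,
                                 Vec3 lightPos, Vec3 lightDir,
                                 float cosInner, float cosOuter, int n)
{
    float sum = 0.0f;
    for (int j = 0; j < n; ++j) {
        for (int i = 0; i < n; ++i) {
            /* Offsets in [-0.5, 0.5) across the footprint. */
            float u = (i + 0.5f) / (float)n - 0.5f;
            float v = (j + 0.5f) / (float)n - 0.5f;
            Vec3 q = v3_add(p, v3_add(v3_scale(dpdu, u), v3_scale(dpdv, v)));
            sum += spotFalloff(q, lightPos, lightDir, cosInner, cosOuter);
        }
    }
    return sum / (float)(n * n);
}
```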

It’s geometry. Point and analytic cone (with inner/outer cone falloff, a la old D3D).

Have you thought about deducing the possibly irradiated area by measuring the screen-space projected area of the light source geometry?

Yes, we were about to go there (that’s what I meant by “analytically computing a pixel coverage percentage”). But we thought we’d do a little digging first to avoid reinventing the wheel (in case this is a “solved problem”). We have position and normal for the fragment, so from those we can probably come up with some continuous coverage estimate.
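Roughly what we have in mind, as an illustrative sketch (hypothetical names; it treats both the light pool and the pixel footprint as discs and ramps coverage over the footprint width rather than computing an exact intersection):

```c
#include <math.h>

static float clamp01(float x) { return fminf(fmaxf(x, 0.0f), 1.0f); }

/* Hypothetical continuous coverage estimate for a circular light pool,
 * evaluated per fragment from G-buffer position and normal.
 * poolCenterDist - distance from the fragment to the pool center,
 *                  measured on the ground plane
 * poolRadius     - world-space radius of the lit disc
 * footprintR     - world-space radius of the pixel footprint on the
 *                  surface (grows with depth and with 1/(N.V) at
 *                  grazing angles, both derivable from the G-buffer) */
static float lightPoolCoverage(float poolCenterDist, float poolRadius,
                               float footprintR)
{
    /* Signed distance from the footprint center to the pool edge;
     * ramp it over the footprint diameter instead of a hard step. */
    float d = poolRadius - poolCenterDist;
    return clamp01(0.5f + d / (2.0f * footprintR));
}
```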

Is your goal to get some reasonable representation of the irradiated area at all or do you really need to bloom an area of only ~1 pixel?

The former. Ideally I’d like this to work even when the geometric area illuminated by the point light’s cone/sphere doesn’t overlap any screen samples at all. We’re exploring options now.

If not, I’d simply try to measure the screen-space error of the projected area (simply, yeah, right … ) and fade out small lights.

Right! If we can’t cheaply compute a real coverage value that doesn’t alias, that’s definitely the goal: band-aids that push the aliasing out far enough that the light source can simply be faded out. Unfortunately, since the contrast between the in-light and out-of-light areas is so high, that’s actually a surprisingly long way out (when you’re above the ground looking toward the horizon)!

Thanks for your thoughts!

Thanks. Fading out to baked-in light-map textures is something we did play with, though even with high anisotropy those were aliasing too. We’ll experiment with the level of aniso and texture sampling quality.

If your light is shader-math based, you have the means to filter the light cone accordingly, for example by sampling with some form of du and dv.

That’s an interesting thought. Got any suggestions (not details, just broad strokes)? Using derivative expressions would imply looking at rates of change of shader expressions between neighboring pixels. The subtended light source cones are small enough that they can fall beneath the sampling rate, so that’s out. But we could use the derivatives to approximate the slope of the surface being illuminated, if we didn’t already have the surface normal encoded in the G-buffer.
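For the footprint itself we can probably get away without derivatives entirely, since everything needed is already in the G-buffer. Something like this, with all names and the clamp value illustrative:

```c
#include <math.h>

/* Hypothetical sketch: world-space pixel footprint from G-buffer data
 * alone (no screen-space derivatives needed), usable as the filter
 * width for a coverage estimate like the one above.
 * viewDist - distance from the eye to the fragment
 * pxAngle  - angle subtended by one pixel (verticalFov / viewportHeight)
 * NdotV    - dot(surface normal, direction to eye), from the G-buffer */
static float pixelFootprintRadius(float viewDist, float pxAngle, float NdotV)
{
    /* Footprint of one pixel at this depth, facing the camera head-on. */
    float r = viewDist * tanf(pxAngle * 0.5f);

    /* Stretch along the view direction at grazing angles; clamp NdotV
     * so the footprint stays finite at the silhouette. */
    return r / fmaxf(NdotV, 1e-3f);
}
```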