Dual paraboloid shadow mapping confusion

I’m attempting to implement dual paraboloid shadow mapping to provide omnidirectional shadows for point lights. I’ve already implemented cube map shadows and ordinary directional projective shadows, so I’m doing this mainly to compare the implementations.

Currently, I’m using the following vertex shader to generate the shadow maps (please pardon the slightly odd variable names; they’re generated by a higher-level shading language, and this is its GLSL output):


vec4
calculate (
  in vec4 pl_eye, 
  in float pl_z_near, 
  in float pl_z_far
) {
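  // pl_eye is the vertex position in the light's eye space; the divide is a no-op when w is 1.0.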
  vec4 pl_clip = (pl_eye / pl_eye.w);
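  // Distance from the light, and the unit direction towards the vertex.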
  float pl_len = length (pl_clip.xyz);
  vec4 pl_dir = (pl_clip / pl_len);
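  // Divisor for the paraboloid projection (this is the sign corrected in the edit below).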
  float pl_nz = (pl_dir.z + -1.0);
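  // Remap the radial distance linearly into [0, 1] between z_near and z_far.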
  float pl_lmn = (pl_len - pl_z_near);
  float pl_fmn = (pl_z_far - pl_z_near);
  float pl_dist = (pl_lmn / pl_fmn);
  {
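    // xy: paraboloid-projected direction; z: linear depth; w = 1.0, so no perspective divide.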
    return vec4 ((pl_dir.x / pl_nz), (pl_dir.y / pl_nz), pl_dist, 1.0);
  }
}

in vec3 v_position;
uniform mat4 m_modelview;
uniform float z_near;
uniform float z_far;
out vec4 f_position;
out vec4 f_position_clip;

void
main (void)
{
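  // Transform the vertex into the light's eye space.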
  vec4 pl_position = (m_modelview * vec4 (v_position, 1.0));
  vec4 pl_clip_position = calculate (pl_position, z_near, z_far);
  gl_Position = pl_clip_position;
  f_position_clip = pl_clip_position;
  f_position = pl_position;
}

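For reference, the corresponding fragment shader is essentially a passthrough; something along these lines (the output name f_depth and the use of a single colour channel are just how I’ve sketched it here):


in vec4 f_position_clip;

out vec4 f_depth;

void
main (void)
{
  // f_position_clip.z already holds the linear [0, 1] distance
  // computed in the vertex shader; just write it out.
  f_depth = vec4 (f_position_clip.z);
}
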
The fragment shader simply writes the z component of f_position_clip to a texture, as sketched above. To actually generate the maps, I first position the “camera” at the light’s world position, facing towards negative Z, and render the scene. I then point the same camera in the positive Z direction and render the scene again. This results in the following depth textures:

Towards negative Z:

[Attachment 738]

Towards positive Z:

[Attachment 739]

The light isn’t actually in the center of the scene, which is why there appear to be more objects on one side than on the other. The first thing I notice: the images are upside down. These are images I’ve downloaded from the GPU using the apitrace debugger, so they really are upside down on the GPU, which seems incorrect. My understanding, pieced together from the various tutorials online, was that only one of them should be upside down. If so, what is the correct way to perform the flipping? Presumably I shouldn’t just arbitrarily flip one of the maps; I’d expect one of the images to come out upside down naturally, due to the way the coordinates are calculated for that hemisphere, rather than through some extra step taken after the fact.
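
For what it’s worth, here is roughly how I expect the lookup to work on the shading side. This is only a sketch: the sampler and function names are mine, and the hemisphere test and divisor signs depend on how the two per-pass modelview matrices are oriented, so one or both maps may also need an x or y mirror to match.


uniform sampler2D t_shadow_neg_z; // map rendered facing negative Z
uniform sampler2D t_shadow_pos_z; // map rendered facing positive Z
uniform float     z_near;
uniform float     z_far;

// light_vec: vector from the light to the fragment, expressed in the
// light's unrotated frame.
float
shadow_lookup (in vec3 light_vec)
{
  float len  = length (light_vec);
  vec3  dir  = light_vec / len;
  float dist = (len - z_near) / (z_far - z_near);

  vec2  uv;
  float stored;
  if (dir.z <= 0.0) {
    // Hemisphere captured by the negative-Z pass; note the mirrored
    // divisor relative to the other hemisphere.
    uv     = (dir.xy / (1.0 - dir.z)) * 0.5 + 0.5;
    stored = texture (t_shadow_neg_z, uv).x;
  } else {
    uv     = (dir.xy / (1.0 + dir.z)) * 0.5 + 0.5;
    stored = texture (t_shadow_pos_z, uv).x;
  }

  // Lit if the closest occluder stored in the map is not nearer than
  // this fragment; the bias is an arbitrary fudge factor.
  return (dist <= stored + 0.0005) ? 1.0 : 0.0;
}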

Edit: To clarify, the z_near value is 0.0001f and the z_far value is equal to the radius of the light, to get a good distribution of depth values.

Found the cause of the upside-down images, at least: the sign of the constant added to the Z component was flipped:


  float pl_nz = (pl_dir.z + -1.0);

Should have been:


  float pl_nz = (pl_dir.z + 1.0);

With the wrong sign, the divisor is negative wherever the correct one is positive, so both projected coordinates were negated, rotating each image by 180 degrees; that’s why both maps came out upside down. The corrected version yields the following maps:

Towards negative Z:

[Attachment 740]

Towards positive Z:

[Attachment 741]

You can see the lack-of-tessellation artifacts on the ground mesh (it’s just a single quad with four vertices): the paraboloid projection is non-linear, but positions are only computed per-vertex and interpolated linearly across triangles, so large polygons need to be subdivided to follow the curvature.