Normals using dFdx and dFdy

Hi there,

For my deferred rendering shader, I need to render the normals in world space to a buffer. When I was just working on terrain, I used this and it worked fine:

in the vertex shader:

eyeVec = gl_ModelViewMatrix * gl_Vertex;

in the fragment shader:

vec3 normal = normalize(cross(dFdx(eyeVec.xyz), dFdy(eyeVec.xyz)));
gl_FragData[1] = vec4(transpose(gl_NormalMatrix) * normal,1);

As you guessed, the second buffer, gl_FragData[1], is for the normals.

But now I've noticed that when I start rotating models, the normals don't rotate with them (it seems like the model rotation is ignored).
I'm not too familiar with the dFd* functions, so I'm not sure where it goes wrong.
Does anyone know how to fix this?

To me it seems like the model matrix is being ignored, but in the vertex shader I use gl_ModelViewMatrix, and gl_NormalMatrix is also based on the modelview matrix, so I'm not sure where it goes wrong.

Thanks!

[QUOTE=STTrife;1260916]in the vertex shader:

eyeVec = gl_ModelViewMatrix * gl_Vertex;

in the fragment shader:

vec3 normal = normalize(cross(dFdx(eyeVec.xyz), dFdy(eyeVec.xyz)));
gl_FragData[1] = vec4(transpose(gl_NormalMatrix) * normal,1);

As you guessed, the second buffer, gl_FragData[1], is for the normals.

But now I've noticed that when I start rotating models, the normals don't rotate with them (it seems like the model rotation is ignored).[/QUOTE]

Hold up just a second. First, what are you doing with the transpose of gl_NormalMatrix? When you use gl_NormalMatrix to transform normals from OBJECT-SPACE to EYE-SPACE, you just apply it directly, with no transpose involved (gl_NormalMatrix is implicitly populated with the inverse transpose of the MODELVIEW matrix, which is what you want in general for transforming normals if you can't make any assumptions about the content of the MODELVIEW matrix).
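That is, the direct application is just this (objectNormal here stands for whatever OBJECT-SPACE normal you have, e.g. gl_Normal):

vec3 eyeNormal = gl_NormalMatrix * objectNormal; // OBJECT-SPACE to EYE-SPACE, no transpose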

But back up even further. eyeVec is (presumably) the fragment position in EYE-SPACE, right? Ok, then the normal you get from your normalize/cross of spatial derivatives of eyeVec will be an EYE-SPACE normal, right? You want it in EYE-SPACE and you’ve already got it there. So no need to transform it at all! Just nuke the gl_NormalMatrix reference altogether.
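A minimal sketch of your fragment shader with the transform removed (vertex shader unchanged):

vec3 normal = normalize(cross(dFdx(eyeVec.xyz), dFdy(eyeVec.xyz)));
gl_FragData[1] = vec4(normal, 1.0); // already an EYE-SPACE normal; no matrix needed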

That should get you going. But note:

If eyeVec (your EYE-SPACE fragment position vector) is just interpolated across the triangle, then you're going to get a constant, identical normal vector for every fragment on a given triangle. That's great for flat shading. But if your tris are supposed to represent a smoothly curved surface whose boundary is merely approximated by triangles, then you'll instead likely want to pass in (or look up) your normal in the vertex shader and smoothly interpolate it across the triangle, or fetch it from a texture using texture filtering in the frag shader, so you get smoothly varying normals at each fragment.
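For instance, a rough sketch of the interpolated-normal route, assuming per-vertex normals come in through gl_Normal:

in the vertex shader:

varying vec3 smoothNormal;
void main()
{
    smoothNormal = gl_NormalMatrix * gl_Normal; // EYE-SPACE normal, interpolated across the tri
    gl_Position  = ftransform();
}

in the fragment shader:

varying vec3 smoothNormal;
void main()
{
    // interpolation shortens the vector, so renormalize per fragment
    gl_FragData[1] = vec4(normalize(smoothNormal), 1.0);
}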

Hi Dark Photon,

Some background:
-I need flat shading
-I want to generate the normals in the shaders (I might change this later, but let's assume that I need it for now :wink: )
-I need the normals in world-space.

Before we go into whether this is a good idea AT ALL, I'd like to understand what goes wrong and how to do this right, for the purpose of learning more about shaders… so let's assume the above things are really needed :slight_smile:

Now the reason why I do it the way I do:
At first I didn't really understand dFdx and dFdy, and I used the eyeVec example from somebody else to generate the flat normals in the fragment shader. The problem is: in the example the normals are in eye space, but I needed them in world space. So I used transpose(gl_NormalMatrix) to try to get the eye-space normal back to a world-space normal. It was kind of a guess that this was the right way: I saw that gl_NormalMatrix is the transpose of the inverse of the modelview matrix, so I thought I'd take the transpose of THAT to get the inverse of the modelview matrix, and then I could transform the eye-space normal back to a world-space normal.
But now I understand that it goes back to OBJECT-space, not world-space, so the results I'm getting seem logical.
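In matrix terms: gl_NormalMatrix = transpose(inverse(MV)), where MV is the upper-left 3x3 of the modelview matrix, so transpose(gl_NormalMatrix) = inverse(MV), which is the EYE-to-OBJECT transform rather than EYE-to-WORLD.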

I think now that the best way would be to first calculate the world-space vertices in the vertex shader (but how? there is no separate modelMatrix…) and pass them as a varying to the fragment shader; then I can use dFdx and dFdy to calculate the normal in world space.
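Something like this sketch is what I have in mind, assuming I upload my own modelMatrix uniform from the application (since fixed-function GL only exposes the combined modelview):

in the vertex shader:

uniform mat4 modelMatrix; // OBJECT-to-WORLD, set by the application
varying vec3 worldPos;
void main()
{
    worldPos    = (modelMatrix * gl_Vertex).xyz; // WORLD-SPACE position
    gl_Position = ftransform();
}

in the fragment shader:

varying vec3 worldPos;
void main()
{
    // screen-space derivatives of worldPos span the triangle's plane
    vec3 normal = normalize(cross(dFdx(worldPos), dFdy(worldPos)));
    gl_FragData[1] = vec4(normal, 1.0);
}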

[QUOTE=STTrife;1260924]Some background:
-I need flat shading
-I want to generate the normals in the shaders (I might change this later, but let's assume that I need it for now :wink: )
-I need the normals in world-space.

Before we go into whether this is a good idea AT ALL, I'd like to understand what goes wrong and how to do this right, for the purpose of learning more about shaders… so let's assume the above things are really needed :)[/QUOTE]

Ok, we’ll assume that.

[QUOTE=STTrife;1260924]Now the reason why I do it the way I do:
At first I didn't really understand dFdx and dFdy, and I used the eyeVec example from somebody else to generate the flat normals in the fragment shader. The problem is: in the example the normals are in eye space, but I needed them in world space. So I used transpose(gl_NormalMatrix) to try to get the eye-space normal back to a world-space normal. It was kind of a guess that this was the right way: I saw that gl_NormalMatrix is the transpose of the inverse of the modelview matrix, so I thought I'd take the transpose of THAT to get the inverse of the modelview matrix, and then I could transform the eye-space normal back to a world-space normal.
But now I understand that it goes back to OBJECT-space, not world-space, so the results I'm getting seem logical.[/QUOTE]

Ok, I see where you’re going. Yeah, if you want world space in your shader, you have to provide coords to the shader in world space or provide a transform to put your coords in that space. So sounds like you either need a modelMatrix (OBJECT-to-WORLD), or an inverseViewingMatrix (EYE-to-WORLD).

That said, I would recommend just ditching WORLD-SPACE in your shader and doing your computations in EYE-SPACE instead. EYE-SPACE is almost always the better space to work in within shaders.
If you must use WORLD-SPACE in your shaders for some reason, only deal with direction vectors and normals in that space (a 3x3 rotation matrix), not positions (i.e. no 4x4 transform to get you to/from world). Representing WORLD-SPACE positions in your shaders (where you only have 32-bit float precision) limits you to tiny worlds.
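For example, a minimal sketch of the direction-only route, assuming your application uploads an inverseViewingMatrix uniform (the EYE-to-WORLD rotation as a mat3):

in the fragment shader:

uniform mat3 inverseViewingMatrix; // EYE-to-WORLD rotation, set by the application
varying vec4 eyeVec;               // EYE-SPACE position from the vertex shader
void main()
{
    vec3 eyeNormal   = normalize(cross(dFdx(eyeVec.xyz), dFdy(eyeVec.xyz)));
    vec3 worldNormal = inverseViewingMatrix * eyeNormal; // rotate the direction only
    gl_FragData[1]   = vec4(worldNormal, 1.0);
}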

[QUOTE=Dark Photon;1260928]Ok, we’ll assume that.

Ok, I see where you’re going. Yeah, if you want world space in your shader, you have to provide coords to the shader in world space or provide a transform to put your coords in that space. So sounds like you either need a modelMatrix (OBJECT-to-WORLD), or an inverseViewingMatrix (EYE-to-WORLD).

That said, I would recommend just ditching WORLD-SPACE in your shader and doing your computations in EYE-SPACE instead. EYE-SPACE is almost always the better space to work in within shaders.
If you must use WORLD-SPACE in your shaders for some reason, only deal with direction vectors and normals in that space (a 3x3 rotation matrix), not positions (i.e. no 4x4 transform to get you to/from world). Representing WORLD-SPACE positions in your shaders (where you only have 32-bit float precision) limits you to tiny worlds.[/QUOTE]

Ok, thanks for the suggestions. I've noticed that world space isn't a commonly used space in shaders, and the point about tiny worlds seems to explain why. But for the things I do in the deferred shader, world space seems easier. I'll have to find out if I can make those things work in view space, or else I'll ask for some help here :slight_smile:
