understanding gl_PointCoord (always 0)

Hi,

I have a problem understanding the working of gl_PointCoord correctly.
The way I understand it: if I draw a point with PointSize > 1, the pixels that light up on the display surrounding the center pixel have gl_PointCoord values between 0 and 1, correct?
There should be nothing more to it than just accessing gl_PointCoord in the fragment shader, no initialization other than using glDrawArrays(GL.GL_POINTS, …), correct?

I always get 0 for all my fragments. Searching for this gives a few results, but only for older OpenGL versions, in combination with NVIDIA driver problems. I don't really think I have faulty drivers or anything; I rather think I am doing something wrong. So here is the code:

[CODE=Fragment shader]
#version 420 core
out vec3 fColor;

void main(){
    fColor = vec3(gl_PointCoord.s, gl_PointCoord.t, 0.0);
}
[/CODE]

[CODE=Main]
gl.glPointSize(15.8f);
gl.glDrawArrays(GL.GL_POINTS, 0, 6);
[/CODE]

So instead of colorful squares I just get white ones. :frowning:

Also, is there a difference between gl_PointCoord.s/t and gl_PointCoord.x/y? What do s and t stand for?

Thanks

Do you have point sprites enabled? IIRC gl_PointCoord only has defined values if you draw point sprites.

I read this for earlier versions of OpenGL as well, but I am on GL4, which no longer specifies point sprites as a render primitive.

The OpenGL Programming Guide (v4.3) writes:

Point sprites are essentially OpenGL points rendered using a fragment shader that takes the fragment’s coordinates within the point into account when running. The coordinate within the point is available in the two-dimensional vector gl_PointCoord.
By simply using gl_PointCoord as a source for texture coordinates, bitmaps and textures can be used instead of a simple square block. Combined with alpha blending or with discarding fragments (using the discard keyword), it’s even possible to create point “sprites” with odd shapes.
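
To illustrate what the book describes: a fragment shader along these lines discards everything outside a circle, turning the square point into a round sprite. This is only a sketch of the technique (written as a JOGL-style Java string; the names are made up):

[CODE=Java]
// Sketch of the book's discard technique -- illustrative names only.
String roundPointFS =
    "#version 420 core\n" +
    "out vec4 fColor;\n" +
    "void main(){\n" +
    "    vec2 p = gl_PointCoord * 2.0 - 1.0;  // remap [0,1] to [-1,1]\n" +
    "    if (dot(p, p) > 1.0) discard;        // outside the circle: drop the fragment\n" +
    "    fColor = vec4(gl_PointCoord, 0.0, 1.0);\n" +
    "}\n";
[/CODE]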

Have you checked the range of supported point sizes with glGet(GL_POINT_SIZE_RANGE)? It’s valid for the upper limit to be 1.
Also, if GL_PROGRAM_POINT_SIZE is enabled, the size is taken from the value written to gl_PointSize by the vertex (or geometry) shader.
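
In JOGL, those checks might look something like this (a sketch only; I'm assuming the constants are exposed on GL2GL3 and that gl is the object from your snippet):

[CODE=Java]
// Assumed constant placement (GL2GL3); 'gl' as in the code above.
float[] range = new float[2];
gl.glGetFloatv(GL2GL3.GL_POINT_SIZE_RANGE, range, 0);
System.out.println("Point size range: " + range[0] + " .. " + range[1]);

// When enabled, the size is taken from gl_PointSize in the vertex
// (or geometry) shader instead of from glPointSize():
gl.glEnable(GL2GL3.GL_PROGRAM_POINT_SIZE);
[/CODE]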

There is no difference. The elements of any vector can be accessed using any of x/y/z/w, r/g/b/a, or s/t/p/q (however, you can’t mix the forms in a single access, so e.g. v.xypq is invalid).

By convention, spatial coordinates use x/y/z/w, colours use r/g/b/a, and texture coordinates use s/t/p/q. The OpenGL specifications use s/t/r/q for texture coordinates, but “r” conflicts with red component in colours, so it was changed to “p” for GLSL.
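
For example (illustrative shader lines only, kept in a Java string in the JOGL style):

[CODE=Java]
// All three reads below fetch the same two components of v.
String swizzleDemo =
    "vec4 v = vec4(1.0, 2.0, 3.0, 4.0);\n" +
    "vec2 a = v.xy;  // spatial names\n" +
    "vec2 b = v.st;  // texture-coordinate names\n" +
    "vec2 c = v.rg;  // colour names -- a, b and c are all (1.0, 2.0)\n";
[/CODE]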

[QUOTE=GClements;1252708]Have you checked the range of supported point sizes with glGet(GL_POINT_SIZE_RANGE)? It’s valid for the upper limit to be 1.
Also, if GL_PROGRAM_POINT_SIZE is enabled, the size is taken from the value written to gl_PointSize by the vertex (or geometry) shader.[/QUOTE]
OK, I checked this, and with 15.8 for my points I am in range; 63.375 is the biggest allowed size for me. I also tried using GL_PROGRAM_POINT_SIZE, but I get the same result. My points are big, but they are filled with a single color, meaning that gl_PointCoord is 0 for all fragments.

Would buffers be a way to write data back to my application from GLSL, or is this not possible at all? I know that I can pass data into GLSL via uniform variables. I would like to find out how many fragments my points have.

[QUOTE=GClements;1252708]There is no difference. The elements of any vector can be accessed using any of x/y/z/w, r/g/b/a, or s/t/p/q (however, you can’t mix the forms in a single access, so e.g. v.xypq is invalid).
By convention, spatial coordinates use x/y/z/w, colours use r/g/b/a, and texture coordinates use s/t/p/q. The OpenGL specifications use s/t/r/q for texture coordinates, but “r” conflicts with red component in colours, so it was changed to “p” for GLSL.[/QUOTE]
Thanks for the insight.

You can use buffer variables or images to pass data from GLSL back to the client. If you want to implement a counter, you need to use atomicAdd() (or imageAtomicAdd()) to avoid race conditions.

Is the easiest way to realize this via Shader Storage Buffer Objects? This sounds kind of complicated to implement, and I haven't found a simple example on the web.
In the end I just want to see how many fragments are drawn per point, to verify my view of them. As I mentioned in the first post, my understanding is that every pixel that lights up for a point with PointSize > 1 corresponds to one fragment, meaning that for a PointSize of 3.0f I get 9 fragments per point. Each of these fragments should have different gl_PointCoord values.

Can someone confirm or deny my view on this?
I still have the problem that all my PointCoords are 0, but I think I am misunderstanding something here.

Yes.

Relatively speaking, it is.
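
A minimal sketch of the SSBO approach, assuming GLSL 4.3+ and JOGL; the buffer name, binding point and variable names are all made up, and the calls are from memory rather than tested:

[CODE=Java]
// Needs: import java.nio.IntBuffer; import com.jogamp.common.nio.Buffers;
// Fragment shader with a buffer-variable counter, kept as a Java string.
// Every fragment increments the counter via atomicAdd() to avoid races.
String countingFS =
    "#version 430 core\n" +
    "layout(std430, binding = 0) buffer FragCount { uint count; };\n" +
    "out vec4 fColor;\n" +
    "void main(){\n" +
    "    atomicAdd(count, 1u);\n" +
    "    fColor = vec4(gl_PointCoord, 0.0, 1.0);\n" +
    "}\n";

// Host side, assuming a GL4 'gl': one uint, zeroed, bound to binding 0.
int[] ssbo = new int[1];
gl.glGenBuffers(1, ssbo, 0);
gl.glBindBuffer(GL4.GL_SHADER_STORAGE_BUFFER, ssbo[0]);
gl.glBufferData(GL4.GL_SHADER_STORAGE_BUFFER, 4,
                Buffers.newDirectIntBuffer(new int[]{0}), GL4.GL_DYNAMIC_READ);
gl.glBindBufferBase(GL4.GL_SHADER_STORAGE_BUFFER, 0, ssbo[0]);

gl.glDrawArrays(GL.GL_POINTS, 0, 6);

// Make the shader writes visible before reading the counter back.
gl.glMemoryBarrier(GL4.GL_BUFFER_UPDATE_BARRIER_BIT);
IntBuffer result = Buffers.newDirectIntBuffer(1);
gl.glGetBufferSubData(GL4.GL_SHADER_STORAGE_BUFFER, 0, 4, result);
System.out.println("Fragments shaded: " + result.get(0));
[/CODE]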

You should be able to determine that much just by looking at the rendered image.

That is correct, assuming that the implementation actually uses 3.0 as the point size (implementations may impose a granularity on point sizes, with other sizes being rounded to the nearest supported size), and that multi-sampling isn’t enabled (if it is, you may actually get 4x4=16 fragments if the point’s centre doesn’t exactly coincide with the centre of a pixel).

Have you tried glEnable(GL_POINT_SPRITE)? This flag doesn’t exist in the core profile (point sprites are always enabled), but it may be required in the compatibility profile. The 4.3 compatibility profile specification isn’t entirely clear on this; it says:

Point sprites are enabled or disabled by calling Enable or Disable with the symbolic constant POINT_SPRITE. The default state is for point sprites to be disabled. When point sprites are enabled, the state of the point antialiasing enable is ignored. In a deprecated context, point sprites are always enabled.

The phrase “deprecated context” appears nowhere else in the document.

Thanks for the detailed reply, and first things first: gl_PointCoord works for me now. :slight_smile:

I am running all my code through the JOGL bindings. Since I am not running a compatibility profile, there was no GL_POINT_SPRITE capability available to me, so I figured that's not where the mistake lay and never tried it. I had to force access to the capability via a compatibility profile, and now it works.

Maybe this is a JOGL bug, that gl_PointCoord does not work without GL_POINT_SPRITE enabled.
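
In case anyone else hits this with JOGL, the setup that worked looks roughly like this (sketched from memory; the variable names are illustrative):

[CODE=Java]
// Request a compatibility ("bc") profile so GL_POINT_SPRITE is available:
GLProfile profile = GLProfile.get(GLProfile.GL4bc);
GLCapabilities caps = new GLCapabilities(profile);
// ... create the window/drawable with 'caps', then in init()/display():
GL2 gl2 = drawable.getGL().getGL2();
gl2.glEnable(GL2.GL_POINT_SPRITE);  // gl_PointCoord now varies across the point
[/CODE]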