Questions about geometry shader performance

Hello,

Before asking anything, please let me explain the context.

I’m currently working on a video game project in which I need to draw a lot of squares to render a Minecraft-like terrain made of cubes, somewhere between 100 000 and 500 000 per frame.
Of course, I’m culling a big part of it, so I can’t say exactly how many are drawn, but we can assume it’s still a lot.

As the game should be cross-platform, I’m working most of the time on an old Dell computer, on Ubuntu, with an NVIDIA chipset. NVIDIA drivers are installed. We are using GL 3.2.

So, my solution is to build multiple VBAs, each containing a part of the terrain geometry. Until recently, my arrays contained triangles, 2 for each square, so 6 vertices per square.
I was drawing them as GL_TRIANGLES, passing them through a vertex and fragment shader.

Recently, I decided that 6 vertices was way too many for a square. So even though many people told me not to, I tried drawing GL_QUADS, and I noticed a significant increase in performance, going from 58-65 fps to 70-80.

Obviously, as GL_QUADS has been deprecated since GL 3, my OS X friend told me: don’t use it.

So I decided to try creating a very basic geometry shader that takes LINES_ADJACENCY in and outputs TRIANGLE_STRIPs.
It’s a simple pass-through shader; no additional computation is done.
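
For reference, a minimal pass-through shader of that kind could look roughly like the sketch below (not my exact code; GLSL 150 for a GL 3.2 core context, with the 4 corners of each quad arriving through lines_adjacency, and the variable names made up for the example):

    // Simplified sketch of a pass-through geometry shader for quads,
    // kept as a C++ raw string so it can be handed to glShaderSource().
    static const char* kQuadGeometrySrc = R"glsl(
        #version 150 core
        layout(lines_adjacency) in;                    // 4 vertices per quad
        layout(triangle_strip, max_vertices = 4) out;

        void main()
        {
            // Re-emit the 4 corners in strip order (0, 1, 3, 2) so the two
            // triangles of the strip cover the quad.
            gl_Position = gl_in[0].gl_Position; EmitVertex();
            gl_Position = gl_in[1].gl_Position; EmitVertex();
            gl_Position = gl_in[3].gl_Position; EmitVertex();
            gl_Position = gl_in[2].gl_Position; EmitVertex();
            EndPrimitive();
        }
    )glsl";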

Now, my fps caps at 60 fps when I don’t move.

So my questions are:

  • Do geometry shaders always drag performance down?
  • Is it possible that this performance issue is related to my old hardware?
  • Does some hardware not really support geometry shaders, even with 3.2 drivers?
  • If some hardware lacks good support for geometry shaders, is it possible to know that at runtime?
  • Is the conversion from LINES_ADJACENCY to TRIANGLE_STRIP costly, or is it irrelevant in terms of performance?
  • Should I use indices or another solution? Or stick to triangles?

Thanks!

Do geometry shaders always drag performance down?

As far as I know, yes.

Is it possible that this performance issue is related to my old hardware?

Old hardware generally performs worse.

Does some hardware not really support geometry shaders, even with 3.2 drivers?

If the hardware does not support a feature, it doesn’t matter what driver you load; you won’t have access to it.

If some hardware lacks good support for geometry shaders, is it possible to know that at runtime?

Yes: the function pointer will be NULL. Also have a look at glGetString.
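
For example, a rough runtime check could look like this sketch (it assumes a loader such as GLEW is already initialised, and the helper name is made up):

    // Sketch: query the GL version and vendor/renderer strings at runtime.
    #include <cstdio>
    #include <GL/glew.h>

    bool hasCoreGeometryShaders()
    {
        GLint major = 0, minor = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);   // available since GL 3.0
        glGetIntegerv(GL_MINOR_VERSION, &minor);

        std::printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
        std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
        std::printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));

        // Core geometry shaders need GL 3.2+; some older drivers expose
        // them through the GL_ARB_geometry_shader4 extension instead.
        return (major > 3 || (major == 3 && minor >= 2)) ||
               glewIsSupported("GL_ARB_geometry_shader4");
    }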

Is the conversion from LINES_ADJACENCY to TRIANGLE_STRIP costly?

I don’t know, but I believe LINES_ADJACENCY adds a performance cost.

Should I use indices or another solution?

The main advantage of indices is in reducing the number of vertices needed in the vertex buffer.
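
For example (a sketch; the buffer handles and vertex layout are illustrative), each square then needs only 4 unique vertices plus 6 reused indices instead of 6 full vertices:

    // Sketch: quads stored as indexed triangles.
    #include <cstdint>
    #include <vector>
    #include <GL/glew.h>

    struct Vertex { float x, y, z; };   // illustrative layout

    // 'corners' holds 4 vertices per quad, ordered around the quad.
    void uploadQuads(const std::vector<Vertex>& corners,
                     GLuint vbo, GLuint ibo, GLsizei& indexCount)
    {
        std::vector<std::uint32_t> indices;
        const std::size_t quadCount = corners.size() / 4;
        indices.reserve(quadCount * 6);
        for (std::uint32_t q = 0; q < quadCount; ++q) {
            const std::uint32_t b = q * 4;
            // Two triangles per quad: (0, 1, 2) and (0, 2, 3).
            indices.insert(indices.end(), { b, b + 1, b + 2, b, b + 2, b + 3 });
        }
        indexCount = static_cast<GLsizei>(indices.size());

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, corners.size() * sizeof(Vertex),
                     corners.data(), GL_STATIC_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(std::uint32_t),
                     indices.data(), GL_STATIC_DRAW);
    }

    // Later, with the VAO bound:
    //   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);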

Do geometry shaders always drag performance down?

That’s always a safe assumption. Never use a GS because you think it will save performance. Use it when you can’t do something any other way.

Is it possible that this performance issue is related to my old hardware?

Maybe, but maybe not. You’d have to profile it. I understand that many pieces of DX11-class hardware have larger interim buffers (primarily for tessellation), such that GS’s don’t hurt as much. But you’ll have to test on each hardware platform.

Does some hardware not really support geometry shaders, even with 3.2 drivers?

If it didn’t support GS’s in hardware and was doing them in software, you’d be getting a lot less than 60 FPS.

If some hardware lacks good support for geometry shaders, is it possible to know that at runtime?

Besides checking the VENDOR/RENDERER strings? No.

Is the conversion from LINES_ADJACENCY to TRIANGLE_STRIP costly, or is it irrelevant in terms of performance?

There is no such conversion. There is simply a GS that takes lines_adjacency and outputs a triangle_strip. OpenGL doesn’t know or care how this happens, and your performance characteristics will generally be based on the specifics of your GS in this process: how many vertices you write, the fact that you’re using a GS at all, etc.

No, that’s not a safe assumption. It depends on what you’re doing. If you’re using the geom shader to unconditionally create more work, then maybe. But if you’re using the geom shader to potentially avoid tons of work, then no.

We do agree on your other point though: always profile! Determine your primary bottleneck. Then determine what you can do about it (on your target GPUs). Geom shaders could be part of the problem …or part of the solution.
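
For example, a rough way to see where the GPU time goes is a timer query around the suspect draw call (a sketch; GL_TIME_ELAPSED queries are core in GL 3.3 but widely exposed to 3.2 contexts through GL_ARB_timer_query, so check for that first):

    // Sketch: timing one draw call on the GPU with a timer query.
    #include <cstdio>
    #include <GL/glew.h>

    void timedDraw(GLuint query, GLsizei vertexCount)
    {
        glBeginQuery(GL_TIME_ELAPSED, query);
        glDrawArrays(GL_LINES_ADJACENCY, 0, vertexCount);   // the pass to measure
        glEndQuery(GL_TIME_ELAPSED);

        // Reading the result right away stalls the pipeline; in real code,
        // poll GL_QUERY_RESULT_AVAILABLE or read the result a frame later.
        GLuint64 ns = 0;
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
        std::printf("GPU time: %.3f ms\n", ns / 1.0e6);
    }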

What makes me wonder is how LINES_ADJACENCY is related to rendering cubes.
I would suggest either:
a) creating a geometry shader that takes GL_POINTS and outputs each cube as a single triangle strip. In theory you could then draw your entire scene with a single glDrawArrays call.
b) using instancing: create one ‘unit cube’ in a static VBO and use instancing to duplicate it thousands of times. This requires a bit more setup work, though.

Which one is faster? I don’t know. You’ll have to try it yourself.
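
A rough sketch of option a) could look like the shader below (GLSL 150; the uniform names are made up, each face is emitted as its own small strip rather than one 14-vertex strip for readability, and face winding/culling is not handled carefully):

    // Sketch of option a): one input point per cube, expanded in the GS.
    static const char* kCubeGeometrySrc = R"glsl(
        #version 150 core
        layout(points) in;
        layout(triangle_strip, max_vertices = 24) out;

        uniform mat4  u_viewProj;   // illustrative uniform names
        uniform float u_halfSize;   // half of the cube edge length

        // Emit one face as a 4-vertex strip; c is the face centre,
        // a and b are half-extent vectors spanning the face.
        void emitFace(vec3 c, vec3 a, vec3 b)
        {
            gl_Position = u_viewProj * vec4(c - a - b, 1.0); EmitVertex();
            gl_Position = u_viewProj * vec4(c + a - b, 1.0); EmitVertex();
            gl_Position = u_viewProj * vec4(c - a + b, 1.0); EmitVertex();
            gl_Position = u_viewProj * vec4(c + a + b, 1.0); EmitVertex();
            EndPrimitive();
        }

        void main()
        {
            vec3  p = gl_in[0].gl_Position.xyz;   // cube centre
            float h = u_halfSize;
            emitFace(p + vec3( h, 0, 0), vec3(0, h, 0), vec3(0, 0, h));  // +X
            emitFace(p + vec3(-h, 0, 0), vec3(0, 0, h), vec3(0, h, 0));  // -X
            emitFace(p + vec3(0,  h, 0), vec3(0, 0, h), vec3(h, 0, 0));  // +Y
            emitFace(p + vec3(0, -h, 0), vec3(h, 0, 0), vec3(0, 0, h));  // -Y
            emitFace(p + vec3(0, 0,  h), vec3(h, 0, 0), vec3(0, h, 0));  // +Z
            emitFace(p + vec3(0, 0, -h), vec3(0, h, 0), vec3(h, 0, 0));  // -Z
        }
    )glsl";

Option b) would instead draw the unit cube with glDrawElementsInstanced and fetch the per-cube offset either from an instanced attribute (glVertexAttribDivisor, core in 3.3 or via GL_ARB_instanced_arrays) or from gl_InstanceID in the vertex shader.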

What makes me wonder is how LINES_ADJACENCY is related to rendering cubes.

Because he’s rendering the surfaces of the cube terrain. That’s typically the way you render a Minecraft-alike.

He’s only using GS’s to emulate GL_QUADS.
