Draw with fragment shader without vertices

In OpenGL 3.3, is it possible to use the fragment shader just to “paint” to the screen without the need for VBOs and a vertex shader? …or do I need to create a simple vertex shader and at least a “quad” that covers the screen? Thanks!

You need to at least make a quad that covers the screen. Whether you need a vertex shader or VBOs depends upon whether you’re using the core profile or compatibility profile. For the core profile, you need a vertex shader and VBOs. For the compatibility profile, you only need a vertex shader if you use generic attributes or require some behaviour which isn’t available through fixed-function vertex processing, and you can use client-side vertex arrays, glBegin/glEnd, or glRect() instead of using VBOs.
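
For illustration, a minimal sketch of the compatibility-profile route (assuming identity modelview and projection matrices so that glRectf works directly in clip space; the program name is just a placeholder) might look like this:

[code="Application code (compatibility profile)"]
// No VBO and no vertex shader: fixed-function vertex processing passes the
// coordinates through, and glRectf(-1, -1, 1, 1) covers the whole viewport.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glUseProgram(fragmentOnlyProgram); // hypothetical program containing only your fragment shader
glRectf(-1.0f, -1.0f, 1.0f, 1.0f);
[/code]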

You only need a vertex shader; you don’t need any VBOs in the core profile. You can simply have your vertex shader generate the vertices of a quad, a point, or even complex geometry just by using the built-in gl_VertexID variable, which tells you which vertex your vertex shader is currently processing.

e.g. you can have the following to render a full-screen quad:

[code=“Application code”]
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[/code]

[code="Vertex shader"]
#version 330 core
const vec2 quadVertices[4] = { vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(-1.0, 1.0), vec2(1.0, 1.0) };
void main()
{
    gl_Position = vec4(quadVertices[gl_VertexID], 0.0, 1.0);
}
[/code]

I was doing the same, using a geometry shader:

  • no VBO is bound
  • glDrawArrays(GL_POINTS, 0, 1);
  • the geometry shader builds a full-screen quad in clip space (see the sketch below)
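
A minimal sketch of such a geometry shader (an illustration of the approach described above, not the poster’s exact code) could look like:

[code="Geometry shader"]
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

void main()
{
    // Expand the single input point into a full-screen quad in clip space.
    gl_Position = vec4(-1.0, -1.0, 0.0, 1.0); EmitVertex();
    gl_Position = vec4( 1.0, -1.0, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(-1.0,  1.0, 0.0, 1.0); EmitVertex();
    gl_Position = vec4( 1.0,  1.0, 0.0, 1.0); EmitVertex();
    EndPrimitive();
}
[/code]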

But since my last driver update (NVIDIA, I don’t remember the version number) this doesn’t work anymore. This is very strange, and we ran several tests:

  • on some NVIDIA GPUs, this works only when the application is started under the Visual Studio debugger
  • on some NVIDIA GPUs, this works every time, as expected
  • on some NVIDIA GPUs, this never works

On all of these GPUs it was working before the driver update, so I think this is some kind of driver bug, and we now always bind a VBO to be sure that the application will work.

In my experience, not having at least one vertex attribute seems to be a fairly reliable way of triggering bugs in the implementation.

Personally, I’d just supply the vertex data normally. It’s not as if 4 (or even 6) vertices consume a significant amount of memory.
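
For example, a minimal sketch of supplying the quad vertices through a small VBO (the identifiers and the use of attribute location 0 are illustrative; the vertex shader would declare a matching vec2 input) might be:

[code="Application code"]
// One-time setup: a tiny VBO holding the four corners of a full-screen quad.
const float quadVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), quadVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

// Per frame: bind the VAO and draw the quad as a triangle strip.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[/code]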

Thanks! The code works fine, except that I had to change the initialization to this:

[code="Vertex shader"]
const vec2 quad_vertices[4] = vec2[4](vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(-1.0, 1.0), vec2(1.0, 1.0));
[/code]

…it said “OpenGL doesn’t allow C-style initialization”.

“On all of these GPUs it was working before the driver update, so I think this is some kind of driver bug, and we now always bind a VBO to be sure that the application will work.”

Good to know. Do you mean just glGenBuffers and glBindBuffer, or do I need to fill it with some dummy data?

It’s possible to satisfy the one-attribute requirement with a single glVertexAttrib call, e.g.:

glVertexAttrib1f(0, 0.0f);

This is still valid in core profiles and just sets a “current” value for the attrib, which is then attached to all vertices in your draw call. No VBO needed. (For bonus points and with sufficient care you can abuse this functionality to emulate a limited number of shared uniform slots without needing UBOs.)
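
As a rough sketch of that idea (attribute location 1 and the variable name are just examples), one could write:

[code="Application code"]
// No buffer is bound for attribute 1; the "current" value set here is
// applied to every vertex of the draw call, much like a per-draw constant.
glVertexAttrib4f(1, 0.2f, 0.4f, 0.6f, 1.0f);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[/code]

[code="Vertex shader"]
#version 330 core
const vec2 quadVertices[4] = vec2[4](vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(-1.0, 1.0), vec2(1.0, 1.0));
layout(location = 1) in vec4 tint; // picks up the current attrib value set by glVertexAttrib4f
out vec4 vTint;

void main()
{
    vTint = tint;
    gl_Position = vec4(quadVertices[gl_VertexID], 0.0, 1.0);
}
[/code]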