Pure Server Rendering

I am an experienced C programmer with a basic understanding of OpenGL. I have designed a multimedia service that runs on a pure Ubuntu server (no Xorg installed) running on high-end server hardware. This multimedia service currently performs various functions dealing with streaming video and audio. I am interested in optimizing some of my video manipulation code using hardware-accelerated OpenGL.

I am familiar with FBOs, and this is what I intend to use for all of my rendering. The idea is for the service to grab a video stream, perform some hardware-accelerated modifications to it using various shaders and matrix operations, then load the rasterized pixels back into memory to be re-streamed to another device. The CPU is a precious resource in this case, which is why I am trying to offload some of this work to the GPU while avoiding Xorg entirely.
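
To make the intent concrete, the per-frame core of what I have in mind is roughly the sketch below (texture upload, shader setup, and error checking are omitted, and names such as process_frame, frame_fbo, and out_pixels are just placeholders I made up):

/* Sketch: render one frame into an FBO and read the pixels back.
 * Assumes <GL/glew.h> (or equivalent) is included, a GL context is
 * current, and shaders/video textures are managed elsewhere. */
static void process_frame(int width, int height, unsigned char *out_pixels)
{
    GLuint frame_fbo, frame_tex;
    glGenFramebuffers(1, &frame_fbo);
    glGenTextures(1, &frame_tex);

    /* Color attachment that the quads will be rasterized into. */
    glBindTexture(GL_TEXTURE_2D, frame_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glBindFramebuffer(GL_FRAMEBUFFER, frame_fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, frame_tex, 0);
    glViewport(0, 0, width, height);

    /* ... bind shaders and draw the textured quads here ... */

    /* Read the rasterized frame back into system memory for re-streaming. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, out_pixels);

    glDeleteTextures(1, &frame_tex);
    glDeleteFramebuffers(1, &frame_fbo);
}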

Here are a few questions that I have:

1. How can I execute OpenGL’s rendering engine on a pure server without any Xorg overhead? All the examples I have found require a device context from a windowed environment, which is not applicable in my case. It would be great if some example code could be provided.

2. The extent of OpenGL’s rendering complexity will be one or two textured quads floating in a perspective view. I take it that’s 4 textured polygons or so? My point is that I don’t expect the GPU to work that hard on producing my rasters. Thus, my question is this: Can I safely use the GPU on a dedicated server (Example: ATI ES1000) or should I consider using a video accelerator expansion card for this sort of task? If so, what kinds of advantages are there for my kind of application in using an expansion card versus a stock on-board GPU?

3. How much of a performance hit is it to have multiple parallel processes performing similar rendering functions on the same GPU? Should this even be a concern? I don’t think I will ever have more than 16 parallel processes working with the GPU.

Once I have a full understanding of how this works, I would be happy to provide complete, fully commented source code so that others in my situation could render on a pure server.

Thank you in advance for your help!

1. How can I execute OpenGL’s rendering engine on a pure server without any Xorg overhead?

You can’t.
Without an X server and an attached screen you can’t create an OpenGL context. You can run OpenGL remotely only if you have a screen attached (and X running). NVidia’s GRID is intended for headless 3D acceleration, but you can’t buy that hardware yet.
You could look into using OpenCL or CUDA for your task; neither requires X or a screen, but both will probably not run on your GPU.

Actually, I think you can allocate a GLX context (a GL context under X Windows) to target one of:

1. an X window,
2. an X pixmap, or
3. an off-screen pbuffer.

See the GLX_DRAWABLE_TYPE attribute passed to glXChooseFBConfig(), whose value can be one of (or a bitwise OR of) GLX_WINDOW_BIT, GLX_PIXMAP_BIT, or GLX_PBUFFER_BIT.

So if you just want off-screen rendering, you can use a pbuffer. Or (possibly simpler) you can just allocate a context with a window system framebuffer, never map the window onto the screen, and do all your rendering in an off-screen FBO.
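
If it helps, pbuffer setup looks roughly like the following. This is an untested sketch, not a drop-in solution: the attribute lists are trimmed, error handling is omitted, and the display name and sizes are placeholders.

/* Sketch: create a GL context on an off-screen pbuffer (no window mapped).
 * Requires a running X server; link with -lX11 -lGL. */
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(":0.0");

    const int fb_attribs[] = {
        GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        None
    };
    int nconfigs = 0;
    GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy),
                                             fb_attribs, &nconfigs);

    const int pb_attribs[] = {
        GLX_PBUFFER_WIDTH,  1280,   /* placeholder sizes */
        GLX_PBUFFER_HEIGHT, 720,
        None
    };
    GLXPbuffer pbuf = glXCreatePbuffer(dpy, configs[0], pb_attribs);

    GLXContext ctx = glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE,
                                         NULL, True);
    glXMakeContextCurrent(dpy, pbuf, pbuf, ctx);

    /* From here on it is ordinary OpenGL: FBOs, shaders, glReadPixels, ... */
    return 0;
}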

All that said, AFAIK you need a connection to the X server (Xorg in this case) to allocate a GLX context that maps to a GPU.

However, if you don’t care about using GPU-accelerated rendering and what Mesa3D supports is sufficient for your needs, IIRC you can just do your rendering off-screen in software with Mesa3D without a connection to the X server or a GPU. Check me on that though.
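
If you go that route, Mesa’s off-screen interface is OSMesa. Going from memory (so do verify against the Mesa documentation), a minimal program looks something like this:

/* Sketch: software off-screen rendering with Mesa's OSMesa interface.
 * No X server or GPU involved; link with -lOSMesa. */
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdlib.h>

int main(void)
{
    const int width = 1280, height = 720;       /* placeholder sizes */
    void *buffer = malloc(width * height * 4);  /* RGBA, 8 bits/channel */

    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
    OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height);

    /* Ordinary OpenGL calls now rasterize straight into 'buffer'. */
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    /* 'buffer' holds the finished frame, ready to be streamed out. */
    OSMesaDestroyContext(ctx);
    free(buffer);
    return 0;
}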

Thank you for your responses.

It’s unfortunate that an X server is a requirement in order to take advantage of the GPU using OpenGL. It seems like a design flaw on the part of OpenGL’s designers not to have considered a pure server environment.

Let’s assume I installed a basic Xorg installation, one without a window manager. Let’s also assume I did not have a monitor attached, and that this computer never has a user physically at it, since it is remotely managed over SSH.

1. Can the minimal X server be run automatically upon boot-up with no monitor attached?

2. Does it matter that no window manager is used? I seek the absolute lowest overhead.

3. Can services launched by remote users or server start-up scripts initialize and take full advantage of OpenGL against this minimal X server?

Menzel’s advice for using CUDA is interesting but limiting since it requires specific hardware from NVidia. Additionally, I would not be able to fully take advantage of the OpenGL library and its brilliantly efficient rendering system.

Dark Photon’s advice about using Mesa3D is also interesting, since it is compatible with OpenGL yet designed to do its rendering off the GPU, but I want to take advantage of the GPU without a heavy dependency on the X Window System. I understand that sending pixel data back and forth to the GPU’s RAM takes time, but I feel that the GPU’s rendering speed outweighs that transfer cost.

P.S. Why is the word “suggestion” a banned word on this forum? Every time I try to submit this word, it tells me it’s a bad word. I was forced to pull up a thesaurus in order to complete my post.

That’s actually an intentional design choice. It separates the core OpenGL API from the platform-specific details of how to create and manage windows, which allows OpenGL to be very portable across a variety of platforms.

EGL may help abstract those platform-specific details in the future, but it’s up-and-coming.

Let’s assume I installed a basic Xorg installation, one without a window manager. Let’s also assume I did not have a monitor attached, and that this computer never has a user physically at it, since it is remotely managed over SSH.

1. Can the minimal X server be run automatically upon boot-up with no monitor attached?

Yes.

2. Does it matter that no window manager is used? I seek the absolute lowest overhead.

No.

3. Can services launched by remote users or server start-up scripts initialize and take full advantage of OpenGL against this minimal X server?

Yes. Just open a display connection to display name “:0.0” in your app and go to town. For instance:

ssh remoteglserver xclock -display :0.0

At most you need to allow local connections on X startup (e.g. xhost +local:).
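
On the C side, the only display-specific code your service needs is something like this trivial sketch (the helper name is made up):

/* Sketch: a headless service attaching to the local X server's display. */
#include <X11/Xlib.h>
#include <stdio.h>

static Display *open_server_display(void)
{
    Display *dpy = XOpenDisplay(":0.0");    /* or pass NULL to honor $DISPLAY */
    if (dpy == NULL)
        fprintf(stderr, "cannot open display :0.0 -- is Xorg running?\n");
    return dpy;                             /* hand this to your GLX setup */
}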

I want to take advantage of the GPU without a heavy dependency on the X Window System.

There’s really very, very little source code “dependency on the X Window System” needed to create an X window and a GLX context for it. Very, very little (like one or two routines). Past that, it’s pure OpenGL. That’s why abstraction libraries like GLUT are possible. In fact, you can just use GLUT to create your window and context if that’s sufficient for you (and it sounds like it very well may be) and thus totally avoid coding any calls to X or GLX routines in your programs.
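
For example, something along these lines would do it. This is a rough sketch, not tested here; I’m assuming freeglut and GLEW are installed and that DISPLAY points at the server’s X display:

/* Sketch: let GLUT do the X/GLX work, then render off-screen.
 * Link with -lglut -lGLEW -lGL. */
#include <GL/glew.h>
#include <GL/glut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);                  /* connects to $DISPLAY, e.g. :0.0 */
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(16, 16);             /* size is irrelevant */
    glutCreateWindow("headless");           /* never seen on any real monitor */
    glewInit();                             /* load FBO/shader entry points */

    /* From here, do all real rendering into your own FBO and read it back
       with glReadPixels(); no further GLUT, X, or GLX calls are required. */
    return 0;
}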

P.S. Why is the word “suggestion” a banned word on this forum? Every time I try to submit this word, it tells me it’s a bad word. I was forced to pull up a thesaurus in order to complete my post.

That’s odd. I’ll mention it to James.

Take a look here: http://www.cl.cam.ac.uk/~cs448/git/trunk/src/docs/glfbdev-driver.html
or just Google for “framebuffer” and “Mesa”.

You can find some code showing how to set up windowless OpenGL (also via SSH) here: Windowless OpenGL | RenderingPipeline. As said, you will need X; I’m not sure about a window manager. A monitor can be simulated to the graphics card with a VGA dummy plug (basically three resistors at the right pins), so you don’t need a real screen in your server rack.

Looks like our “bad word” list might have been overly aggressive on Newbie accounts. I’ve made an adjustment and tested. Posts seem to work fine now.