Can I do some larger computations on the GPU via OpenGL?

Hi Guys!

I’m developing a viewer for HDR images (OpenEXR), and the tone mapping (mapping the HDR data into the RGB color space) is implemented in shader code and runs on the GPU. That’s no problem, because it’s really fast. But I also implemented a fragment shader to do some filtering. This computation is expensive, and when it runs on the GPU with higher parameters, Windows restarts the GPU driver (NVIDIA) because it isn’t responding within a certain time. This also happens with the Intel HD 4000.

My questions:
Is there any known best practice for doing larger computations on the GPU via OpenGL?
Or is it possible to prevent Windows from interrupting the computation and restarting the driver?
And if not, do you know a way to catch this error so the viewer doesn’t crash?

Thanks for your help!

Greetz,
Mane

How did you implement your filter? Using a 3x3 kernel for linear or even non-linear image filtering should not be a problem at all. I’ve seen people implement 9x9 Gaussian blur filters in real-time rendering applications years ago without any performance issues, so I’m quite curious to find out what you are doing.
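One thing worth checking: if your kernel is separable (a Gaussian is), you can replace the single NxN pass with a horizontal and a vertical 1D pass, which cuts a 9x9 blur from 81 to 18 fetches per fragment. A toy CPU sketch of the idea (all names here are my own; the clamped fetch mimics GL_CLAMP_TO_EDGE sampling):

```cpp
#include <algorithm>
#include <vector>

using Image = std::vector<float>;

// Clamp-to-edge texel fetch, like sampling with GL_CLAMP_TO_EDGE.
float texel(const Image& img, int w, int h, int x, int y) {
    x = std::clamp(x, 0, w - 1);
    y = std::clamp(y, 0, h - 1);
    return img[y * w + x];
}

// Full 2D convolution with the outer-product kernel k[i]*k[j]:
// N*N taps per pixel.
Image blur2D(const Image& src, int w, int h, const std::vector<float>& k) {
    int r = (int)k.size() / 2;
    Image dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int j = -r; j <= r; ++j)
                for (int i = -r; i <= r; ++i)
                    sum += k[j + r] * k[i + r] * texel(src, w, h, x + i, y + j);
            dst[y * w + x] = sum;
        }
    return dst;
}

// One 1D pass along x (dx=1, dy=0) or y (dx=0, dy=1): N taps per pixel.
Image blur1D(const Image& src, int w, int h, const std::vector<float>& k,
             int dx, int dy) {
    int r = (int)k.size() / 2;
    Image dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int i = -r; i <= r; ++i)
                sum += k[i + r] * texel(src, w, h, x + i * dx, y + i * dy);
            dst[y * w + x] = sum;
        }
    return dst;
}
```

Running `blur1D` horizontally and then vertically produces the same result as `blur2D` (up to float rounding), so in a shader you would render into an intermediate FBO between the two passes and pay 2N instead of N² fetches.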

Also, a friend of mine started writing a master’s thesis on GPGPU image processing back in 2010, and as far as I remember he mentioned running into a problem where a watchdog timer would trigger on his NVIDIA card, but it took quite some workload to actually reach that point.
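What you’re hitting is most likely Windows TDR (Timeout Detection and Recovery), which resets the driver when a single GPU submission runs longer than about two seconds. You can raise the limit via the `TdrDelay` registry value, but that only helps on your own machine. The more portable workaround is to keep each submission short: cover the image with tiles and issue one scissored draw per tile (roughly `glEnable(GL_SCISSOR_TEST)`, then per tile `glScissor(...)`, draw the fullscreen quad, `glFlush()`), so no single command batch runs long enough to trip the watchdog. A minimal sketch of just the tile decomposition (the `Tile`/`makeTiles` names are my own):

```cpp
#include <algorithm>
#include <vector>

// A scissor rectangle: what you would feed to glScissor(x, y, w, h)
// before drawing the fullscreen filter quad for that tile.
struct Tile { int x, y, w, h; };

// Cover an imgW x imgH framebuffer with tiles of at most tileW x tileH.
// Edge tiles are shrunk so the grid fits the image exactly.
std::vector<Tile> makeTiles(int imgW, int imgH, int tileW, int tileH) {
    std::vector<Tile> tiles;
    for (int y = 0; y < imgH; y += tileH)
        for (int x = 0; x < imgW; x += tileW)
            tiles.push_back({x, y,
                             std::min(tileW, imgW - x),
                             std::min(tileH, imgH - y)});
    return tiles;
}
```

Smaller tiles mean more draw calls but a shorter worst-case submission; you can tune the tile size until even your most expensive filter settings stay well under the timeout. As a bonus, flushing between tiles keeps the UI responsive, so the driver reset simply stops happening instead of needing to be caught.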