
Thread: State of OpenGL drivers from Nvidia, AMD, Intel, and Apple

  1. #1
    Junior Member Newbie
    Join Date
    May 2002
    Posts
    25

    State of OpenGL drivers from Nvidia, AMD, Intel, and Apple

    Since I have been running into these bugs as well, and AMD isn't forthcoming about when or whether they have fixed anything (nor have they released any beta drivers for a while), it is nice to have a tally of the current state of the OpenGL drivers. When people wonder why something doesn't work, it may not be their code; it could very well be driver bugs instead.

    All of this is from http://www.g-truc.net/post-0655.htm

    This table shows the current status for all the vendors:
    [Attached image: 0655.jpg, the vendor-by-vendor OpenGL driver status table]

    Basically, it seems that if you are serious about OpenGL and want the fewest driver bugs, your only choice is Nvidia hardware.


    The full breakdown is available here: http://www.g-truc.net/doc/OpenGL%20status%202014-05.pdf



  2. #2
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,201
    I'm continually fascinated by how far Intel have come in these tests. Their performance is getting close to competitive with other low-end GPUs (particularly in the mobile space) and when it comes to features I'd personally rather see "unsupported" than "fail".

  3. #3
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    I've come to really like Intel for two reasons: they contribute heavily to the development and progression of the Linux graphics stack, and they have the most open hardware and software documentation out there. Another thing: unlike AMD and NVIDIA, who have to care about many legacy applications and their respective clients, Intel decided not to implement ARB_compatibility - which is awesome.

    Fun (or more or less depressing) facts about the big three here - in case you didn't see it already.

  4. #4
    Intern Newbie
    Join Date
    Mar 2014
    Posts
    47
    Quote Originally Posted by thokra View Post
    Another thing: unlike AMD and NVIDIA, who have to care about many legacy applications and their respective clients, Intel decided not to implement ARB_compatibility - which is awesome.

    Yes, truly awesome. The old legacy apps don't work on old Intel because it's garbage, and they don't work on new Intel because it doesn't have the compatibility profile.
    Believe it or not, some developers are stuck with such software - and thanks to this crap we can't do a gradual upgrade to modern features. It's either all or nothing. So nothing it is, and the apps stay as they are, because a rewrite isn't feasible.

  5. #5
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    Quote Originally Posted by Nikki_k
    they don't work on new Intel because it doesn't have the compatibility profile
    You are aware that feature removal only takes effect if you explicitly request a 3.1+ core context when using Intel's DRI driver, right? If you create a 3.0 context, you can still use every single feature OpenGL 2.1 and OpenGL 3.0 provide, plus a load of extensions.
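
    For illustration, a minimal GLX sketch of the two requests (assumes a valid Display *dpy and GLXFBConfig fbconfig, and that glXCreateContextAttribsARB has been resolved via glXGetProcAddressARB; tokens come from GL/glx.h plus glxext.h):

    Code:
    /* A plain 3.0 context: no profile mask, so all legacy features stay usable. */
    static const int attribs_30[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 0,
        None
    };

    /* A 3.3 core context: this is where the removed functionality goes away. */
    static const int attribs_core[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 3,
        GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
        None
    };

    /* Swap in attribs_core to get the stripped-down context instead. */
    GLXContext ctx = glXCreateContextAttribsARB(dpy, fbconfig, NULL, True, attribs_30);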

    Saying you can't gradually port from legacy to 3.x is simply nonsense. Of course, if you want to use 3.1+ core features, you'll have to get rid of all the removed stuff and port in one swing, true. I'd first try to isolate everything you can port to GL 3.0 core features, do that, and let the rest follow when you get the time.

    In principle, forcing users to port to GL3/4 core contexts would give vendors the opportunity to provide drivers that focus only on implementing the core profile and extensions, and to can their legacy code base. Oh well, we all know how it is instead.

  6. #6
    Intern Newbie
    Join Date
    Mar 2014
    Posts
    47
    You make it sound so easy, but my guess is you've never had a chance to look at the code involved in porting such old projects.
    They normally come with a code base that has no optimized rendering flow, takes liberal advantage of the freedom immediate mode gives, and is nearly impossible to rewrite without addressing some fundamental design decisions first.

    The code of the project I'm working on is inherently non-portable to core GL 3.x; it would necessitate a complete rewrite of our data management.
    I can, however, port it to 4.x with persistently mapped buffers, but this can only be a gradual transition because the project is quite large. And with Intel not supporting a 4.x compatibility context, I'm stuck in a situation where, in order to keep everything working, all the old cruft needs to be retained throughout the entire transition - but try telling that to a boss who needs to be sold on 'more efficiency'. The answer I got was a straight 'no': not worth the effort if we can't clean up the code for months to come.
    The old code, currently based on GL 3.0, is working fine, after all...

  7. #7
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    Quote Originally Posted by Nikki_k View Post
    You make it sound so easy, but my guess is you've never had a chance to look at the code involved in porting such old projects.
    I advise you to guess again.

    Quote Originally Posted by Nikki_k View Post
    takes liberal advantage of the freedom immediate mode gives
    Such as?

    Quote Originally Posted by Nikki_k View Post
    The code of the project I'm working on is inherently non-portable to core GL 3.x
    Examples pls - no code, just a higher level problem description.

    Quote Originally Posted by Nikki_k View Post
    I can, however, port it to 4.x with persistently mapped buffers.
    Persistent mapping is an optimization - nothing more, nothing less. Plus, you can't port to 4.x willy-nilly - you need core 4.4 compliance or GL_ARB_buffer_storage.
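
    A minimal sketch of what that looks like (assumes a 4.4 context or GL_ARB_buffer_storage, with entry points resolved by your function loader):

    Code:
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);

    /* Storage flags and map flags must agree for persistent mapping. */
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glBufferStorage(GL_ARRAY_BUFFER, 4 << 20, NULL, flags);   /* 4 MiB, immutable */
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, 4 << 20, flags);
    /* 'ptr' stays valid across draw calls; use fences (glFenceSync /
       glClientWaitSync) so you never overwrite data the GPU is still reading. */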

    Quote Originally Posted by Nikki_k View Post
    but try telling that to a boss who needs to be sold on 'more efficiency'. Not worth the effort if we can't clean up the code for months to come.
    I know exactly what you're speaking of.

    Quote Originally Posted by Nikki_k View Post
    The old code, currently based on GL 3.0, is working fine, after all...
    If your code is completely based on core OpenGL 3.0, it's hardly legacy code. Are we talking about actual legacy code here at all? I'm sorta confused...

  8. #8
    Intern Newbie
    Join Date
    Mar 2014
    Posts
    47
    It still uses immediate mode, but thanks to GL 3.0 we were at least able to remove the fixed-function code. We haven't done matrices yet because it's a waste of time until we get the big obstacle out of the way.

    The main problem I am facing is that I can't get rid of the immediate mode without using GL_ARB_buffer_storage's persistently mapped buffers. It's simply impossible to convert to core 3.x first and then upgrade.

    And having to deal with hardware that does not allow both to coexist means I can't do it gradually; it has to be done all at once. And that's simply impossible.
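
    Roughly the shape of what I mean - the glVertex-style call sites stay, only their implementation changes to write into the persistently mapped buffer (emit_vertex and flush_batch are made-up names for illustration):

    Code:
    typedef struct { float pos[3]; float uv[2]; } Vertex;

    static Vertex *g_mapped;            /* from glMapBufferRange(..., GL_MAP_PERSISTENT_BIT) */
    static size_t  g_used, g_capacity;

    static void flush_batch(void)
    {
        /* Placeholder: draw the queued range, insert a fence, wait on the
           oldest fence if the ring is full, then recycle the space. */
        g_used = 0;
    }

    /* Drop-in replacement for an old glVertex/glTexCoord call pair. */
    static void emit_vertex(float x, float y, float z, float u, float v)
    {
        if (g_used == g_capacity)
            flush_batch();
        Vertex *out = &g_mapped[g_used++];
        out->pos[0] = x; out->pos[1] = y; out->pos[2] = z;
        out->uv[0]  = u; out->uv[1]  = v;
    }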

  9. #9
    Intern Newbie
    Join Date
    Mar 2014
    Posts
    47
    Quote Originally Posted by thokra View Post
    Persistent mapping is an optimization - nothing more, nothing less. Plus, you can't port to 4.x willy-nilly - you need core 4.4 compliance or GL_ARB_buffer_storage.

    Well, that depends on what you do.

    Years ago I tried to replace immediate mode with a glBuffer(Sub)Data-based method. It translates badly: the buffer sizes involved are simply too small, and the only way to make it work would be to reorganize all the data to accumulate larger amounts, which is far beyond the scope anyone is willing to take on. And constant mapping/unmapping is even worse in this particular case.
    That's why the project never went further than immediate-mode GL 3.0: anything beyond that simply wasn't doable efficiently.
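
    For reference, the pattern I tried back then looked roughly like this, once per tiny immediate-mode-sized batch ('buf' and the 'batch_*' names are placeholders):

    Code:
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferData(GL_ARRAY_BUFFER, batch_bytes, NULL, GL_STREAM_DRAW); /* orphan old storage */
    glBufferSubData(GL_ARRAY_BUFFER, 0, batch_bytes, batch_vertices); /* re-upload */
    glDrawArrays(GL_TRIANGLES, 0, batch_vertex_count);
    /* One upload + draw per small batch: exactly the overhead that killed it. */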

    With a persistently mapped buffer I can keep everything as it is, and the code is actually faster, at least on NVIDIA. As for GL_ARB_buffer_storage, it is supported by all recent drivers from the three major manufacturers. Fortunately, one thing I don't have to think about here is old hardware.
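
    Checking for it at runtime is simple enough; a sketch, assuming a 3.0+ context so glGetStringi is available:

    Code:
    #include <string.h>

    static int have_buffer_storage(void)
    {
        GLint n = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &n);
        for (GLint i = 0; i < n; ++i) {
            const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
            if (ext && strcmp(ext, "GL_ARB_buffer_storage") == 0)
                return 1;
        }
        return 0;
    }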

  10. #10
    Senior Member OpenGL Pro (Aleksandar)
    Join Date
    Jul 2009
    Posts
    1,146
    Quote Originally Posted by Nikki_k View Post
    The main problem I am facing is that I can't get rid of the immediate mode without using GL_ARB_buffer_storage's persistently mapped buffers.
    Can you please elaborate on this statement?

    There is no direct link between persistent buffer storage and immediate mode. Furthermore, persistent buffers are not the best possible solution for every problem programmers have with OpenGL. If the data is dynamic, they can boost performance, but if it is static, the classical approach is much better. They also give no boost for transform feedback (tried in my application). Why would you use persistent buffer storage at all?
