Don't try to do a DirectX 12 feature-parity counterpart too early

As the title says, don’t try to do a DirectX 12 feature-parity counterpart too early.
Hasty, not-well-thought-out features can really hamper OpenGL.

Definitely not in the coming months or this summer. That would be way too early.

This summer, with OpenGL 4.5, please only add OpenGL 3.1 context creation and cleanup, or fix very small things/imperfections/inconsistencies.

In principle I agree: things should be carefully thought out, not rushed, and also not blindly copied. Microsoft-designed APIs are especially BAD models to copy from. Many decisions there are political rather than technical (e.g. their disgusting COM, which to this day they continue to force down developers’ throats).
Other decisions of theirs are badly misguided because of their unwillingness to consult others in the industry and their arrogant attitude of “we decide single-handedly and then impose our decisions on the entire world”.

Well, firstly, D3D12’s full functionality is going to be accessible on a lot of hardware that’s actually already quite old today - all the way back to NVIDIA Fermi, according to this: Gaming Archives | NVIDIA Blog - so in terms of feature parity it’s more likely to be a case of D3D playing catch-up with what’s already available in OpenGL via vendor extensions.

But that’s not what D3D12 is supposed to be about. According to all of the advance material, the primary purpose of D3D12 is reducing API overhead, and new features are going to be comparatively thin on the ground. Now, GL has its own ways of achieving a similar result, but what’s interesting is that D3D and the console APIs are moving in one direction (writing directly to command buffers) whereas GL seems to be moving in a different direction (instancing and multi-draw indirect), which means that GL is really starting to look like the weird, incompatible outlier. This is kind of crucial, as we’re near one of those tipping points where the ARB always seems to have a bad habit of screwing things up (their track record doesn’t inspire confidence). If GL ends up being the API that involves considerably more work to port to, it affects the likelihood of people doing a GL port. If it’s more difficult to port to another API from a GL base, it affects the likelihood of people selecting GL to begin with. I think we can all agree that’s an outcome that would suck.
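(For reference, the “GL direction” mentioned above looks roughly like this - a minimal sketch of multi-draw indirect as exposed by GL 4.3 / ARB_multi_draw_indirect. It assumes a loader such as glad is already initialized; the buffer and count names are hypothetical.)

```c
#include <glad/glad.h>   /* assumed loader; any loader exposing GL 4.3 prototypes works */

/* One element of the command buffer consumed by glMultiDrawElementsIndirect;
   this layout is fixed by the spec. */
typedef struct {
    GLuint count;          /* indices per draw */
    GLuint instanceCount;  /* instances per draw */
    GLuint firstIndex;     /* offset into the index buffer */
    GLuint baseVertex;     /* added to every index */
    GLuint baseInstance;   /* starting instance ID */
} DrawElementsIndirectCommand;

/* 'cmdBuffer' holds 'drawCount' tightly packed commands uploaded elsewhere
   (hypothetical names); a single CPU call then submits the whole batch. */
static void submit_batch(GLuint cmdBuffer, GLsizei drawCount)
{
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmdBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                (const void *)0, drawCount, 0);
}
```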

With all that said, I largely agree that moving too early would be a bad idea. There are plenty of areas in GL that do need cleaning up, and still some remaining API cruft that needs to be taken outside and shot. This year a GL4.5 that addresses these issues (and makes a better stab at making new features more accessible to downlevel hardware, so that we don’t get a repeat of the latest GL_VERSION being NVIDIA-only for such a long time; GL4.3 and 4.4 failed miserably at that) is a better prospect.

I’m not certain that comments about COM are relevant to a discussion about feature parity, but yes, COM is horrible; despite that, D3D still has a few areas where its API is cleaner than GL’s.

Likewise, we largely have Microsoft to thank for forcing standards on the hardware vendors; look at vendor extensions again, for example. Do we really want a world where every hardware vendor has its own incompatible API and shading language? That’s the world we would have got if vendor extensions had been allowed to run riot.

Fortunately the “political” parts are easier to spot and filter out. And the danger of getting COM into OpenGL should be relatively small :slight_smile: That was mostly a joke.

But I’m more worried about the “misguided” things - not only the ones copied from outside (DirectX) but also the ones invented by the OpenGL committee itself. We don’t want to see any more things like pbuffers or the shader program/pipeline mess.
Or the idiotic decision to tie the “sRGB” property to the internal format (which makes it immutable for a given texture) and cause major headaches for many DirectX-to-OpenGL porting developers, even though all hardware that has ever supported sRGB also supports turning it on/off on the fly. The reason for this decision was that someone thought it was the “right” thing to do, for some arbitrary notion of “right” in their head that apparently was completely out of touch with reality. Well, they finally fixed this particular idiocy with the sRGB decode extension. Better late than never. In this case the fix was relatively easy, without leaving too much garbage in the API (although it did leave a certain amount of garbage in the form of the redundant sRGB and non-sRGB format variants). Other bad decisions pollute the API considerably more, like the shader mess, which there is no getting around.
That’s the kind of hasty decision I’m most afraid of.
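(For the record, the fix referred to above is the sRGB decode extension, EXT_texture_sRGB_decode. Here is a minimal sketch of the per-texture toggle, assuming the extension is advertised and the texture was created elsewhere; ‘tex’ is a hypothetical name.)

```c
#include <glad/glad.h>   /* assumed loader exposing the EXT_texture_sRGB_decode tokens */

/* Turn sRGB decoding on or off for an already-created sRGB-format texture.
   Before this extension the sRGB property was baked into the internal format
   and could not be changed after creation. */
static void set_srgb_decode(GLuint tex, int decode)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SRGB_DECODE_EXT,
                    decode ? GL_DECODE_EXT : GL_SKIP_DECODE_EXT);
}
```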

What I’d be more concerned about is “we don’t need Feature-X because we’ve already got Feature-Y”, where Feature-Y turns out to be completely unrelated and doesn’t even solve the problem that Feature-X was initially designed to solve (thus showing a misunderstanding).

We saw it with instancing (where glVertexAttrib calls were initially felt to be good enough), with the sheer length of time it took to get glMapBufferRange, and - yes - with the transition from pbuffers to FBOs (where the essentials of FBO capability actually already existed going back to GL1.4-class hardware).
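(To make the glMapBufferRange point concrete, here is a rough sketch of the kind of partial, unsynchronized buffer update it finally made possible with GL 3.0 / ARB_map_buffer_range; the buffer, offset and source names are hypothetical, and a loader such as glad is assumed.)

```c
#include <glad/glad.h>   /* assumed loader providing GL 3.0+ prototypes */
#include <string.h>

/* Rewrite a sub-range of 'vbo' without forcing a CPU/GPU sync: the range is
   invalidated and mapped unsynchronized, so the caller is responsible for
   not overwriting data the GPU is still reading. */
static void update_region(GLuint vbo, GLintptr offset, GLsizeiptr size,
                          const void *src)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    void *dst = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                 GL_MAP_WRITE_BIT |
                                 GL_MAP_INVALIDATE_RANGE_BIT |
                                 GL_MAP_UNSYNCHRONIZED_BIT);
    if (dst) {
        memcpy(dst, src, (size_t)size);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}
```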

My biggest concern, as always, is drivers and driver support. I would have really preferred to have seen something like GL_ARB_bindless_texture split in two, with the basic capabilities available to GL3.x+ hardware and the GL4.x+ stuff split out as a separate extension (I’m not specifically citing bindless here, just using it as a convenient example). I’m not particularly wedded to GL3.x+ but I think this kind of approach would enable vendors to get new stuff into drivers quicker (and I accept that it’s not always possible to do this kind of split, so let’s qualify that with “wherever possible”).

As for the drivers, I think a major hurdle for the driver writers is having to support all the legacy stuff in the compatibility profile, which is useless and pointless. It is my suspicion that NVIDIA is the principal culprit for continuing to drag the legacy stuff along. They say it serves their customers who rely on legacy software, but that’s a lie, because the supposed legacy software uses old GL versions. Any application that uses a recent version is newly written, and there is no point in it using obsolete features from the “compatibility” profile. But I suspect that NVIDIA’s real motivation is that they believe their drivers have all that mess better implemented and sorted out than the competitors’, and so they see it as a strength. So they continue to push for the continued support of all the legacy bloat, by the logic that even though it is bad for them, if it is even worse for the competitors it should ultimately be good for them. Man, how I hate such vile politics.
For this reason I recently became interested in AMD Mantle, which is free of legacy crud and allows much lighter drivers.

Actually it’s not only the compatibility profile. There are legacy things in the “core” profile that are very bad for the drivers and need to be removed too, like the old way of creating textures (e.g. glTexImage2D).
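(To illustrate what the “old way” means here, a rough comparison of the mutable path against immutable storage, which came with GL 4.2 / ARB_texture_storage. It assumes a texture is already bound to GL_TEXTURE_2D and a loader such as glad is initialized; ‘w’, ‘h’ and ‘pixels’ are hypothetical.)

```c
#include <glad/glad.h>   /* assumed loader; GL 4.2+ or ARB_texture_storage */

/* Legacy path: every mip level can later be respecified with a different
   size or format, so the driver has to defer validation until draw time. */
static void create_texture_legacy(GLsizei w, GLsizei h, const void *pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

/* Immutable-storage path: size, format and mip count are fixed up front and
   data is uploaded separately, which is much friendlier to the driver. */
static void create_texture_immutable(GLsizei w, GLsizei h, const void *pixels)
{
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, w, h);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```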

No driver vendor has to support the compatibility profile; it is optional. So blame the driver vendors for not wanting to drop the functions that you don’t need.

I think that if Khronos stopped updating the compatibility profile with every new GL version, some vendors would just provide the functionality via proprietary extensions as long as there was demand for it.