Multi-threading and OpenGL context problem on Win32

I’m just trying to write a simple program with some OpenGL rendering that works on Linux and Windows, and I test it with nVidia and AMD video cards. It has two modes of operation:

  1. Single-threaded. Window event polling and OpenGL rendering happen in the main thread; no other threads are created. Works as it should on Linux and on Windows with either video card.
  2. Multi-threaded. Window event polling is in the main thread; OpenGL context creation and rendering are in a secondary thread. Works on Linux with either card and on Windows with nVidia, but not with AMD.

Multi-threaded mode on Windows with an AMD card fails in the secondary thread on the first call to wglMakeCurrent. My context-creation code:


Win32Context::Win32Context(Window &rWindow):
	m_pWindow(&rWindow) {
	HWND hWindow = (HWND)m_pWindow->getNative();
	m_hDC = GetDC(hWindow);
	m_hContext = NULL;

	// context is created without errors here:
	create(GLContext::GL_COMPATIBILITY, NULL);

	if (!WGL::_loaded) {
		// To load the WGL extensions we need a current context.

		// Fails on this call (with memory access error in atiDrvPresentBuffer):
		wglMakeCurrent(m_hDC, m_hContext);

		int nResult = WGL::LoadFunctions(m_hDC); // load WGL extensions...
		assert(nResult == WGL::_LOAD_SUCCEEDED);
		unused(nResult);
		wglMakeCurrent(NULL, NULL);
	}
}

bool Win32Context::create(GLContext::Profile nProfile, const Win32Context *pShareContext) {
	if (m_hContext != NULL) {
		wglDeleteContext(m_hContext);
		m_hContext = NULL;
	}

	if (nProfile == GLContext::GL_COMPATIBILITY) {
		m_hContext = wglCreateContext(m_hDC);
		m_nProfile = GLContext::GL_COMPATIBILITY;

		if (m_hContext && pShareContext) {
			wglShareLists(pShareContext->m_hContext, m_hContext);
		}
	}
	else {
		if (!WGL::_ARB_create_context_profile) {
			return false;
		}

		int nGLVerMaj, nGLVerMin;

		switch (nProfile) {
		case GLContext::GL_CORE_3_3:
			nGLVerMaj = 3; nGLVerMin = 3;
			break;

		//...

		default:
			return false;
		}

		int vContextAttrib[] = {
				WGL_CONTEXT_MAJOR_VERSION_ARB,	0,
				WGL_CONTEXT_MINOR_VERSION_ARB,	0,
				WGL_CONTEXT_FLAGS_ARB,			WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB
#if defined(_DEBUG) || defined(_GL_DEBUG)
												| WGL_CONTEXT_DEBUG_BIT_ARB
#endif
				,
				WGL_CONTEXT_PROFILE_MASK_ARB,	WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
				0
		};

		vContextAttrib[1] = nGLVerMaj;
		vContextAttrib[3] = nGLVerMin;

		m_hContext = wglCreateContextAttribsARB(m_hDC,
				pShareContext ? pShareContext->m_hContext : NULL,
				vContextAttrib);
		m_nProfile = nProfile;
	}

	return m_hContext != NULL;
}


So, has anyone seen the same problem? Is it just a driver bug, or am I doing something wrong?

Sorry for my bad English. Leon.

Make sure you have the latest AMD driver; their drivers are not as stable as nVidia’s. Also try creating the context with an empty attribute list:


int vContextAttrib[] = {0};

that is, with no options, when creating the rendering context.

tonyo_au, thanks for the reply.

I already fixed this. It seems the GL context must be created in the same thread that created the window; after that I can use the context in other threads. Also, rendering threads need a large stack; the OpenGL driver uses a lot of stack space.
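For reference, here is a minimal sketch of the pattern that ended up working. The names startRendering and renderThread are hypothetical, and the 4 MB stack size is an assumption, not a measured requirement:


#include <windows.h>

static HDC   g_hDC;
static HGLRC g_hContext;

// Render thread: binds the context that the main thread created.
static DWORD WINAPI renderThread(LPVOID) {
	wglMakeCurrent(g_hDC, g_hContext); // now succeeds on AMD as well
	// ... rendering loop ...
	wglMakeCurrent(NULL, NULL);
	return 0;
}

// Main (window) thread: create the window first, then the context,
// then hand both off to the render thread.
void startRendering(HWND hWindow) {
	g_hDC      = GetDC(hWindow);
	g_hContext = wglCreateContext(g_hDC); // created in the window's thread
	// Do NOT make it current here; only the render thread binds it.

	// Give the render thread a large stack (the driver is stack-hungry);
	// 4 MB here is a guess that worked for me.
	CreateThread(NULL, 4 * 1024 * 1024, renderThread, NULL, 0, NULL);
}
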

Sorry for my bad English. Leon.

Besides, don’t use wglShareLists(). With GL 3+ you should share objects at creation time instead, by passing the share context to wglCreateContextAttribsARB.
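
With the attribs path the share context is passed straight to wglCreateContextAttribsARB at creation time, so no separate sharing call is needed. A sketch, assuming WGL_ARB_create_context is already loaded and hDC/hShared are valid handles:


const int vAttribs[] = {
	WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
	WGL_CONTEXT_MINOR_VERSION_ARB, 3,
	WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
	0
};

// hShared is an existing context; the new context shares objects
// with it from the moment it is created.
HGLRC hNew = wglCreateContextAttribsARB(hDC, hShared, vAttribs);
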