texture memory problems

Hi all,

I am writing an application that is very texture hungry. We are currently trying to figure out how much texture memory we can allocate for our caches by trial and error, so I was wondering whether there are better mechanisms in place in OpenGL.

So far we know we can check if texture objects are resident or not. But what we would like to know is

  1. how much video memory is available on the card
  2. how much memory a texture actually allocates (considering whatever packing the driver is performing)
  3. when using FBOs as texture targets, how much memory the whole FBO allocates

Are there GL calls (or NVidia extensions) available to do so?

Thanks!!!

–x

Unfortunately there is no way to tell how much video RAM you have via OpenGL; DX, on the other hand, allows for such a thing.
Regarding the memory footprint of your textures, you can safely estimate it by multiplying the area of your texture by the number of bits exposed through the internal format. For instance a 256*256 2D texture with internal format GL_RGBA8 will hold 2 megs.
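As a sanity check on that arithmetic, here is a small sketch (my own helper functions, not from this thread) that estimates a 2D texture's footprint from its dimensions and bytes per texel. Note that 256×256 at 4 bytes per texel (GL_RGBA8) comes to 262,144 bytes, i.e. 256 KB, which is 2 megabits; the driver may pad or swizzle, so treat this as a lower-bound estimate:

```c
#include <stddef.h>

/* Rough footprint of a 2D texture in bytes: width * height * bytes
   per texel (4 for GL_RGBA8). Driver packing can only add to this. */
static size_t texture_bytes(size_t width, size_t height,
                            size_t bytes_per_texel)
{
    return width * height * bytes_per_texel;
}

/* Same estimate including a full mipmap chain, which adds roughly
   one third on top of the base level (the sum of the 1/4 series). */
static size_t texture_bytes_mipmapped(size_t width, size_t height,
                                      size_t bytes_per_texel)
{
    size_t total = 0;
    for (;;) {
        total += width * height * bytes_per_texel;
        if (width == 1 && height == 1)
            break;
        width  = width  > 1 ? width  / 2 : 1;
        height = height > 1 ? height / 2 : 1;
    }
    return total;
}
```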

Originally posted by Java Cool Dude:
For instance a 256*256 2D texture with internal format GL_RGBA8 will hold 2 megs.
You mean 256k…?

If this is for a fixed hardware platform, you can use the NVidia perf tools (with the instrumented driver) to find out how much video/AGP memory is in use.

If you want to do this in a generic way on end users' machines, you are out of luck. The best you can do is guess the amount of video memory using an old DX7 DirectDraw interface.

Originally posted by rgpc:
[quote]Originally posted by Java Cool Dude:
For instance a 256*256 2D texture with internal format GL_RGBA8 will hold 2 megs.
You mean 256k…?
[/QUOTE]I meant megabits which correctly translates into your figure assuming you’re talking bytes. :wink:

You should not measure anything like this. Our software is extremely texture hungry, and we are running on integrated nVidia video cards with 16MB of VRAM! PCI Express buses combined with technologies like nVidia’s TurboCache really help a lot. I’m not saying it’s as fast as VRAM, but it’s not bad. And it will keep getting better. jm2c

Originally posted by Java Cool Dude:
I meant megabits which correctly translates into your figure assuming you’re talking bytes. :wink:
it’s generally accepted that megabits are used when talking about bandwidth, while megabytes are used when talking about storage.

Originally posted by andras:
You should not measure anything like this.
That’s a bit of a sweeping statement.
Of course knowing this information will help you optimise your application for the hardware it’s running on. The fact that OpenGL doesn’t provide a mechanism for retrieving this information doesn’t mean it’s not valuable information. OpenGL seems to work on the assumption that this is a service the host OS should provide. I don’t agree - I think it should be in the OpenGL API as a server state, as it’s a low-level detail directly relevant to what GL resources the user creates and how they are used.
Even a GL_STRING containing the total physical memory available on the graphics card (64MB, 128MB, 256MB) would be a start. It’s all very well pointing out that this isn’t a true indication of how much memory the system has available when the app is run, or whatever, but the fact is that most people resort to querying the registry or creating dummy D3D devices to try to get some feature-scaling information.

I agree with knackered. Of course it’s nice to simply “trust” the drivers to do good work, but in some cases it would be nice to at least know some limitations, so that one can give the user a hint about which options they might want to enable or disable.

Jan.

I think the point that Andras was trying to make is that knowing the quantity of video RAM available is useless if you’re on hardware that uses video RAM as a texture cache. If we suddenly start having that kind of hardware around, then software that relies on knowing how much video memory there is will start looking really bad. It will see 32MB of memory and think that it needs to use hyper-low-res textures, when in fact it can reasonably use texture sizes as though there were 512MB of video memory.

I was thinking about this too. How about uploading an 8/16/32 MB texture to the card, checking for errors, and repeating until you get an error back? Of course, you’d also call glAreTexturesResident() to make sure the texture wasn’t offloaded by the driver to system memory.
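The trial loop described above can be separated from the GL calls. Below is a sketch of that idea (my own illustration, not from this thread): the caller supplies a probe function that wraps the actual upload-and-check step (e.g. glTexImage2D + glGetError + glAreTexturesResident), and a binary search finds the largest allocation, in MB, that the probe accepts:

```c
/* Predicate supplied by the caller: returns 1 if a texture of
   size_mb megabytes could be uploaded and stayed resident
   (e.g. glTexImage2D + glGetError + glAreTexturesResident),
   0 otherwise. */
typedef int (*probe_fn)(unsigned size_mb);

/* Binary-search the largest size in [1, max_mb] that the probe
   accepts. Returns 0 if even 1 MB fails. */
static unsigned max_resident_mb(probe_fn fits, unsigned max_mb)
{
    unsigned lo = 1, hi = max_mb, best = 0;
    while (lo <= hi) {
        unsigned mid = lo + (hi - lo) / 2;
        if (fits(mid)) {
            best = mid;      /* mid works; try bigger */
            lo = mid + 1;
        } else {
            hi = mid - 1;    /* mid fails; try smaller */
        }
    }
    return best;
}

/* Hypothetical stand-in probe for testing the search logic:
   pretends 96 MB is the limit. A real probe would make GL calls. */
static int example_probe(unsigned size_mb)
{
    return size_mb <= 96;
}
```

Note that a real probe should also account for the driver silently keeping the texture in system memory, which is exactly what the glAreTexturesResident() check above is for.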

I know you could write a quick DirectX subroutine to check the amount of memory, but there has to be a better way on platforms that don’t have DirectX on them.

Originally posted by Korval:
I think the point that Andras was trying to make is that knowing the quantity of video RAM available is useless if you’re on hardware that uses video RAM as a texture cache. If we suddenly start having that kind of hardware around, then software that relies on knowing how much video memory there is will start looking really bad. It will see 32MB of memory and think that it needs to use hyper-low-res textures, when in fact it can reasonably use texture sizes as though there were 512MB of video memory.
Well, in that case the driver should calculate how much texture memory it’s safe to assume you’ve got, how much framebuffer memory, etc.
Very much like the other queryable limits available through OpenGL.
The tests each and every app is expected to perform (proxy textures, rendering a frame and timing it, etc.) should not be necessary - the best people to provide this information are the vendors.

There is no OpenGL API that returns the video RAM size, and the way to query it via DirectX is clumsy, to say the least. If you are developing for Windows, you can get that information using the code I wrote below:

DWORD GetVideoMemorySizeBytes(void)
{
	const char *str_key1 = "HARDWARE\\DEVICEMAP\\VIDEO";
	const char *str_key2 = "\\Device\\Video0";
	const char *str_key3 = "system\\currentcontrolset";
	const char *str_key4 = "HardwareInformation.MemorySize";
	LONG	s;
	HKEY	key = NULL;	/* initialize so bail_out is safe */
	DWORD	type, buf_size, rv = 0;
	LPBYTE	buf = NULL;
	char	*ptr;

	s = RegOpenKeyEx(HKEY_LOCAL_MACHINE, str_key1, 0, KEY_READ, &key);

	if (s != ERROR_SUCCESS) {
		key = NULL;
		goto bail_out;
	}

	/* first query with a NULL buffer to get the required size */
	type = REG_SZ;

	s = RegQueryValueEx(key, str_key2, NULL, &type, NULL, &buf_size);

	if (s != ERROR_SUCCESS) {
		goto bail_out;
	}

	buf = (LPBYTE)malloc(buf_size);

	if (buf == NULL) {
		goto bail_out;
	}

	s = RegQueryValueEx(key, str_key2, NULL, &type, buf, &buf_size);

	if (s != ERROR_SUCCESS) {
		goto bail_out;
	}

	RegCloseKey(key);
	key = NULL;	/* avoid a double close in bail_out */

	/* the value is a full device path; strip everything before
	   "system\currentcontrolset" to get a usable registry path */
	ptr = strstr(strlwr((char *)buf), str_key3);

	if (ptr == NULL) {
		goto bail_out;
	}

	s = RegOpenKeyEx(HKEY_LOCAL_MACHINE, ptr, 0, KEY_READ, &key);

	if (s != ERROR_SUCCESS) {
		key = NULL;
		goto bail_out;
	}

	type = REG_BINARY;
	buf_size = sizeof(rv);

	s = RegQueryValueEx(key, str_key4, NULL, &type, (LPBYTE)&rv, &buf_size);

bail_out:
	if (buf != NULL) {
		free(buf);
	}
	if (key != NULL) {
		RegCloseKey(key);
	}

	return rv;
}

Bear in mind that if you have multiple video adapters this may not work as expected, because it is currently hardwired to \Device\Video0.