Problem with the 5900 Ultra

I just upgraded from a 4600 to a 5900U and one of my apps has stopped working. When my app tries to initialise the extensions, they now come back null. I've narrowed it down a bit: it seems to be an issue with creating a valid window.

I'm initialising with this:
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_ALPHA | GLUT_STENCIL);

I have another app that uses this:
glutInitDisplayString("rgba depth=24 double alpha");
and that app can successfully initialise extensions, but if I use this type of window in my other app it fails on glutCreateWindow. If I then delete all my code out of that app it can create the window. OK, so it must be my app, right? Well, yes, except none of the deleted code runs before the window creation, so I don't see how it can be affecting anything. Weird.

I'm currently trying to reverse-engineer my app to narrow it down further, but if anyone has any ideas it could save me a lot of time. Thanks.
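For what it's worth, the check that fails boils down to something like this (just a sketch; glActiveTextureARB here stands in for whichever extension entry point gets loaded first):

#include <stdio.h>
#include <windows.h>
#include <GL/glut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_ALPHA | GLUT_STENCIL);
    glutCreateWindow("ExtTest");

    /* With a software (Microsoft) context, wglGetProcAddress returns NULL
       for hardware extension entry points. */
    PROC fn = wglGetProcAddress("glActiveTextureARB");
    printf("glActiveTextureARB %s\n", fn ? "found" : "is NULL");

    return 0;
}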

I've tried Detonators 44.4(?) and 44.71.


Try calling glGetString(GL_VENDOR) to verify that you are getting a hardware-accelerated rendering context. It will return "Microsoft Corporation" if you are getting the software renderer.
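For example, right after glutCreateWindow(), something like:

const GLubyte *vendor = glGetString(GL_VENDOR);
printf("GL_VENDOR = %s\n", vendor);   /* "Microsoft Corporation" => software renderer */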

Yeah, when I find a combination that allows me to create a window, e.g.
glutInitDisplayString("rgb single");

I get Microsoft Corporation.

OK, now what should I do? I have other apps that are getting hardware-accelerated rendering contexts, though it's not obvious what's different about them. Maybe some compiler setting is different.


Is your display set to 16-bit or 32-bit? If it's at 16-bit and you request a pixel format with alpha, GLUT will get a software rendering context.
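If you want to check the desktop depth from code, a quick sketch (standard Win32 calls, nothing GLUT-specific):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HDC screen = GetDC(NULL);                    /* device context for the whole screen */
    int bpp = GetDeviceCaps(screen, BITSPIXEL);  /* bits per pixel of the current desktop mode */
    printf("Desktop colour depth: %d-bit\n", bpp);
    ReleaseDC(NULL, screen);
    return 0;
}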

I solved it. I deleted some arrays and it started giving me hardware acceleration. I was originally allocating about 1 GB of system memory, which should have been fine because I have 1.5 GB. So it looks like a driver bug; I will notify NVIDIA.

I have another problem but I’ll look at that tomorrow.


You can reproduce the problem with the following code (assuming you have at least 1 GB of RAM and a GeForce 5900 Ultra). If you have a non-Ultra I'd be interested to hear whether you get the problem too.

#include <stdio.h>
#include <GL/glut.h>

char X[100000000][8];   /* 800,000,000 bytes (~763 MB) of static data */

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("Test");

    const GLubyte *strVen = glGetString(GL_VENDOR);
    printf("%s\n", strVen);

    return 0;
}

strVen is set to "Microsoft Corporation", indicating software emulation, but if you change the 8 in the array to 7, strVen becomes "NVIDIA Corporation".

I might as well post this here too.

Radial fog doesn't work with Intellisample set to 'High Performance' or 'Performance'; it only appears with the 'Quality' setting. I suppose not drawing any fog is one way of improving performance.

and finally…

I was clearing the depth buffer by redrawing my objects with:

glDepthRange(1.0f, 1.0f);
glDepthFunc(GL_ALWAYS);
DrawObjects();          // draw the objects a second time
glDepthRange(0.0f, 1.0f);
glDepthFunc(GL_LEQUAL);

This was quicker than glClear(GL_DEPTH_BUFFER_BIT) because the objects take up only a small part of the viewport.

The above code worked fine on the 4600 but on the 5900 Ultra it does not clear the depth buffer at all. I’ve fixed it by using
glDepthRange(0.99999f,1.0f) instead.

Is this another driver issue, or should I not have been using (1.0, 1.0)?


What happens if you allocate that amount of memory dynamically?
I once discovered that if you declare a big array inside a function (not dynamically), it crashes, but of course that is just a stack overflow.
Still, it would be interesting to know whether it fails if you allocate the memory dynamically, before or after initialising your window.
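Something like this is what I mean (just a sketch; the 800 MB figure is only an example):

#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    size_t big = (size_t)800 * 1024 * 1024;   /* ~800 MB, example size only */

    /* A static array of this size would sit in the exe's data segment;
       a local (stack) array of this size would simply crash with a stack overflow.
       Dynamic allocation goes via the heap instead: */
    char *buf = malloc(big);
    printf("malloc of 800 MB %s\n", buf ? "succeeded" : "failed");

    free(buf);
    return 0;
}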

Jan.

Allocating dynamically before window creation also causes it to fail. In fact it takes slightly less memory for the allocation to fail: between 500 and 600 MB.

If I try to allocate over 500 MB after window creation, the malloc fails instead.

Just to add to the depth range issue: I tried glDepthRange(0.99, 0.99) and that didn't do anything either.

Originally posted by Adrian:
Between 500 and 600 MB.

You say you're on an MS OS with 1.5 GB of RAM. Just out of curiosity, what happens when you do start allocating that amount of memory? Excessive page swaps by the OS, or do you just "get it"? Do you use new/malloc? Maybe you should look into functions like VirtualAlloc(). Just a thought.

Roffe, I'm using malloc for this test. It takes about two seconds to allocate that memory.

In my app all the memory is allocated statically, since I'm just prototyping at the moment. I might take a look at VirtualAlloc() at a later stage, thanks.
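For reference, the VirtualAlloc() route would look roughly like this (a sketch only; the 700 MB figure is just an example):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = (SIZE_T)700 * 1024 * 1024;   /* example: 700 MB */

    /* Reserve and commit the pages in one go, read/write access */
    void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    printf("VirtualAlloc of 700 MB %s\n", p ? "succeeded" : "failed");

    if (p)
        VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}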

Why do you allocate so much memory statically? Maybe another data structure would work better (your app is limited to computers with at least 1.5 GB of RAM)?

Jan

Originally posted by JanHH:
Why do you allocate so much memory statically? Maybe another data structure would work better (your app is limited to computers with at least 1.5 GB of RAM)?

Jan

It uses that memory when compiling levels. I can easily reduce the memory allocation; it's just low priority at the moment.

The radial fog problem is more complicated than I thought; NVIDIA's fog demo works fine...

I'm surprised Windows doesn't have some dumb issue with having 1.5 GB of RAM.

Windows always has some annoying limitation that requires us to upgrade.

Hope you solved your problem.

I have another app showing the same problem with memory allocation and software rendering, and I can't reduce the memory usage, so I need to get to the bottom of this.

I have written a small app to demonstrate the problem. If you have a GeForce FX and >=768 MB RAM I would be grateful if you would run it and tell me the results.
http://www.adrian.lark.btinternet.co.uk/MemTest.zip

To run it, type the following at the command prompt:

MemTest -M 700

This will allocate 700 MB of memory and then try to create an OpenGL window.

If it succeeds it will display 'Vendor - NVIDIA Corporation'.
If it falls back to software rendering it will display 'Vendor - Microsoft Corporation'.

I have emailed NVIDIA, but it would be good to know it isn't just me getting this problem.

Thanks
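Roughly what MemTest does internally, for anyone curious (this is only a sketch based on the description above, not the actual source; the -M parsing and the malloc-based allocation are assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <GL/glut.h>

int main(int argc, char **argv)
{
    int mb = 700;                             /* amount to allocate, in MB */
    if (argc >= 3 && strcmp(argv[1], "-M") == 0)
        mb = atoi(argv[2]);

    /* Allocate the requested amount of memory first... */
    char *buf = malloc((size_t)mb * 1024 * 1024);
    if (buf == NULL)
    {
        printf("Memory allocation failed\n");
        return 1;
    }

    /* ...then create a GL window and see which driver we end up with */
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("MemTest");
    printf("Vendor - %s\n", glGetString(GL_VENDOR));

    free(buf);
    return 0;
}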

Maybe you're running out of virtual address space and the driver fails because it can't allocate contiguous address blocks for its memory-mapped I/O? Just a thought...
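A quick way to get a feel for that is to probe for the largest single block malloc will still hand out (rough sketch; it only gives a power-of-two estimate):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t size = (size_t)2048 * 1024 * 1024;   /* start at 2 GB and halve on failure */
    void *p = NULL;

    while (size >= 1024 * 1024 && (p = malloc(size)) == NULL)
        size /= 2;

    if (p != NULL)
    {
        printf("Largest single block obtained: %u MB\n", (unsigned)(size / (1024 * 1024)));
        free(p);
    }
    else
        printf("Couldn't even allocate 1 MB\n");

    return 0;
}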

Maybe you need a 64-bit machine.

Just tested with a Radeon 9500 Pro / Catalyst 3.6.
OS is currently Win98SE, 512 MB physical memory; 1.5 GB is my upper cap for virtual memory.

The allocation succeeded all the way up to 1800 MB (MemTest -M 1800), i.e. the reported vendor was "ATI Technologies" the whole time. I didn't dare go any higher, given my virtual memory limit. Needless to say, 700 worked too.

I don't think this problem has anything to do with the 5900. Attempting to allocate that much memory on a 32-bit OS is always gonna be flaky. PERIOD.

I run into this problem every day. When the heck are we gonna get our 64-bit machines? Moore's Law is just a touch too slow for my liking.

Thanks for trying that out, Zeckensack.

I've run some more tests; here are the results:

On a 5900:
Allocating - 800 Mb in 1 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 2 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 4 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 16 Blocks
Memory Allocation Succeeded

Allocating - 1000 Mb in 1 Blocks
Memory Allocation Block 1 failed
Allocating - 1000 Mb in 2 Blocks
Memory Allocation Block 2 failed
Allocating - 1000 Mb in 4 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 16 Blocks
Memory Allocation Succeeded

Allocating - 1200 Mb in 1 Blocks
Memory Allocation Block 1 failed
Allocating - 1200 Mb in 2 Blocks
Memory Allocation Block 2 failed
Allocating - 1200 Mb in 4 Blocks
Memory Allocation Block 3 failed
Allocating - 1200 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 1200 Mb in 16 Blocks
Memory Allocation Succeeded

Allocating - 1400 Mb in 1 Blocks
Memory Allocation Block 1 failed
Allocating - 1400 Mb in 2 Blocks
Memory Allocation Block 2 failed
Allocating - 1400 Mb in 4 Blocks
Memory Allocation Block 3 failed
Allocating - 1400 Mb in 8 Blocks
Memory Allocation Block 8 failed
Allocating - 1400 Mb in 16 Blocks
Memory Allocation Succeeded

On a 4600:
Allocating - 800 Mb in 1 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 2 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 4 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 16 Blocks
Memory Allocation Succeeded

Allocating - 1000 Mb in 1 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 2 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 4 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 16 Blocks
Memory Allocation Succeeded

Allocating - 1200 Mb in 1 Blocks
Memory Allocation Block 1 failed
Allocating - 1200 Mb in 2 Blocks
Memory Allocation Block 2 failed
Allocating - 1200 Mb in 4 Blocks
Memory Allocation Block 4 failed
Allocating - 1200 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 1200 Mb in 16 Blocks
Memory Allocation Succeeded

Allocating - 1400 Mb in 1 Blocks
Memory Allocation Block 1 failed
Allocating - 1400 Mb in 2 Blocks
Memory Allocation Block 2 failed
Allocating - 1400 Mb in 4 Blocks
Memory Allocation Block 4 failed
Allocating - 1400 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 1400 Mb in 16 Blocks
Memory Allocation Succeeded

So after creating a GL window, the 4600 allows more memory to be allocated, but not a huge amount more.

If I comment out the lines:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
glutCreateWindow("MemTest");

I can allocate 2 GB of memory, regardless of the graphics card and how many chunks I split the memory into.

Interestingly, I have to actually comment those lines out. If the calls are present but simply never executed, the amount of memory I can allocate is the same as if I were creating a GL window. I don't understand why that would be.

Edit: I've changed the MemTest exe; it now outputs the results to a file called Memtest.txt and doesn't require any command-line parameters.
