OpenGL/NVIDIA memory leak ?!?

I’ve an app which gets the error GL_OUT_OF_MEMORY after running for a while on my graphics card (GeForce 2 GTS; the same happens on a TNT2 Ultra), both with the same drivers (23.81), but on a card like the TNT2 M64 it runs fine (same drivers too).
The app itself only creates texture objects, binds a 2D texture and uploads the data; after drawing it for several seconds it deletes the texture object.
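Roughly, the per-texture cycle looks like this (just a sketch, not my exact code; “pixels” here stands for the 2048×2048×4 bytes of image data):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);  /* upload 2048*2048*4 bytes */
/* ... draw with the texture for several seconds ... */
glDeleteTextures(1, &tex);                        /* this should release it all again */
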
I’ve calculated a leak of about 125 KB (or 3 times 125 KB if AGP memory is used for texture storage, but I don’t know how to check this!) for a texture size of 2048×2048×4 bytes. There are no leaks on the TNT2 M64 card, which splits the texture into smaller parts (4 parts of 1024×1024×4 each), and the only difference I’ve found between the two cases is that the TNT2 M64 does not set the flags:
PFD_GENERIC_ACCELERATED, PFD_GENERIC_FORMAT

but the one that leaks in GL has PFD_GENERIC_ACCELERATED set.

And there is another question:
what does GL do if I allocate two 16 MB textures (2048×2048×4) without compression and I only have 32 MB on the card?

OpenGL will gladly accept the texture. What the driver will do is a different question. Probably the texture will be put in AGP/system memory.
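If you want to see where they ended up, you can at least ask the driver whether the texture objects are currently resident in video memory (it’s only a hint, but it works on consumer cards); “tex0” and “tex1” here are just placeholder names for the two texture objects:

GLuint texs[2] = { tex0, tex1 };   /* the two big textures */
GLboolean res[2];
if (glAreTexturesResident(2, texs, res)) {
    /* both textures are resident in video memory */
} else {
    /* at least one of them lives in AGP/system memory; res[i] says which */
}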

The leak is still there?!? It persists even if I reduce the texture size, but when I don’t use textures no leak appears?!?

Hi,
I once had a memory leak because the resources for my texture were not being released when I called glDeleteTextures. I would think that when a texture is deleted, there should be a reduction in memory used, so using the debugger (stepping through your code) you should see this reduction after the call to glDeleteTextures.

Hope this helps.

lobstah…

That’s not a problem of my app, it’s a leak in the OpenGL API, because I delete my texture data immediately after I’ve used glTexImage2D … and I’ve already checked my app for leaks, there are none (and if there were some, they would not cause OpenGL to leak!)

Still not solved, so I’m pushing it to the top of the forum. I have additionally checked my app multiple times for leaks, but there are none in my app, so the error must still be somewhere in OpenGL?!?

Oops! Dropped down one slot, so I’ve pushed it back up for you

T2k,

Have you submitted a sample app that shows the bug to nVIDIA ?

Regards.

Eric

P.S.: I’d be happy to test it here if you want but I run a GF3 + 27.70.

PFD_GENERIC_ACCELERATED

This is strange: it means that the pixel format is accelerated by an “MCD”, a mini client driver, not by an “ICD” (installable client driver) like e.g. the NVIDIA driver.

The only “MCD” acceleration I ever saw was with the Matrox Millennium driver… AFAIK it means that the driver only exposes some rasterization capabilities, and the rest is done by the generic OpenGL implementation.

Michael

Eric:
Have you submitted a sample app that shows the bug to nVIDIA ?

No?!? But maybe I’m too stupid to find the correct email address or anything related to my problem?!?

P.S.: I’d be happy to test it here if you want but I run a GF3 + 27.70.

Sorry, nope…

wimmer: ehm, sorry, my fault, both are not set… so I will try to disable them on the leaking systems…

I think you can’t set them; you can only query a specific pixel format (with DescribePixelFormat) to see whether the flag is set.
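Something like this dumps the two flags for every exposed format (quick sketch; “hdc” is assumed to be your window’s device context, and it needs <windows.h> plus <stdio.h>):

PIXELFORMATDESCRIPTOR pfd;
int i, n;
n = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);   /* return value = number of formats */
for (i = 1; i <= n; i++) {
    DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
    printf("format %2d: PFD_GENERIC_FORMAT=%d PFD_GENERIC_ACCELERATED=%d\n", i,
           (pfd.dwFlags & PFD_GENERIC_FORMAT) != 0,
           (pfd.dwFlags & PFD_GENERIC_ACCELERATED) != 0);
}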

Which pixel format is the one that leaks?

Michael

There is a memory leak detector on flipcode. I have never used it, but it is just source you include with your code, compile, then run. It produces a file telling you whether there are leaks. It also has the option to TRY to crash your program to see if your error detection works right. Sorry, I don’t have a link, but it’s supposed to be a great program.

chxfryer: already tried it, it was linked in the beginner forum one or two days ago, but that tool says no errors, all OK, even when the error appears.
wimmer: pixel format number 3:

dwFlags: 549 (PFD_DRAW_TO_WINDOW, PFD_SUPPORT_OPENGL, PFD_DOUBLEBUFFER, PFD_SWAP_EXCHANGE)
iPixelType: 0
cColorBits: 32
cRedBits: 8
cRedShift: 16
cGreenBits: 8
cGreenShift: 8
cBlueBits: 8
cBlueShift: 0
cAlphaBits: 0
cAlphaShift: 0
cAccumBits: 64
cAccumRedBits: 16
cAccumGreenBits: 16
cAccumBlueBits: 16
cAccumAlphaBits: 16
cDepthBits: 24
cStencilBits: 0

I’ve noticed that it always uses the accumulation buffer (but I thought that’s not supported by GeForce cards?!?), so I have printed out all 45 available pixel formats, and there is something I really don’t understand:
1st: why does no pixel format have the PFD_GENERIC_ACCELERATED flag? (I thought it was there a week ago?!?) Does this mean I don’t get HW acceleration?
2nd: alpha is not used??? But alpha works in my app???
3rd: why has this pixel format been selected? Number 4 fits better; it has the same values except:

cAlphaBits: 8
cAlphaShift: 24


1st: why does no pixel format have the PFD_GENERIC_ACCELERATED flag? (I thought it was there a week ago?!?) Does this mean I don’t get HW acceleration?

I think you are not checking your pixel format correctly. To tell the difference between a HW-accelerated and a software pixel format, you need to check for the presence of PFD_GENERIC_FORMAT in the dwFlags member of the PIXELFORMATDESCRIPTOR structure.

If PFD_GENERIC_FORMAT is set, then it is a software pixel format. If it’s not, then it’s a HW-accelerated one.

Don’t ask me why: I discovered that by experimenting (I don’t know why PFD_GENERIC_ACCELERATED is never used…).

Regards.

Eric


Originally posted by T2k:
2nd: alpha is not used??? But alpha works in my app???

From the PIXELFORMATDESCRIPTOR documentation on MSDN:

cColorBits
Specifies the number of color bitplanes in each color buffer. For RGBA pixel types, it is the size of the color buffer, excluding the alpha bitplanes. For color-index pixels, it is the size of the color-index buffer.

cAlphaBits
Specifies the number of alpha bitplanes in each RGBA color buffer. Alpha bitplanes are not supported.

cAlphaShift
Specifies the shift count for alpha bitplanes in each RGBA color buffer. Alpha bitplanes are not supported.

Once again, this doesn’t make any sense but don’t ask me why.

Regards.

Eric


Alpha bitplanes are not supported.

I don’t think this information is still accurate; it probably referred to the MS software implementation or something.

Don’t ask me why: I discovered that by experimenting (I don’t know why PFD_GENERIC_ACCELERATED is never used…).

Again: PFD_GENERIC_ACCELERATED means MCD (a simple rasterization-only driver, e.g. the Matrox Millennium, practically never used), and PFD_GENERIC_FORMAT means software (as you said). So the absence of both PFD_GENERIC_FORMAT and PFD_GENERIC_ACCELERATED means hardware (ICD).
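In code the check boils down to something like this (per the Win32 docs, PFD_GENERIC_ACCELERATED is only meaningful when PFD_GENERIC_FORMAT is also set):

/* classify a pixel format from its PIXELFORMATDESCRIPTOR */
const char *AccelerationType(const PIXELFORMATDESCRIPTOR *pfd)
{
    if (!(pfd->dwFlags & PFD_GENERIC_FORMAT))
        return "ICD (fully HW-accelerated)";
    if (pfd->dwFlags & PFD_GENERIC_ACCELERATED)
        return "MCD (generic implementation accelerated by a driver)";
    return "software (generic implementation only)";
}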

Pixel format 3 sounds good; in many cases it’s the first HW-accelerated double-buffered format.

Why should 4 be a better fit? If you request 8 alpha bits, ChoosePixelFormat will select pixel format 4; otherwise it returns the lowest matching pixel format, I think.

Michael

The alpha bit planes are only necessary if you are doing destination alpha. This is fairly rare; most alpha blending (for transparency, say) only uses source alpha.

If you really do need to do destination alpha, you need to request cAlphaBits to be greater than 0. This works fine on my GeForce 3.
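Something along these lines when setting up the format (a sketch; “hdc” again stands for the window’s device context):

PIXELFORMATDESCRIPTOR pfd = {0};
int pf;
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cAlphaBits = 8;   /* request destination alpha */
pfd.cDepthBits = 24;
pf = ChoosePixelFormat(hdc, &pfd);   /* should now pick a format with alpha, e.g. #4 */
SetPixelFormat(hdc, pf, &pfd);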

Hmm, but that’s not the problem: with pixel format 4 the error still occurs, and if I choose a pixel format where PFD_GENERIC_FORMAT is set there is no error, so I think it’s 99% not a fault of my app!!!