
View Full Version : glcolor4 works in debugger but not in exe



macsam
04-28-2017, 08:03 AM
Hi, I am using Windows 8.1 and Visual Studio 2013 to develop a 64-bit MFC app.
My app works fine in the debugger but seems to ignore the colours when I run the exe: everything is red.

Anyone any clues as to where to start looking?

Regards

MacSam

mhagain
04-28-2017, 09:29 AM
Look for uninitialized local variables; this is the most common cause of this kind of behavioural difference between debug and release builds.
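For example, a pattern like this (function names are made up, not from the app in question) can look fine in a debug build, where the runtime tends to fill fresh stack memory with a fixed pattern, and then misbehave in release, where the local holds whatever garbage was left on the stack:

```cpp
#include <cassert>

// BUG pattern: 'r' is only assigned on one path, so when useDefault is
// false the function returns an uninitialized value - which may happen
// to look plausible in a debug build and be garbage in release.
float brokenRed(bool useDefault)
{
    float r;
    if (useDefault)
        r = 1.0f;
    return r;           // undefined when useDefault is false
}

// FIX: always initialize locals at the point of declaration.
float fixedRed(bool useDefault)
{
    float r = 0.0f;
    if (useDefault)
        r = 1.0f;
    return r;
}
```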

macsam
04-28-2017, 10:13 AM
Thanks, all these variables have been initialised. I appreciate we need to be precise. This works with the 32-bit MFC exe (maybe it was just luck):
if (cRed != 0)
    gfRed = (GLfloat)(cRed / 255.0f);
else
    gfRed = 0.0f;

if (cGreen != 0)
    gfGreen = (GLfloat)(cGreen / 255.0f);
else
    gfGreen = 0.0f;

if (cBlue != 0)
    gfBlue = (GLfloat)(cBlue / 255.0f);
else
    gfBlue = 0.0f;

glColor4f(gfRed, gfGreen, gfBlue, 1.0f);
glVertex3f((GLfloat)dE1, (GLfloat)dN1, (GLfloat)dZ1);
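For reference, the zero checks can be collapsed, since 0 divided by 255.0f is exactly 0.0f in IEEE float; a single helper (the name is made up for this sketch) behaves the same for every byte value:

```cpp
#include <cassert>

typedef float GLfloat;   // stand-in so this sketch compiles without GL headers

// Maps an unsigned byte 0..255 to a float 0.0f..1.0f.
// No special case for zero is needed: 0 / 255.0f is exactly 0.0f.
GLfloat byteToFloat(unsigned char c)
{
    return c / 255.0f;
}
```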

mhagain
04-28-2017, 01:06 PM
What about the values of cRed, cGreen and cBlue? Are they initialized?

Also worth trying, since cRed, cGreen and cBlue seem to be of unsigned char type:

glColor4ub(cRed, cGreen, cBlue, 255);

Does this do anything different?

macsam
04-29-2017, 03:57 AM
Thanks for your help Mhagain.

I seem to have fixed it. I think it was defaulting to the integrated Intel graphics card; I forced it to use the NVIDIA card, which has more memory. Thank you for showing me glColor4ub. That is another tool for me to use.
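For anyone finding this later: the discrete GPU can also be requested from code by exporting two well-known symbols from the exe. This is the documented NVIDIA Optimus / AMD PowerXpress convention for hybrid-graphics laptops, not an MFC or OpenGL API; the sketch assumes a Windows build:

```cpp
// Place once in any .cpp of the exe (not a DLL). Hybrid-graphics drivers
// look these exported symbols up by name at load time and, if found,
// prefer the discrete GPU over the integrated one.
#if defined(_WIN32)
extern "C" {
    // NVIDIA Optimus: 0x00000001 requests the high-performance GPU
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
    // AMD PowerXpress equivalent
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}
#endif
```

This only expresses a preference; driver control panel settings can still override it.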

macsam
04-29-2017, 04:51 AM
glColor4ub(cRed, cGreen, cBlue, 255); will give me an opportunity to use a bit less memory; every byte counts.

BTW how do you get the code to display in the box?

mhagain
04-29-2017, 05:23 AM
In this case memory usage is quite unimportant, but as a general rule beware of "every byte counts" thinking when it comes to programming with a hardware accelerated 3D API. There are several cases where memory alignment and packing rules are far more important for performance, and they can mean burning a little extra memory in exchange for orders of magnitude more performance. Because it's not the 1970s any more you can treat memory as a cheap and plentiful resource that is there to be used (provided you don't do anything silly). After all, if you have a GPU with 2 GB of RAM but you only ever use 128 MB of that, you're wasting the other 1920 MB.
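As an illustration of alignment versus byte-counting (the struct names are mine, not from any API): packing the colour as 4 unsigned bytes instead of 4 floats shrinks each vertex considerably while keeping the struct a multiple of 4 bytes, which is what typically matters to the hardware:

```cpp
#include <cassert>
#include <cstdint>

// Colour stored as 4 floats: 12 bytes of position + 16 bytes of colour.
struct VertexFloatColour {
    float x, y, z;
    float r, g, b, a;
};

// Colour packed into 4 bytes: 12 bytes of position + 4 bytes of colour.
// Still naturally 4-byte aligned, so no padding penalty on typical compilers.
struct VertexByteColour {
    float x, y, z;
    std::uint8_t r, g, b, a;
};
```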

For the code I used code tags. Try doing "reply with quote" to my post to see.

macsam
05-01-2017, 03:52 AM
Thanks. Is there any way I can trap that there is a problem or issue with the graphics card?
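One approach (a sketch, with a made-up helper name): once the GL context is created and current, query glGetString(GL_VENDOR) and glGetString(GL_RENDERER) and inspect the strings, e.g. to warn when the app landed on the integrated Intel GPU instead of the discrete one:

```cpp
#include <cassert>
#include <cstring>

// Call after context creation, passing
//   (const char *)glGetString(GL_VENDOR) and
//   (const char *)glGetString(GL_RENDERER).
// glGetString returns nullptr when no context is current, so guard for that.
bool looksLikeIntelIntegrated(const char *vendor, const char *renderer)
{
    if (!vendor || !renderer)
        return false;                       // no usable context yet
    return std::strstr(vendor, "Intel") != nullptr
        || std::strstr(renderer, "Intel") != nullptr;
}
```

GL_VERSION is worth logging at the same point; an unexpectedly low version string is another sign the wrong device or a fallback driver was picked.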