problem of fps

Hi,
I display my model (about 10,000 triangles) without texturing and get about 200 fps with per-fragment lighting. But when I enable the texturing code I only get about 7 to 10 fps…
Here are the relevant code parts:

 
int main(int argc, char *argv[])
{
	myLoader= new loader3ds() ;
		
	glutInit (&argc, argv) ;
	glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH) ;
	glutInitWindowSize (640, 640) ;
	glutInitWindowPosition (250,250) ;
	glutCreateWindow (argv [0]) ;
	glewInit();

	glEnable( GL_DEPTH_TEST );
	glClearColor(0, 0, 1, 0);
	
	myLoader->lighting() ;
	myLoader->loadTextures() ;
	
	glutReshapeFunc (reshape) ;
	glutKeyboardFunc (keyboard) ;
	glutDisplayFunc (display) ;
	glutIdleFunc(display);


	loadShader *myShaderLoader = new loadShader("vertexShader.vert", "fragmentShader.frag");

	glutMainLoop () ;
	return 0 ;
}
 

void loader3ds::loadTextures ()
{
	glGenTextures (myLoader->getCounterObj()+1, texname);

	for (int i = 0; i <= myLoader->getCounterObj(); i++)
	{
		if (strlen(myLoader->myObjects[i].textureNames) > 0)
		{
			SDL_Surface *texture = IMG_Load(myLoader->myObjects[i].textureNames);
			cout << "texture " << texture << " nomTex " << myLoader->myObjects[i].textureNames << endl; // handle the object counter in chunk 0xa300

			glBindTexture (GL_TEXTURE_2D, texname[i]);

			glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
			glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
			glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
			glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

			glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, texture->w, texture->h,
					0, GL_RGB, GL_UNSIGNED_BYTE, texture->pixels);
		}
	}
	//delete texture
	glEnable(GL_TEXTURE_2D);
}


Check whether the texture width and height are powers of two in all cases.
An OpenGL 2.0 implementation must support non-power-of-two textures for the GL_TEXTURE_2D target, but if the hardware doesn't support them (thinking of GeForce 5xxx), you fall back to software rendering.
Read the OpenGL 2.0 release notes at http://developer.nvidia.com/object/nv_ogl2_support.html; chapter 2.2.4 explains that you can check whether it's hardware accelerated by querying the availability of the GL_ARB_texture_non_power_of_two extension in addition to the OpenGL 2.0 version.
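A minimal sketch of that extension query (the whole-token match matters, because one extension name can be a prefix of another; `hasExtension` is a helper name I'm assuming, not part of any API):

```cpp
#include <cassert>
#include <cstring>

// Returns true if `name` appears as a whole space-delimited token in the
// extension list string returned by glGetString(GL_EXTENSIONS).
bool hasExtension(const char *extList, const char *name)
{
    const size_t len = std::strlen(name);
    const char *p = extList;
    while ((p = std::strstr(p, name)) != nullptr)
    {
        // Accept only if bounded by the string edges or spaces on both sides.
        const bool startOk = (p == extList) || (p[-1] == ' ');
        const bool endOk   = (p[len] == '\0') || (p[len] == ' ');
        if (startOk && endOk)
            return true;
        p += len; // partial match; keep searching past it
    }
    return false;
}
```

After `glewInit()`, you could then call `hasExtension((const char *)glGetString(GL_EXTENSIONS), "GL_ARB_texture_non_power_of_two")` and fall back to power-of-two textures when it returns false.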

If that's not it, try removing the texture base level and max level settings (you're not using mipmaps anyway), and free the texture image once you no longer need it, to prevent memory leaks or swapping.
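Both checks are cheap to add in the loader. A sketch, assuming the `SDL_Surface *texture` from the code above (the `isPowerOfTwo` helper is my own naming):

```cpp
#include <cassert>

// True if n is a positive power of two (1, 2, 4, 8, ...).
bool isPowerOfTwo(int n)
{
    return n > 0 && (n & (n - 1)) == 0;
}

// In loadTextures(), after IMG_Load and glTexImage2D:
//   if (!isPowerOfTwo(texture->w) || !isPowerOfTwo(texture->h))
//       cout << "NPOT texture, may drop to software rendering" << endl;
//   SDL_FreeSurface(texture); // GL has its own copy after glTexImage2D
```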

I don't use mipmaps, but one texture's dimensions aren't a power of two.
I'm using a Radeon 9600 SE, do you know if it supports it?

Originally posted by airseb:
I don't use mipmaps, but one texture's dimensions aren't a power of two.
I'm using a Radeon 9600 SE, do you know if it supports it?

According to http://www.delphi3d.net/hardware/extsupport.php?extension=GL_ARB_texture_non_power_of_two that extension is not present on Radeon 9600.

It would be nice if you posted your shaders.
The piece of code you published looks correct.

yooyo

I have changed the size of the texture that was non-power-of-two, and performance is correct now (about 200 fps).
Thanks !
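For reference, instead of resizing the image files by hand, the rescale can also be done at load time; a sketch using `gluScaleImage` (the classic GLU utility for this; `nextPowerOfTwo` is my own helper name):

```cpp
#include <cassert>

// Smallest power of two >= n, for n >= 1.
int nextPowerOfTwo(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

// Then, before glTexImage2D:
//   int w = nextPowerOfTwo(texture->w), h = nextPowerOfTwo(texture->h);
//   std::vector<unsigned char> scaled(w * h * 3);
//   gluScaleImage(GL_RGB, texture->w, texture->h, GL_UNSIGNED_BYTE,
//                 texture->pixels, w, h, GL_UNSIGNED_BYTE, scaled.data());
//   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB,
//                GL_UNSIGNED_BYTE, scaled.data());
```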