OpenGL to OpenGL ES

I have some OpenGL code that I am trying to run on iOS (OpenGL ES).
Judging by the visual output, it works perfectly on Windows (OpenGL), whereas on iOS (OpenGL ES) something seems wrong with the normals, because the rendered image is faulty.

Are there any initial parameters that differ between OpenGL and OpenGL ES?
If so, please let me know what I should change so that the OpenGL ES version works properly.

Here is the code I have used:

To initialize OpenGL (the code is written in Delphi):

  glShadeModel(GL_SMOOTH);                 // Enables smooth color shading
//**!!  glClearDepth(1.0);                       // Depth buffer setup
  glEnable(GL_DEPTH_TEST);                 // Enable depth testing
  glDepthFunc(GL_LEQUAL);                  // The type of depth test to do
  glBlendFunc(GL_SRC_ALPHA, GL_ONE);
  glDisable(GL_BLEND);
  glDisable(GL_TEXTURE_2D);
  glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);   // Really nice perspective calculations

  glEnable(GL_LIGHTING);
  glEnable(GL_COLOR_MATERIAL);

  if Model.NumLights = 0 then
  begin
    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHT1);

    L1 := Make(1, 5, -3, 0); // Light position in world
    glLightfv(GL_LIGHT0, GL_POSITION, @L1);

    L1 := Make(1, 1, 1, 1); // Light colour
    glLightfv(GL_LIGHT0, GL_DIFFUSE, @L1);
    glLightfv(GL_LIGHT0, GL_SPECULAR, @L1);

    L1 := Make(1, -5, 0, 0); // Light position in world
    glLightfv(GL_LIGHT1, GL_POSITION, @L1);

    L1 := Make(1, 0.5, 0, 1); // Light colour
    glLightfv(GL_LIGHT1, GL_DIFFUSE, @L1);
    glLightfv(GL_LIGHT1, GL_SPECULAR, @L1);
  end
  else
  begin

    for I := 0 to Min(7, Model.NumLights-1) do // clamp: only GL_LIGHT0..GL_LIGHT7 are guaranteed
    begin
      glEnable(GL_LIGHT0+I);

      L1 := Make(Model.Lights[i].P[0], Model.Lights[i].P[1], Model.Lights[i].P[2], 0); // Light position in world
      glLightfv(GL_LIGHT0+I, GL_POSITION, @L1);

      L1 := Make(Model.Lights[i].R, Model.Lights[i].G, Model.Lights[i].B, 1); // Light colour
      glLightfv(GL_LIGHT0+I, GL_DIFFUSE, @L1);
      glLightfv(GL_LIGHT0+I, GL_SPECULAR, @L1);

    end;
  end;

  L1 := Make(1, 1, 1, 0); // Specular material colour
  glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, @L1);

  Sh := 50;
  glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, @Sh);
//**!!  glLightmodeli(GL_LIGHT_MODEL_TWO_SIDE, 0);
end;

To draw the data to the screen:


type
  TTriIndices = array [0..2] of Word; // index triple for one triangle
var I, J, z, Face: integer;
  MV: array of TTriIndices;
  dataV, dataN: array of Vertex3ds;
begin
  glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT);    // Clear The Screen And The Depth Buffer
  glLoadIdentity();

  glTranslatef(0,0,Cz);
  glRotatef(Cx, 1, 0, 0);
  glRotatef(Cy, 0, 1, 0);

  glBindTexture(GL_TEXTURE_2D, tex);

  z := 0;
  SetLength(MV, NumFaces);
  SetLength(dataV, NumFaces*3); // one vertex per triangle corner
  SetLength(dataN, NumFaces*3);
  for I := 0 to NumFaces-1 do
    for J := 0 to 2 do
    begin
      dataV[z] := V[F[i][J]];
      dataN[z] := N[nF[i][J]];
      MV[i][J] := z;
      inc(z);
    end;

  glEnableClientState(GL_NORMAL_ARRAY);
  glEnableClientState(GL_VERTEX_ARRAY);
  glNormalPointer(GL_FLOAT, 0, @dataN[0]);
  glVertexPointer(3, GL_FLOAT, 0, @dataV[0]);
  glDrawElements(GL_TRIANGLES, NumFaces*3, GL_UNSIGNED_SHORT, @MV[0]); // z indices were written, so the count is NumFaces*3
  glDisableClientState(GL_VERTEX_ARRAY);  // disable vertex arrays

//  glDrawElements(GL_TRIANGLES,(NumFaces-1)*3,GL_UNSIGNED_SHORT, @nF[0]);
  glDisableClientState(GL_NORMAL_ARRAY);

  SetLength(MV, 0);
  SetLength(dataV, 0);
  SetLength(dataN, 0);
end;


The same code is used both on Windows (OpenGL) and on iOS (OpenGL ES 1.1).

Here is a sample of the result I get on both platforms:
http://bueno.co.il/test

From the images it looks as if depth testing is not enabled, or the resolution of the depth buffer is insufficient (possibly due to badly chosen near/far clip values?). For example, the driver-side rear tire is drawn on top of the car body geometry.
Do you request a depth buffer when creating your application framebuffer/OpenGL context?
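On iOS with OpenGL ES 1.1 this means attaching a depth renderbuffer to your framebuffer through the OES_framebuffer_object entry points. A sketch, assuming an EAGL context is already current and `width`/`height` are placeholders matching your existing colour renderbuffer:

```c
/* Create and attach a 16-bit depth renderbuffer (OES_framebuffer_object).
   Assumes the target framebuffer is already bound; width/height must match
   the colour renderbuffer's storage. */
GLuint depthRenderbuffer;
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES,
                         width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                             GL_RENDERBUFFER_OES, depthRenderbuffer);
```

With the depth attachment in place, the `glEnable(GL_DEPTH_TEST)` and `GL_DEPTH_BUFFER_BIT` clear already present in the question's code take effect.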

Thanks, that was it: the depth buffer was what was missing.
Now it is working properly.