Height Map display performance.

I’ve written an application to display a height map, and am now trying to raise its frame rate to something a little better than the 8 FPS I’m currently getting on the target hardware. I’ve been researching this, but haven’t come up with anything concrete as of yet. I’m currently reviewing the documents at http://vterrain.org/ looking for a viable solution, but since I’ve been buried in white papers all day I thought I’d ask and see if anyone can nudge me in a relevant direction.

I’m limited to OpenGL 2.1 and GLSL 1.20.

Here is a rundown of what I’m doing currently:

I’ve broken the height map into a 5x5 grid of tiles.
Each tile consists of:
128x128 vertices.
A VBO containing the positions, normals, and texture coordinates.
A VBO containing the RGB values.
An IBO containing the indices to draw the tile as a triangle strip.
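For reference, here is a minimal sketch of how such an IBO could be built as one long strip. The degenerate-triangle stitching between rows is an assumption (the post doesn’t say how rows are joined), so the exact index count may differ slightly from the numbers quoted further down.

```c
#include <assert.h>
#include <stdlib.h>

/* Build one triangle-strip index buffer for a w x h vertex tile,
 * stitching consecutive rows with a degenerate pair so the whole
 * tile can be drawn with a single glDrawRangeElements call.
 * 128x128 matches the tile size described above. */
static unsigned *build_strip_indices(int w, int h, size_t *count)
{
    /* (h-1) strips of 2*w indices, plus 2 degenerate indices per join */
    size_t n = (size_t)(h - 1) * 2 * w + (size_t)(h - 2) * 2;
    unsigned *idx = malloc(n * sizeof *idx);
    size_t k = 0;
    for (int row = 0; row < h - 1; ++row) {
        if (row > 0) {
            idx[k] = idx[k - 1];            /* repeat last index of previous strip */
            ++k;
            idx[k++] = (unsigned)(row * w); /* repeat first index of this strip */
        }
        for (int col = 0; col < w; ++col) {
            idx[k++] = (unsigned)(row * w + col);       /* top vertex */
            idx[k++] = (unsigned)((row + 1) * w + col); /* bottom vertex */
        }
    }
    *count = k;
    return idx;
}
```

With this scheme a 128x128 tile comes out at 32,764 indices, so 7 visible tiles would draw roughly 229k indices, close to the figure quoted later in the post.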

Once a tile is filled in, I upload it to the GPU; this only happens when a tile comes into or out of scope.
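For concreteness, one common way to lay out the position/normal/texture-coordinate VBO is an interleaved vertex struct; this is an assumption, since the post doesn’t say whether the attributes are interleaved or packed separately.

```c
#include <assert.h>
#include <stddef.h>

/* One interleaved vertex: 32 bytes, no padding (all floats).
 * The offsets below would be passed to glVertexPointer /
 * glNormalPointer / glTexCoordPointer (or glVertexAttribPointer)
 * with a stride of sizeof(Vertex) while the VBO is bound. */
typedef struct {
    float pos[3]; /* x, y (height), z -- offset 0  */
    float nrm[3]; /* surface normal   -- offset 12 */
    float tex[2]; /* texture coords   -- offset 24 */
} Vertex;
```

Interleaving keeps all of a vertex’s attributes adjacent in memory, which generally helps vertex fetch on older hardware compared with three separate tightly-packed arrays.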

Once per frame I calculate which tiles need to be displayed, and render only the ones that would fall within view.
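The post doesn’t show how the visible set is computed; a minimal 2D sketch, assuming an axis-aligned view rectangle over the 5x5 tile grid and a hypothetical `tile_span` world size per tile, might look like:

```c
#include <assert.h>

#define GRID 5 /* 5x5 tile grid, as described above */

typedef struct { float min_x, min_z, max_x, max_z; } Rect;

static int overlaps(Rect a, Rect b)
{
    return a.min_x <= b.max_x && a.max_x >= b.min_x &&
           a.min_z <= b.max_z && a.max_z >= b.min_z;
}

/* Return the number of tiles overlapping `view`, writing their
 * grid indices (row * GRID + col) into `out`. */
static int visible_tiles(Rect view, float tile_span, int out[GRID * GRID])
{
    int n = 0;
    for (int row = 0; row < GRID; ++row)
        for (int col = 0; col < GRID; ++col) {
            Rect t = { col * tile_span,       row * tile_span,
                       (col + 1) * tile_span, (row + 1) * tile_span };
            if (overlaps(view, t))
                out[n++] = row * GRID + col;
        }
    return n;
}
```

A real implementation would test tile bounding boxes against the camera frustum rather than a flat rectangle, but the shape of the per-frame loop is the same.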

Overall, on average 7 tiles get rendered per frame, which equates to 114,688 vertices, with 229,362 indices being drawn through glDrawRangeElements(GL_TRIANGLE_STRIP…

So I guess my questions are twofold:
First, are there any techniques I can apply that would increase the performance of this thing?
Second, is using triangle strips the ‘right’ thing to do? For example, would plain triangles be faster, etc.?

Back to researching. Thanks for reading this.

What is your target hardware?