Drawing objects in screen coordinates in OpenGL-ES 2.0

Hi,

I have a 2D map that I have created in OpenGL-ES 2.0. I’d like to add a scalebar.

As the scale bar should not move when the user pans the map, I need it to be drawn in screen coordinates somehow.

I could use some advice on how to accomplish something like this.

If you don’t want it to move, don’t move it.

The map only moves because you make it move, typically by having the vertex shader apply a transformation to the vertex coordinates before storing them in gl_Position.

Any “overlays” should have their own transformation, which will normally be constant. You typically shouldn’t try to control the size of the scale bar using a scaling transformation; modify the geometry instead. Scaling will look ugly (e.g. borders and tick marks will be stretched).
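To illustrate the point, a minimal vertex shader might look like this (written as a Java string in the style used later in this thread; the method and uniform names are placeholders). The map and an overlay can even share the shader; only the matrix you upload differs:

	public static String getMinimalVertexShader()
	{
		return
			  "attribute vec4 a_Position;\n"
			+ "uniform mat4 u_Transform;\n"   // map: rebuilt from pan/zoom each frame; overlay: set once
			+ "void main()\n"
			+ "{\n"
			+ "   gl_Position = u_Transform * a_Position;\n"
			+ "}\n";
	}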

Hi GClements,
your point about not applying the movement transformations to the scale bar makes sense.

The piece I am mostly confused about is how to set up the vertex coordinate data. My screen is 480 pixels wide and 762 high; I am going to position the scale bar about 3 pixels from the bottom and 3 from the left of the screen, and use 75% of the width, which is 360 pixels. Maybe I'm thinking about this wrong. My first attempt will be a single horizontal line.

So the initial line I'd like to create would be, in screen dimensions, {3,759,0, 363,759,0}, or {-237,-378,0, 123,-378,0} if the origin is in the center.

Also, would I be creating new model, view, projection, and MVP matrices for this?

If you’re using shaders, you don’t necessarily need separate model-view and projection matrices, particularly for 2D (for 3D, you often need to separate them as lighting calculations need to be done before perspective projection).

But the general point is that you wouldn’t use the same matrices (or probably even the same shaders) for the scale bar as for the map.

The map’s transformation will typically include scaling and translation based upon zoom and pan, while the scale bar’s transformation would have neither (any scaling would be performed by modifying the vertices).
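As a rough sketch of what that could look like on the Java side (assuming Android's android.opengl.Matrix; the pan/zoom parameters and the 480x762 screen are for illustration):

	/** Constant transform for overlays such as the scale bar (480x762 screen, origin centred). */
	public static float[] overlayMVP()
	{
		float[] ortho = new float[16];
		android.opengl.Matrix.orthoM(ortho, 0, -240f, 240f, -381f, 381f, -1f, 1f);
		return ortho;
	}

	/** Map transform rebuilt whenever pan/zoom changes: ortho * translate(pan) * scale(zoom). */
	public static float[] mapMVP(float panX, float panY, float zoom)
	{
		float[] model = new float[16];
		android.opengl.Matrix.setIdentityM(model, 0);
		android.opengl.Matrix.translateM(model, 0, panX, panY, 0f);   // pan in screen units here
		android.opengl.Matrix.scaleM(model, 0, zoom, zoom, 1f);       // zoom about the origin

		float[] mvp = new float[16];
		android.opengl.Matrix.multiplyMM(mvp, 0, overlayMVP(), 0, model, 0);
		return mvp;
	}

(Swap the translate/scale order if your pan is measured in map units rather than screen units.)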

These almost sound contradictory: “If you’re using shaders, you don’t necessarily need separate model-view and projection matrices” and “But the general point is that you wouldn’t use the same matrices (or probably even the same shaders)”.

Do you mean that once I draw my map I somehow reset the model-view and projection matrices for my scale bar, so they are not separate as such, but not the same either?
So, hypothetically: I have drawn my map; before I start my scale bar draw function, I change my view and projection matrices from the settings for the map to the settings for the scale bar, which will be constant. Then I perform all the multiplication, apply the shaders, etc., and draw away.

What about the scale bar's initial line vertex data I mentioned above? Am I going down the right track there? All the rest of my vertices on the map are like 152.34545, -27.098945, etc.

Started playing around with what I understood from your advice and created it without issue. I set up new view and projection values in the existing matrices, sized to the screen with the origin at the center.
I just needed to make sure they get set back to the map matrices when required.
Thanks GClements!

I mean that you only need one matrix (model-view-projection), not two (model-view and projection). The fixed-function pipeline has separate model-view and projection matrices because lighting calculations are performed after the model-view transformation but before the projection transformation. If you aren’t using the fixed-function pipeline (which doesn’t exist in OpenGL ES 2) and either aren’t performing lighting calculations or aren’t using a perspective projection, then there isn’t any need for separate model-view and projection transformations.
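Concretely (a sketch assuming Android's android.opengl.Matrix), a single orthographic matrix can be the entire model-view-projection for the scale bar, which also lets you specify its vertices directly in pixel coordinates:

	/** One matrix doing the whole job: pixel coordinates in, clip coordinates out. */
	public static float[] pixelMVP(int screenW, int screenH)
	{
		float[] mvp = new float[16];
		// Top-left origin, y increasing downwards, one unit per pixel, so a vertex
		// such as (3, 759) sits 3 px from the left and 3 px above the bottom of a
		// 480x762 screen.
		android.opengl.Matrix.orthoM(mvp, 0, 0f, screenW, screenH, 0f, -1f, 1f);
		return mvp;
	}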

[QUOTE=Hank Finley;1255548]Do you mean that once I draw my map I somehow reset the model-view and projection matrices for my scale bar, so they are not separate as such, but not the same either?

So, hypothetically: I have drawn my map; before I start my scale bar draw function, I change my view and projection matrices from the settings for the map to the settings for the scale bar, which will be constant. Then I perform all the multiplication, apply the shaders, etc., and draw away.[/QUOTE]

Yes. You set the transformation(s) for the map based upon zoom/pan, draw the map, set different transformation(s) for the scale bar, draw the scale bar. You might even use different shaders for the map and the scale bar (if one set of shaders works fine for both, all well and good, but having one set of shaders which works for different types of object can end up being more complicated than having different shaders for each type of object).
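In GLES20 terms, a frame could be structured roughly like this (a sketch; the program and uniform-handle members, and the two draw helpers, are hypothetical):

	import android.opengl.GLES20;

	public class MapRenderer
	{
		private int mapProgram, scaleBarProgram;   // compiled and linked elsewhere
		private int mapMvpHandle, barMvpHandle;    // from glGetUniformLocation(..., "u_MVPMatrix")
		private float[] mapMVP, scaleBarMVP;       // rebuilt on pan/zoom; constant, respectively

		public void onDrawFrame()
		{
			GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

			// 1. Map: transformation reflects the current pan/zoom.
			GLES20.glUseProgram(mapProgram);
			GLES20.glUniformMatrix4fv(mapMvpHandle, 1, false, mapMVP, 0);
			drawMap();

			// 2. Scale bar: constant transformation, drawn over the top.
			GLES20.glUseProgram(scaleBarProgram);
			GLES20.glUniformMatrix4fv(barMvpHandle, 1, false, scaleBarMVP, 0);
			drawScaleBar();
		}

		private void drawMap() { /* bind map VBOs and draw */ }
		private void drawScaleBar() { /* bind scale bar VBO and draw */ }
	}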

[QUOTE=Hank Finley]What about the scale bar's initial line vertex data I mentioned above? Am I going down the right track there? All the rest of my vertices on the map are like 152.34545, -27.098945, etc.[/QUOTE]

The scale bar vertices would typically use “screen” coordinates (not necessarily pixel coordinates), not cartographic coordinates. If you want to make the scale bar longer or shorter, you’d move the vertices rather than changing the transformation (the reason being that changing the transformation would affect the size of tick marks, text, etc).
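To make that concrete, here is a sketch in the centred 480x762 screen coordinates discussed above (the 6 px tick height is an assumption). Lengthening or shortening the bar only rewrites x values in the array, so the tick size is untouched:

	/** Bar plus end ticks, drawn with GL_LINES, starting 3 px from the left/bottom edge. */
	public static float[] scaleBarVertices(float lengthPx)
	{
		final float x0 = -237f, y = -378f;   // 3 px in from the left and bottom of a 480x762 screen
		final float x1 = x0 + lengthPx;      // lengthen/shorten by moving this end only
		final float tick = 6f;               // tick height, unaffected by bar length
		return new float[] {
			x0, y, 0,   x1, y, 0,            // horizontal bar
			x0, y, 0,   x0, y + tick, 0,     // left tick
			x1, y, 0,   x1, y + tick, 0,     // right tick
		};
	}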

Thank you! It is looking great.
I have made it resemble the Google Maps scale bar, so it scales in multiples of 1, 2 and 5.
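For anyone reading along, one common way to get that 1/2/5 progression is shown below (a sketch; how you obtain metres-per-pixel depends on your projection):

	/** Round a target distance down to the nearest 1, 2 or 5 times a power of ten. */
	public static double niceScaleLength(double targetMetres)
	{
		double pow10 = Math.pow(10, Math.floor(Math.log10(targetMetres)));
		double mantissa = targetMetres / pow10;   // in [1, 10)
		if (mantissa >= 5) return 5 * pow10;
		if (mantissa >= 2) return 2 * pow10;
		return pow10;
	}

	// Usage: pick the longest "nice" distance that fits, then size the bar in pixels.
	// double metres = niceScaleLength(maxBarPx * metresPerPixel);
	// float barPx = (float) (metres / metresPerPixel);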

I know there are many questions out there about rendering text, and I am currently wading through them. For placing the zoom-level text onto the scale bar, I have just been looking at Texample2 and its write-up, Rendering Text in OpenGL 2.0 ES on Android.

Besides the scale bar, I have to take into consideration that I will be labelling the roads on my map as well. Just wondering what your opinion is on using the above, or maybe you have a better direction to point me in.

Text rendering is one of those subjects which could easily fill a book or two. Large parts of it are platform-specific (i.e. getting glyphs and metrics from the platform’s font API). Other aspects (e.g. legibility of small, rotated text) involve experimenting with different filters; no filtering results in jagged edges, while bilinear filtering results in text which is rather blurred. Better filters (e.g. Lanczos) can be implemented in a fragment shader; it just depends upon how much effort you want to put into it (enough has been written on the topic of “signal processing” to fill a sizeable library, and most of it is relevant to some extent).

If the platform’s font API supports rendering a string into a bitmap at an arbitrary scale and rotation, you might be better off using that than doing it yourself (the font renderer will probably already be using something better than bilinear filtering).
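On Android, the bitmap-upload half of that could look roughly like this (a sketch; the text size, colour and filtering choices are placeholders, and any rotation would still come from how you position the textured quad):

	import android.graphics.Bitmap;
	import android.graphics.Canvas;
	import android.graphics.Color;
	import android.graphics.Paint;
	import android.opengl.GLES20;
	import android.opengl.GLUtils;

	public class TextTextures
	{
		/** Render a string with the platform font engine, then upload it as a texture. */
		public static int makeTextTexture(String text)
		{
			Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
			paint.setTextSize(24f);            // placeholder size in pixels
			paint.setColor(Color.WHITE);

			int w = (int) Math.ceil(paint.measureText(text));
			int h = (int) Math.ceil(paint.descent() - paint.ascent());
			Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
			new Canvas(bitmap).drawText(text, 0f, -paint.ascent(), paint);   // y is the baseline

			int[] tex = new int[1];
			GLES20.glGenTextures(1, tex, 0);
			GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
			GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
			GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
			GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
			bitmap.recycle();
			return tex[0];
		}
	}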

OK, looks like I’ll be reading about all this for a while.
What about the process for loading road names? I was thinking of gradually loading and displaying them.
They would need to be centered along the road center-lines and angled accordingly. Managing this process as efficiently as possible is stumping me a little.

Hi GClements,
I wanted to pick up the conversation on rendering font glyph textures. I have an idea for a solution, but I may need some help with the process.

I am using (what I think they call) sprite rendering, one glyph texture at a time. So the actual generation of the glyph textures is sorted, I believe.
I’m going to create all my text in one set of VBOs, which will hold the road-name labels for my map. These will be positioned at the center of the road line, at the angle of the road line at that position. So the VBOs will hold a mixture of positions, angles and texture coordinates.

What I was thinking was: we previously formulated a point texture that always rendered at the same size no matter what the scale; you just needed to supply a center position, a uniform size and texture coordinates (see Using textures instead of points in a map OpenGl-ES-2-0).

	public static String getPointVertexShader()
	{
		return
			  "precision highp float;\n"
			+ "uniform mat4 u_MVPMatrix;\n"
			+ "uniform vec2 u_pointSize;\n"          // sprite size in clip-space units
			+ "attribute vec4 a_Position;\n"         // sprite centre; all four corners share it
			+ "attribute vec2 a_TexCoordinate;\n"

			+ "varying vec2 v_TexCoordinate;\n"

			+ "void main()\n"
			+ "{\n"
			+ "   v_TexCoordinate = a_TexCoordinate.st * vec2(1.0, -1.0);\n"
			// Transform the centre, then push each corner out by a w-scaled offset so
			// the sprite keeps a constant on-screen size regardless of zoom.
			+ "   gl_Position = u_MVPMatrix * a_Position;\n"
			+ "   gl_Position += vec4(gl_Position.w * u_pointSize * (a_TexCoordinate - vec2(0.5, 0.5)), 0, 0);\n"
			+ "}\n";
	}

	public static String getPointFragmentShader()
	{
		return
			  "precision highp float;\n"
			+ "uniform sampler2D u_Texture;\n"

			+ "varying vec2 v_TexCoordinate;\n"

			+ "void main()\n"
			+ "{\n"
			+ "   gl_FragColor = texture2D(u_Texture, v_TexCoordinate);\n"
			+ "}\n";
	}

These characters will always be a fixed height on screen, so their size won’t change when the map is zoomed in or out, but they will pan with the map. So I could use the same principle with my glyphs.

What I need help with:

  1. How do I initially figure out the glyph center positions? Should I just work them out using screen coordinates, centering everything around (0, 0)?
  2. How do I manipulate these center positions in the shader with a scale? If you zoom in, the center positions need to get closer together.
  3. How do I manipulate these center positions to the actual world position? Each string will initially have a center position of 0,0, and this will change to wherever it needs to be in the world (e.g. -0.03151243, -0.0472154).
  4. I will be calculating the angle of the road line and, in turn, calculating the center positions along the same angle. How do I manipulate the vertices in the shader to follow an angle? (A possible approach is sketched after this list.)
  5. And I guess the last question: if I do it this way, what kind of structure would you suggest putting the data into, as far as the VBOs are concerned?
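On question 4, one common approach (not from this thread; the attribute names are hypothetical) is to rotate the corner offset in the vertex shader before applying the constant-screen-size scaling, extending the point vertex shader above:

	public static String getRotatedGlyphVertexShader()
	{
		return
			  "precision highp float;\n"
			+ "uniform mat4 u_MVPMatrix;\n"
			+ "uniform vec2 u_pointSize;\n"          // also where any aspect-ratio correction goes
			+ "attribute vec4 a_Position;\n"         // glyph centre in world coordinates
			+ "attribute float a_Angle;\n"           // road angle in radians, stored per glyph
			+ "attribute vec2 a_TexCoordinate;\n"

			+ "varying vec2 v_TexCoordinate;\n"

			+ "void main()\n"
			+ "{\n"
			+ "   v_TexCoordinate = a_TexCoordinate.st * vec2(1.0, -1.0);\n"
			// Rotate the corner offset by the per-glyph angle, then apply the same
			// w-scaled push used for the fixed-size point sprites.
			+ "   vec2 corner = a_TexCoordinate - vec2(0.5, 0.5);\n"
			+ "   float c = cos(a_Angle);\n"
			+ "   float s = sin(a_Angle);\n"
			+ "   vec2 rotated = vec2(c * corner.x - s * corner.y, s * corner.x + c * corner.y);\n"
			+ "   gl_Position = u_MVPMatrix * a_Position;\n"
			+ "   gl_Position += vec4(gl_Position.w * u_pointSize * rotated, 0, 0);\n"
			+ "}\n";
	}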

Would really appreciate your help on this!