Using textures instead of points in a map (OpenGL ES 2.0)



Hank Finley
08-23-2013, 08:01 PM
At present I have built a map of road center-lines and property boundaries, which you can zoom and pan around in. There are also 6 points of interest (drawn as green points in the image below); the map could eventually hold up to 6000 points.

[attached screenshots]

What I am trying to accomplish is to change the points to circular textures drawn on squares. I would like the positions to move with the zoom, but the scaling should behave differently: I do not want the textured squares to scale at the same rate as the rest of the drawing.

In fact, what I was thinking was to set the image to something like 5pt at a view extent of 54000m and above, scale it gradually up to 25pt as the extent shrinks to 10000m, and keep it at 25pt for anything below that (using pt only as an example unit).
[attached screenshots]
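As a rough sketch of that ramp (illustrative only; the method name and break-points below are mine, not from the code that follows):


private float pointSizeForExtent(double extentMetres) {
// 5pt at a view extent of 54000m and above, 25pt at 10000m and below,
// linear interpolation in between (pt is just an example unit).
final float minSize = 5f;
final float maxSize = 25f;
final double farExtent = 54000.0;
final double nearExtent = 10000.0;
if (extentMetres >= farExtent) return minSize;
if (extentMetres <= nearExtent) return maxSize;
double t = (farExtent - extentMetres) / (farExtent - nearExtent); // 0..1
return minSize + (float) t * (maxSize - minSize);
}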

I already have the center of each point loaded into a FloatBuffer; that is how I drew the initial points.
In the constructor I add a square, with position data and texture coordinate data (the color is already in the texture PNG).

I have started to create the vertex and fragment shaders in getPointVertexShader() and getPointFragmentShader(), however I'm not sure how to finish them off, so I could use a little help there.

The other pieces that are confusing me are setting the position and scale of the squares, and piecing it all together in the draw call.

Below is the renderer code so far. Any help will be greatly appreciated!


public class vboCustomGLRenderer implements GLSurfaceView.Renderer {

private final Context mActivityContext;

/**
* Store the model matrix. This matrix is used to move models from object space (where each model can be thought
* of being located at the center of the universe) to world space.
*/
private float[] mModelMatrix = new float[16];

/**
* Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
* it positions things relative to our eye.
*/
private float[] mViewMatrix = new float[16];

/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
private float[] mProjectionMatrix = new float[16];

/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
private float[] mMVPMatrix = new float[16];

/** This will be used to pass in the transformation matrix. */
private int mMVPMatrixHandle;

/** This will be used to pass in model position information. */
private int mLinePositionHandle;

/** This will be used to pass in model color information. */
private int mLineColorUniformLocation;

/** This will be used to pass in model position information. */
private int mPointPositionHandle;

/** How many bytes per float. */
private final int mBytesPerFloat = 4;

/** Offset of the position data. */
private final int mPositionOffset = 0;

/** Size of the position data in elements. */
private final int mPositionDataSize = 3;

/** Stride of the position data in bytes. */
private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;

/** This is a handle to our per-vertex line shading program. */
private int mPerVertexLinesProgramHandle;

/** This is a handle to our points program. */
private int mPointsProgramHandle;

/** Store our model data in a float buffer. */
private final FloatBuffer mSquarePositions;
private final FloatBuffer mSquareTextureCoordinates;

/** This will be used to pass in model texture coordinate information. */
private int mTextureCoordinateHandle;

/** Size of the texture coordinate data in elements. */
private final int mTextureCoordinateDataSize = 2;

/** This will be used to pass in the texture. */
private int mTextureUniformHandle;

/** This is a handle to our texture data. */
private int mTextureDataHandle;

public double eyeX = 0;
public double eyeY = 0;
public float eyeZ = 1.5f;

// We are looking toward the distance
public double lookX = eyeX;
public double lookY = eyeY;
public float lookZ = 0.0f;

// Set our up vector. This is where our head would be pointing were we holding the camera.
public float upX = 0.0f;
public float upY = 1.0f;
public float upZ = 0.0f;

public double modelOffsetX = -(default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX)/2));
public double modelOffsetY = -(default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY)/2));

public double mScaleFactor = 1;
public double modelXShift = 0;
public double modelYShift = 0;
public double viewXShift = 0;
public double viewYShift = 0;

/**
* Initialize the model data.
*/
public vboCustomGLRenderer(final Context activityContext) {
mActivityContext = activityContext;

// X, Y, Z
final float[] squarePositionData =
{
// In OpenGL counter-clockwise winding is default. This means that when we look at a triangle,
// if the points are counter-clockwise we are looking at the "front". If not we are looking at
// the back. OpenGL has an optimization where all back-facing triangles are culled, since they
// usually represent the backside of an object and aren't visible anyways.

// Front face
-1.0f, 1.0f, 1.0f,
-1.0f, -1.0f, 1.0f,
1.0f, 1.0f, 1.0f,
-1.0f, -1.0f, 1.0f,
1.0f, -1.0f, 1.0f,
1.0f, 1.0f, 1.0f
};

// S, T (or X, Y)
// Texture coordinate data.
// Because images have a Y axis pointing downward (values increase as you move down the image) while
// OpenGL has a Y axis pointing upward, we adjust for that here by flipping the Y axis.
// What's more is that the texture coordinates are the same for every face.
final float[] squareTextureCoordinateData =
{
// Front face
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};

// Initialize the buffers.
mSquarePositions = ByteBuffer.allocateDirect(squarePositionData.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mSquarePositions.put(squarePositionData).position(0);

mSquareTextureCoordinates = ByteBuffer.allocateDirect(squareTextureCoordinateData.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mSquareTextureCoordinates.put(squareTextureCoordinateData).position(0);
}


boolean loadComplete = false;

public void setDraw(boolean loadComplete){
this.loadComplete = loadComplete;
}

public void setEye(double x, double y){

eyeX -= (x / screen_vs_map_horz_ratio);
lookX = eyeX;
eyeY += (y / screen_vs_map_vert_ratio);
lookY = eyeY;

// Set the camera position (View matrix)
Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);
}

public void setScaleFactor(float scaleFactor, float gdx, float gdy){

// Don't let the object get too small or too large.
//mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 10000.0f));

mScaleFactor *= scaleFactor;

mRight = mRight / scaleFactor;
mLeft = -mRight;
mTop = mTop / scaleFactor;
mBottom = -mTop;

//The eye shift is in pixels; it will get converted to a screen ratio when sent to setEye().
double eyeXShift = (((mWidth / 2) - gdx) - (((mWidth / 2) - gdx) / scaleFactor));
double eyeYShift = (((mHeight / 2) - gdy) - (((mHeight / 2) - gdy) / scaleFactor));

screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

eyeX -= (eyeXShift / screen_vs_map_horz_ratio);
lookX = eyeX;
eyeY += (eyeYShift / screen_vs_map_vert_ratio);
lookY = eyeY;

Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);
}


protected String getLineVertexShader()
{
// TO DO: Explain why we normalize the vectors, explain some of the vector math behind it all. Explain what is eye space.
final String lineVertexShader =
"uniform mat4 u_MVPMatrix; \n" // A constant representing the combined model/view/projection matrix.

+ "attribute vec4 a_Position; \n" // Per-vertex position information we will pass in.
+ "attribute vec4 a_Color; \n" // Per-vertex color information we will pass in.

+ "varying vec4 v_Color; \n" // This will be passed into the fragment shader.

+ "void main() \n" // The entry point for our vertex shader.
+ "{ \n"
+ " v_Color = a_Color; \n" // Pass the color through to the fragment shader.
// It will be interpolated across the triangle.
+ " gl_Position = u_MVPMatrix \n" // gl_Position is a special variable used to store the final position.
+ " * a_Position; \n" // Multiply the vertex by the matrix to get the final point in
+ " gl_PointSize = 5.0; \n"
+ "} \n"; // normalized screen coordinates.

return lineVertexShader;
}

protected String getLineFragmentShader()
{
final String lineFragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a
+ "uniform vec4 u_Color; \n" // This is the color from the vertex shader interpolated across the
// triangle per fragment.
+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = u_Color; \n" // Pass the color directly through the pipeline.
+ "} \n";
return lineFragmentShader;
}

protected String getPointVertexShader()
{
// Define a simple shader program for our points.
final String pointVertexShader =
"uniform mat4 u_MVPMatrix; \n"
+ "attribute vec4 a_Position; \n"
+ "attribute vec2 a_TexCoordinate; \n" // Per-vertex texture coordinate information we will pass in.

+ "varying vec2 v_TexCoordinate; \n" // This will be passed into the fragment shader.

+ "void main() \n"
+ "{ \n"
+ " v_TexCoordinate = a_TexCoordinate; \n" // Pass through the texture coordinate.
+ " gl_Position = u_MVPMatrix * a_Position; \n" // gl_Position is a special variable used to store the final position.
+ "} \n";
return pointVertexShader;
}

protected String getPointFragmentShader()
{
final String pointFragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a precision in the fragment shader.
+ "uniform sampler2D u_Texture; \n" // The input texture.

+ "varying vec2 v_TexCoordinate;\n" // Interpolated texture coordinate per fragment.

+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = (texture2D(u_Texture, v_TexCoordinate));\n" // Pass the color directly through the pipeline.
+ "} \n";
return pointFragmentShader;
}

/**
* Helper function to compile a shader.
*
* @param shaderType The shader type.
* @param shaderSource The shader source code.
* @return An OpenGL handle to the shader.
*/
private int compileShader(String shader, final int shaderType, final String shaderSource)
{
int shaderHandle = GLES20.glCreateShader(shaderType);

if (shaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(shaderHandle, shaderSource);

// Compile the shader.
GLES20.glCompileShader(shaderHandle);

// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(shaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
Log.e("vboCustomGLRenderer: compileShader", "Error compiling shader: " + shader + " " + GLES20.glGetShaderInfoLog(shaderHandle));
GLES20.glDeleteShader(shaderHandle);
shaderHandle = 0;
}
}

if (shaderHandle == 0)
{
throw new RuntimeException("Error creating shader." );
}
return shaderHandle;
}

/**
* Helper function to compile and link a program.
*
* @param vertexShaderHandle An OpenGL handle to an already-compiled vertex shader.
* @param fragmentShaderHandle An OpenGL handle to an already-compiled fragment shader.
* @param attributes Attributes that need to be bound to the program.
* @return An OpenGL handle to the program.
*/
private int createAndLinkProgram(final int vertexShaderHandle, final int fragmentShaderHandle, final String[] attributes)
{
int programHandle = GLES20.glCreateProgram();

if (programHandle != 0)
{
// Bind the vertex shader to the program.
GLES20.glAttachShader(programHandle, vertexShaderHandle);

// Bind the fragment shader to the program.
GLES20.glAttachShader(programHandle, fragmentShaderHandle);

// Bind attributes
if (attributes != null)
{
final int size = attributes.length;
for (int i = 0; i < size; i++)
{
GLES20.glBindAttribLocation(programHandle, i, attributes[i]);
}
}

// Link the two shaders together into a program.
GLES20.glLinkProgram(programHandle);

// Get the link status.
final int[] linkStatus = new int[1];
GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

// If the link failed, delete the program.
if (linkStatus[0] == 0)
{
Log.e("vboCustomGLRenderer: createAndLinkProgram", "Error compiling program: " + GLES20.glGetProgramInfoLog(programHandle));
GLES20.glDeleteProgram(programHandle);
programHandle = 0;
}
}

if (programHandle == 0)
{
throw new RuntimeException("Error creating program.");
}

return programHandle;
}

public static int loadTexture(final Context context, final int resourceId)
{
final int[] textureHandle = new int[1];

GLES20.glGenTextures(1, textureHandle, 0);

if (textureHandle[0] != 0)
{
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false; // No pre-scaling

// Read in the resource
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}

if (textureHandle[0] == 0)
{
throw new RuntimeException("Error loading texture.");
}

return textureHandle[0];
}

@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {

// Set the background frame color
//White
GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

// Set the view matrix. This matrix can be said to represent the camera position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);


final String lineVertexShader = getLineVertexShader();
final String lineFragmentShader = getLineFragmentShader();

final int lineVertexShaderHandle = compileShader("lineVertexShader", GLES20.GL_VERTEX_SHADER, lineVertexShader);
final int lineFragmentShaderHandle = compileShader("lineFragmentShader", GLES20.GL_FRAGMENT_SHADER, lineFragmentShader);

mPerVertexLinesProgramHandle = createAndLinkProgram(lineVertexShaderHandle, lineFragmentShaderHandle, new String[] {"a_Position", "a_Color"});

mMVPMatrixHandle = GLES20.glGetUniformLocation(mPerVertexLinesProgramHandle, "u_MVPMatrix");
mLinePositionHandle = GLES20.glGetAttribLocation(mPerVertexLinesProgramHandle, "a_Position");
mLineColorUniformLocation = GLES20.glGetUniformLocation(mPerVertexLinesProgramHandle, "u_Color");

GLES20.glUseProgram(mPerVertexLinesProgramHandle);


final String pointsVertexShader = getPointVertexShader();
final String pointsFragmentShader = getPointFragmentShader();

final int pointVertexShaderHandle = compileShader("pointsVertexShader", GLES20.GL_VERTEX_SHADER, pointsVertexShader);
final int pointFragmentShaderHandle = compileShader("pointsFragmentShader", GLES20.GL_FRAGMENT_SHADER, pointsFragmentShader);

mPointsProgramHandle = createAndLinkProgram(pointVertexShaderHandle, pointFragmentShaderHandle, new String[] {"a_Position", "a_TexCoordinate"});

// Load the texture
mTextureDataHandle = loadTexture(mActivityContext, com.ANDRRA1.R.drawable.andrra_point);

//mPointMVPMatrixHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_MVPMatrix");
mTextureUniformHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_Texture");
mPointPositionHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_Position");
mTextureCoordinateHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_TexCoordinate");

}

static double mWidth = 0;
static double mHeight = 0;
static double mLeft = 0;
static double mRight = 0;
static double mTop = 0;
static double mBottom = 0;
double mRatio = 0;
double screen_width_height_ratio;
double screen_height_width_ratio;
final float near = 1.5f;
final float far = 10.0f;

double screen_vs_map_horz_ratio = 0;
double screen_vs_map_vert_ratio = 0;

@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {

// Adjust the viewport based on geometry changes,
// such as screen rotation
// Set the OpenGL viewport to the same size as the surface.
GLES20.glViewport(0, 0, width, height);
//Log.d("","onSurfaceChanged");

screen_width_height_ratio = (double) width / height;
screen_height_width_ratio = (double) height / width;

//Initialize
if (mRatio == 0){
mWidth = (double) width;
mHeight = (double) height;

//map height to width ratio
double map_extents_width = default_settings.mbrMaxX - default_settings.mbrMinX;
double map_extents_height = default_settings.mbrMaxY - default_settings.mbrMinY;
double map_width_height_ratio = map_extents_width/map_extents_height;
//float map_height_width_ratio = map_extents_height/map_extents_width;
if (screen_width_height_ratio > map_width_height_ratio){
mRight = (screen_width_height_ratio * map_extents_height)/2;
mLeft = -mRight;
mTop = map_extents_height/2;
mBottom = -mTop;
}
else{
mRight = map_extents_width/2;
mLeft = -mRight;
mTop = (screen_height_width_ratio * map_extents_width)/2;
mBottom = -mTop;
}

mRatio = screen_width_height_ratio;
}

if (screen_width_height_ratio != mRatio){
final double wRatio = width/mWidth;
final double oldWidth = mRight - mLeft;
final double newWidth = wRatio * oldWidth;
final double widthDiff = (newWidth - oldWidth)/2;
mLeft = mLeft - widthDiff;
mRight = mRight + widthDiff;

final double hRatio = height/mHeight;
final double oldHeight = mTop - mBottom;
final double newHeight = hRatio * oldHeight;
final double heightDiff = (newHeight - oldHeight)/2;
mBottom = mBottom - heightDiff;
mTop = mTop + heightDiff;

mWidth = (double) width;
mHeight = (double) height;

mRatio = screen_width_height_ratio;
}

screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);
}

ListIterator<mapLayer> orgNonAssetCatLayersList_it;
ListIterator<FloatBuffer> mapLayerObjectList_it;
ListIterator<Byte> mapLayerObjectTypeList_it;
mapLayer MapLayer;

@Override
public void onDrawFrame(GL10 unused) {

//Log.d("","onDrawFrame");
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

if (loadComplete){
drawPreset();
orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();
while (orgNonAssetCatLayersList_it.hasNext()) {
MapLayer = orgNonAssetCatLayersList_it.next();

if (MapLayer.BatchedPointVBO != null){
}
if (MapLayer.BatchedLineVBO != null){
drawLineString(MapLayer.BatchedLineVBO, MapLayer.lineStringObjColor);
}
if (MapLayer.BatchedPolygonVBO != null){
drawPolygon(MapLayer.BatchedPolygonVBO, MapLayer.polygonObjColor);
}
}

MapLayer = default_settings.orgAssetCatNDRRALayer;
if (MapLayer.BatchedPointVBO != null){
drawTexturedPoint(MapLayer.BatchedPointVBO);
}
if (MapLayer.BatchedLineVBO != null){
}
if (MapLayer.BatchedPolygonVBO != null){
}

}
}

private void drawPreset()
{
Matrix.setIdentityM(mModelMatrix, 0);

// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);

// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);

// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
GLES20.glUniform1i(mTextureUniformHandle, 0);
}

/**
* Draws a textured square, representing a point.
*/
private void drawTexturedPoint(final FloatBuffer geometryBuffer)
{
// Pass in the position information
mSquarePositions.position(0);
GLES20.glVertexAttribPointer(mPointPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, 0, mSquarePositions);

GLES20.glEnableVertexAttribArray(mPointPositionHandle);

// Pass in the texture coordinate information
mSquareTextureCoordinates.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mTextureCoordinateDataSize, GLES20.GL_FLOAT, false, 0, mSquareTextureCoordinates);

GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);

// Draw the square (two triangles, six vertices).
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
}

private void drawLineString(final FloatBuffer geometryBuffer, final float[] colorArray)
{
// Pass in the position information
geometryBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mLinePositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

GLES20.glEnableVertexAttribArray(mLinePositionHandle);

GLES20.glUniform4f(mLineColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

GLES20.glLineWidth(1.0f);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
}

private void drawPolygon(final FloatBuffer geometryBuffer, final float[] colorArray)
{
// Pass in the position information
geometryBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mLinePositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

GLES20.glEnableVertexAttribArray(mLinePositionHandle);

GLES20.glUniform4f(mLineColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

GLES20.glLineWidth(1.0f);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
}
}

GClements
08-24-2013, 06:47 AM
What I am trying to accomplish is to change the points to circular textures drawn on squares. I would like the positions to move with the zoom, but the scaling should behave differently: I do not want the textured squares to scale at the same rate as the rest of the drawing.
Can you use GL_POINTS? The vertex shader writes the point size to gl_PointSize, and the fragment shader gets invoked for every fragment within a square of that size centred on the projected vertex coordinate. The fragment shader input variable gl_PointCoord holds the coordinates within the point (in the range (0,0) to (1,1), so (0.5,0.5) is the centre).

The only drawback is that the implementation is allowed to impose an upper limit on the size of points, and that limit can be as low as one pixel (i.e. gl_PointSize isn't guaranteed to be supported in a meaningful sense). However, if you're targeting a specific platform, then you can check whether that platform's point size limit is sufficient for your needs.
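For illustration, a minimal sketch of that approach (assuming a texture bound as elsewhere in this thread; none of this is from the posted renderer):


// Query the supported point size range before committing to this approach.
final float[] range = new float[2];
GLES20.glGetFloatv(GLES20.GL_ALIASED_POINT_SIZE_RANGE, range, 0);
// range[1] is the largest point size, in pixels, the implementation supports.

final String spriteVertexShader =
"uniform mat4 u_MVPMatrix; \n"
+ "uniform float u_pointSizePx; \n" // Desired sprite size in pixels.
+ "attribute vec4 a_Position; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_Position = u_MVPMatrix * a_Position; \n"
+ " gl_PointSize = u_pointSizePx; \n"
+ "} \n";

final String spriteFragmentShader =
"precision mediump float; \n"
+ "uniform sampler2D u_Texture; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_FragColor = texture2D(u_Texture, gl_PointCoord); \n" // gl_PointCoord runs (0,0)..(1,1) across the sprite.
+ "} \n";

// The points are then drawn with:
// GLES20.glDrawArrays(GLES20.GL_POINTS, 0, pointCount);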

If you can't use points, then the way I would go about it would be to give all four vertices the coordinates of the point itself, then "fix" them in the shader based upon the texture coordinate, e.g.:


uniform vec2 u_pointSize; // point size in normalised device coordinates
uniform mat4 u_MVPMatrix;
attribute vec4 a_Position;
attribute vec2 a_TexCoordinate;

varying vec2 v_TexCoordinate;

void main()
{
v_TexCoordinate = a_TexCoordinate;
gl_Position = u_MVPMatrix * a_Position;
gl_Position += vec4(gl_Position.w * u_pointSize * (a_TexCoordinate - vec2(0.5,0.5), 0, 0);
}


Note that u_pointSize is a vec2 in normalised device coordinates; the value should be the size in pixels divided by the viewport size in pixels.
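In application code that would look something like this (a sketch; the handle and viewport variables are placeholders):


// For a sprite 10 pixels across on a viewport of viewportWidthPx x viewportHeightPx:
GLES20.glUniform2f(mPointSizeHandle, 10.0f / viewportWidthPx, 10.0f / viewportHeightPx);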

Alfonse Reinheart
08-24-2013, 08:36 AM
The only drawback

You mean, besides the fact that points are clipped by the center. So when a point goes halfway off screen, it just disappears (on a conforming implementation).

Hank Finley
08-26-2013, 01:33 AM
Hi Guys,

Like you say, option 1 seems to have a couple of drawbacks, so I will try the second.

To start with, I have created a FloatBuffer that gets set in my zoom function, which will set the size as explained:
size in pixels divided by the viewport size
For now I have just hard-coded the sizes to remain the same; I'll fix this up once everything is running.


private FloatBuffer mPointSize;
public void setScaleFactor(float scaleFactor, float gdx, float gdy){
....
mPointSize.clear();
float psize = 5f/480f;
mPointSize.put(psize);
mPointSize.put(psize);
}

The following are the shaders:
Vertex:

protected String getPointVertexShader()
{
// Define a simple shader program for our points.
final String pointVertexShader =
"uniform vec2 u_pointSize; \n" // point size in normalised device coordinates"
+ "uniform mat4 u_MVPMatrix; \n"
+ "attribute vec4 a_Position; \n"
+ "attribute vec2 a_TexCoordinate; \n" // Per-vertex texture coordinate information

+ "varying vec2 v_TexCoordinate; \n" // This will be passed into the fragment shader.

+ "void main() \n"
+ "{ \n"
+ " v_TexCoordinate = a_TexCoordinate; \n" // Pass through the texture coordinate.
+ " gl_Position = u_MVPMatrix * a_Position; \n" // gl_Position is a special variable used to store the final position.
+ " gl_Position += vec4(gl_Position.w * u_pointSize * (a_TexCoordinate - vec2(0.5,0.5), 0, 0); \n"
+ "} \n";
return pointVertexShader;
}
Fragment:

protected String getPointFragmentShader()
{
final String pointFragmentShader =
"precision mediump float; \n"
+ "uniform sampler2D u_Texture; \n"

+ "varying vec2 v_TexCoordinate;\n"

+ "void main() \n"
+ "{ \n"
+ " gl_FragColor = (texture2D(u_Texture, v_TexCoordinate));\n"
+ "} \n";
return pointFragmentShader;
}

At the moment runtime is throwing an error:
Error compiling shader: pointsVertexShader Compile failed.
ERROR: 0:10: '*' : Wrong operand types. No operation '*' exists that takes a left-hand operand of type '2-component vector of float' and a right operand of type 'const int' (and there is no acceptable conversion)

The following is an extract from my onSurfaceCreated function:

public void onSurfaceCreated(GL10 unused, EGLConfig config) {
...//Other code
final String pointsVertexShader = getPointVertexShader();
final String pointsFragmentShader = getPointFragmentShader();

final int pointVertexShaderHandle = compileShader("pointsVertexShader", GLES20.GL_VERTEX_SHADER, pointsVertexShader);
final int pointFragmentShaderHandle = compileShader("pointsFragmentShader", GLES20.GL_FRAGMENT_SHADER, pointsFragmentShader);

mPointsProgramHandle = createAndLinkProgram(pointVertexShaderHandle, pointFragmentShaderHandle, new String[] {"a_Position", "a_TexCoordinate"});

// Load the texture
mTextureDataHandle = loadTexture(mActivityContext, com.ANDRRA1.R.drawable.andrra_point);

mPointSizeHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "u_pointSize");
mPointMVPMatrixHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_MVPMatrix");
mTextureUniformHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_Texture");
mPointPositionHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_Position");
mTextureCoordinateHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_TexCoordinate");
}

And finally my draw function:

private void drawTexturedPoint(final FloatBuffer geometryBuffer)
{
GLES20.glUseProgram(mPointsProgramHandle);

// Pass in the point size information
mPointSize.position(0);
GLES20.glVertexAttribPointer(mPointSizeHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, mPointSize);

GLES20.glEnableVertexAttribArray(mPointSizeHandle);

// Pass in the position information
geometryBuffer.position(0);
//GLES20.glVertexAttribPointer(mPointPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, 0, mSquarePositions);
GLES20.glVertexAttribPointer(mPointPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

GLES20.glEnableVertexAttribArray(mPointPositionHandle);

// Pass in the texture coordinate information
mSquareTextureCoordinates.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, mSquareTextureCoordinates);

GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);

// Draw the point squares.
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, geometryBuffer.capacity()/mPositionDataSize);
}

Hoping to resolve the shader issue, and to see your comments on the rest of the pieces.

GClements
08-26-2013, 03:39 AM
You mean, besides the fact that points are clipped by the center. So when a point goes halfway off screen, it just disappears (on a conforming implementation).

Technically, they're clipped to the viewport (rather than the window/screen), and the viewport isn't constrained to the dimensions of the window.

GClements
08-26-2013, 03:42 AM
+ " gl_Position += vec4(gl_Position.w * u_pointSize * (a_TexCoordinate - vec2(0.5,0.5), 0, 0); \n"

There's a missing closing parenthesis:


+ " gl_Position += vec4(gl_Position.w * u_pointSize * (a_TexCoordinate - vec2(0.5,0.5)), 0, 0); \n"

Hank Finley
08-26-2013, 08:51 AM
Thanks G, should have spotted that.

Now I believe there is just one final thing. My textures are coming out as follows (you can see the thin black triangle being rendered):

[attached screenshots]

Below is a refresher from earlier of what it should resemble:
[attached screenshot]

I believe it is something to do with my position buffer and the fact that I'm trying to draw triangles: it seems to be linking up the individual points, and it just happens that I have six. From what I understand, I want to be creating triangles to render my PNG texture, but I only want one point coordinate to go through at a time, allowing the shader to do the rest?

Below I have cut down my code to what I think is relevant.

private final int mBytesPerFloat = 4;
private final int mPositionOffset = 0;
private final int mPositionDataSize = 3;
private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;
private FloatBuffer mPointSize;
private final int mVec2DataSize = 2;


public vboCustomGLRenderer(final Context activityContext) {
mActivityContext = activityContext;

// S, T (or X, Y)
// Texture coordinate data.
final float[] squareTextureCoordinateData =
{
// Front face
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};

mSquareTextureCoordinates = ByteBuffer.allocateDirect(squareTextureCoordinateData.length * mBytesPerFloat).order(ByteOrder.nativeOrder()).asFloatBuffer();
mSquareTextureCoordinates.put(squareTextureCoordinateData).position(0);
}


public void setScaleFactor(float scaleFactor, float gdx, float gdy){

....Reset projection to zoom in or out

//Update point size
float psize = 25f/480f;
mPointSize.position(0);
mPointSize.put(psize);
mPointSize.put(psize);
mPointSize.flip();
}


@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {

....setLookAtM()
....Load compile and link shaders
....Load point texture

//Initialize point size
mPointSize = ByteBuffer.allocateDirect(8).order(ByteOrder.nativeOrder()).asFloatBuffer();
float psize = 25f/480f;
mPointSize.put(psize);
mPointSize.put(psize);
mPointSize.flip();
}


private void drawTexturedPoint(final FloatBuffer geometryBuffer)
{
//GeometryBuffer holds all the points in one buffer.

GLES20.glUseProgram(mPointsProgramHandle);

mPointSizeHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "u_pointSize");
mPointMVPMatrixHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_MVPMatrix");
mTextureUniformHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_Texture");
mPointPositionHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_Position");
mTextureCoordinateHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_TexCoordinate");

// Pass in the point size information
mPointSize.position(0);
GLES20.glVertexAttribPointer(mPointSizeHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, mPointSize);

GLES20.glEnableVertexAttribArray(mPointSizeHandle);

// Pass in the position information
geometryBuffer.position(0);
GLES20.glVertexAttribPointer(mPointPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

GLES20.glEnableVertexAttribArray(mPointPositionHandle);

// Pass in the texture coordinate information
mSquareTextureCoordinates.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, mSquareTextureCoordinates);

GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);


GLES20.glUniformMatrix4fv(mPointMVPMatrixHandle, 1, false, mMVPMatrix, 0);

// Draw the point squares.
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, geometryBuffer.capacity()/mPositionDataSize);
}

GClements
08-27-2013, 02:15 AM
I believe it is something to do with my position buffer and the fact that I'm trying to draw triangles: it seems to be linking up the individual points, and it just happens that I have six. From what I understand, I want to be creating triangles to render my PNG texture, but I only want one point coordinate to go through at a time, allowing the shader to do the rest?

For rendering with GL_TRIANGLES, you need to pass 6 vertices (two triangles) for each point. You can pass the same vertex coordinates for all 6, and have the shader move the vertex coordinates outwards according to the point size, but the shader can't turn a single vertex into multiple vertices (a geometry shader can do that, but OpenGL ES doesn't have those).

If you use glDrawElements() rather than glDrawArrays(), each point only needs 4 distinct vertices and 6 indices, rather than 6 complete vertices.
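A sketch of that indexing (hypothetical setup, not from the posted code): with the four corners ordered (0,0), (0,1), (1,0), (1,1) as in this thread's texture coordinates, point i gets vertices 4i..4i+3 and the six indices {4i, 4i+1, 4i+2, 4i+1, 4i+3, 4i+2}:


// Build a java.nio.ShortBuffer of indices covering pointCount quads.
final short[] indices = new short[pointCount * 6];
for (int i = 0; i < pointCount; i++) {
final short base = (short) (i * 4);
indices[i * 6 + 0] = base; // First triangle.
indices[i * 6 + 1] = (short) (base + 1);
indices[i * 6 + 2] = (short) (base + 2);
indices[i * 6 + 3] = (short) (base + 1); // Second triangle.
indices[i * 6 + 4] = (short) (base + 3);
indices[i * 6 + 5] = (short) (base + 2);
}
final ShortBuffer indexBuffer = ByteBuffer.allocateDirect(indices.length * 2)
.order(ByteOrder.nativeOrder()).asShortBuffer();
indexBuffer.put(indices).position(0);

// Then, instead of glDrawArrays:
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES20.GL_UNSIGNED_SHORT, indexBuffer);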

Hank Finley
08-27-2013, 03:10 AM
OK, it was simpler to create the six vertices, so I've gone down that track.

However there must be something in drawTexturedPoint() that I'm either not setting or setting incorrectly. At the moment the lines are still rendering; however, I cannot see any rendering for the points.


private void drawTexturedPoint(final FloatBuffer geometryBuffer)
{
//GeometryBuffer holds all the points in one buffer.

GLES20.glUseProgram(mPointsProgramHandle);

mPointSizeHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "u_pointSize");
mPointMVPMatrixHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_MVPMatrix");
mTextureUniformHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_Texture");
mPointPositionHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_Position");
mTextureCoordinateHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_TexCoordinate");

// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);
// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
GLES20.glUniform1i(mTextureUniformHandle, 0);

// Pass in the point size information
mPointSize.position(0);
GLES20.glVertexAttribPointer(mPointSizeHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, mPointSize);

GLES20.glEnableVertexAttribArray(mPointSizeHandle);

// Pass in the position information
geometryBuffer.position(0);
GLES20.glVertexAttribPointer(mPointPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

GLES20.glEnableVertexAttribArray(mPointPositionHandle);

// Pass in the texture coordinate information
mSquareTextureCoordinates.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, mSquareTextureCoordinates);

GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);


GLES20.glUniformMatrix4fv(mPointMVPMatrixHandle, 1, false, mMVPMatrix, 0);

// Draw the point squares.
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, geometryBuffer.capacity()/mPositionDataSize);
}

I placed in a bunch of error capturing; it is returning error code 1281 on the following line.
GLES20.glVertexAttribPointer(mPointSizeHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, mPointSize);

Hank Finley
08-28-2013, 01:40 AM
I did some more research on the error code and it turns out to be a GL_INVALID_VALUE.

So my next question is, if that is the case, and I am populating my vec2 u_pointSize with:

//Update point size
float psize = 25f/480f;
mPointSize.position(0);
mPointSize.put(psize);
mPointSize.put(psize);
mPointSize.flip();
Is that correct?

And when I pass the mPointSize buffer to the shader, is that correct?

// Pass in the point size information
mPointSize.position(0);
GLES20.glVertexAttribPointer(mPointSizeHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, mPointSize); //Error code gets sent back after this line.
GLES20.glEnableVertexAttribArray(mPointSizeHandle);

GClements
08-28-2013, 02:57 AM
And when I pass the mPointSize buffer to the shader, is that correct?

// Pass in the point size information
mPointSize.position(0);
GLES20.glVertexAttribPointer(mPointSizeHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, mPointSize); //Error code gets sent back after this line.
GLES20.glEnableVertexAttribArray(mPointSizeHandle);

Is the point size a uniform or an attribute? The last posted version of the GLSL code has it as a uniform, but the application is treating it as an attribute. If it is an attribute, then it needs to be specified per-vertex.
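To illustrate the distinction (a sketch; a_pointSize and pointSizeBuffer are placeholder names, not from the posted code):


// As a uniform: one value per draw call, matching "uniform vec2 u_pointSize".
int sizeUniform = GLES20.glGetUniformLocation(program, "u_pointSize");
GLES20.glUniform2f(sizeUniform, sizeNdcX, sizeNdcY);

// As an attribute: one value per vertex, which would instead require
// "attribute vec2 a_pointSize" in the shader and an entry for every vertex.
int sizeAttrib = GLES20.glGetAttribLocation(program, "a_pointSize");
GLES20.glVertexAttribPointer(sizeAttrib, 2, GLES20.GL_FLOAT, false, 0, pointSizeBuffer);
GLES20.glEnableVertexAttribArray(sizeAttrib);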

Hank Finley
08-28-2013, 03:48 AM
OK, I see what you mean. I'll keep it as a uniform, as you implied in your solution, so I have altered:

mPointSizeHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_pointSize");

Not quite sure how to pass the buffer through, however.

I tried:

GLES20.glUniform2fv(mPointSizeHandle, 1, mPointSize);

It is looking a little strange, however:
[attached screenshot]

Hank Finley
08-29-2013, 01:41 AM
Hi GClements, could you show me an example of how the u_pointSize vec2 variable should be set?

Explaining further, I have this FloatBuffer mPointSize that I'll pass to the shader program for u_pointSize. You said:

u_pointSize is a vec2 in normalised device coordinates; the value should be the size in pixels divided by the viewport size in pixels.

What I'm asking is how to populate mPointSize. I realize that you have explained it above, but I still cannot figure it out.

GClements
08-29-2013, 01:09 PM
Hi GClements, could you show me an example of how the u_pointsize vec2 variable should be set.
I'm not familiar with the Java-specific buffer stuff, but:


GLES20.glUniform2f(mPointSizeHandle, 10.0f/width, 10.0f/height);

should produce points 10 pixels in diameter (width and height should be the screen dimensions in pixels).

It's not clear whether your image is the result of a small number of really large points or just too many points.

Also, are the texture coordinates correct? They should be (0,0), (0,1), (1,0) and (1,1); if they're too large, the points will be enlarged as well.

Hank Finley
08-29-2013, 11:13 PM
Trying to simplify everything down so I can figure out the issue.

I have one point, with its six vertex positions, that I am passing through as follows:

float[] singlePointPositionData = {
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f
};


float[] squareTextureCoordinateData = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};

Using this now:

GLES20.glUniform2f(mPointSizeHandle, 10.0f/480f, 10.0f/762f); // 480x762 is the size of my viewport in pixels

At present it displays a stretched triangle (once everything is in order, it should display at the point position coordinates above, within the red box):
[attached screenshot]

GClements
08-30-2013, 06:25 AM
At present it displays with a stretched triangle
That's strange. The vertex shader offsets each point by a fixed multiple of "a_TexCoordinate - vec2(0.5,0.5)", so the vertices should at least be centred around the correct point regardless of the size.

All I can suggest is to:

1. Experiment with changing individual numbers to see how that affects the final result. That may provide enough clues to track down the source of the problem.

2. Post more complete code: the draw call, anything which sets the state (uniforms and attributes) used by the draw call, anything which assigns to variables used in the code, etc. (use pastebin if it exceeds the size limits of the forum). Currently, the code is split into many fragments across multiple posts, some of which will have since changed.

Hank Finley
08-31-2013, 12:53 AM
1. I found an issue with my position buffer; once fixed, it rendered the first square. It did not render the texture, just produced a black square, which is fine for now; I would rather work on getting the individual squares rendering correctly first.

2. I added in another 6 vertices for a second square, since the first is displaying.

3. This is what it displays (notice on the left the first point is rendered; the red squares are just me rendering another set of all six points as primitive points, to show where they should be):
[attached screenshot]

4. Could it be the stride or offset between triangles that I have wrong? I tried altering them to see what happens, but haven't really made much progress apart from a few program crashes.

5. Code:

final float[] squareTextureCoordinateData =
{
// Front face
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};


final float[] singlePointData =
{
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,

-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f
};


private void drawTexturedPoint(final FloatBuffer geometryBuffer)
{
GLES20.glUseProgram(mPointsProgramHandle);

mPointSizeHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_pointSize");
mPointMVPMatrixHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_MVPMatrix");
mTextureUniformHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_Texture");
mPointPositionHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_Position");
mTextureCoordinateHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_TexCoordinate");

// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);
// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
GLES20.glUniform1i(mTextureUniformHandle, 0);

GLES20.glUniform2f(mPointSizeHandle, 10.0f/480f, 10.0f/762f);

// Pass in the position information
geometryBuffer.position(0);
GLES20.glVertexAttribPointer(mPointPositionHandle, 3, GLES20.GL_FLOAT, false, (3*4), geometryBuffer);

GLES20.glEnableVertexAttribArray(mPointPositionHandle);

// Pass in the texture coordinate information
mSquareTextureCoordinates.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, 2, GLES20.GL_FLOAT, false, 0, mSquareTextureCoordinates);
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);

GLES20.glUniformMatrix4fv(mPointMVPMatrixHandle, 1, false, mMVPMatrix, 0);

// Draw the point squares.
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, geometryBuffer.capacity()/3);
}

6. All the renderer code

public class vboCustomGLRenderer implements GLSurfaceView.Renderer {

private final Context mActivityContext;

/**
* Store the model matrix. This matrix is used to move models from object space (where each model can be thought
* of being located at the center of the universe) to world space.
*/
private float[] mModelMatrix = new float[16];

/**
* Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
* it positions things relative to our eye.
*/
private float[] mViewMatrix = new float[16];

/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
private float[] mProjectionMatrix = new float[16];

/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
private float[] mMVPMatrix = new float[16];

/** This will be used to pass in the transformation matrix. */
private int mMVPMatrixHandle;

/** This will be used to pass in model position information. */
private int mLinePositionHandle;

/** This will be used to pass in model color information. */
private int mLineColorUniformLocation;

/** This will be used to pass in model position information. */
private int mPointPositionHandle;

/** How many bytes per float. */
private final int mBytesPerFloat = 4;

/** Offset of the position data. */
private final int mPositionOffset = 0;

/** Size of the position data in elements. */
private final int mPositionDataSize = 3;

/** Stride of the position data in bytes. */
private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;

/** This is a handle to our per-vertex line shading program. */
private int mLinesProgramHandle;

/** This is a handle to our points program. */
private int mPointsProgramHandle;

private FloatBuffer mPointSize;

/** Store our model data in a float buffer. */
private final FloatBuffer mSquareTextureCoordinates;

/** This will be used to pass in model texture coordinate information. */
private int mTextureCoordinateHandle;

/** Size of the texture coordinate data in elements. */
private final int mVec2DataSize = 2;

/** This will be used to pass in the texture. */
private int mTextureUniformHandle;

/** This is a handle to our texture data. */
private int mTextureDataHandle;

private int mPointSizeHandle;
private int mPointMVPMatrixHandle;

public double eyeX = 0;
public double eyeY = 0;
public float eyeZ = 1.5f;

// We are looking toward the distance
public double lookX = eyeX;
public double lookY = eyeY;
public float lookZ = 0.0f;

// Set our up vector. This is where our head would be pointing were we holding the camera.
public float upX = 0.0f;
public float upY = 1.0f;
public float upZ = 0.0f;

public double modelOffsetX = -(default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX)/2));
public double modelOffsetY = -(default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY)/2));

public double mScaleFactor = 1;
public double modelXShift = 0;
public double modelYShift = 0;
public double viewXShift = 0;
public double viewYShift = 0;

static float mWidth = 0;
static float mHeight = 0;
static double mLeft = 0;
static double mRight = 0;
static double mTop = 0;
static double mBottom = 0;
double mRatio = 0;
double screen_width_height_ratio;
double screen_height_width_ratio;
final float near = 1.5f;
final float far = 10.0f;

double screen_vs_map_horz_ratio = 0;
double screen_vs_map_vert_ratio = 0;

FloatBuffer tempPoint;

public vboCustomGLRenderer(final Context activityContext) {
mActivityContext = activityContext;

final float[] squareTextureCoordinateData =
{
// Front face
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};
mSquareTextureCoordinates = ByteBuffer.allocateDirect(squareTextureCoordinateData.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mSquareTextureCoordinates.put(squareTextureCoordinateData).position(0);

final float[] singlePointData =
{
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,
-0.010406494f, -0.22647285f, 0.0f,

-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f,
-0.0031433105f, -0.22737312f, 0.0f
};

/*
Coordinates for all six points
-0.010406494, -0.22647285, 0.0
-0.0031433105, -0.22737312, 0.0
0.0064849854, -0.22831154, 0.0
0.009414673, -0.23442268, 0.0
0.009063721, -0.23643303, 0.0
0.008743286, -0.23848152, 0.0
*/

tempPoint = ByteBuffer.allocateDirect(singlePointData.length * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
tempPoint.put(singlePointData).position(0);
}

boolean loadComplete = false;

public void setDraw(boolean loadComplete){
this.loadComplete = loadComplete;
}

public void setEye(double x, double y){

eyeX -= (x / screen_vs_map_horz_ratio);
lookX = eyeX;
eyeY += (y / screen_vs_map_vert_ratio);
lookY = eyeY;

// Set the camera position (View matrix)
Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);
}

public void setScaleFactor(float scaleFactor, float gdx, float gdy){

// Don't let the object get too small or too large.
//mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 10000.0f));

mScaleFactor *= scaleFactor;

mRight = mRight / scaleFactor;
mLeft = -mRight;
mTop = mTop / scaleFactor;
mBottom = -mTop;

//The eye shift is in pixels; it will get converted to a screen ratio when sent to setEye().
double eyeXShift = (((mWidth / 2) - gdx) - (((mWidth / 2) - gdx) / scaleFactor));
double eyeYShift = (((mHeight / 2) - gdy) - (((mHeight / 2) - gdy) / scaleFactor));

screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

eyeX -= (eyeXShift / screen_vs_map_horz_ratio);
lookX = eyeX;
eyeY += (eyeYShift / screen_vs_map_vert_ratio);
lookY = eyeY;

Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);

float psize = 25f/480f;
mPointSize.position(0);
mPointSize.put(psize);
mPointSize.put(psize);
mPointSize.flip();
}

protected String getLineVertexShader()
{
// TO DO: Explain why we normalize the vectors, explain some of the vector math behind it all. Explain what is eye space.
final String lineVertexShader =
"uniform mat4 u_MVPMatrix; \n" // A constant representing the combined model/view/projection matrix.

+ "attribute vec4 a_Position; \n" // Per-vertex position information we will pass in.
+ "attribute vec4 a_Color; \n" // Per-vertex color information we will pass in.

+ "varying vec4 v_Color; \n" // This will be passed into the fragment shader.

+ "void main() \n" // The entry point for our vertex shader.
+ "{ \n"
+ " v_Color = a_Color; \n" // Pass the color through to the fragment shader.
// It will be interpolated across the triangle.
+ " gl_Position = u_MVPMatrix \n" // gl_Position is a special variable used to store the final position.
+ " * a_Position; \n" // Multiply the vertex by the matrix to get the final point in
+ " gl_PointSize = 15.0; \n"
+ "} \n"; // normalized screen coordinates.

return lineVertexShader;
}

protected String getLineFragmentShader()
{
final String lineFragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a
+ "uniform vec4 u_Color; \n" // This is the color from the vertex shader interpolated across the
// triangle per fragment.
+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = u_Color; \n" // Pass the color directly through the pipeline.
+ "} \n";
return lineFragmentShader;
}

protected String getPointVertexShader()
{
// Define a simple shader program for our points.
final String pointVertexShader =
"uniform mat4 u_MVPMatrix; \n"
+ "uniform vec2 u_pointSize; \n"
+ "attribute vec4 a_Position; \n"
+ "attribute vec2 a_TexCoordinate; \n" // Per-vertex texture coordinate information we will pass in.

+ "varying vec2 v_TexCoordinate; \n" // This will be passed into the fragment shader.

+ "void main() \n"
+ "{ \n"
+ " v_TexCoordinate = a_TexCoordinate; \n" // Pass through the texture coordinate.
+ " gl_Position = u_MVPMatrix * a_Position; \n" // gl_Position is a special variable used to store the final position.
+ " gl_Position += vec4(gl_Position.w * u_pointSize * (a_TexCoordinate - vec2(0.5,0.5)), 0, 0);\n"
+ "} \n";
return pointVertexShader;
}

protected String getPointFragmentShader()
{
final String pointFragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a precision in the fragment shader.
+ "uniform sampler2D u_Texture; \n" // The input texture.

+ "varying vec2 v_TexCoordinate;\n" // Interpolated texture coordinate per fragment.

+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = (texture2D(u_Texture, v_TexCoordinate));\n" // Pass the color directly through the pipeline.
+ "} \n";
return pointFragmentShader;
}

/**
* Helper function to compile a shader.
*
* @param shaderType The shader type.
* @param shaderSource The shader source code.
* @return An OpenGL handle to the shader.
*/
private int compileShader(String shader, final int shaderType, final String shaderSource)
{
int shaderHandle = GLES20.glCreateShader(shaderType);

if (shaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(shaderHandle, shaderSource);

// Compile the shader.
GLES20.glCompileShader(shaderHandle);

// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(shaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
Log.e("vboCustomGLRenderer: compileShader", "Error compiling shader: " + shader + " " + GLES20.glGetShaderInfoLog(shaderHandle));
GLES20.glDeleteShader(shaderHandle);
shaderHandle = 0;
}
}

if (shaderHandle == 0)
{
throw new RuntimeException("Error creating shader." );
}
return shaderHandle;
}

/**
* Helper function to compile and link a program.
*
* @param vertexShaderHandle An OpenGL handle to an already-compiled vertex shader.
* @param fragmentShaderHandle An OpenGL handle to an already-compiled fragment shader.
* @param attributes Attributes that need to be bound to the program.
* @return An OpenGL handle to the program.
*/
private int createAndLinkProgram(final int vertexShaderHandle, final int fragmentShaderHandle, final String[] attributes)
{
int programHandle = GLES20.glCreateProgram();

if (programHandle != 0)
{
// Bind the vertex shader to the program.
GLES20.glAttachShader(programHandle, vertexShaderHandle);

// Bind the fragment shader to the program.
GLES20.glAttachShader(programHandle, fragmentShaderHandle);

// Bind attributes
if (attributes != null)
{
final int size = attributes.length;
for (int i = 0; i < size; i++)
{
GLES20.glBindAttribLocation(programHandle, i, attributes[i]);
}
}

// Link the two shaders together into a program.
GLES20.glLinkProgram(programHandle);

// Get the link status.
final int[] linkStatus = new int[1];
GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

// If the link failed, delete the program.
if (linkStatus[0] == 0)
{
Log.e("vboCustomGLRenderer: createAndLinkProgram", "Error compiling program: " + GLES20.glGetProgramInfoLog(programHandle));
GLES20.glDeleteProgram(programHandle);
programHandle = 0;
}
}

if (programHandle == 0)
{
throw new RuntimeException("Error creating program.");
}

return programHandle;
}

public static int loadTexture(final Context context, final int resourceId)
{
final int[] textureHandle = new int[1];

GLES20.glGenTextures(1, textureHandle, 0);

if (textureHandle[0] != 0)
{
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false; // No pre-scaling

// Read in the resource
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}

if (textureHandle[0] == 0)
{
throw new RuntimeException("Error loading texture.");
}

return textureHandle[0];
}

@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {

// Set the background frame color
//White
GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

// Set the view matrix. This matrix can be said to represent the camera position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);

//Load the line shaders
final String lineVertexShader = getLineVertexShader();
final String lineFragmentShader = getLineFragmentShader();

final int lineVertexShaderHandle = compileShader("lineVertexShader", GLES20.GL_VERTEX_SHADER, lineVertexShader);
final int lineFragmentShaderHandle = compileShader("lineFragmentShader", GLES20.GL_FRAGMENT_SHADER, lineFragmentShader);

mLinesProgramHandle = createAndLinkProgram(lineVertexShaderHandle, lineFragmentShaderHandle, new String[] {"a_Position", "a_Color"});

//Load the point(square texture) shaders
final String pointsVertexShader = getPointVertexShader();
final String pointsFragmentShader = getPointFragmentShader();

final int pointVertexShaderHandle = compileShader("pointsVertexShader", GLES20.GL_VERTEX_SHADER, pointsVertexShader);
final int pointFragmentShaderHandle = compileShader("pointsFragmentShader", GLES20.GL_FRAGMENT_SHADER, pointsFragmentShader);

// Note: u_pointSize is a uniform, not an attribute, so it must not be bound as one.
mPointsProgramHandle = createAndLinkProgram(pointVertexShaderHandle, pointFragmentShaderHandle, new String[] {"a_Position", "a_TexCoordinate"});


// Load the texture
mTextureDataHandle = loadTexture(mActivityContext, com.ANDRRA1.R.drawable.andrra_point);
}

@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {

// Adjust the viewport based on geometry changes,
// such as screen rotation
// Set the OpenGL viewport to the same size as the surface.
GLES20.glViewport(0, 0, width, height);
//Log.d("","onSurfaceChanged");

screen_width_height_ratio = (double) width / height;
screen_height_width_ratio = (double) height / width;

//Initialize
if (mRatio == 0){
mWidth = (float) width;
mHeight = (float) height;

//map width to height ratio
double map_extents_width = default_settings.mbrMaxX - default_settings.mbrMinX;
double map_extents_height = default_settings.mbrMaxY - default_settings.mbrMinY;
double map_width_height_ratio = map_extents_width/map_extents_height;
//float map_height_width_ratio = map_extents_height/map_extents_width;
if (screen_width_height_ratio > map_width_height_ratio){
mRight = (screen_width_height_ratio * map_extents_height)/2;
mLeft = -mRight;
mTop = map_extents_height/2;
mBottom = -mTop;
}
else{
mRight = map_extents_width/2;
mLeft = -mRight;
mTop = (screen_height_width_ratio * map_extents_width)/2;
mBottom = -mTop;
}

mRatio = screen_width_height_ratio;
}

if (screen_width_height_ratio != mRatio){
final double wRatio = width/mWidth;
final double oldWidth = mRight - mLeft;
final double newWidth = wRatio * oldWidth;
final double widthDiff = (newWidth - oldWidth)/2;
mLeft = mLeft - widthDiff;
mRight = mRight + widthDiff;

final double hRatio = height/mHeight;
final double oldHeight = mTop - mBottom;
final double newHeight = hRatio * oldHeight;
final double heightDiff = (newHeight - oldHeight)/2;
mBottom = mBottom - heightDiff;
mTop = mTop + heightDiff;

mWidth = (float) width;
mHeight = (float) height;

mRatio = screen_width_height_ratio;
}

screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);
checkGLError("onSurfaceChanged");
}

ListIterator<mapLayer> orgNonAssetCatLayersList_it;
ListIterator<FloatBuffer> mapLayerObjectList_it;
ListIterator<Byte> mapLayerObjectTypeList_it;
mapLayer MapLayer;

@Override
public void onDrawFrame(GL10 unused) {

GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

if (loadComplete){
preset();
orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();
while (orgNonAssetCatLayersList_it.hasNext()) {
MapLayer = orgNonAssetCatLayersList_it.next();

if (MapLayer.BatchedPolygonVBO != null){
drawPolygon(MapLayer.BatchedPolygonVBO);
}
}
//Draws all points from a separate buffer as primitive points.
drawPoint(default_settings.orgAssetCatNDRRALayer.BatchedPointVBO);
//Draws the first two squares. Defined and initialized in the constructor.
drawTexturedPoint(tempPoint);
checkGLError("onDrawFrame");
}
}

private void preset()
{
Matrix.setIdentityM(mModelMatrix, 0);

// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
}

private void drawPolygon(final FloatBuffer geometryBuffer)
{
final float[] colorArray = {0f,0f,0f};

GLES20.glUseProgram(mLinesProgramHandle);

mMVPMatrixHandle = GLES20.glGetUniformLocation(mLinesProgramHandle, "u_MVPMatrix");
mLinePositionHandle = GLES20.glGetAttribLocation(mLinesProgramHandle, "a_Position");
mLineColorUniformLocation = GLES20.glGetUniformLocation(mLinesProgramHandle, "u_Color");

GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

// Pass in the position information
geometryBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mLinePositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

GLES20.glEnableVertexAttribArray(mLinePositionHandle);

GLES20.glUniform4f(mLineColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

GLES20.glLineWidth(1.0f);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
}

private void drawPoint(final FloatBuffer geometryBuffer)
{
final float[] colorArray = {1f,0f,0f};

GLES20.glUseProgram(mLinesProgramHandle);

mMVPMatrixHandle = GLES20.glGetUniformLocation(mLinesProgramHandle, "u_MVPMatrix");
mLinePositionHandle = GLES20.glGetAttribLocation(mLinesProgramHandle, "a_Position");
mLineColorUniformLocation = GLES20.glGetUniformLocation(mLinesProgramHandle, "u_Color");

GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

// Pass in the position information
geometryBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mLinePositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

GLES20.glEnableVertexAttribArray(mLinePositionHandle);

GLES20.glUniform4f(mLineColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

GLES20.glDrawArrays(GLES20.GL_POINTS, 0, geometryBuffer.capacity()/mPositionDataSize);
}

/**
* Draws a textured square, representing a point.
*/
private void drawTexturedPoint(final FloatBuffer geometryBuffer)
{
GLES20.glUseProgram(mPointsProgramHandle);

mPointSizeHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_pointSize");
mPointMVPMatrixHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_MVPMatrix");
mTextureUniformHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_Texture");
mPointPositionHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_Position");
mTextureCoordinateHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_TexCoordinate");

// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);
// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
GLES20.glUniform1i(mTextureUniformHandle, 0);

GLES20.glUniform2f(mPointSizeHandle, 15.0f/480f, 15.0f/762f);

// Pass in the position information
geometryBuffer.position(0);
GLES20.glVertexAttribPointer(mPointPositionHandle, 3, GLES20.GL_FLOAT, false, (3*4), geometryBuffer);

GLES20.glEnableVertexAttribArray(mPointPositionHandle);

// Pass in the texture coordinate information
mSquareTextureCoordinates.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, 2, GLES20.GL_FLOAT, false, 0, mSquareTextureCoordinates);
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);

GLES20.glUniformMatrix4fv(mPointMVPMatrixHandle, 1, false, mMVPMatrix, 0);

// Draw the squares (two triangles per point).
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, geometryBuffer.capacity()/3);
}

private void checkGLError(String op) {
int error;
String errorType = "";

while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {

switch (error) {
case GLES20.GL_INVALID_ENUM: //1280
errorType = "GL_INVALID_ENUM";
break;
case GLES20.GL_INVALID_VALUE: //1281
errorType = "GL_INVALID_VALUE";
break;
case GLES20.GL_INVALID_OPERATION: //1282
errorType = "GL_INVALID_OPERATION";
break;
case 1283: //1283
errorType = "GL_STACK_OVERFLOW";
break;
case 1284: //1284
errorType = "GL_STACK_UNDERFLOW";
break;
case GLES20.GL_OUT_OF_MEMORY: //1285
errorType = "GL_OUT_OF_MEMORY";
break;
case GLES20.GL_INVALID_FRAMEBUFFER_OPERATION: //1286
errorType = "GL_INVALID_FRAMEBUFFER_OPERATION";
break;
default:
errorType = "Unknown GL Error Type";
}
Log.e("ANDRRA|" + GLES20.GL_NO_ERROR + "|", op + ": OpenGL Error: " + error + " -" + errorType);
}
}
}

GClements
08-31-2013, 04:42 AM
You appear to have 12 sets of vertex coordinates and are telling glDrawArrays to draw that many vertices, but the texture coordinate array only has 6 sets of texture coordinates. All of the attribute arrays must have enough data for all of the vertices (i.e. for 4 triangles, you need 12 sets of texture coordinates).
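
If it helps, the duplicated array can be built once, along these lines (a sketch; pointCount and the buffer name are illustrative):


// Sketch: repeat one quad's six texture-coordinate pairs once per point so
// the texture-coordinate array covers every vertex that glDrawArrays draws.
// Requires java.nio.ByteBuffer, java.nio.ByteOrder, java.nio.FloatBuffer.
final float[] quadTexCoords = {
        0f, 0f,   0f, 1f,   1f, 0f,    // triangle 1
        0f, 1f,   1f, 1f,   1f, 0f     // triangle 2
};
final int pointCount = 2; // however many squares are drawn
final FloatBuffer texBuffer = ByteBuffer
        .allocateDirect(pointCount * quadTexCoords.length * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
for (int i = 0; i < pointCount; i++) {
    texBuffer.put(quadTexCoords); // duplicate the 6 pairs for each quad
}
texBuffer.position(0);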

Hank Finley
08-31-2013, 06:30 AM
I'm not really sure how it works, but is there a way to apply the one set of texture coordinates to each position set? It seems a bit of a waste to duplicate the same texture data over and over; I could possibly have 15000 of these. Does declaring 'uniform' in the shader do this?

Other than the above question, I have gone ahead and created the full texture coordinates for all squares, and they are almost there:
1142

So the actual andrra_point.png image is not rendering, as you can see from the black squares. It is loaded in onSurfaceCreated() as follows.
Can you see if I have missed anything? I have included everything that has anything to do with the texture.


public void onSurfaceCreated(GL10 unused, EGLConfig config) {
//... other code

//Load the point(square texture) shaders
final String pointsVertexShader = getPointVertexShader();
final String pointsFragmentShader = getPointFragmentShader();

final int pointVertexShaderHandle = compileShader("pointsVertexShader", GLES20.GL_VERTEX_SHADER, pointsVertexShader);
final int pointFragmentShaderHandle = compileShader("pointsFragmentShader", GLES20.GL_FRAGMENT_SHADER, pointsFragmentShader);

// Note: u_pointSize is a uniform, not an attribute, so it must not be bound as one.
mPointsProgramHandle = createAndLinkProgram(pointVertexShaderHandle, pointFragmentShaderHandle, new String[] {"a_Position", "a_TexCoordinate"});

// Load the texture
mTextureDataHandle = loadTexture(mActivityContext, com.ANDRRA1.R.drawable.andrra_point);
}

Above, mTextureDataHandle gets the int handle returned by loadTexture():


public int loadTexture(final Context context, final int resourceId)
{
final int[] textureHandle = new int[1];

GLES20.glGenTextures(1, textureHandle, 0);

if (textureHandle[0] != 0)
{
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false; // No pre-scaling

// Read in the resource
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}

if (textureHandle[0] == 0)
{
throw new RuntimeException("Error loading texture.");
}

return textureHandle[0];
}


protected String getPointVertexShader()
{
// Define a simple shader program for our points.
final String pointVertexShader =
"uniform mat4 u_MVPMatrix; \n"
+ "uniform vec2 u_pointSize; \n"
+ "attribute vec4 a_Position; \n"
+ "attribute vec2 a_TexCoordinate; \n" // Per-vertex texture coordinate information we will pass in.

+ "varying vec2 v_TexCoordinate; \n" // This will be passed into the fragment shader.

+ "void main() \n"
+ "{ \n"
+ " v_TexCoordinate = a_TexCoordinate; \n" // Pass through the texture coordinate.
+ " gl_Position = u_MVPMatrix * a_Position; \n" // gl_Position is a special variable used to store the final position.
+ " gl_Position += vec4(gl_Position.w * u_pointSize * (a_TexCoordinate - vec2(0.5,0.5)), 0, 0);\n"
+ "} \n";
return pointVertexShader;
}


protected String getPointFragmentShader()
{
final String pointFragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a precision in the fragment shader.
+ "uniform sampler2D u_Texture; \n" // The input texture.

+ "varying vec2 v_TexCoordinate;\n" // Interpolated texture coordinate per fragment.

+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = (texture2D(u_Texture, v_TexCoordinate));\n" // Pass the color directly through the pipeline.
+ "} \n";
return pointFragmentShader;
}


private void drawTexturedPoint(final FloatBuffer geometryBuffer, final FloatBuffer textureBuffer)
{
GLES20.glUseProgram(mPointsProgramHandle);

mPointSizeHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_pointSize");
mPointMVPMatrixHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_MVPMatrix");
mTextureUniformHandle = GLES20.glGetUniformLocation(mPointsProgramHandle, "u_Texture");
mPointPositionHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_Position");
mTextureCoordinateHandle = GLES20.glGetAttribLocation(mPointsProgramHandle, "a_TexCoordinate");

// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);
// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
GLES20.glUniform1i(mTextureUniformHandle, 0);

GLES20.glUniformMatrix4fv(mPointMVPMatrixHandle, 1, false, mMVPMatrix, 0);

GLES20.glUniform2f(mPointSizeHandle, mPointSize/mWidth, mPointSize/mHeight);

// Pass in the position information
geometryBuffer.position(0);
GLES20.glVertexAttribPointer(mPointPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);
GLES20.glEnableVertexAttribArray(mPointPositionHandle);

// Pass in the texture coordinate information
textureBuffer.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mVec2DataSize, GLES20.GL_FLOAT, false, 0, textureBuffer);
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);

// Draw the squares (two triangles per point).
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, geometryBuffer.capacity()/mPositionDataSize);
}

GClements
09-01-2013, 07:09 AM
I'm not really sure how it works but is there a way to apply the one set of texture coordinates to each position set?
Desktop OpenGL can do this using instanced rendering, but OpenGL ES doesn't have that (at least, not as part of the core API, although the GL_NV_draw_instanced extension provides the basic features of instanced rendering, so you might want to check whether that is available on the target platforms).
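
For example, a runtime check could look roughly like this (a sketch; it must run on the GL thread once a context is current, e.g. in onSurfaceCreated):


// Sketch: query the extension string to see whether NV instancing is available.
final String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
final boolean hasNvInstancing =
        extensions != null && extensions.contains("GL_NV_draw_instanced");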


Seems a bit of waste to duplicate the same texture data over and over. I could possibly have 15000 of these.
Check whether you can use GL_POINTS. That only requires one set of vertex coordinates per point and no texture coordinates. In other words, 3 floats per point rather than 30 floats per point with glDrawArrays() or 20 floats per point with glDrawElements().
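
As an untested sketch, point sprites would look something like this: gl_PointSize in the vertex shader sets the square's size in pixels, and the built-in gl_PointCoord in the fragment shader supplies a 0..1 texture coordinate per fragment, so no texture-coordinate attribute is needed (u_pointSizePx is a hypothetical uniform; note the maximum point size is implementation-dependent, see GL_ALIASED_POINT_SIZE_RANGE):


final String spriteVertexShader =
      "uniform mat4 u_MVPMatrix;                  \n"
    + "uniform float u_pointSizePx;               \n" // desired size in pixels
    + "attribute vec4 a_Position;                 \n"
    + "void main()                                \n"
    + "{                                          \n"
    + "   gl_Position = u_MVPMatrix * a_Position; \n"
    + "   gl_PointSize = u_pointSizePx;           \n"
    + "}                                          \n";

final String spriteFragmentShader =
      "precision mediump float;                   \n"
    + "uniform sampler2D u_Texture;               \n"
    + "void main()                                \n"
    + "{                                          \n"
    + "   gl_FragColor = texture2D(u_Texture, gl_PointCoord);\n"
    + "}                                          \n";

Each point would then be drawn with GLES20.glDrawArrays(GLES20.GL_POINTS, 0, pointCount) against the existing position buffer.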


Does the declaration of 'uniform' in the shader do this?
"uniform" means that the value is constant throughout the draw call, i.e. the same for all vertices of all primitives.


Other than the above question, I have gone ahead and created the full texture coordinates for all squares, and they are almost there:

So the actual andrra_point.png image is not rendering, as you can see from the black squares. It is loaded in onSurfaceCreated() as follows.
Can you see if I have missed anything? I have included everything that has anything to do with the texture.

Assuming that the texture has an alpha channel, you need to enable blending, e.g.


glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);

or:


glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);

The latter should be used if the texture data has pre-multiplied alpha, the former if not.
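
In the Android GLES20 binding, the straight-alpha case is, e.g.:


// Enable blending once a GL context exists (e.g. in onSurfaceCreated),
// or immediately before the textured draw call.
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glEnable(GLES20.GL_BLEND);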

If the texture contains only an alpha channel, then the colour should be specified as a uniform which is copied to gl_FragColor.rgb and the .r component of the texture used for gl_FragColor.a.
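
In shader terms, that is roughly the following (a sketch; u_Color is a hypothetical uniform, and whether the sampled value arrives in .r or .a depends on the texture's internal format):


// Sketch of a fragment shader for an alpha-only texture, per the advice above:
// the uniform supplies the colour, the texture supplies only coverage.
final String alphaOnlyFragmentShader =
      "precision mediump float;                       \n"
    + "uniform sampler2D u_Texture;                   \n" // alpha-only texture
    + "uniform vec4 u_Color;                          \n" // colour as a uniform
    + "varying vec2 v_TexCoordinate;                  \n"
    + "void main()                                    \n"
    + "{                                              \n"
    + "   gl_FragColor.rgb = u_Color.rgb;             \n"
    + "   gl_FragColor.a = texture2D(u_Texture, v_TexCoordinate).r;\n"
    + "}                                              \n";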

Hank Finley
09-01-2013, 04:04 PM
Hi, I added in glBlendFunc() and glEnable(GL_BLEND); visually this did not have an effect. I have tried this in a couple of different spots without luck.

[Edit] Actually it was the image itself. I tried another image that I knew was already working with OpenGL-ES 2.0, from a downloadable tutorial: http://www.learnopengles.com/android-lesson-seven-an-introduction-to-vertex-buffer-objects-vbos/
This image rendered, however it was upside down.

My image was a png that I had created in Fireworks; it had two layers, Layer 1: a bitmap, Layer 2: a vector path. It was also transparent around the edges.

Upon finding that the tutorial image did work, I altered it a little by wiping out a corner of the bitmap, making it transparent. This caused it to render the corner black and show the rest fine.

Now I can see that adding your glBlendFunc() and glEnable(GL_BLEND) provides the transparency.

Questions
1. What are the specifications when creating a texture?
2. Is it my png that is not compatible?
3. Is my texture coordinate array in the fragment shader working properly? I ask because no matter how I manipulate the values, the image always displays upside down. I have tried reversing the t coords, and reversing all coords; it doesn't seem to change.

Hank Finley
09-03-2013, 09:00 AM
After looking at the images that were working, I found that the texture being 256x256 must have been a factor; my old texture was 54x54. After increasing the size it worked.
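
Presumably that is because OpenGL ES 2.0 only guarantees non-power-of-two textures when the wrap mode is CLAMP_TO_EDGE and no mipmapped filtering is used; otherwise the texture is incomplete and samples as black. If so, something like this in loadTexture() might have let the 54x54 image work (untested sketch):


// Untested sketch: clamped wrap modes make an NPOT texture "complete" in
// GLES 2.0 (the min filter must also avoid mipmaps, which GL_NEAREST does).
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);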

So the last thing I'm wondering about is why it is upside-down. Reversing the t coord made no change.

final float[] squareNDRRATextureCoordinateData =
{
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};

GClements
09-03-2013, 10:24 PM
why it is upside-down.
The usual reason is a combination of:


- Textures are typically (but not always) stored with the top-most row first (and texture-loading functions typically return the top-most row first regardless of storage order).
- OpenGL interprets the data passed to glTexImage2D as starting at texture coordinate (0,0).
- The texture coordinates have been assigned on the basis that (0,0) is the lower-left (consistent with OpenGL screen coordinates).




Reversed t coord no change.

Because the shader is using the texture coordinates to offset the vertex coordinates from the point's centre, if you flip the texture coordinates passed to the shader, you'll also flip the vertex coordinates, so the two cancel each other out.

You'll need to either reverse the order in which the rows are passed to glTexImage2D, or make the vertex shader flip the Y coordinate (for either the vertex coordinates or the texture coordinates, but not both).
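
On Android, reversing the rows can be done by flipping the Bitmap before upload, e.g. (a sketch, replacing the texImage2D call in loadTexture()):


// Sketch: flip the bitmap vertically so the first row passed to
// glTexImage2D is the image's bottom row.
android.graphics.Matrix flip = new android.graphics.Matrix();
flip.preScale(1.0f, -1.0f);
Bitmap flipped = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), flip, false);
bitmap.recycle();
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, flipped, 0);
flipped.recycle();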

Hank Finley
09-04-2013, 02:47 AM
GClements, thank you for all your help! My textured points look fantastic; this issue is now resolved.
Curious, have you much experience with GIS?

Used this to flip the texture coords:


protected String getPointVertexShader()
{
// Define a simple shader program for our points.
final String vertexShader =
"uniform mat4 u_MVPMatrix; \n"
+ "uniform vec2 u_pointSize; \n"
+ "attribute vec4 a_Position; \n"
+ "attribute vec2 a_TexCoordinate; \n" // Per-vertex texture coordinate information we will pass in.

+ "varying vec2 v_TexCoordinate; \n" // This will be passed into the fragment shader.

+ "void main() \n"
+ "{ \n"
//+ " v_TexCoordinate = a_TexCoordinate; \n" // Pass through the texture coordinate.
+ " v_TexCoordinate = a_TexCoordinate.st * vec2(1.0, -1.0);\n"// Pass through the texture coordinate.
+ " gl_Position = u_MVPMatrix * a_Position; \n" // gl_Position is a special variable used to store the final position.
+ " gl_Position += vec4(gl_Position.w * u_pointSize * (a_TexCoordinate - vec2(0.5,0.5)), 0, 0);\n"
+ "} \n";
return vertexShader;
}

GClements
09-04-2013, 08:44 AM
Curious have you much experience with GIS?
I'm moderately active in developing GRASS (http://grass.osgeo.org/).

Hank Finley
09-04-2013, 10:04 AM
Ok, hopefully you might know what's going on here. I have this map that I'm trying to design on android using opengl-es 2.0. For the most part everything is looking pretty good; I have plotted out lines and points as per their coordinates.

However, when I do a comparison in MapInfo there are noticeable variations in the way lines are drawn. I have attached two images; the first is the MapInfo image, the second is from my android (apologies for the pixelation).
Looking at the vertical lines in the MapInfo image, they seem more in line (straighter) than on the android. I realize that GIS systems warp the maps somehow; just wondering if you know what method they use and if I can do the same thing with OpenGL?
1145

1146

GClements
09-04-2013, 01:16 PM
However, when I do a comparison in MapInfo there are noticeable variations in the way lines are drawn. I have attached two images; the first is the MapInfo image, the second is from my android (apologies for the pixelation).
Looking at the vertical lines in the MapInfo image, they seem more in line (straighter) than on the android. I realize that GIS systems warp the maps somehow; just wondering if you know what method they use and if I can do the same thing with OpenGL?
From looking at the image, I'd suspect a precision issue. Ensure that any fixed offset is removed before converting from int or double to float (i.e. for the coordinates passed to OpenGL, 0,0 should be the centre of your data set, not the Gulf of Guinea). Try adding "highp" qualifiers to any shader variables.

Cartographic projections (e.g. lat-lon to UTM) won't have noticeable curvature at the scale of a street or even a town, and they won't introduce "wobble". At worst, they may introduce an affine transformation (translation, scale, rotation, shear), but applying an affine transformation to points which lie in a straight line results in transformed points which also lie in a straight line.

Hank Finley
09-04-2013, 10:30 PM
The data is in projection EPSG:4326 (lat-long).
The data is stored as doubles; as soon as I extract it, it is converted to floating point.
After this I assess the min and max bounds of all the data, from which I calculate my offset.
I then create the various VBOs, using the floating-point coordinates minus the offset.

The only thing I tried was setting the precision in the shader (no visible change):

protected String getLineVertexShader()
{
final String vertexShader =
"precision highp float; \n" // Set the default precision to high.
+ "uniform mat4 u_MVPMatrix; \n" // A constant representing the combined model/view/projection matrix.

+ "attribute vec4 a_Position; \n" // Per-vertex position information we will pass in.

+ "void main() \n" // The entry point for our vertex shader.
+ "{ \n"
+ " gl_Position = u_MVPMatrix * a_Position;\n"// Multiply the vertex by the matrix to get the final point in
+ " gl_PointSize = 15.0; \n"
+ "} \n"; // normalized screen coordinates.
return vertexShader;
}


protected String getLineFragmentShader()
{
final String fragmentShader =
"precision highp float; \n" // Set the default precision to high.
+ "uniform vec4 u_Color; \n" // This is the color from the vertex shader interpolated across the triangle per fragment.
+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = u_Color; \n" // Pass the color directly through the pipeline.
+ "} \n";
return fragmentShader;
}

GClements
09-05-2013, 08:10 AM
The data is stored as doubles, as soon as I extract it, it is set to floating point.

Can you clarify what you mean here?

If the data is converted to single precision (i.e. "float") before removing the offset, that's going to hurt the precision. Specifically, coordinates in the region of (152,-27) stored in single precision will have a precision of around 1.5 metres in the X direction and 0.2 metres in the Y.
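
For example (a sketch with hypothetical names), keeping the subtraction in double precision before narrowing:


// Sketch: remove the dataset-centre offset in double precision, then narrow.
double cx = (mbrMinX + mbrMaxX) / 2.0;
double cy = (mbrMinY + mbrMaxY) / 2.0;
float x = (float) (lonDeg - cx); // small magnitude, so full float resolution
float y = (float) (latDeg - cy);

// The metre figures above follow from float spacing (ULP) at those magnitudes:
// Math.ulp(152.0f) is ~1.5e-5 deg, and one degree of longitude at latitude 27
// is ~99 km, giving ~1.5 m; Math.ulp(27.0f) is ~1.9e-6 deg * ~111 km/deg, ~0.21 m.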

Hank Finley
09-09-2013, 06:57 AM
Hi, I finally got around to modifying the way these coordinates were set. It has worked perfectly; thanks for all the help and advice!
Kind regards, Hank.