Lefteris

03-29-2011, 12:11 AM

Hey all,

I am having a problem in my application, where I am trying to draw some 2D objects on the screen. Each object's pixel length is determined by a formula, exactly like a TrueType font's pixel length is determined.

Quoting chapter 1 of the TrueType font specification:

Values in the em square are converted to values in the pixel coordinate system by multiplying them by a scale. This scale is:

scale = (pointSize * resolution) / (72 points per inch * units_per_em)

where pointSize is the size at which the glyph is to be displayed, and resolution is the resolution of the output device. The 72 in the denominator reflects the number of points per inch.

For example, assume that a glyph feature is 550 FUnits in length on a 72 dpi screen at 18 point. There are 2048 units per em. The following calculation reveals that the feature is 4.83 pixels long.

(550 * 18 * 72) / (72 * 2048) = 4.83

As you can see, this formula gives floating-point values, and that is exactly what happens in my case too.

Before using this formula, I was drawing the objects by using glViewport to set the part of the screen where each object would be drawn, and then setting gluOrtho2D to the bounding box of the element, so that I could draw using local coordinates.

That all worked fine, but now the formula gives me floating-point values for the part of the screen where each object will be drawn, and I see that glViewport accepts only integers.

I saw that OpenGL 4.1 has a glViewportIndexedf function which accepts floating-point values, but this does not exist in OpenGL ES 2.0, and my application needs to work with both.

So what approach would you recommend? Thanks in advance!

Edit: I have thought of rounding to the nearest integer value and trying it like that, of course, but I am not sure whether that would spoil the look of the elements and their proportions. Maybe I should try that first and see.
