2D texture image on 3D object...how?

I am very new to OpenGL and 3D graphics programming, but I have a task to use OpenGL to display a 3D textured shape defined by a Wavefront OBJ file. I already have code that takes an internal model representation (populated from another input format) and uses this model to instruct OpenGL to render the object. Now I want to populate the internal model with data from an OBJ file.

I am currently stumped by the problem of applying textures to the object and realise I don’t understand some aspects of what I am doing.

At least one tool on the net draws my OBJ data correctly so I am confident that the input data is valid.

My OpenGL specific questions are:

  1. The OBJ file defines vt (vertex texture co-ordinate) entries with 3 floating point values. All of the examples I can find on the net mention vt entries with only 2 values. My assumption is that these values need to be passed to glTexCoordPointer(), and that function can accept coordinates made up of 1, 2, 3 or 4 values. So…what would OpenGL do with, or expect from, 3 floating point values between 0 and 1 for each TexCoord?
    (I am guessing that because I am setting up a 2D texture I only need two of these data values…which two will depend on what OpenGL does with them.)

  2. I see plenty of examples showing how to use a bitmap as a texture on a face, and I assume in these examples that the entire bitmap is shrunk or stretched to fit over each flat surface that is rendered. However my input bitmap is a single 512x512 TGA file that contains all of the different textures for each side of the object, tiled across the image. I am not sure what mechanism is used to ensure that the correct part of the image is applied to the correct side of the 3D object. This is similar to rendering a six-sided die using a single texture that contains images of all six different faces.
    I see no reference anywhere to defining which parts of a texture to use where, so I must assume there are some rules about wrapping a complete 3D object in a single 2D image? Can anyone point me to where these rules are written?

Although some of my problems relate to OBJ file parsing, any help with understanding what OpenGL will do with different types of texture data would probably help greatly.

Thanks in advance

For mapping 2D textures, you provide a 2D point like glTexCoord2f(…, …) before every vertex. This function takes only two values, which is appropriate for a 2D texture. I believe when a program exports 2D texture coordinates it will write the third value as zero (as opposed to omitting it).

So your code would be like:
glBegin(…)
glTexCoord2f(…, …);
glVertex3f(…, …, …);

glEnd();

I would not use the *Pointer functions until you get everything working normally.
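
For example, a minimal immediate-mode sketch of one textured triangle might look like the following (the texture is assumed to be created and bound already, and the coordinate and vertex values are made up for illustration):

// assumes a 2D texture object has already been created and bound
glEnable(GL_TEXTURE_2D);
glBegin(GL_TRIANGLES);
glTexCoord2f(0.0f, 0.0f); // texture coordinate for the first vertex
glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); // texture coordinate for the second vertex
glVertex3f(1.0f, -1.0f, 0.0f);
glTexCoord2f(0.5f, 1.0f); // texture coordinate for the third vertex
glVertex3f(0.0f, 1.0f, 0.0f);
glEnd();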

As for the second problem, OpenGL does not include any built-in support for reading image file formats like TGA, PNG, or JPG. You have to either write your own importer (not recommended) or use a library. The library below has a tutorial for reading a TGA file and storing it in an OpenGL texture. I hope that helps.

TGA library:
http://www.lighthouse3d.com/opengl/terrain/index.php3?tgalib
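
Whichever loader you use, once you have the raw pixel data in memory the upload into an OpenGL texture is roughly the following (a sketch assuming 24-bit RGB data; width, height and pixels stand for whatever your TGA loader returns):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// width, height and pixels come from your TGA loader; GL_RGB assumes 24-bit data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);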

  1. About your 3 texcoords: either it is 3D texturing (unlikely), or it is 2D texturing plus q division. See http://www.glprogramming.com/red/chapter09.html#name6
    In that case you would use something like glTexCoord4f(vt1, vt2, 0, vt3);

  2. What you describe is sometimes called a “texture atlas”. This is done simply with texture coordinates (and duplicated vertices at discontinuities). So if your OBJ models are done right, you have nothing special to do; see the sketch below.
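
To make the atlas idea concrete, here is a made-up sketch: suppose the 512x512 image is divided into a 2x2 grid of sub-images and one quad should show only the top-left quarter. The texture coordinates simply stay inside that quarter (assuming (0,0) is the lower-left corner of the image, as OpenGL expects):

// quad textured with the top-left quarter of a hypothetical 2x2 atlas
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.5f); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(0.5f, 0.5f); glVertex3f( 1.0f, -1.0f, 0.0f);
glTexCoord2f(0.5f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();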

Thank you both for the suggestions, especially the pointer to the glprogramming.com site.

I tried the suggestion about the q parameter. It made no difference. However, the third texture coordinate value in my data is not always 0 or 1; in fact I see no examples of 1, but I do see a few fractional values, so I am not convinced I can ignore them.

I have not heard the term texture atlas before, but I will assume that if the vertices are correct then the texture will just work…however the face vertices are all correct (as determined by the shape of my object being correct), so there could be an issue with the texture vertices.

Also, IrfanView opens the TGA file and shows nothing of interest in its properties, so I have to rule out the possibility of a 3D texture.

I have a TGA parser implemented, and this works fine with TGA files shipped with 3DMax data. Interestingly the 3DMax data has uncompressed TGA textures and the OBJ data has RLE encoded TGA textures…so perhaps that is an issue. It looks like my library decompresses the TGA, but perhaps that is broken.

My two options seem to be…

  1. determine what the third texture vertex value is being used for if it is not a 3D texture…this is probably more an OBJ question.
  2. verify that my OpenGL library is valid for this input data. I am using OpenGL ES, which among other things does not support 3D textures. Perhaps there are other features of OpenGL that are not supported in the way needed for rendering my OBJ data.

Thanks again for the suggestions
Nic

  1. Search for “vt u v w” here:
    http://www.martinreddy.net/gfx/3d/OBJ.spec
    Apparently the spec states that it is a 3D texture coordinate…

(OT)
If it had been the q parameter, it would be there to fine-tune the interpolation. Providing 4D texcoords (s, t, 0, q) actually means 2D (s/q, t/q).
See figure 2 in this paper, same s and t coords, different q: http://developer.nvidia.com/object/Projective_Texture_Mapping.html
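
As a made-up numeric example:

// these two calls address the same texel; q only affects how the
// coordinates are interpolated across the primitive
glTexCoord4f(0.3f, 0.6f, 0.0f, 2.0f); // interpreted as (0.3/2.0, 0.6/2.0) = (0.15, 0.3)
glTexCoord2f(0.15f, 0.3f);            // equivalent 2D coordinate at this vertex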

  1. You don’t mention any specific problem you have; please be more precise. It is almost certain that your problem does not come from “your OpenGL library being valid or not”. More likely there is something you are not doing correctly. Try with simple meshes, i.e. your dice example, with a single texture for it. Check wotsit.org for the OBJ file format, etc.

The specific problem I am still having is that the texture is not being applied correctly to the 3D shape. By correctly I mean that I recognise parts of the texture but they are mapped to the wrong pieces of the object. For example, the window of a building is mapped to the roof.

By using another tool (3DViewer) I have viewed the input data and exported it as .osg. I am happy now that the 3D texture data is not significant, because the .osg data does not contain any 3D texture co-ordinates and yet the image looks correct.

I have now removed many of the faces of the object and I am debugging with two rectangular areas that should contain (as it happens) images of windows. Interestingly one of the shapes contains no texture data at all, and another only contains part of the texture. I can only assume that I am passing the texture data or coordinates incorrectly.

Unfortunately I am using OpenGL ES, which only supports rendering from arrays…so I cannot draw each element individually.

I need to use glTexCoordPointer and glDrawElements. The code to do this already exists and works with data from another input format.

In my test I am drawing 12 triangles. As such I pass 36 vertex index values to glDrawElements and an array of 36 texture co-ordinates to glTexCoordPointer (each co-ordinate being 2 values, so an array of 72 floating point values in total).

I assume that for each vertex index the vertex array is accessed to get a 3D co-ordinate and two floating point texture co-ordinates are read from the texture coord array. The data I see in these arrays seems to match what I have coded in the OBJ file.
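
For reference, the array setup I am describing is roughly the following (a simplified sketch with made-up data for a single triangle rather than my 12):

GLfloat vertices[] = { 0.0f, 0.0f, 0.0f,  1.0f, 0.0f, 0.0f,  0.0f, 1.0f, 0.0f };
GLfloat texcoords[] = { 0.0f, 0.0f,  1.0f, 0.0f,  0.0f, 1.0f };
GLushort indices[] = { 0, 1, 2 };

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, indices);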

Would someone be able to clarify what a texture co-ordinate is?

I assume that each pair of texture co-ordinate values is a fractional offset into the texture bit map. Is this correct?

Therefore if I have a 512x512 bitmap (using 1 byte per pixel) then a tex coord of (0.25, 0.5) would reference the 128th pixel in the 256th row of the image (i.e. 1/4 of the width and 1/2 of the height)?
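
In other words, the mapping I am assuming is roughly:

// my assumption: texture coordinates are fractions of the image size
int texel_x = (int)(s * width);  /* e.g. 0.25 * 512 = 128 */
int texel_y = (int)(t * height); /* e.g. 0.5  * 512 = 256 */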

Thanks

The missing piece to the puzzle is that the array of vertex indices passed to glDrawElements is the same array of indices used to access the texture coord array. I had assumed that the tex coord array was traversed sequentially as each vertex was drawn.

I assumed this because the OBJ input file has a unique and different index for the texture co-ordinates that does not correspond to the vertex index value. There are also more texture co-ordinates than vertex co-ordinates.
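
To illustrate what I mean: since glDrawElements uses a single index per vertex for all enabled arrays, an OBJ face such as f 1/3 2/7 4/1 (vertex index / texture index, values made up) has to be expanded so that each unique vertex/texcoord pair gets its own slot in both arrays. A rough sketch (objFace, objVertices, objTexcoords and appendVertex are hypothetical names for whatever the loader uses):

for (int corner = 0; corner < 3; corner++) {
    int vi = objFace.vertexIndex[corner];   // index into the OBJ v list
    int ti = objFace.texcoordIndex[corner]; // index into the OBJ vt list
    // copy position vi and texcoord ti into the GL arrays side by side,
    // then use the slot number of this new entry as the element index
    appendVertex(objVertices[vi], objTexcoords[ti]);
}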

My image now looks much better…however some of the faces have either the wrong texture or a distorted texture.

Time to look at the input data more closely.