View Full Version : texturing
11-21-2008, 05:51 PM
I have a few questions about textures and how they're stored in OpenGL and/or in hardware.
Note: I'm not asking how to do it through OpenGL calls (glTexImage2D), I'm just asking about the internal workings of it.
I know very little about hardware and the first two are pretty basic hardware questions so bear with me.
I heard somewhere that textures should be stored in VRAM, so that is why the following questions are focused on that.
1) I know that VRAM (video RAM) is a variant of DRAM but is a VRAM a separate physical memory like DRAM, or is it part of the graphics card?
2) What other purposes does VRAM serve, other than to store the current framebuffer? Do all applications store the things to be rendered in VRAM by default or is it an optional thing?
3) When I pass in my texture data through glTexImage2D, is it being stored in VRAM? If so, (and I'm pretty sure I can do this) does that mean I can dealloc my local texture data after the call to glTexImage2D? If it's not stored in VRAM, how can I have it so that it is?
<s>4) So if OpenGL does store the textures in VRAM along with the current framebuffer, what else does OpenGL use VRAM for?</s>
Ignore. This is pretty much question #2.
5) Let's say I close my application. Do I need to free up the textures in VRAM myself, or does OpenGL handle that for me?
6) How can I free up a texture in VRAM (I don't need that texture anymore)? Through glDeleteTextures?
7) Suppose I have a texture in VRAM, how do I read/write each of its texture data?
11-21-2008, 06:41 PM
1) It depends. Traditional discrete video cards do have their own dedicated high-performance VRAM for better performance. However, most integrated and/or mobile graphics hardware has only a very small dedicated part and borrows a portion of main system RAM for the rest (textures, geometry...), often under nice marketing names such as ATI HyperMemory and NVIDIA TurboCache. This is slower, because the data bus between the GPU and its VRAM is way faster than the path from the GPU to system RAM over the PCI bus.
2) Everything the GPU has to read and write should be in VRAM for high speed: framebuffer(s) (multiple applications in parallel, offscreen buffers...), textures, vertex buffers (all vertex attributes such as position, color, normal, and custom ones). You cannot easily control "where" the data you provide to OpenGL is stored; the driver acts as an abstraction layer that manages VRAM/RAM usage. See the VBO and PBO specs for more details on static and dynamic data.
3) Both in VRAM and in system RAM, actually, so that when a texture is evicted from VRAM to make space for others, the driver can transparently reload it later. Yes, you can safely dealloc your local copy of the texture data after the call to glTexImage2D returns.
4) see 2)
5) It should be deallocated by the driver, but it is better practice to explicitly delete your textures when the application closes.
6) Yes, glDeleteTextures.
7) write: from the framebuffer: glCopyTexSubImage2D; from the CPU: glTex[Sub]Image2D
read: glGetTexImage can read a whole texture level back, or you can draw a textured quad and glReadPixels it. Bringing data back from the GPU has a performance penalty; better to avoid it if possible.
For higher performance of both reads and writes (async operation) use PBO.
nice tutorial on PBO :
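To make the answers above concrete, here is a minimal sketch of the texture lifecycle they describe: upload, free the local copy, read back, delete. It assumes a current desktop OpenGL context (glGetTexImage is not available on ES); `width` and `height` are placeholder parameters, and filling the pixel data is elided.

```c
#include <stdlib.h>
#include <GL/gl.h>

void texture_lifecycle_sketch(int width, int height)
{
    /* Local (CPU-side) copy of the texel data, RGBA, 1 byte per channel. */
    unsigned char *pixels = malloc((size_t)width * height * 4);
    /* ... fill pixels ... */

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* The driver copies the data during this call, so our own buffer
       is no longer needed afterwards (answer 3). */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    free(pixels);   /* safe: the driver keeps its own copy */

    /* Reading back (answer 7) -- synchronous, stalls the pipeline. */
    unsigned char *readback = malloc((size_t)width * height * 4);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, readback);
    /* ... use readback ... */
    free(readback);

    /* Explicitly release the texture when done with it (answers 5/6). */
    glDeleteTextures(1, &tex);
}
```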
11-24-2008, 05:32 PM
Thanks for the excellent reply!
I have some questions regarding VBOs:
1) Is the availability of VBOs dependent on the current version of OpenGL on the machine or does it depend on the graphics card? Does OpenGL ES 1.0 support VBOs?
2) In the following link: http://www.ozone3d.net/tutorials/opengl_vbo.php, the second line of the introduction paragraph says: "It had been introduced by NV_vertex_array_range and ATI_vertex_array_object extensions promoted to ARB_vertex_buffer_object and <u>finally integrated into the OpenGL 1.5 specification</u>." Does that mean vertex buffer objects are part of the core OpenGL library, or are they still only available through extensions?
3) I've never used OpenGL extensions before, but from what I've seen it seems like they're just header files that I include in my projects?
4) For setting up function pointers to the respective OpenGL calls: glGenBuffersARB = (PFNGLGENBUFFERSARBPROC)wglGetProcAddress("glGenBuffersARB");, is that statement only for a machine running Windows? (The reason I ask is that I notice the wgl prefix in that statement; isn't that "wiggle", i.e. OpenGL on Windows?) If that syntax is Windows-specific, how do I do it on other platforms?
11-25-2008, 01:58 AM
1) VBOs were promoted to the OpenGL core in version 1.5. All cards that support OpenGL 1.5 or higher support VBOs; those that do not may still support VBOs through the GL_ARB_vertex_buffer_object extension.
2) see 1)
3) Yes, you need to include glext.h...
4) ...and, as you say, look up the function entry points. Or you can use libraries like GLEW or GLee that do it for you.
wglGetProcAddress is specific to Windows. On Linux you can use the GLX function glXGetProcAddress the same way; you also need to include glx.h.
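A minimal sketch of that check-then-use pattern, assuming a current GL context. The version parsing is deliberately simple, and the extension lookup uses a plain substring search, which can in principle false-positive on extension names that are prefixes of others:

```c
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Returns 1 if VBOs are usable: either core (GL >= 1.5) or via the
   GL_ARB_vertex_buffer_object extension, as described in answer 1. */
int vbo_supported(void)
{
    const char *version = (const char *)glGetString(GL_VERSION); /* e.g. "1.5.0 ..." */
    const char *exts    = (const char *)glGetString(GL_EXTENSIONS);
    int major = 0, minor = 0;

    if (version && sscanf(version, "%d.%d", &major, &minor) == 2
        && (major > 1 || (major == 1 && minor >= 5)))
        return 1;   /* core VBO support */

    /* Fall back to the extension on pre-1.5 drivers. */
    return exts && strstr(exts, "GL_ARB_vertex_buffer_object") != NULL;
}
```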
11-25-2008, 05:33 AM
#if defined _WIN32
#define GPA(ext) wglGetProcAddress(ext)
#else // e.g. Linux
#define GPA(ext) glXGetProcAddress((const GLubyte *)(ext))
#endif

function = GPA("Extension_Function");
11-25-2008, 04:18 PM
1) Just to clarify, if the graphics card does support OpenGL 1.5 or higher, I <u>don't</u> need to add glext.h. If it has a "lower" version, I <u>need</u> to add the header file?
2) Does OpenGL ES 1.0 support VBOs? (If it doesn't, do I need to add glext.h?)
11-26-2008, 02:36 AM
1) Always include glext.h, then check at runtime what is actually available. Sometimes a GL 1.4 card exposes some of the more modern extensions.
2) I don't know
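That "include everything, check at runtime" approach can be sketched like this. The GPA macro is assumed to be a wglGetProcAddress/glXGetProcAddress wrapper like the one shown in the earlier post, and the extension name is just an example:

```c
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* always include; it only provides declarations */

/* Check the extension string before using any of its entry points. */
int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts && strstr(exts, name) != NULL;  /* simple substring check;
                                                   exact token matching is safer */
}

/* Usage, with the PFNGLGENBUFFERSARBPROC typedef coming from glext.h:

   if (has_extension("GL_ARB_vertex_buffer_object"))
       glGenBuffersARB = (PFNGLGENBUFFERSARBPROC)GPA("glGenBuffersARB");
*/
```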