View Full Version : BPTC for non-image data

09-20-2011, 08:14 AM
Has anyone tried to use BPTC compression for non-image data (normal-maps, vertex spatial coordinates, etc.)? I need a method for fast data decompression on the GPU. Any suggestion is welcome.

09-21-2011, 01:35 AM
For normal vector decompression check out the following paper:

It is really simple and fast, and saves 50% on floating-point normal vectors with only a small error. No lookup table is required. It can also be used for normal maps, and the result may additionally be compressed as a luminance-alpha texture.
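The linked paper isn't quoted here, but a well-known two-component normal encoding with similar properties is the octahedral mapping. The sketch below (plain Python/NumPy; not necessarily the paper's exact scheme) shows the 3-to-2 component round trip:

```python
import numpy as np

def _sign_nz(v):
    # A sign() that never returns 0, so the hemisphere fold stays invertible.
    return 1.0 if v >= 0.0 else -1.0

def octa_encode(n):
    """Map a unit normal to two components in [-1, 1] (octahedral projection)."""
    n = np.asarray(n, dtype=np.float64)
    n = n / np.abs(n).sum()                 # project onto the octahedron |x|+|y|+|z| = 1
    if n[2] < 0.0:                          # fold the lower hemisphere outward
        x, y = n[0], n[1]
        n[0] = (1.0 - abs(y)) * _sign_nz(x)
        n[1] = (1.0 - abs(x)) * _sign_nz(y)
    return n[:2].copy()

def octa_decode(e):
    """Invert octa_encode and renormalize."""
    n = np.array([e[0], e[1], 1.0 - abs(e[0]) - abs(e[1])])
    if n[2] < 0.0:                          # undo the fold
        x, y = n[0], n[1]
        n[0] = (1.0 - abs(y)) * _sign_nz(x)
        n[1] = (1.0 - abs(x)) * _sign_nz(y)
    return n / np.linalg.norm(n)
```

The two encoded components are exactly what could be stored in a luminance-alpha texture, as suggested above.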

09-21-2011, 11:31 AM
Thank you Quirin for your paper!

I'll take a look at your algorithm. I asked about normal-map compression only because normal maps are sensitive to the value changes that compression introduces. What I really need is a general method for compressing meshes (unnormalized floating-point values), especially gridded data.

Thanks again!

09-22-2011, 12:51 AM
Do you mean denormalized floating-point values? That is kind of unusual, I guess; I am not sure whether all GPUs support it. People typically use other formats for transmitting data to the shader cores.
If you can live with that, look at:


The runtime part should be straightforward to implement these days.

09-22-2011, 02:21 AM
Thanks again!

Hardware-Compatible Vertex Compression Using Quantization and Simplification is a rather old approach, and it compresses vertex attributes. I'll try that approach for some applications as well.
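For reference, the quantization half of that approach amounts to uniform quantization of positions against the mesh bounding box. This is a minimal sketch of that idea only (the paper combines it with mesh simplification); the 16-bit resolution is an arbitrary choice for illustration:

```python
import numpy as np

def quantize_positions(verts, bits=16):
    """Quantize (N, 3) float positions to unsigned ints relative to the AABB."""
    verts = np.asarray(verts, dtype=np.float64)
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    # Guard against degenerate (flat) axes where hi == lo.
    scale = np.where(hi > lo, (2**bits - 1) / np.where(hi > lo, hi - lo, 1.0), 0.0)
    q = np.round((verts - lo) * scale).astype(np.uint16)
    return q, lo, hi

def dequantize_positions(q, lo, hi, bits=16):
    """Map quantized integers back to floats inside the original AABB."""
    step = (hi - lo) / (2**bits - 1)
    return lo + q.astype(np.float64) * step
```

The maximum per-axis error is half a quantization step, i.e. (hi - lo) / (2 * (2^bits - 1)).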

The "modern" approach (or at least the one I'm trying to push) stores everything in textures and uses attributeless rendering. That's why I asked about texture compression algorithms. BPTC is nice for sharp edges (which are not rare in these models). I agree that it is not supported on SM 4.0 hardware, but since I'm still experimenting, that is not such a big issue.
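For context, attributeless rendering here means the vertex shader fetches its own data by index, e.g. `texelFetch(positions, ivec2(gl_VertexID % W, gl_VertexID / W), 0)` in GLSL. A small sketch of the CPU-side packing and the equivalent lookup (the texture width of 1024 is an arbitrary choice):

```python
import numpy as np

def pack_positions(positions, width=1024):
    """Pack an (N, 3) float position array into an RGB float texture image."""
    positions = np.asarray(positions, dtype=np.float32)
    n = len(positions)
    height = (n + width - 1) // width            # round the row count up
    tex = np.zeros((height, width, 3), dtype=np.float32)
    tex.reshape(-1, 3)[:n] = positions           # fill texels in row-major order
    return tex

def fetch_position(tex, vertex_id):
    """What the shader does with gl_VertexID: row/column from a flat index."""
    width = tex.shape[1]
    return tex[vertex_id // width, vertex_id % width]
```
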

I haven't tried to check image quality yet, but on Fermi the "default" BPTC compression takes only about 50% more time than DXT1. I have read that BPTC compression is too slow, but the algorithm implemented in the drivers is quite fast.

Proposal for the site moderators: maybe it is not such a bad idea to have a forum where scientific papers concerning OpenGL can be posted. It would be more than useful to have a repository of papers and presentations.

09-22-2011, 02:45 AM
In fact, I have worked on a compression method for vertex positions that saves about 2/3 of the data with only a small rendering error. It is really simple, it adapts dynamically, and it is orthogonal to many LOD methods.

It will be published in a couple of weeks and I might put a preprint on my page (http://www9.informatik.uni-erlangen.de/publications/publication/Pub.2011.tech.IMMD.IMMD9.adapti/) if I am allowed to.

09-22-2011, 02:53 AM
I'm looking forward to reading your new paper!

Alfonse Reinheart
09-22-2011, 11:26 AM
Maybe it is not such a bad idea to have a forum where different scientific papers concerning OpenGL can be posted.

Wouldn't that make more sense for a wiki? Like, I don't know, this one? (http://www.opengl.org/wiki/Main_Page) ;)

09-22-2011, 02:15 PM
Well, some kind of repository is a much better way of organizing scientific white papers. A wiki has a completely different purpose.

09-23-2011, 01:27 AM
Sorry, but a wiki page listing papers that OpenGL users consider interesting is a good idea.
A forum is, well, for discussions.

09-23-2011, 02:33 AM
OK! I agree. The form is not so important if the content is useful. :)