Binary shader support in OpenGL

Hi,
I’m using a heavy shader which takes a long time to compile, so I’m trying to use binary shaders.

I’d like to compile a shader once and just load it into the shader pipeline the next time I use it. This is common practice in Direct3D, where the CreatePixelShader and CreateVertexShader functions load compiled binary shaders.

The only thing I found by googling is glShaderBinary, which is part of OpenGL ES, and I cannot use it because glGetIntegerv with the parameter GL_NUM_SHADER_BINARY_FORMATS returns 0 (obviously because I don’t have the OpenGL ES extension).
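For reference, this is roughly the check that fails for me (a minimal sketch; context creation and error handling omitted):

```c
#include <stdio.h>
#include <GL/glew.h>   /* any GL loader works; GLEW is just what I use */

/* Assumes a current GL context has already been created. */
void check_shader_binary_support(void)
{
    GLint numFormats = 0;
    glGetIntegerv(GL_NUM_SHADER_BINARY_FORMATS, &numFormats);
    printf("GL_NUM_SHADER_BINARY_FORMATS = %d\n", numFormats); /* 0 on my driver */
}
```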

Is there a way to make it work without OpenGL ES?
What’s the minimal version of OpenGL standard that supports this?

Thanks

What you want is glProgramBinary, and its associated gets.

The minimum GL version that has this as a core feature is 4.1. However, pretty much any hardware that is still supported by the IHV (outside of Intel hardware, of course) exposes it via ARB_get_program_binary on any platform. So all Radeon HD-class hardware, and GeForce 6xxx and above.
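The save/reload cycle itself is short. A minimal sketch, assuming a successfully linked program object and a 4.1 context (the ARB-suffixed entry points from the extension work the same way); the cache-file helper is hypothetical:

```c
#include <stdlib.h>
#include <GL/glew.h>

/* Hypothetical helper that persists the format token alongside the blob. */
extern void write_cache_file(const char *path, GLenum format,
                             const void *blob, GLint length);

/* Saving: ask the driver for its compiled blob after the first link.
   Setting glProgramParameteri(program, GL_PROGRAM_BINARY_RETRIEVABLE_HINT,
   GL_TRUE) before glLinkProgram is recommended. */
void save_program_binary(GLuint program)
{
    GLint length = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);

    void  *blob   = malloc(length);
    GLenum format = 0;
    glGetProgramBinary(program, length, NULL, &format, blob);

    /* You need both the format token and the blob to reload later. */
    write_cache_file("shader.bin", format, blob, length);
    free(blob);
}

/* Loading on a later run: no compilation, just hand the blob back. */
void load_program_binary(GLuint program, GLenum format,
                         const void *blob, GLsizei length)
{
    glProgramBinary(program, format, blob, length);
}
```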

Except for Mac OS X :frowning:

Yes, but there’s a lot of OpenGL functionality Mac OS X doesn’t have.

Thanks,
I have a few more questions.

The link provided by Alfonse says that there are vendor-specific formats which can be provided by third-party extensions.

Does that mean there are formats which produce device-independent binary program code?

That would mean I could compile shaders into a binary format which could be loaded on a completely different hardware configuration. That is important because Direct3D generates code which is completely hardware-independent. It can even be produced on a machine without a 3D-accelerated graphics card, using the console command fxc (part of the DirectX SDK), and the result runs on any computer whose hardware supports the chosen shader language version.

The target machines would be PCs running Windows or Linux.

Does that mean there are formats which produce device-independent binary program code?

The API is open to permit such formats to exist. But none do currently. Nor are they likely to in the future.

That makes me sad :frowning:

Is there any enumeration in GLEW and/or Mesa of the program binary formats they support, so I could get something besides the raw numbers?

I’d like to know the idea behind some of those formats so I can study them further.

Thanks again

The idea behind binary formats is that they load fast on their dedicated hardware.

They are the opposite of being able to run on completely different hardware. This however is the aim of GLSL.

The format value is nothing more than a way to tell where a binary came from and whether an implementation will accept this binary. You’re not going to get anything “besides the raw numbers” because they don’t mean anything. NVIDIA drivers of a particular version spit out one number, AMD drivers of a particular version spit out a different number. What matters is which numbers the implementations support.
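If you want to dump those raw numbers yourself, this is about all the API offers (a minimal sketch, assuming a current GL context):

```c
#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>

void dump_program_binary_formats(void)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_PROGRAM_BINARY_FORMATS, &count);

    GLint *formats = malloc(count * sizeof(GLint));
    glGetIntegerv(GL_PROGRAM_BINARY_FORMATS, formats);

    /* Opaque driver tokens; all they mean is “this driver accepts this blob”. */
    for (GLint i = 0; i < count; ++i)
        printf("program binary format: 0x%X\n", formats[i]);
    free(formats);
}
```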

Program binary is intended to be a way of caching compiled shaders, not shipping pre-compiled shaders with your application (though you can try to do that too, as long as you precompile them on all hardware of interest). In either case, the format field is how you detect if the implementation can load that particular version of the shader.
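In code, the caching pattern looks roughly like this (a sketch; load_cached_binary and compile_and_link_from_source are hypothetical helpers):

```c
#include <stdlib.h>
#include <GL/glew.h>

/* Hypothetical helpers: read a cached blob from disk, or build from GLSL. */
extern void *load_cached_binary(const char *path, GLenum *format, GLsizei *length);
extern void  compile_and_link_from_source(GLuint program);

GLuint get_program(void)
{
    GLuint  program = glCreateProgram();
    GLenum  format  = 0;
    GLsizei length  = 0;
    void   *blob    = load_cached_binary("shader.bin", &format, &length);

    GLint linked = GL_FALSE;
    if (blob) {
        glProgramBinary(program, format, blob, length);
        glGetProgramiv(program, GL_LINK_STATUS, &linked); /* did it accept the blob? */
        free(blob);
    }
    if (!linked) {
        /* Cache miss, driver update, or different GPU: fall back to the
           GLSL source and re-save the binary via glGetProgramBinary. */
        compile_and_link_from_source(program);
    }
    return program;
}
```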

Thanks for the detailed info.

Sorry for being so curious, but I really want to dig into how OpenGL shaders work.

I’m now studying ways to create programs, and I’m looking at the ARB assembly language. It seems that the assembly could be compiled into binary really quickly, and converted easily from my Direct3D shader assembly code.
However, there is a problem with that: the ARB_fragment_program instruction list does not include branching and looping operations, so I cannot convert my rep, if_lt and if_gt commands.

There is also the NV_gpu_program4 instruction set, which supports them, but it is NVIDIA-specific.

Is there an AMD version of the instruction set, or something similar, since those NV instructions are not likely to be available on Radeon graphics cards?

Thanks

Nope, only NVIDIA supports ‘ARB’ programs with newer profiles.

How long does your shader take to compile? Can you offload compilation to a separate thread?

There are two combinations of fragment/vertex shaders: one for raytracing and the other for computing shadows. Together they take about 2-3 minutes with the HLSL compiler.

I think I found a solution. I transformed the HLSL-compiled assembly into GLSL code. It’s not human-readable, but it compiles in a few seconds…

Have you tried porting the HLSL source code to GLSL at some point? And how complex are your shaders if they take 2-3 minutes? For just two shader programs, that sounds like an awful lot.

BTW: I’m aware you don’t want to compile from source every time. I just don’t see why you would transform HLSL assembly instead of just porting to GLSL directly.

I want to try to keep my code hidden (not human-readable). I don’t want my code to be reverse engineered.

I don’t want my code to be reverse engineered.

Do you think people can’t reverse engineer HLSL compiled assembly?

In general, you gave up the right to secrecy the moment you started using shaders of any form.

The shaders I wrote are too complex to be reverse engineered. Too much optimization is applied. Only the vertex shaders are simple enough to be reverse engineered.

Also, the generated assembly is a few hundred lines of code long and less readable than the disassembly of an executable file.

It would take weeks to get an idea of how those shaders work. This code simply isn’t worth that much of someone’s time.

Just out of curiosity, are you working on some paper, or on work for a company doing new stuff worthy of protection like that? Raytracing and “computing shadows” don’t sound revolutionary enough to try and hide from hungry eyes. :slight_smile:

I develop shaders and earn money for that.

They’re nothing revolutionary (maybe a few tricks in some of them).

The only reason I want to hide their source is that I spent many hours developing them and don’t want them to be publicly available on the internet (protecting my business).

The shaders I wrote are too complex to be reverse engineered. Too much optimization is applied. Only the vertex shaders are simple enough to be reverse engineered.

Also, the generated assembly is a few hundred lines of code long and less readable than the disassembly of an executable file.

Keep dreaming. A few hundred lines of code is nothing to reverse engineer. Worst case, for someone who has a pretty good understanding of what your shader is trying to do, it might take a day to figure it out.
