Dumping Object Code from Shaders?

Hello people!

Maybe this is a dumb question, but is there a way to dump the compiled GLSL object code? I could not find anything about this on the net or in these forums.
Wouldn't that be a speed increase? Every time I load a GLSL pair, everything has to be compiled and checked again. Is there no smarter way? The other reason for this technique is that I could keep my shader files outside an archive without anyone being able to read them. Well, people could still steal them, but it would be some form of code hiding.
But my main concern is speed.

Thanks in advance.
rya.

No, but this was discussed in the extension spec and on these forums. It would almost certainly increase speed. The object code would have to be a generic binary format for the program, which the driver would then optimize.

If you are worried about people stealing your code, then encrypt it (obfuscate).

DieselGL Said: But my main concern is speed.
V-man Said: It would almost certainly increase speed. The object code would have to be a generic binary format for the program, which the driver would then optimize.
The GLSL source is sent straight to the driver, so the driver can already optimize it. If your concern is speed, I believe that as GLSL compilers improve you'll see the optimizations become more effective. It would make no sense to create another standardized low-level language (which would thus likely not map 1-to-1 to the hardware itself).

The idea of storing/reusing precompiled GLSL code seems useful to me: with a lot of shaders to compile, application startup is slow. And whatever optimizations are done in drivers, they will improve execution time, not really compile time.

Once compiled, one can imagine a glGetCompiledGLSL(&pointer) to retrieve a “black box” hardware-dependent compiled shader.

Then you can store it on disk, and when this shader is needed again just glSetCompiledGLSL(&pointer). This time the driver just has to make a quick check to be sure that this machine code is valid for the hardware, and upload it.

Of course this will not help obfuscate your shader (maybe if you precompile for each target hardware … ok ok forget it).
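Purely hypothetical signatures, just to make the idea concrete (nothing like this exists in OpenGL today):

```cpp
// Hypothetical API, only to illustrate the idea discussed here.
// Neither function exists in any OpenGL specification.

// Retrieve an opaque, hardware/driver-specific blob for an already
// compiled and linked program object.
void glGetCompiledGLSL(GLuint program, GLsizei bufSize,
                       GLsizei* length, void* binary);

// Hand such a blob back to the driver instead of GLSL source.
// The driver validates it and rejects it if it no longer matches
// the installed hardware or driver.
GLboolean glSetCompiledGLSL(GLuint program,
                            const void* binary, GLsizei length);
```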

> Once compiled, one can imagine a
> glGetCompiledGLSL(&pointer) to retrieve a “black
> box” hardware-dependent compiled shader.

This is what GL_OES_shader_binary does.
The extension is not in the extension registry but in the appendix of the OpenGL ES 2.0 specification.

OES_shader_binary

Overview

This extension adds the ability to load pre-compiled shader binaries instead of using the shader compiler to compile shader sources. This allows OpenGL ES 2.0 implementations to not require a shader compiler, which can be a significant savings in the memory footprint required on a handheld device.

…snip…

the link stage in the OpenGL ES implementation and can be quite expensive in terms of number of CPU cycles required and the additional memory footprint required by the OpenGL ES implementation

Issues

Should a GetShaderBinary call be supported?

RESOLUTION: No. The following reasons were given for not supporting GetShaderBinary:

- a lot of complexity in managing associated state with a read-back binary
- use case for get binary not that strong
- decided to get more experience with ES 2.0 before implementing get binary
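For reference, the loading side that the extension does define looks roughly like this (a sketch; the file handling is simplified, and the binary format token must be one of the vendor-specific values the platform actually exposes):

```cpp
#include <GLES2/gl2.h>
#include <stdio.h>
#include <stdlib.h>

// Create a shader from a blob produced by a vendor's offline compiler.
// binaryFormat must be a vendor-specific format token advertised by the
// implementation; there is no portable value.
GLuint loadBinaryShader(const char* path, GLenum type, GLenum binaryFormat)
{
    FILE* f = fopen(path, "rb");
    if (!f) return 0;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    void* blob = malloc(size);
    fread(blob, 1, size, f);
    fclose(f);

    GLuint shader = glCreateShader(type);
    // No glCompileShader call: the blob was already compiled offline.
    glShaderBinary(1, &shader, binaryFormat, blob, (GLsizei)size);
    free(blob);
    return shader;
}
```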

too bad, it was almost there …
TkK, thanks for the info anyway.

Well, even if GetShaderBinary were supported, it wouldn't be of much use for your purpose: OpenGL ES 2 requires only one of OES_shader_source or OES_shader_binary to be supported, so on an implementation with only OES_shader_binary you can't supply shader source at all. But the existence of OES_shader_binary means that there will be offline compilers, so you could ship precompiled shaders for the most common hardware platforms.

Originally posted by ZbuffeR:
Once compiled, one can imagine a glGetCompiledGLSL(&pointer) to retrieve a “black box” hardware-dependent compiled shader.

You would need to identify the hardware every time your program runs. Most IHVs are turned off by this whole issue.

It would be better to have a generic binary like the one supported by Direct3D. vsa.exe and psa.exe are command line compilers just for this purpose.

I would prefer the glGetCompiledGLSL idea, as that would give the most benefit.

No, you wouldn't need to detect the hardware every time.

I would do it this way: ship the app without precompiled shaders. When the app runs, send the shader to the hardware, get the compiled binary, and store it on disk. The next time you run the app, check whether the binary is already available and send that instead. If the hardware complains, just send the original shader.

This way you get:

  1. Faster load times after a shader has been compiled once.
  2. Binaries that fit the hardware: even if the driver changes and the binary has to change too, you just get an error, send the original shader again, and store the new up-to-date binary on disk.
  3. Complete hardware/file-format/whatever independence, because you only GET the blob from the driver; you are not supposed to work with it yourself.

I don't see any reason why that would be so hard to implement. Every driver has its own internal format it translates the shader to; the simple idea is to hand that data back and to accept data in the same format.

And I don't see why we would need a specific generic byte code, as is the intention of the previously mentioned extension. Sure, every piece of hardware is different, so forcing drivers to accept a unified format, which might be far from hardware-friendly, is really a stupid idea. Also, you can't bake "optimizations" into a generic format, because those optimizations might be highly hardware-dependent.

So, by simply returning the parsed/optimized code, which is generated for ONE card (not one vendor; it can even be card-dependent), an application can speed up shader compilation drastically with only a few lines of code, and it should be completely fool-proof.
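A rough sketch of that workflow, reusing the hypothetical glGetCompiledGLSL/glSetCompiledGLSL names from above (to be clear: none of this exists, and the file helpers and source-compile path are assumed):

```cpp
#include <string>
#include <vector>
// (assumes a GL header / extension loader that provides GL 2.0 entry points)

// Hypothetical entry points discussed in this thread -- they do NOT exist:
GLboolean glSetCompiledGLSL(GLuint program, const void* blob, GLsizei length);
void      glGetCompiledGLSL(GLuint program, GLsizei bufSize,
                            GLsizei* length, void* blob);

// Assumed application helpers: file I/O plus the usual
// glShaderSource/glCompileShader/glLinkProgram path.
bool   loadFile(const std::string& path, std::vector<char>& out);
void   saveFile(const std::string& path, const std::vector<char>& blob);
GLuint compileFromSource(const std::string& path);

GLuint loadShaderProgram(const std::string& sourcePath,
                         const std::string& cachePath)
{
    std::vector<char> blob;
    if (loadFile(cachePath, blob)) {
        GLuint cached = glCreateProgram();
        if (glSetCompiledGLSL(cached, blob.data(), (GLsizei)blob.size()))
            return cached;            // cached binary is still valid
        glDeleteProgram(cached);      // hardware/driver changed: fall through
    }

    // Fall back to the normal GLSL source path ...
    GLuint program = compileFromSource(sourcePath);

    // ... and refresh the cache for the next run.
    GLsizei size = 0;
    glGetCompiledGLSL(program, 0, &size, 0);                 // query blob size
    blob.resize(size);
    glGetCompiledGLSL(program, size, &size, blob.data());    // fetch the blob
    saveFile(cachePath, blob);
    return program;
}
```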

Jan.

Thanks Jan for making it crystal clear, it is exactly what I meant.

Originally posted by Jan:
No, you wouldn't need to detect the hardware every time.

I would do it this way: ship the app without precompiled shaders. When the app runs, send the shader to the hardware, get the compiled binary, and store it on disk. The next time you run the app, check whether the binary is already available and send that instead. If the hardware complains, just send the original shader.
[…]
The driver can already do that without the need for new extensions; there's no reason why it cannot hash the shaders and persist the compiled results on disk itself.
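Conceptually, something like this on the driver side, done behind the application's back (a toy sketch; the hash and the cache-key layout are made up for illustration):

```cpp
#include <cstdint>
#include <string>

// Toy 64-bit FNV-1a hash; a real driver would use something stronger.
static uint64_t fnv1a(const std::string& s)
{
    uint64_t h = 14695981039346656037ULL;
    for (unsigned char c : s) {
        h ^= c;
        h *= 1099511628211ULL;
    }
    return h;
}

// Key the cache on everything that can change the generated code:
// the GLSL source itself, the driver version, and the exact GPU.
std::string cacheFileName(const std::string& glslSource,
                          const std::string& driverVersion,
                          const std::string& renderer)
{
    uint64_t h = fnv1a(glslSource + "|" + driverVersion + "|" + renderer);
    return std::to_string(h) + ".bin";   // filename in the driver's cache dir
}
```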

The idea seems interesting. However, I wonder how this can be achieved. Let me try to explain why I have my doubts.

Who can guarantee that a 'compiled binary' shader has exactly the same bits from one card to another? Maybe all GeForces could share the same binary for a shader, but I think the main problem is reusing code compiled by (or for) an NVIDIA card on, let's say, an ATI one. The virtual machines might not behave the same, and the optimizations might be very different, as they strongly rely on the hardware.

I might be wrong, but at least that would help me understand why.

To jide, V-man, and others:
It is NOT about shipping compiled GLSL code, nor about a common binary format across multiple GPUs.

evanGLizr, you have a good point. But it means more hassle on the driver side.

jide: Read my and ZbuffeR's posts again. We do agree with you that it wouldn't work that way, but our idea is different.

The problem with your idea arises when the user upgrades his machine. Obviously switching from an ATI card to an NVIDIA card is going to cause all kinds of problems, but even upgrading within the same family will create issues.

Another problem is missing out on driver optimizations. If you have cached a copy of a relatively unoptimized binary and the user installs a new driver, you won't know to recompile the original shader.

I don't see where there are any problems:

If you install another card, changing from ATI to NVIDIA or vice versa, the driver will dismiss the precompiled shader as invalid. Of course, each vendor needs to put a small header into the binary to be able to validate it.

Additionally, the driver can include the version number the shader has been compiled with. So if I install a new driver, the next time I run my app all cached shaders will be "invalid", because the driver knows they were compiled with an older driver and therefore need to be recompiled.

So WHERE is the problem? If you change your hardware, your shaders get recompiled. If you install a newer (or older) driver that has an improved shader compiler, your shaders get recompiled. EVERY TIME something important changes, your shaders get recompiled.

BUT how often do you change your hardware or your driver? Maybe once a month you upgrade your driver. So ONCE A MONTH you pay the additional cost of compiling all the shaders your apps use. Compared to EVERY TIME YOUR APP STARTS, this would be a nice improvement, and it is easy to implement on both sides, the application AND the driver.
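For the validation, a small made-up header in front of the blob would be enough; something like this (sketch only, the field names are invented):

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical header a vendor could prepend to its compiled blob.
struct ShaderBinaryHeader {
    char     magic[4];        // e.g. "GLSB"
    char     renderer[64];    // exact card the blob was compiled for
    uint32_t driverVersion;   // compiler/driver version that produced it
    uint32_t blobSize;        // size of the compiled code that follows
};

// The driver's quick check before accepting a cached blob; on failure
// the application falls back to the original GLSL source.
bool binaryIsValid(const ShaderBinaryHeader& h,
                   const char* currentRenderer,
                   uint32_t currentDriverVersion)
{
    return std::memcmp(h.magic, "GLSB", 4) == 0
        && std::strncmp(h.renderer, currentRenderer, sizeof(h.renderer)) == 0
        && h.driverVersion == currentDriverVersion;
}
```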

Jan.

First, sorry ZbuffeR, I knew this wasn't the question, but I think my post is relevant anyway. If you think not, please just ignore it, or tell me if you think a new thread would be better.

Then, regarding Jan's last post: this is certainly right (though I'm not a 'hardware man'). However, for people who want to hide their code this is still a problem: it turns out the shader sources need to be present every time the program has to recompile. Still, I agree that many would appreciate it: most of the time, when you start a program that uses shaders, it wouldn't spend much time in the loading process… this will undoubtedly be appreciated by the end user.

Now, to develop that a bit more. As everyone here knows, any program (i.e. any binary) can be inspected: one can reverse-engineer a binary to study it, and this is the main way crackers crack applications.
So having the shader compiled each time a program first needs it ensures (or at least makes it more likely) that the compiled code cannot be seen by people. Of course, one could try to read the graphics memory directly, but I think that is much harder than reading the code straight from a file stored on a hard disk.

So I finally wonder how great a capability this would really be: compiling shaders is not like compiling a full Linux kernel :wink:

Hope that’s not too bad :slight_smile:

Well, there are always ways to crack an application; you can't do anything about that. I mean, I can encrypt my files, but if you want the shaders, glIntercept can give you the code. So if you wanted to be safe, you would need to send encrypted shaders to the graphics card. Urgh, bad idea!

And with precompiled shaders it would be the same thing.

And, well, shaders are not that long or complex; usually you know what a shader does anyway, so you won't be hiding real secrets there.

If you are talking about CRACKING, then you would also need to make sure that a program doesn't REPLACE a shader; that is, you want to bind a lighting shader, and the crack simply binds something different.

So if you are concerned about security, forget it. There is no way you can make this safe, and any attempt to build anything security-related into OpenGL is a waste of time.

Therefore this whole discussion has quickly been reduced to the problem of speed.

And yes, compiling shaders does not take that long. But upcoming titles will use more and more shaders, so the impact of shader compilation will grow. I use 3 different shaders in my app, but my code selects only the relevant parts of each shader and disables/removes unnecessary code. This way I already get around 20 different shaders that are actually in use at the moment, although they are all built from only 3 base shaders. I don't want to know how many shaders the Unreal 3 Engine uses.
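A common way to build such permutations from one base shader (not necessarily exactly how my code does it; the USE_FOG/USE_SPECULAR switches are just example names) is to prepend a few #defines before compiling:

```cpp
#include <string>
// (assumes a GL 2.0 header/loader for glCreateShader & co.)

// Build one permutation of a base fragment shader by prepending
// preprocessor switches. Note: if the base source starts with a
// #version line, the defines must be inserted after that line instead.
GLuint compilePermutation(const char* baseSource, bool useFog, bool useSpecular)
{
    std::string defines;
    if (useFog)      defines += "#define USE_FOG 1\n";
    if (useSpecular) defines += "#define USE_SPECULAR 1\n";

    const char* sources[2] = { defines.c_str(), baseSource };

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 2, sources, NULL);  // the driver concatenates the strings
    glCompileShader(shader);
    return shader;
}
```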

This is one (IMHO promising) idea for solving this problem. It is all about improving startup speed and therefore about satisfying the end user. Nothing more. It's not a necessity; it would just be a nice feature.

Jan.

Hi People!!

Sorry for my late answer, I was kinda busy the last week. :slight_smile:

Thanks for your very interesting posts and the discussion.

I agree that there should be a way to save the shader on the user's computer once it has been compiled. This shouldn't be too difficult for ATI or NVIDIA to implement, I think. When the user changes the graphics card, just recompile and everything is OK. :slight_smile: Perhaps someone will pick up this idea…

@obfuscating

I save my shaders in an XML file. Well, I could put them in an encrypted zip, but I think I will just use a simple encryption like a Caesar cipher. Or is there already a tool (perhaps a powerful, open-source one?) for that?
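Something as simple as this XOR scrambler would probably do for me (just a sketch, and of course not real encryption; the key is only an example):

```cpp
#include <string>

// Trivial XOR obfuscation: applying it twice with the same key
// restores the original text. This is NOT real encryption.
std::string xorScramble(std::string data, const std::string& key)
{
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] ^= key[i % key.size()];
    return data;
}

// Usage:
//   std::string hidden   = xorScramble(shaderXml, "my-secret-key");
//   std::string restored = xorScramble(hidden,    "my-secret-key");
```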

rya.
DieselGL.

Someone can put an opengl32.dll into the app directory and intercept all OpenGL calls. One of those calls will contain your shader source.

By accident I found that Cg shaders used in a separate thread and context cannot be intercepted using glIntercept. I didn't try gDEBugger.

yooyo