
View Full Version : March'04 meeting notes



KRONOS
05-28-2004, 02:03 PM
Nobody posted this... :p
So, finally, we will get GL2 and the feature list is complete. No über buffer included, but also no word on EXT_render_target...

And the marketing group seems to be on the move...

Comments?

mogumbo
05-28-2004, 02:43 PM
Just to clarify... when they say that feature set will be included in OpenGL 2.0, does that mean those features will be included as extensions, or will they be included as core features?

zeckensack
05-28-2004, 02:49 PM
Link for the lazy (http://www.opengl.org/about/arb/notes/meeting_note_2004-03-02.html).

Corrail
05-28-2004, 02:53 PM
Originally posted by KRONOS:
No über buffer included, but also no word on EXT_render_target...

But ATI_draw_buffers will be included, so I think there will be an extension like EXT_render_target. Does anyone know anything more about this issue?


Originally posted by mogumbo:
Just to clarify... when they say that feature set will be included in OpenGL 2.0, does that mean those features will be included as extensions, or will they be included as core features?

AFAIK, as core features.

Korval
05-28-2004, 03:40 PM
Unfinished extensions, like EXT_render_target, aren't up for consideration for the core. And for good reason. Even an extension like VBO needs to live its life as an extension before absorption into the core, just to make sure that there are no kinks or unseen problems in it.

Our dear friend John Stauffer from Apple (one of the 3 companies behind EXT_RT) did note that superbuffers was becoming complex, and that a different design could solve consumers' problems.

Given this line "Instead, the WG will continue evolving its proposals and getting ISV feedback, allowing a chance for fully thought out alternate proposals to be brought forward.", it sounds like they decided to hold off on superbuffers until alternatives like EXT_RT can be brought forth and tested.

BTW, nVidia, if you're listening (and you care), I'll buy your cards again if you give us this extension before ATi ;)


But ATI_draw_buffers will be included, so I think there will be an extension like EXT_render_target.

Considering that a vote wasn't taken on it at this meeting, I doubt that it (EXT_RT) will be included in the core. You have to remember, it is neither implemented nor even finished yet.

dorbie
05-28-2004, 10:39 PM
So what's thrown out from 1.*?

And can we get implementations to fix polygon offset once and for all (sorry, it's a pet peeve)?
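(For the record, the API side is just this; it's what implementations do with the factor and units that varies. An untested sketch:)

#include <GL/gl.h>

/* The API is simple enough; the gripe is that the effect of
   "factor" and "units" differs between implementations. */
static void draw_decal(void (*draw_geometry)(void))
{
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -1.0f);  /* pull the decal toward the eye */
    draw_geometry();
    glDisable(GL_POLYGON_OFFSET_FILL);
}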

Corrail
05-29-2004, 03:01 AM
Originally posted by Korval:
Considering that a vote wasn't taken on it at this meeting, I doubt that it (EXT_RT) will be included in the core. You have to remember, it is neither implemented nor even finished yet.

Yes, I know. But does that mean that you have to use pbuffers for MRT or render-to-texture in OpenGL 2.0?? I really wouldn't like that...

krychek
05-29-2004, 04:05 AM
What's the shader language metafile? Is it the equivalent of the effect files in d3d?

evanGLizr
05-29-2004, 12:11 PM
Originally posted by krychek:
What's the shader language metafile? Is it the equivalent of the effect files in d3d?

From my complete ignorance, I would guess it's a binary format for GLSL (à la bytecode).

evanGLizr
05-29-2004, 12:26 PM
Originally posted by dorbie:
And can we get implementations to fix polygon offset once and for all (sorry, it's a pet peeve)?

Hmmm, interesting comment: in what way do implementations need to be fixed, and how would you define the r factor for a floating-point depth buffer?

Korval
05-29-2004, 02:44 PM
So what's thrown out from 1.*?

Nothing. 2.0 isn't a revision, it's just a new name for 1.6. They aren't removing stuff or breaking backwards compatibility.


Yes, I know. But does that mean that you have to use pbuffers for MRT or render-to-texture in OpenGL 2.0?? I really wouldn't like that...

There's nothing stopping nVidia/3DLabs/Apple from developing & implementing EXT_RT on top of 2.0. You can still have extensions, after all.
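Checking for one at runtime is cheap, too. A rough sketch (needs a current context; GL_EXT_render_target is hypothetical here since the spec isn't finished, and a real check should match whole tokens - strstr can hit prefixes):

#include <string.h>
#include <GL/gl.h>

static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* if (has_extension("GL_EXT_render_target")) { ... } */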


What's the shader language metafile? Is it the equivalent of the effect files in d3d?

Since nVidia is behind it, I would imagine that it would be the equivalent of a D3DX Effects file.

Corrail
05-30-2004, 02:24 AM
Originally posted by Korval:
There's nothing stopping nVidia/3DLabs/Apple from developing&implementing EXT_RT on top of 2.0. You can still have extensions, after all.
Yes, of course. But what about ATI and other hardware vendors? OK, even if EXT_RT shows up on NV/3DLabs/Apple, you can only hope that ATI will implement it too.

davepermen
05-30-2004, 02:37 AM
i like that pdf that is linked in there. GREAT DESIGN! :D

i especially like the page where there is a HUGE

API

and nothing else really :D but it fills the screen :D

fun :D great art.

KRONOS
05-31-2004, 12:05 AM
Originally posted by Korval:
Nothing. 2.0 isn't a revision, it's just a new name for 1.6. They aren't removing stuff or breaking backwards compatibility.

But aren't vertex/fragment programs left out of the core? If that's true, then it is a revision...

harsman
05-31-2004, 12:29 AM
Vertex and fragment programs were never core to begin with, just an ARB extension.

Madoc
05-31-2004, 12:54 AM
I _cannot_ understand why anyone would _ever_ want to use a high level shading language. I wouldn't say it's really any easier than VP and FP, you can still write quick and dirty code with VP and FP. It seems like a completely useless and meaningless abstraction, it doesn't have _any_ of the real advantages of a high level language. I am honestly puzzled by the motivation behind this. I am also quite ignorant due to lack of interest, perhaps someone can enlighten me?

zeckensack
05-31-2004, 01:39 AM
Originally posted by Madoc:
I _cannot_ understand why anyone would _ever_ want to use a high level shading language. I wouldn't say it's really any easier than VP and FP, you can still write quick and dirty code with VP and FP. It seems like a completely useless and meaningless abstraction, it doesn't have _any_ of the real advantages of a high level language. I am honestly puzzled by the motivation behind this. I am also quite ignorant due to lack of interest, perhaps someone can enlighten me?

You use high level languages for function calls, complex control flow, scoping and linker-style functionality. All of these are quite nice to have for larger pieces of code.

But I also think that ARB_fp and ARB_vp are quite expressive for "low level" languages and I prefer using them right now.
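For instance, a little GLSL like this has no direct ARB_vp spelling - the function and the loop are the point (untested sketch; "lightCount" is an invented uniform name):

/* Untested sketch: a GLSL 1.10 vertex shader, as a C string.  The
   function call and the loop are what ARB_vp can't express directly.
   Directional lights assumed. */
static const char *vs_source =
    "uniform int lightCount;\n"
    "vec3 lambert(vec3 n, vec3 l, vec3 c) {\n"
    "    return c * max(dot(n, normalize(l)), 0.0);\n"
    "}\n"
    "void main() {\n"
    "    vec3 n = normalize(gl_NormalMatrix * gl_Normal);\n"
    "    vec3 color = vec3(0.0);\n"
    "    for (int i = 0; i < lightCount; ++i)\n"
    "        color += lambert(n, gl_LightSource[i].position.xyz,\n"
    "                         gl_LightSource[i].diffuse.rgb);\n"
    "    gl_FrontColor = vec4(color, 1.0);\n"
    "    gl_Position = ftransform();\n"
    "}\n";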

Madoc
05-31-2004, 03:51 AM
Perhaps I'm more ignorant than I thought, but aren't these functions strictly fictitious in SLs? The thought really gets on my nerves... grrr

valoh
05-31-2004, 04:48 AM
Originally posted by Madoc:
It seems like a completely useless and meaningless abstraction, it doesn't have _any_ of the real advantages of a high level language. I am honestly puzzled by the motivation behind this. I am also quite ignorant due to lack of interest, perhaps someone can enlighten me?

Well, do you use x86 or the corresponding ISA on your platform for software development?

No? Then the same arguments for not doing this apply to the GPU too. And with every GPU version the arguments become stronger.
In the long run GPUs will be an additional SIMD coprocessor array or vector coprocessor to your system.

Regarding the "March'04 meeting notes" topic: can anybody give additional information on the shader language metafiles?
Byte code? Effect file format? Compatible with DirectX/CgFX? What time frame?

Adruab
05-31-2004, 08:04 AM
Regarding functions etc. in high level languages: no, they are not just for show. Yes, in a lot of cases it is faster to inline them (so compilers will do that), but future hardware will support dynamic branching (nvidia's 6800 supports dynamic loop branching at the very least).

True, at the moment it is probably better to work in a low level language. However, compilers are improving, and the better they get and the more complicated GPUs get, the bigger an advantage a compiler can have. The main nicety at the moment is development time. There are plenty of interviews with people saying how an optimized version of their high level shader came out to one instruction more than their optimized low level version, which took them 7x as long to write.

I too am curious about this metafile thing. If it indeed was similar to effects, wouldn't that belong in GLU (unless they are trying to offer ISVs the option to optimize data changing/flow in those as well)? Either way I really look forward to it :) .

Oh, and a question.... Are they actually going to call this OpenGL 2.0? I think that would be a shame, because it really is GL 1.6 or something, just normal GL + a bunch of programmable stuff... oh for the nice and clean solution (still cleaner in some respects than DX ;) )

Ostsol
05-31-2004, 08:12 AM
Originally posted by Madoc:
I _cannot_ understand why anyone would _ever_ want to use a high level shading language. I wouldn't say it's really any easier than VP and FP, you can still write quick and dirty code with VP and FP. It seems like a completely useless and meaningless abstraction, it doesn't have _any_ of the real advantages of a high level language. I am honestly puzzled by the motivation behind this. I am also quite ignorant due to lack of interest, perhaps someone can enlighten me?

The advantage of high-level shading languages is that the driver has a lot more freedom and ease in optimization than with low-level code. This takes away some of the need for the shader programmer to optimize for each vendor.

Madoc
06-01-2004, 01:01 AM
Originally posted by valoh:
Well, do you use x86 or the corresponding ISA on your platform for software development?
Yes. Of course. Particularly for something performance critical such as what's done for every vertex.
That said, VP and FP are hardly assembly level now, innit? They are specialised low level languages but far from assembly, and quite notably so in terms of ease of use.


Originally posted by Adruab:
but future hardware will support dynamic branching (nvidia's 6800 support dynamic loop branching at the very least).
Yes, that's the only thing I can come up with as a valid point myself, future developments in programmable hardware. It still seems to me that the priority in developing these languages was not from a practical point of view. How long will it be before a decent slice of the market will benefit noticeably from this?


Originally posted by Adruab:
There are plenty of interviews with people saying how an optimized version of their high level shader came out to one instruction more than their optimized low level version, which took them 7x as long to write.

I find this hard to believe. Who said this? Someone who can't be assumed to be a really bad programmer?
What I see is that when you implement something in a low level language you usually discover a completely different formulation for your problem which is far more efficient when implemented thus. I have some longish "vertex shaders" that I have implemented in C, x86 assembler and VP. Each implementation is quite different in what it does but they all arrive at the same result. I know no compiler could do as much to rearrange the problems so efficiently for the available instructions. And you _definitely_ see the difference between that and a naive conversion.

I find it generally hard to believe (if not comical) that a high level language is intended to better optimise code. When has this ever been the case?
So long as the low level languages are sufficiently close to the hardware there are logical optimisations that only the programmer will be able to do. VPs and FPs still give the drivers a fair amount of freedom for other optimisations.

Of course, VP and FP could actually have little to do with the actual hardware and then everything I've said is moot and arguments for high level languages are all good. I wouldn't be too surprised, I've seen enough _BAD_ design decisions.

sqrt[-1]
06-01-2004, 02:17 AM
Originally posted by Madoc:
Yes, that's the only thing I can come up with as a valid point myself, future developments in programmable hardware. It still seems to me that the priority in developing these languages was not from a practical point of view. How long will it be before a decent slice of the market will benefit noticeably from this?
Considering that this language (or at least the framework) is designed to last TEN YEARS, I would think this is a valid point. And with next-gen GUIs using hardware acceleration (i.e. OS X and Longhorn), a "newish" and fast 3D card MAY become standard, so sales reps can show off eye candy.




Originally posted by Adruab:
There are plenty of interviews with people saying how an optimized version of their high level shader came out to one instruction more than their optimized low level version, which took them 7x as long to write.

I find this hard to believe. Who said this? Someone who can't be assumed to be a really bad programmer?

Don't quote me on this, but I think it was the Half-Life 2 guys.



What I see is that when you implement something in a low level language you usually discover a completely different formulation for your problem which is far more efficient when implemented thus. I have some longish "vertex shaders" that I have implemented in C, x86 assembler and VP. Each implementation is quite different in what it does but they all arrive at the same result. I know no compiler could do as much to rearrange the problems so efficiently for the available instructions. And you _definitely_ see the difference between that and a naive conversion.

I find it generally hard to believe (if not comical) that a high level language is intended to better optimise code. When has this ever been the case?
So long as the low level languages are sufficiently close to the hardware there are logical optimisations that only the programmer will be able to do. VPs and FPs still give the drivers a fair amount of freedom for other optimisations.

Of course, VP and FP could actually have little to do with the actual hardware and then everything I've said is moot and arguments for high level languages are all good. I wouldn't be too surprised, I've seen enough _BAD_ design decisions.

This is exactly the point: the ASM-level interfaces only map "roughly" to the existing hardware (from Nvidia and ATI - I'm sure 3Dlabs have plenty to say about this). Even then, you have lots of little "rules" for each vendor (i.e. with ATI you have swizzling and texture-dependency issues but the possible bonus of co-issued instructions; with Nvidia, you can actually go faster when code is rearranged and you execute more instructions).

Also keep in mind that once a feature becomes "core" in OpenGL it is going to be there for a very long time - so it had better be generic enough to handle future developments.

On your last point, you say that it is a BAD design decision not to map the ASM interfaces directly to hardware. If you were to write an ASM interface, whose hardware would you map it to? ATI? Nvidia? 3Dlabs? These are not CPUs where each vendor implements the same opcodes in hardware. (This would only work if one vendor got 95%+ of the market, which would make their opcodes a standard for competitors to implement - think Intel, which allowed AMD to come along later.)

Madoc
06-01-2004, 04:40 AM
Originally posted by sqrt[-1]:
On your last point, you say that it is a BAD design decision not to map the ASM interfaces directly to hardware. If you were to write an ASM interface, whose hardware would you map it to? ATI? Nvidia? 3Dlabs? These are not CPUs where each vendor implements the same opcodes in hardware. (This would only work if one vendor got 95%+ of the market, which would make their opcodes a standard for competitors to implement - think Intel, which allowed AMD to come along later.)

Huh, no, not directly. I hope they're relatively close though. After all, the languages are not that low level; the instructions are very much like what a vector library might do, and my understanding is that the GFX hardware operates with similar functions.
I was thinking mostly of the awful mess of all the multitexture extensions, which IMO did a really lousy job of exposing HW functionality.

martinho_
06-01-2004, 09:12 AM
I _cannot_ understand why anyone would _ever_ want to use a high level shading language. I wouldn't say it's really any easier than VP and FP, you can still write quick and dirty code with VP and FP. It seems like a completely useless and meaningless abstraction, it doesn't have _any_ of the real advantages of a high level language. I am honestly puzzled by the motivation behind this. I am also quite ignorant due to lack of interest, perhaps someone can enlighten me?

With low level languages:

-There is no portable way of using all the instructions of every hardware.

-Long shaders are difficult to write.

-When new hardware appears, your old shaders will not use any of the new functionality.

With high level languages:

-The compiler is free to optimize the code using all the power of present and future graphics hardware, and it usually does it better than most low-level programmers.

-Shaders are easier to write (and shaders will become long programs someday).

And, to finish, some words from John Carmack:


At that point, a higher level graphics API will finally make good sense. There is debate over exactly what it is going to look like, but the model will be like C. Just like any CPU can compile any C program (with various levels of efficiency), any graphics card past this point will be able to run any shader. Some hardware vendors are a bit concerned about this, because bullet point features that you have that the other guy doesn't are a major marketing feature, but the direction is a technical inevitability. They will just have to compete on price and performance. Oh, darn.

Madoc
06-02-2004, 01:55 AM
That seems to make sense and it's very pretty but doesn't meet any practical concern of mine. I am most concerned with how _old_ hardware will run my code. New hardware will just do it better. Whatever I'm aiming at today, I don't need to worry about how tomorrow's hardware will run it.

If I rewrite something in assembler and it's 3 times as fast as the C implementation then I _know_ I've done something useful and the chances of future hardware and compilers beating that are both slim and utterly irrelevant.

I also have to repeat that these languages are not that low level. I don't see them as providing such a barrier for compilers, and they're also essentially vector and math operations, which haven't changed in _centuries_. It's not like we're dealing with the subtleties of register stacks and instruction parallelism of a given processor architecture (I won't argue about "register" limitations in today's hardware, that's just about quantities and depends on the program, not the language).

These high level languages still seem like a bit of a marketing-oriented thing to me. I still haven't heard anything that gives me a practical reason to want to use them. I guess ease of use and familiarity might be a point for some people, fair enough.

I guess it might also depend on the application. In my experience, _really_ long shaders are too slow for any realistic RT application these days (I'm excluding demos). My really long shaders are not really intended for real-time (or applications that care about compatibility at all), though they're still VP/FP and they still want to be as fast as possible. I guess future hardware might run them in real time; I don't care, we'll be doing different stuff _then_.

It just seems a bit early to talk about a complete move, IMO.

Ostsol
06-02-2004, 06:43 AM
Originally posted by Madoc:
If I rewrite something in assembler and it's 3 times as fast as the C implementation then I _know_ I've done something useful and the chances of future hardware and compilers beating that are both slim and utterly irrelevant.

Have you tried making such a comparison? Write a long shader in both ARB_fp and GLSL and compare the performance.

Madoc
06-03-2004, 12:02 AM
I will, though right now I have other things to get on with. I was however speaking about the general case of low vs high level languages, by no means am I expecting similar results from shader languages.
It's silly really, VP and FP have nothing to do with assembler. I would actually hesitate to even call them low level (err... what was the actual definition again?), it's only the syntax that's similar.

I don't want to sound obstinately contrary to HLSLs, I keep wanting to agree but I can't convince myself. I still don't get what all the big hype is about.

Since day 1 I've been complaining about the limitations and rigidity of shader languages in dealing with small variations in state. I've seen high level languages appear, but not a solution to the more fundamental problems behind today's programmable pipes. Fixed function is all about adjusting state for shaders, lights etc., and it works efficiently; with programmable shaders, each little variation means a new shader, and the number of shaders would grow exponentially. In practice we end up with less flexible and/or less efficient renderers. Sure, we have more flexibility in what shaders we can design, but inflexibility in the interaction between states.

Ostsol
06-03-2004, 04:52 AM
I think that one of the big optimizations that can come from writing CPU assembler code by hand rather than trusting the compiler is in avoiding memory access. By keeping everything in registers as much as possible, one avoids any potential bandwidth and latency issues that may slow performance. That's not the case in shader languages, since the only memory access there is comes from texture reads and framebuffer writes. Temporary storage in a shader program can only be within registers. As such, this is one optimization that cannot be taken advantage of.

This leaves instruction counts and architecture matching (which includes register counts, on relevant hardware). I remember the early days of D3D's HLSL where the resulting assembler shader code was much more efficient in terms of instruction counts, but lately the compiler has gotten very good indeed, the difference coming down to within one or two instructions. I'm sure that GLSL implementations will achieve similar results, eventually. Actually, GLSL will do better, since it's possible to do specific architecture matching.

Korval
06-03-2004, 11:15 AM
with programmable shaders, each little variation means a new shader, and the number of shaders would grow exponentially.

Have you not heard of uniforms? Have you not yet discovered that you can have conditional branching (in vertex shaders, at least)?

The days when you needed to make a new shader for adding/removing various features are over. You can make one monolithic vertex shader and just pass various uniforms that deal with flow control. Granted, you have to accept the performance penalty in doing so.

Also, I would like to point out that glslang's shader linking facility allows you to build little modules of shader code that you can link together as needed (in theory, at runtime, though nVidia's compiler doesn't really allow for that). So, at least there, you have some possibility for dynamic shader construction.
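A trivial sketch of the uniform approach (untested; names invented, GL 2.0 entry points assumed, and the vertex shader is assumed to write gl_FogFragCoord):

#include <GL/gl.h>  /* assumes GL 2.0 headers or a loader */

/* One fragment shader covers fog on and off. */
static const char *fs_source =
    "uniform bool useFog;\n"
    "uniform vec4 fogColor;\n"
    "void main() {\n"
    "    vec4 color = gl_Color;\n"
    "    if (useFog)   /* flow control from a uniform */\n"
    "        color = mix(color, fogColor,\n"
    "                    clamp(gl_FogFragCoord, 0.0, 1.0));\n"
    "    gl_FragColor = color;\n"
    "}\n";

/* At draw time, flip the switch instead of binding another program. */
static void set_fog(GLuint program, GLboolean enable)
{
    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "useFog"), enable);
}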

plasmonster
06-03-2004, 01:43 PM
I see Madoc's point. For performance reasons, one would need quite a few shaders to implement a general pipe; the branching support/performance just isn't there right now. Sure, you can theoretically make it happen, but it's going to be slow, ergo essentially useless.

Think of implementing the 2000+ shaders in Quake3 in GLSL, for example. Brrrr, shiver me timbers. This shader model requires a scripted wrapper. You could then pick an optimal path based on the contents of the shader, and so on. But it would be impractical to implement this in VP/FP directly, there are simply too many permutations. I think that the shader model requirements for performance, quantity, and artist facility will ultimately determine the method used.

As for the question of high level shading languages; well, I wouldn't want to go back to ASM in my project. The benefits of high level languages transcend mere convenience into the realms of time and creativity.

Korval
06-03-2004, 04:56 PM
the branching support/performance just isn't there right now. Sure, you can theoretically make it happen, but it's going to be slow, ergo essentially useless.

I know this is the case for fragment programs, but what about vertex programs (people still use those, right)? What is the performance cost for looping there?

plasmonster
06-03-2004, 05:57 PM
@Korval
I was mainly referring to FPs, but also the general problem of configuring a single general pipe for thousands of shaders; this is difficult with VPs as well. Although Cg's 'interface' is an intriguing possibility.

My ideal configuration would require only a single VP and FP for the entire pipe, but this really isn't practical in current hardware. Suppose one could parse a single VP/FP into its constituent parts, replacing if/else with separate programs. The problem would still be intractable with a large number of configuration possibilities. Again, I'm referring mainly to FPs.

Adruab
06-03-2004, 07:37 PM
Wow, lots has happened so far. I somewhat agree with the speed benefit for asm GPU programming.

At the moment, since GPU programs must be under a certain number of instructions and a certain number of registers, it does make more sense to be able to program manually within those boundaries. High level scripts can evaluate to exactly the same thing as assembled files would, assuming all the normal options are available (which they are not at the moment in GLSL... read: nvidia's half type). Either way, writing in a super-generic fashion just doesn't work right now given the common range of cards out there (hence MS's effect files for multiple versions...).

In the end I think it's a mistake not to maintain the assembly versions in GL (I'm not sure if this is the case or not). However, OpenGL is supposed to be forward looking, and this is what they are trying to do. At the moment, DX's system is easier for current and past hardware. The more complicated GPU hardware gets, the better the high level languages will perform. In my opinion, an interesting option would be the multipass compiler (like Ashli), though it seems like there would need to be a lot of information about how to compile it to make it very successful in practical circumstances.

Madoc
06-04-2004, 04:17 AM
FPs are where most of the serious shader work is, in my experience. Per-vertex/per-pixel hybrids have horrible artifacts that often defeat the purpose of doing anything per-pixel. I end up doing little in VP if I use FP for anything much.

I'm not sure what a good solution might be. I would like to see more modular programs but I haven't thought it through well enough to propose an actual API solution.
Being able to switch a program module that does fog, for instance, would be useful. Also being able to globally use one method or another without having to repeat shaders for none, method1, method2 etc. I see fog and the fog method as something that is part of the environment, not the shader (though they obviously can and should interact to some extent).
I would even dare go as far as having distinct shader/light-interaction code fragments that can be interchanged depending on the light type, and even allow enabling and disabling of multiple lights.

I actually thought about implementing a system that could programmatically generate shaders from code snippets to deal with all the cases as they were required. It seemed that it could work well, though I don't recall the details too well now. I should imagine a more dynamic, driver-side version of such a system would be quite possible. Possibly it would be a difficult API to design well.
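A rough sketch of what I had in mind, with made-up snippets (untested):

#include <stdio.h>

static const char *fog_none =
    "vec4 applyFog(vec4 c) { return c; }\n";

static const char *fog_linear =
    "uniform vec4 fogColor;\n"
    "vec4 applyFog(vec4 c) {\n"
    "    return mix(c, fogColor, clamp(gl_FogFragCoord, 0.0, 1.0));\n"
    "}\n";

static const char *shader_body =
    "void main() { gl_FragColor = applyFog(gl_Color); }\n";

/* Pick a fog module and emit one complete fragment shader source. */
static void build_shader(char *out, size_t n, int fog_enabled)
{
    snprintf(out, n, "%s%s",
             fog_enabled ? fog_linear : fog_none, shader_body);
}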

Back on the vertex/pixel shader hybrid thing: you can actually shift a lot of work to the vertex shader as special-case optimisations, e.g. known viewer or light distance allows more approximate methods, avoiding renormalisation, interpolating vectors etc. This would also be useful as a lightweight per-object state switch that doesn't require trillions of different shaders. It could certainly make a drastic difference performance-wise; these are very valuable optimisations.

Adruab
06-05-2004, 11:36 PM
Since you mentioned an API design for different snippets of code and such... Microsoft seems to be coming up with something like that with native support in Longhorn ( Windows Graphics Foundation (http://download.microsoft.com/download/1/8/f/18f8cee2-0b64-41f2-893d-a6f2295b40c8/TW04079_WINHEC2004.ppt) ).

Not only that but Unreal Technology 3 says they will support dynamic recompilation of changing parts of shaders (e.g. varying types and number of lights through recompiling the shader, or at least that is my impression).

There are also similar implementations now for simple cases. An example from Windows dev conf. presentations comes to mind: having an array of compiled shaders based on numbers, like the number of bones and lights (at run time you just index into the array to get the correct version without doing any dynamic branching on the hardware).
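Something like this, I'd imagine (untested; compile_program is a stand-in for the usual compile/link calls):

#include <stdio.h>
#include <GL/gl.h>

enum { MAX_LIGHTS = 4 };
static GLuint light_variant[MAX_LIGHTS + 1];

extern GLuint compile_program(const char *src);  /* stand-in helper */

/* Build one program per light count by prepending a #define, then
   select by indexing at draw time - no branching on the GPU. */
static void build_variants(const char *source)
{
    char buf[16384];
    int i;
    for (i = 0; i <= MAX_LIGHTS; ++i) {
        snprintf(buf, sizeof buf,
                 "#define LIGHT_COUNT %d\n%s", i, source);
        light_variant[i] = compile_program(buf);
    }
}

/* draw time: glUseProgram(light_variant[num_lights]); */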

V-man
06-06-2004, 04:59 AM
You won't be able to release GLSL code because drivers for ATI and NV are still beta, so you should stick to VP/FP for now, and plan for GLSL.

If you will be working with GLSL, you have to keep in mind the hw limits (don't do looping, don't do branching on ATI, ...)

Shading is in its *infancy*, but after a couple of generations (~2 years), the hw will be fancy enough that you can code more freely, without worrying about incapable GPUs.

The NV 6800 is pretty powerful as is ... living proof! Seen the videos?

Madoc
06-06-2004, 09:50 PM
I saw the Unreal 3 tech presentation just after making that post. I thought it might be common practice when I saw that. I do some very simple switching of code snippets myself, but I currently don't go beyond basic material terms and textures. I'd like something really flexible and a whole lot of optimised paths, but there's really quite a lot to it; you also need to start handling register declarations and usage, for example, it's not just patching code together.

I'm very impressed with those Unreal 3 shots; I thought it would be a little while still before we saw that kind of quality in games. I wonder what the performance implications of that kind of HDR and HDR glow are. Looks great. I also wonder how much of that stuff is a hack, e.g. the tinted soft shadows.

CrazyButcher
06-06-2004, 10:28 PM
And I wonder how much time has gone into the art. When the source art is reaching film quality, with millions of polys per model, art generation will be taking a while for those next-gen games.

dorbie
06-08-2004, 09:08 AM
The tinted soft shadows needn't be a hack. It's a generic textured-light issue. You could even potentially render the projective image (a cubemap in that case) tinted through arbitrary (and even moving) transparent surfaces modeled in 3D and make it work. Once you handle light correctly, the need for hacks diminishes significantly IMHO. The thing I'm most curious about is handling shaders correctly in the context of a robust lighting solution. You can't just implement any old shader; key terms must be modulated by shadow terms and others moved into passes that aren't fragment culled, making some kind of restricted shader framework inevitable. Doom 3 has the monolithic shader with preset lighting terms; Unreal 3 seems to move a bit beyond that with some programmability, however that needs to coexist elegantly with your lighting. Half-Life 2 seems to take the approach of allowing a selection from several available terms. The iridescence shader in the Unreal 3 demo with lighting was intriguing, and I wonder how much of a hack *that* was. Was someone able to write some kind of thin-film component refraction shader and apply it to an environment map term easily, for example, or was it painstakingly implemented to coexist with existing effects by reimplementing a totally different shader path?

Madoc
06-09-2004, 05:12 AM
Do you mean something like Humus' demo? That did tinted shadows (I don't remember the details), but it seemed a bit expensive (is it not?). Sure, you can do all sorts of stuff, but you need to have the processing power. How could they be doing so much?
If the art team has to say "this light casts coloured shadows", and a transparent object moving before another light doesn't give a tinted shadow, or the case is not expected, then it's a hack (by my definition, anyway). Does it really work all round or is it a special-case thing?

About the shaders, you could get away with saying that if you did something like in HL2 and allowed custom terms that respect a given framework, innit? Or are they explicitly claiming more?

dorbie
06-09-2004, 07:59 AM
It depends, the art team needn't specify anything of the sort under many circumstances.

Rendering to texture once would be possible; then it's just a cubemap projection, potentially under transformation. Only dynamic translucent objects need be re-rendered to texture, and you could afford to do that each frame; of course, the resolution is set arbitrarily.

You need only render select occluders to the shadow texture; in fact, this is essential.

Jan
06-09-2004, 08:41 AM
I assume that demo simply uses a cubemap to texture that object (but that's not necessary) and additionally uses that cubemap for a texture lookup to "fake" those tinted shadows, which is really the simplest thing to do.

I am quite sure that this won't work with all transparent objects, like windows, and windows behind windows (so that objects in between get correctly colored).

I assume this doesn't work with Humus' demo either, does it? A few days ago I was thinking about how exactly his demo works, but I always came to the conclusion that it will only work in very special cases, not in general. Or am I wrong about that?

Jan.

Adruab
06-09-2004, 08:59 AM
Yeah, transparent shadowing is very complicated. I believe the Humus demo only works correctly with one layer of transparent objects (as far as I can tell). To get true tinted shadows you would have to do a depth peeling of sorts to incorporate many different filtered colors.

I haven't really looked into how UT3 is supposed to do soft shadows, but I think it's similar to how Pixar does it (with oversampled shadow maps). As for the tinted stuff, I believe they were doing things on a per-object basis, which would provide correct depth in the case of planes (though it could still have issues with self-overlapping objects...).

spasi
06-09-2004, 10:37 PM
For soft shadows, they said in a video presentation that they use two shadow cube maps. One "crisp" and one blurred. Then they interpolate the two maps based on the distance of each pixel from the light source.
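In GLSL I'd guess the sampling side looks roughly like this (untested; the uniform names and the distance falloff are my guesses, and the maps are assumed to store a precomputed occlusion term in the red channel):

/* Guessed sketch of the two-map blend, as a GLSL snippet in a C
   string.  Names and the falloff are invented. */
static const char *soft_shadow_fn =
    "uniform samplerCube shadowCrisp;\n"
    "uniform samplerCube shadowBlurred;\n"
    "uniform float blurStart, blurRange;\n"
    "varying vec3 lightVec;   /* light-to-fragment vector */\n"
    "float shadowTerm() {\n"
    "    float s0 = textureCube(shadowCrisp,   lightVec).r;\n"
    "    float s1 = textureCube(shadowBlurred, lightVec).r;\n"
    "    float t  = clamp((length(lightVec) - blurStart)\n"
    "                     / blurRange, 0.0, 1.0);\n"
    "    return mix(s0, s1, t);\n"
    "}\n";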

ZbuffeR
06-10-2004, 06:44 AM
From the description of Humus' demo about tinted shadows, it will also color an opaque object placed _between_ the light and the tinted glass.

Adruab
06-10-2004, 07:49 AM
Yeah, that's what I was guessing too. If they put the front depth value in, they'd at least get one level that wasn't shadowed (but that would break other full shadowing along that ray), oh well :p .

Video presentations! I've only read about the specs and seen screenshots. *sigh* and once again I'm out of the loop :p . Off I go to find them. Though that multi-level thing is cool (how ATI, among others, was doing depth blurring). Hmmm, that gives me way too many ideas :) . Wouldn't there still be artifacts where two soft parts interact (since you'd technically need multiple depth values at that point)?

Humus
06-11-2004, 05:33 PM
Originally posted by Madoc:
Do you mean something like Humus' demo? That did tinted shadows (I don't remember the details), but it seemed a bit expensive (is it not?).

It's not much more expensive than regular shadow mapping.

Humus
06-11-2004, 05:38 PM
Originally posted by Jan:
I assume this doesn't work with Humus' demo either, does it? A few days ago I was thinking about how exactly his demo works, but I always came to the conclusion that it will only work in very special cases, not in general. Or am I wrong about that?

Jan.

It should work fine as long as you have no overlapping of transparent objects in the shadow map. If there's overlapping, only the window that's closest to the light will be taken into account, which I guess could look reasonable in most cases, but I don't know really. You could extend the technique to work with many layers, though, by using many passes and sorting your geometry.

Humus
06-11-2004, 05:40 PM
Originally posted by ZbuffeR:
From the description of Humus' demo about tinted shadows, it will also color an opaque object placed _between_ the light and the tinted glass.

No it won't. It will look correct in this case too. It will only tint objects that are behind the window from the light's point of view.

Adruab
06-11-2004, 05:50 PM
You may want to amend the description then... it says that you don't write the depth value with the shadow color (or at least that is omitted). If that is indeed how it works, then how would you prevent the object from casting the color on objects in front? Also, if you do write the depth value, what happens to the objects that are supposed to further occlude the light behind it (the colored shadow would shine through those objects...)? Seems like a catch-22 to me... Did you figure out a way around these things?

Humus
06-12-2004, 08:39 AM
Well, if an object is in front of the window, it will of course be in front of the window in the shadow map too. So you'll get (white, depth) in the shadow map.

Jan
06-12-2004, 10:41 AM
Originally posted by Humus:
Well, if an object is in front of the window, it will of course be in front of the window in the shadow map too. So you'll get (white, depth) in the shadow map.

But what if you have a transparent object, an opaque object behind that, and another opaque object behind that? AFAIK your demo correctly shadows the first opaque object with the color from the glass, and the second object with a black shadow (because the object in the middle occludes the light completely).

How did you do that?

Jan.

dorbie
06-12-2004, 04:21 PM
It does this by not depth-filling transparent objects when rendering the depth shadow map. As Humus has posted, anything in front of the glass will be written to the shadow map (depth and color). The only limitation is multiple layers of transparent objects, which it definitely can't handle without improvements. I assume that when rendering the transparent object itself, the light contribution isn't modulated by the shadow color, or it'd self-shade.
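In GL terms the fill order would be something like this (untested sketch; the render_*() calls are placeholders, and the target is assumed to hold colour plus depth):

#include <GL/gl.h>

extern void render_opaque_occluders(void);       /* placeholder */
extern void render_transparent_occluders(void);  /* placeholder */

static void fill_tinted_shadow_map(void)
{
    /* Pass 1: opaque occluders write white plus their depth. */
    glDepthMask(GL_TRUE);
    glColor3f(1.0f, 1.0f, 1.0f);
    render_opaque_occluders();

    /* Pass 2: glass is depth-tested, so glass behind an opaque
       occluder is rejected, but it writes no depth - it tints the
       shadow rather than darkening it. */
    glDepthMask(GL_FALSE);
    render_transparent_occluders();
    glDepthMask(GL_TRUE);
}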

Humus
06-13-2004, 03:34 PM
Originally posted by dorbie:
I assume that when rendering the transparent object itself, the light contribution isn't modulated by the shadow color, or it'd self-shade.

Correct.

Adruab
06-14-2004, 02:05 PM
If it's not depth-filling for transparent objects, then how could it reject the case where an object is in front of a window, for instance?

In a shadow mapping system, without depth information for the window from the light's perspective, there would be no way to reject those cases, right? And if that's the case, then you'd get the window color shining on objects in front of the window with respect to the light. Am I just confused, and are you actually using a stencil method (which seems like it would require deferred shading of sorts... or many passes)?

ZbuffeR
06-15-2004, 05:23 AM
>> If it's not depth-filling for transparent objects, then how could it reject the case where an object is in front of a window, for instance?

Because the object in front is not transparent, so it actually writes its (nearer) depth to the depth map and writes white to the color map.

Humus already explained that, re-read the previous posts.

Adruab
06-15-2004, 12:37 PM
lol, wow, I can't believe I missed that (I read it, I just misinterpreted it...). Sorry guys. So yeah, it would just be the multiple translucent layers that would be an issue. Cool...

Has anyone looked into solving that problem?