OpenGL 3.1 and Cg/CgFX
06-18-2009, 04:32 AM
I am building a 3D engine for a commercial title. We are targeting Windows, Linux and Apple platforms. The engine will be shader-only, and for now it supports (well.. kinda :P) D3D10 and OpenGL 3.1.
Because of this I was thinking of "playing" with CgFX, which is the only way (apart from creating your own FX system) for OpenGL to support MS-FX-like shaders.
My question is... does Cg/CgFX support OpenGL 3.1? For example, do the OpenGL 3.1 semantics (e.g. InstanceID) work with Cg? Is there something that is not supported? I never touched Cg before and I have no idea.
What do you suggest for a cross-platform (Windows, Linux, Mac) OpenGL 3.1 / D3D10 commercial 3D engine? Maybe Cg/CgFX is not a good idea?
Thanks for your time.
06-18-2009, 05:30 AM
Sadly, you cannot use CgFX if you plan to launch your engine on any ATI Radeon (or Radeon HD) card :p
The only CgFX profiles supported on Radeons are ARB vertex/fragment program 1.0 (which sucks)
and glslv/glslf (which is bugged, slow and almost unusable through the CgFX API).
The only reasonable solution is to write your own FX-like API around
D3D10 and GLSL. Writing your own FX compiler is not an easy task, but you will be truly platform independent.
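Such an FX-like layer can stay fairly small. Here is a minimal sketch of what the core data structures might look like (all names are hypothetical, not from any real engine): techniques own passes, each pass stores shader source per backend, so one effect file can feed both the GL and D3D10 renderers.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of an FX-like layer: an effect holds several
// techniques; each pass stores shader source per backend (GLSL/HLSL),
// so the same effect can feed both the GL and D3D10 renderers.
enum class Backend { GLSL, HLSL };

struct Pass {
    std::map<Backend, std::string> vertexSrc;
    std::map<Backend, std::string> fragmentSrc;
};

struct Technique {
    std::string name;
    std::vector<Pass> passes;
};

struct Effect {
    std::vector<Technique> techniques;

    // A technique is usable only if every pass carries source
    // for the active backend.
    bool supports(const Technique& t, Backend b) const {
        if (t.passes.empty())
            return false;
        for (const Pass& p : t.passes)
            if (!p.vertexSrc.count(b) || !p.fragmentSrc.count(b))
                return false;
        return true;
    }
};
```

The translation step (compiling the per-backend source with GLSL or HLSL) is where the real work is, but the container format itself is simple.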
06-18-2009, 10:54 AM
Yep. GLSL profiles were quite unusable in Cg a year ago. I am not sure about the current state of Cg, and I don't care. At that time, I decided to use the Cg compiler only for translating Cg to GLSL, dropped CgGL, and implemented the GLSL codepath myself (and the ARBVP1/FP1 profiles too!). This way, you have 100% control of the code. An experienced programmer could further improve it to support bindable_uniform or uniform_buffer_object (additional preprocessing of the GLSL code is required).
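That "additional preprocessing" step might look something like this sketch: scan the translated GLSL for loose `uniform` declarations and rewrite them as one named uniform block for GL_ARB_uniform_buffer_object. This is a deliberately simplified, assumption-laden transform; real Cg-generated GLSL needs more care (samplers, arrays, structs, layout qualifiers).

```cpp
#include <sstream>
#include <string>
#include <vector>

// Simplified sketch: gather loose "uniform <type> <name>;" lines
// (skipping samplers, which cannot live in a uniform block) and
// rewrite them as one named uniform block. Real Cg output needs
// much more careful parsing than this line-based scan.
std::string wrapUniformsInBlock(const std::string& src,
                                const std::string& blockName) {
    std::istringstream in(src);
    std::vector<std::string> body, members;
    std::string line;
    while (std::getline(in, line)) {
        bool isUniform = line.compare(0, 8, "uniform ") == 0;
        bool isSampler = line.find("sampler") != std::string::npos;
        if (isUniform && !isSampler)
            members.push_back("    " + line.substr(8)); // drop "uniform "
        else
            body.push_back(line);
    }
    std::ostringstream out;
    if (!members.empty()) {
        out << "uniform " << blockName << " {\n";
        for (const std::string& m : members) out << m << "\n";
        out << "};\n";
    }
    for (const std::string& b : body) out << b << "\n";
    return out.str();
}
```

Samplers stay as plain uniforms because uniform blocks cannot contain opaque types; everything else moves into the block so it can be backed by a single buffer object.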
06-18-2009, 02:29 PM
... I believe Cg profile gp4gp supports the INSTANCEID semantic (see the Cg manual for a complete list of all available profiles and their semantics) ...
06-22-2009, 06:22 AM
Yes, the INSTANCEID semantic is SUPPORTED, but the gp4 profile is supported ONLY on NVIDIA cards, so forget about Cg :)
06-23-2009, 02:14 AM
I have a strange feeling about relying on Cg at all. If you care about running your apps on many cards, I would choose GLSL for OpenGL. I had the same question half a year ago and decided to write my own OpenGL FX file format with techniques and fallbacks, which turned out to be not as difficult as expected. Now it runs on any card I tried (except Intel, of course!), falling back to the last technique in the hierarchy if needed.
p.s. I'm a big fan of FX files from the Direct3D 9 days.
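The fallback hierarchy described above can be sketched as: walk the techniques in order (most to least demanding) and take the first one whose requirements are met, otherwise fall back to the last technique unconditionally. Everything here is hypothetical; in a real engine the capability check would query the GL extension string.

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of technique fallback: techniques are ordered
// from most to least demanding; pick the first whose requirements the
// driver meets, else fall back to the last (simplest) technique.
struct FxTechnique {
    std::string name;
    std::vector<std::string> requiredExtensions;
};

int pickTechnique(const std::vector<FxTechnique>& techs,
                  const std::function<bool(const std::string&)>& hasExt) {
    for (size_t i = 0; i < techs.size(); ++i) {
        bool ok = true;
        for (const std::string& ext : techs[i].requiredExtensions)
            if (!hasExt(ext)) { ok = false; break; }
        if (ok) return static_cast<int>(i);
    }
    // Last technique in the hierarchy is the unconditional fallback.
    return techs.empty() ? -1 : static_cast<int>(techs.size()) - 1;
}
```

The `hasExt` callback stands in for a real lookup against `glGetString(GL_EXTENSIONS)` or `glGetStringi`, which keeps the selection logic testable without a GL context.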
06-23-2009, 01:55 PM
Funny thing is that if you code up your own flavor you'll probably end up with something very close to Cg.
I've been a proponent of the roll-your-own course, but have recently become lazy beyond all reckoning. I've personally found that ultra-high-level material/effect descriptions, free of the nuts and bolts of language specifics, are easily converted to the language flavor of choice and absolve the author of low-level hardware concerns. The idea is to describe what to draw rather than how to draw it. The cool thing about an abstraction like this is that it scales with technology: a future renderer can reinterpret the description through the lens of the latest tech, so it looks better over time. It's another way to distance/insulate yourself from the platform/shader quagmire, where content is concerned at any rate.
06-24-2009, 02:21 AM
Agreed. Your own flavor with GLSL support runs on any card, whatever GLSL version is supported; it was the perfect solution for ATI cards. Because I also had a basic Direct3D renderer with FX files, I was pulling my hair out trying to figure out how to make it as easy as possible to support shaders (FX files) in both renderers with minimal effort. CgFX used too many glGet calls at that time to apply renderstates and the like; the performance was beyond horrible. After all, the native feature is best, since Cg is an NVIDIA brand.
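One common way to avoid that glGet-heavy pattern is to shadow render state on the CPU and only touch the driver when a value actually changes. A minimal sketch, where the `apply` callback stands in for the real `glEnable`/`glDisable` calls:

```cpp
#include <functional>
#include <map>
#include <utility>

// Sketch of a CPU-side render-state shadow: instead of querying the
// driver with glGet before every change (slow), remember the last
// value we set and skip redundant driver calls entirely.
class StateCache {
public:
    explicit StateCache(std::function<void(int, bool)> apply)
        : apply_(std::move(apply)) {}

    // Returns true if the driver was actually touched.
    bool set(int state, bool enabled) {
        auto it = shadow_.find(state);
        if (it != shadow_.end() && it->second == enabled)
            return false;            // redundant: no driver call
        shadow_[state] = enabled;
        apply_(state, enabled);      // e.g. glEnable/glDisable in real code
        return true;
    }

private:
    std::function<void(int, bool)> apply_;
    std::map<int, bool> shadow_;
};
```

An FX runtime built on a cache like this applies a technique's renderstates by walking its own state list, never reading anything back from the driver.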