Nvidia drivers for Linux

Hi everyone,

I’m writing a raytracer on the GPU.

The problem is I want to use FBOs. I am a registered developer, but not a registered workstation developer (I’ve had no response on that application). I would like to use the latest NVIDIA drivers (>76), but I haven’t been able to find them anywhere …

Does anybody here have any idea where to find them?

Thanx in advance,
rob(dot)van(dot)dyck(ad)gmail(dot)com

A link to my current output:
http://lumumba.luc.ac.be/sivlardemalle/bugatti800_600.png

Rob.

Did you get the driver in the end?
If you did, please share it with me. I have been looking for it for a long, long time ~_~
email : stanlylee@126.com

NVIDIA has released their new driver for Linux: 76.64

Hi,

No, I haven’t been able to find them; I google for them every day …

Regards,
Rob.

Cool, they released it yesterday.
http://www.nvidia.com/object/linux_display_ia32_1.0-7664.html

I visit the NVIDIA website every day, but apparently missed it yesterday :frowning:

Hello!

This driver seems to have a GLSL bug.

When I use a GLSL shader with the discard command in it,
my render window stays black and my app seems to be “trapped” in the driver; I have to kill it manually.
The shader compiled, linked and was validated without a problem or any warnings.

Could someone try to reproduce this bug to make sure that it is not my machine / my fault?
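
In case it helps anyone trying it, this is the general pattern I mean by “a shader with the discard command” (just a minimal sketch, not my actual shader; the sampler name is arbitrary):

uniform sampler2D t_color;

void main()
{
	vec4 color = texture2D(t_color, gl_TexCoord[0].xy);

	// throw away fragments below an alpha threshold
	if (color.a < 0.5)
		discard;

	gl_FragColor = color;
}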

Thx.

Edit: the previous driver worked fine

I have the latest NVIDIA drivers installed under Linux,
and shaders with the discard command work fine for me. The test example is the osgshaders example from the OpenSceneGraph-0.9.9 distribution.

Running glxinfo on my system gives me:

OpenGL renderer string: GeForce FX 5900XT/AGP/SSE2/3DNOW!
OpenGL version string: 2.0.0 NVIDIA 76.64

Originally posted by namespace:

Hello!

This driver seems to have a GLSL bug.

When I use a GLSL shader with the discard command in it, my render window stays black and my app seems to be “trapped” in the driver; I have to kill it manually. The shader compiled, linked and was validated without a problem or any warnings.

Could someone try to reproduce this bug to make sure that it is not my machine / my fault?

Thx.

Edit: the previous driver worked fine

OpenGL renderer string: GeForce 6800 GT/AGP/SSE2
OpenGL version string: 2.0.0 NVIDIA 76.64

This is getting really strange. Here is my shader (simple parallax mapping):

uniform sampler2D t_base;            // base map: color (rgb), specular mask (a)
uniform sampler2D t_bump;            // bump map: normal (rgb), height (a)
uniform sampler2D t_attenuation_xz;  // xz-attenuation map
uniform sampler2D t_attenuation_y;   // y-attenuation map

uniform float u_specular_exp;        // as texcoord for t_spec
uniform float u_parallax_scale;      // height map scale for the parallax offset
uniform vec4 u_light_color;

varying vec3 v_to_viewer;            // vector to the viewer (tangent space)
varying vec3 v_to_light;             // vector to the light (tangent space)
varying vec3 v_half;                 // half vector (tangent space)

varying vec3 v_attenuation_sts;

void main()
{
	float attenuation = texture2D(t_attenuation_xz, v_attenuation_sts.xz).x;
	attenuation *= texture2D(t_attenuation_y, vec2(v_attenuation_sts.y, 0)).x;

	//if(attenuation <= 0.0)
	//	discard;

	// load the height from the bump map
	float height = texture2D(t_bump, gl_TexCoord[0].xy).a;

	vec2 texcoords = gl_TexCoord[0].xy;

	// normalize the input vectors
	vec3 to_viewer = normalize(v_to_viewer);
	vec3 to_light = normalize(v_to_light);
	vec3 half = normalize(v_half);

	// compute the parallax offset
	height = height * u_parallax_scale - u_parallax_scale * 0.5;	// scale and bias
	texcoords.x += height * to_viewer.x;	// offset x
	texcoords.y += height * to_viewer.y;	// offset y

	vec3 normal = texture2D(t_bump, texcoords).rgb;

	// load the color and specular values from the base map
	vec4 color = texture2D(t_base, texcoords);

	// remap the normals from [0,1] to [-1,1]
	normal -= 0.5;
	normal *= 2.0;

	// renormalize
	normal = normalize(normal);
	//normal.y = -normal.y;	// Doom 3 uses a y-flip

	// Phong
	const float diffuse = max(dot(normal, to_light), 0.0);	// N dot L, diffuse term
	const float specular = max(dot(normal, half), 0.0);	// N dot H, specular term

	color.rgb *= diffuse;
	color.rgb *= u_light_color.rgb;
	color.rgb += pow(specular, u_specular_exp);
	color.rgb *= attenuation;

	gl_FragColor = color;
}

I have tested this shader in my engine and in the Shader Designer from TyphoonLabs (the Linux version); both show the same behaviour.
As long as the

if(attenuation <= 0.0)
discard;

is commented out, everything works fine.
If you uncomment it and recompile, the application stops rendering and I have to restart the Shader Designer / my engine to make it respond again.
But that’s not all. If you comment out the

color.rgb *= diffuse;

instruction, the shader works again, even with the discard command.
I tried to replace the diffuse variable with the instructions
which calculate its value, but no luck.
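
For anyone who wants to try it without the whole shader, the combination that seems to matter is roughly this (a stripped-down sketch, not my full shader; I have only verified the behaviour with the complete shader above):

uniform sampler2D t_base;
uniform sampler2D t_attenuation_xz;

varying vec3 v_to_light;
varying vec3 v_attenuation_sts;

void main()
{
	float attenuation = texture2D(t_attenuation_xz, v_attenuation_sts.xz).x;

	// ingredient 1: the conditional discard
	if (attenuation <= 0.0)
		discard;

	vec4 color = texture2D(t_base, gl_TexCoord[0].xy);

	vec3 normal = vec3(0.0, 0.0, 1.0);
	float diffuse = max(dot(normal, normalize(v_to_light)), 0.0);

	// ingredient 2: multiplying the color by the diffuse term
	// (with the corresponding line commented out, the full shader renders again)
	color.rgb *= diffuse;

	gl_FragColor = color * attenuation;
}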

I’m really confused. :confused:

I almost forgot:

The discard shader (“lattice”) which ships with the Shader Designer works fine, even with the new driver.

I experience similar problems with the new NVIDIA graphics driver for Linux, but I haven’t been able to pinpoint the cause. Does anyone at NVIDIA know of a bug in the driver that might explain this? Being a driver issue, I don’t really know how one would go about debugging it.

I, too, had a problem with one of my shader demos freezing while using the newest NVIDIA Linux driver. I haven’t found the exact cause yet.

Do you both use GLSL, or are the “old” arb_program extensions affected too?

I have only observed a problem with one app that makes heavy use of GLSL, and no arb_vp/arb_fp. Still no luck tracking down the exact problem…

Same here.