floating-point depth cube maps

Hi,
I'm trying to create a floating-point depth cube map on my ATI V7350 using the code below, but it fails. Is this format supposed to be supported? I couldn't find any information specific to ATI cards anywhere.

  
	assert(glGetError() == GL_NO_ERROR);

	GLuint l_texture;
	GLsizei l_size = 512;
	glGenTextures(1, &l_texture);
	// bind first -- glTexParameteri operates on the currently bound
	// texture, so setting parameters before the bind affects whatever
	// cube map happened to be bound (or none at all)
	glBindTexture(GL_TEXTURE_CUBE_MAP, l_texture);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	// GL_CLAMP_TO_EDGE avoids border sampling at cube face seams;
	// plain GL_CLAMP can pull in the border color
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

	for (int i = 0; i < 6; i++)
	{
		glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT + i, 0, GL_DEPTH_COMPONENT16_ARB, l_size, l_size, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
		assert(glGetError() == GL_NO_ERROR); // asserts! GL_INVALID_OPERATION
	}

Thanks.

Look at this:
http://www.gamedev.net/community/forums/topic.asp?topic_id=381998&forum_id=25&gforum_id=0
Or this:
http://www.beyond3d.com/forum/archive/index.php/t-18802.html

Depth-format cube maps aren't supported on that generation of ATI hardware. You will probably have to render into a FLOAT16 color cube map instead and write the depth values yourself from a fragment shader.
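A minimal sketch of that workaround: create the six cube faces with a float color format (e.g. GL_RGBA16F_ARB from GL_ARB_texture_float) instead of a depth format, attach each face to an FBO as a color attachment, and have the fragment shader store the distance to the light. The uniform and varying names below are illustrative, not from the original post:

```glsl
// Shadow-pass fragment shader: instead of relying on a depth cube map,
// write the fragment's linear distance from the light into a FLOAT16
// color target. u_lightPos and v_worldPos are assumed to be supplied
// by the application and vertex shader respectively.
uniform vec3 u_lightPos;   // light position in world space
varying vec3 v_worldPos;   // interpolated world-space position

void main()
{
    // store linear distance in the red channel; the lighting pass
    // samples the cube map with the light-to-fragment vector and
    // compares the stored distance against its own
    float dist = length(v_worldPos - u_lightPos);
    gl_FragColor = vec4(dist, 0.0, 0.0, 1.0);
}
```

Storing linear distance (rather than projected depth) also sidesteps the per-face projection mismatch when sampling the cube map in the lighting pass.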