Raster Graphics and Vector Graphics

Hello Everyone,

I just read an article about raster and vector graphics, and a question suddenly popped into my mind. The article says that vector graphics basically specify equations/paths for objects or shapes, rather than pixels.

But isn’t all display hardware based on the concept of pixels?
Say we specify a line primitive using GL_LINES; ultimately it comes down to lighting particular pixels on the monitor, right?

So even with vector graphics, the specified shape ultimately has to be presented as a bunch of pixels, right?

Quoting from the article:

they have infinite resolution - no matter how large you expand or how small you contract the image, the math creating it holds up and it will always show smooth, clear edges and details. The little 36KB logo file mentioned above can be printed at ANY size

So how is vector graphics really that different from raster, given that all display hardware deals with pixels?

Could anyone kindly explain?
Thanks in advance.

Well, you are right that in the end you are dealing with pixels in both cases. However, in terms of representation and quality, the two methods are completely different. A raster image is just an array of pixels, while a vector image is a mathematical representation. The resolution of a raster image is finite and fixed when the image is created, whereas for a vector image it depends only on how finely you choose to evaluate its mathematical representation on the display.

To give you an example, let’s say I have a rectangle defined in a raster image (e.g. a bitmap) with width 100 and height 100. No matter what color I use to fill the rectangle, it needs 100x100 (width*height) pixels to represent it. In the case of a vector image (e.g. an EPS file), you just store two coordinate pairs: (left, top) and (right, bottom). The number of pixels needed is entirely dependent on the output device and the scale at which you evaluate the rectangle on that device.
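To make this concrete, here is a minimal C sketch of what each representation actually stores; the struct names are illustrative, not from any real library:

```c
#include <stdint.h>

/* Raster representation: every pixel is stored explicitly.
   A 100x100 RGBA rectangle costs 100 * 100 * 4 bytes, no matter
   how simple its contents are. */
typedef struct {
    int width, height;
    uint8_t *pixels;        /* width * height * 4 bytes (RGBA) */
} RasterImage;

/* Vector representation: only the description is stored.
   Two coordinate pairs and a fill color describe the same
   rectangle at any output resolution. */
typedef struct {
    float left, top;
    float right, bottom;
    uint8_t fill[4];        /* RGBA fill color */
} VectorRect;
```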

Another subtle difference pops up when you zoom in. To zoom into the raster image, you still only have the same 100x100 pixels, so you replicate rows and columns of pixels according to the magnification, which gives you the characteristic blockiness of a raster image. For a vector image, however, you can re-evaluate the color at a new interval based on the fill attribute assigned to the rectangle, so you do not get the blockiness and you get much higher visual quality. I hope this clears up the difference.
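And here is a sketch of the zoom difference, assuming the illustrative VectorRect above: rather than replicating stored pixels, every output pixel is tested against the mathematical description at whatever scale you like:

```c
#include <stdint.h>

/* Rasterize the vector rectangle at an arbitrary scale by mapping
   each output pixel back into the shape's coordinate space and
   evaluating the description there, instead of copying pixels. */
void rasterize_rect(const VectorRect *r, float scale,
                    uint8_t *out, int out_w, int out_h)
{
    for (int y = 0; y < out_h; ++y) {
        for (int x = 0; x < out_w; ++x) {
            /* Map the output pixel back to shape coordinates. */
            float px = (float)x / scale;
            float py = (float)y / scale;
            int inside = px >= r->left && px <= r->right &&
                         py >= r->top  && py <= r->bottom;
            uint8_t *dst = out + 4 * (y * out_w + x);
            for (int c = 0; c < 4; ++c)
                dst[c] = inside ? r->fill[c] : 0;
        }
    }
}
```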

Wonderful explanation :)

Thanks, mobeen.

Also, one more thing: can we call specifying vertices, as in OpenGL, “vector graphics”? Since we generally do not specify the shape in pixels. Can we call it that?

Thanks

Can we call specifying vertices, as in OpenGL, “vector graphics”?

You’re dangerously close to misusing the concept of “vector”.

A vector image is different from a mathematical “vector”. A math vector is an N-dimensional array of numbers; a position in 3D space is usually represented as a 3- or 4-dimensional vector.

You cannot pass a “vector image” as a position or other vertex attribute data. OpenGL cannot use vector images of any kind.

However, OpenGL is a rasterizer. Its primary purpose is to convert a mathematical representation of a surface into a raster image. So in effect, the OpenGL scene is a vector “image” already.
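To tie this back to the GL_LINES example from the original question, here is a minimal immediate-mode sketch (assuming a current GL context is already set up): the application supplies only the mathematical description, two endpoints, and OpenGL’s rasterizer decides which pixels to light:

```c
#include <GL/gl.h>

/* The application hands OpenGL a vector-style description of the
   line: two endpoints in normalized device coordinates. Which
   pixels end up lit is decided entirely by the rasterizer. */
void draw_line(void)
{
    glBegin(GL_LINES);
    glVertex2f(-0.5f, -0.5f);   /* start point */
    glVertex2f( 0.5f,  0.5f);   /* end point   */
    glEnd();
}
```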

Thanks for the reply, Alfonse Reinheart.

You’re dangerously close to misusing the concept of “vector”.

I’m sorry, I meant vector graphics, not the mathematical entity.
Thanks for the clarification.

You could always procedurally generate an image in the fragment shader that behaves like a vector image, rather than looking up a texture.
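For instance, something along these lines: a fragment shader, shown here as a GLSL source string as it might be handed to glShaderSource, that evaluates a circle mathematically per fragment instead of sampling a texture (the uniform and varying names are made up for illustration):

```c
/* A "vector-like" procedural circle: the shape is re-evaluated per
   fragment, so it stays smooth at any zoom. Assumes a vertex shader
   elsewhere writes v_pos. Compiled via glShaderSource/glCompileShader. */
static const char *circle_fs =
    "#version 120\n"
    "uniform vec2  u_center;\n"
    "uniform float u_radius;\n"
    "uniform vec4  u_color;\n"
    "varying vec2  v_pos;      /* interpolated position */\n"
    "void main() {\n"
    "    float d = distance(v_pos, u_center);\n"
    "    /* smoothstep gives an antialiased edge at any scale */\n"
    "    float a = 1.0 - smoothstep(u_radius - 0.01, u_radius, d);\n"
    "    gl_FragColor = vec4(u_color.rgb, u_color.a * a);\n"
    "}\n";
```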

If you wanted to draw multiple shapes in the same shader, I guess you could pass in a list of required shapes plus positions/rotations as uniforms and work out the contribution of each shape to the final output color. It’s not very efficient though - a related technique, texture bombing, is described at http://http.developer.nvidia.com/GPUGems/gpugems_ch20.html
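A rough sketch of that idea, extending the single-circle shader above with uniform arrays (again, MAX_SHAPES and the uniform names are illustrative, and older hardware may dislike the dynamic loop):

```c
/* Loop over a uniform array of circles and composite each shape's
   contribution into the final fragment color. */
static const char *shapes_fs =
    "#version 120\n"
    "#define MAX_SHAPES 16\n"
    "uniform int   u_count;\n"
    "uniform vec2  u_centers[MAX_SHAPES];\n"
    "uniform float u_radii[MAX_SHAPES];\n"
    "uniform vec4  u_colors[MAX_SHAPES];\n"
    "varying vec2  v_pos;\n"
    "void main() {\n"
    "    vec4 color = vec4(0.0);\n"
    "    for (int i = 0; i < MAX_SHAPES; ++i) {\n"
    "        if (i >= u_count) break;\n"
    "        float d = distance(v_pos, u_centers[i]);\n"
    "        float a = 1.0 - smoothstep(u_radii[i] - 0.01, u_radii[i], d);\n"
    "        color = mix(color, u_colors[i], a);\n"
    "    }\n"
    "    gl_FragColor = color;\n"
    "}\n";
```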

Alternatively you could render one element at a time and issue more draw calls - possibly faster even with the extra draw calls, but you’d have to test. There could be better ways to do this; instancing, maybe?
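A minimal sketch of the instancing variant, assuming a GL 3.1+ context with a loader such as GLEW and a unit-quad VAO built elsewhere:

```c
#include <GL/glew.h>   /* any GL 3.1+ function loader would do */

/* One draw call rasterizes num_shapes quads; the vertex shader is
   expected to offset each instance by using gl_InstanceID to index
   a uniform array of per-shape positions/rotations. quad_vao is a
   vertex array object for a unit quad, created elsewhere. */
void draw_shapes_instanced(GLuint quad_vao, GLsizei num_shapes)
{
    glBindVertexArray(quad_vao);
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, num_shapes);
}
```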

If you were going to try to use a vector graphics file format with OpenGL, you’d need to find a way to convert each possible element of the file into GLSL code, and each instance of a shape into data that can be passed to a shader via uniforms (or equiv).

Why write fragment shaders to build vector shapes when you can convert the shapes to polygons? You’d benefit from hardware antialiasing, etc.

If you want to zoom into a section of a circle/curved surface/complex shape retrieved from a vector graphics file while using polygons, you’d have to keep (procedurally) generating new vertices so it appears smooth when zoomed in and isn’t too slow when zoomed out.
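One possible sketch of that in C: choose the segment count from the current zoom so the chord error stays small, with emit_vertex standing in for however the application batches geometry:

```c
#include <math.h>

/* Pick the segment count so the circle still looks smooth at the
   current zoom: more magnification means more on-screen pixels per
   unit of arc, so more vertices are needed. emit_vertex is a
   placeholder for however the application collects geometry. */
void tessellate_circle(float cx, float cy, float radius, float zoom,
                       void (*emit_vertex)(float x, float y))
{
    const float PI = 3.14159265f;
    /* Aim for roughly one vertex every 8 on-screen pixels of arc. */
    int segments = (int)ceilf(2.0f * PI * radius * zoom / 8.0f);
    if (segments < 8) segments = 8;

    for (int i = 0; i < segments; ++i) {
        float a = 2.0f * PI * (float)i / (float)segments;
        emit_vertex(cx + radius * cosf(a), cy + radius * sinf(a));
    }
}
```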

If it’s all straight edges, then polygons would definitely be better, and using triangles to represent curved surfaces is often good enough - but you’d have to see whether any “vertex popping” that occurs is acceptable/hideable.

Could a geometry shader do this in realtime?

But isn’t all display hardware based on the concept of pixels?

They are now, but prior to 1980, vector graphics terminals (which drew lines directly on the screen by controlling the X-Y deflection coils) were quite common.