By Phillip Ross
About a year ago a company named 3Dfx Interactive announced a new 3D chipset that would change the way people thought about 3D graphics on PCs. Prior to the release of 3Dfx's chipset, there was no reasonably priced way to get high-end 3D graphics on a home computer system. High-end graphics could only be found on expensive workstations such as SGIs and Suns, which relied on expensive, specialized hardware. PC video adapter manufacturers such as Number Nine, Diamond Multimedia, and Matrox had affordable video adapters on the market that were known to have 3D capabilities, but these capabilities were very limited compared to the 3D hardware found on SGIs and other workstations.
With the release of 3Dfx's chipset, called the Voodoo 3Dfx, high-end 3D graphics became affordable for the average computer user. The performance of this chipset is undoubtedly a benchmark in the everlasting endeavor to bring high-end 3D graphics to the home desktop. Previously, the 3D capabilities of other video chipsets were limited to z-buffering and Gouraud shading (some less primitive ones supported limited texture mapping), and even these capabilities were often only available at certain resolutions or color depths. Unfortunately, a lot of CPU intervention was needed to put some of these capabilities to work for an application.
Here is an example of how one of these video adapters might be limited. It might claim to support z-buffering, but the support may amount to nothing more than code in the video drivers that stores the z-coordinates of pixels in video memory that is not otherwise being used. This z-buffering might only be usable at lower resolutions and color depths, because higher modes consume more memory and leave too little free to store the z-coordinates. Even with the z-coordinates stored in video memory, the application would still need to use the CPU to compare the z-coordinate of each new pixel against the z-buffer in video memory. Since these compares require a significant amount of CPU power, the z-buffering capability of the video adapter does not really eliminate the classic z-buffer bottleneck. If this is too much information, don't worry... it is a technical explanation that is not needed to see the differences between video adapters that claim to support 3D and the 3Dfx chipset.
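To make the bottleneck concrete, here is a minimal sketch in C of the kind of per-pixel depth test the CPU ends up doing in such a scheme. The names and buffer layout are hypothetical and are only meant to show the compare that dedicated hardware would otherwise perform:

/* Hypothetical CPU-side depth test. A real driver would be reading and
   writing video memory, but the per-pixel compare is the same. */
#define WIDTH  640
#define HEIGHT 480

static unsigned short zbuf[WIDTH * HEIGHT]; /* depth values, smaller = closer */
static unsigned short fbuf[WIDTH * HEIGHT]; /* 16-bit color values */

void plot_pixel(int x, int y, unsigned short color, unsigned short z)
{
    int i = y * WIDTH + x;
    if (z < zbuf[i]) {      /* the compare the CPU must do for every pixel */
        zbuf[i] = z;
        fbuf[i] = color;
    }
}

Multiply that by every pixel of every triangle drawn each frame and it is easy to see why doing it on the CPU hurts.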
It is sufficient to say that the Voodoo 3Dfx has very advanced 3D capabilities that could not be matched by any other video chipset manufacturer at the same price. The chipset boasts general 3D capabilities such as Gouraud shading, depth buffering (z-buffering as well as w-buffering), alpha blending, fogging, chroma-keying, and dithering. It is also capable of true perspective-correct texture mapping, trilinear mipmapping, texture lighting, sub-pixel correction, and texture space decompression. All of these capabilities are implemented completely in hardware with little or no intervention from the application. The application is only responsible for setting up the 3D graphics environment and for coordinate setup. Currently the Voodoo 3Dfx does not support coordinate setup, but this is not usually a problem, as modern CPUs provide ample processor power for these computations. Note, though, that the next-generation 3Dfx chipset, named the Voodoo2, will have this capability, providing even more speed for graphics. The current test models of video adapters using the Voodoo2 are breaking even higher speed records! Most importantly, though, the price versus performance of the current Voodoo chipset is its outstanding feature.
Unfortunately, the Voodoo 3Dfx chipset does have its limitations. The primary limitation is that it can only do full-screen rendering, and the chipset cannot be used effectively as the only adapter in a computer. Video adapters that use the 3Dfx chipset need another stand-alone video card working alongside them. The operating system uses the normal stand-alone video adapter as usual, but when an application wants to use the 3Dfx chipset for rendering, it accesses the 3Dfx driver to initialize the 3Dfx. The 3Dfx then kicks in and begins to render according to the Glide functions called by the application.
In a normal configuration without a 3Dfx adapter, the stand-alone adapter sits in its own bus slot on the motherboard and its output is connected to the monitor, which displays the adapter's signal. 3Dfx video adapters such as the Monster3D and the Pure3D use a "pass-through" system that allows both a stand-alone video adapter and the 3Dfx video adapter to use the same monitor. In such a configuration both video adapters are put into slots on the motherboard. The output of the stand-alone video adapter is connected to an input on the 3Dfx adapter via a pass-through cable that is usually included with the 3Dfx adapter, and the monitor is connected to the output of the 3Dfx adapter. When running normally, the stand-alone video adapter produces its usual video signal, which goes to the 3Dfx adapter's input, and the 3Dfx passes that signal through to the monitor. When an application tells the 3Dfx driver to initialize the 3Dfx, the 3Dfx adapter shuts off the pass-through, the stand-alone video adapter's signal no longer reaches the monitor, and the 3Dfx adapter begins sending its own video signal instead. All rendering done by the 3Dfx adapter goes to the monitor until the application issues a shutdown command to the driver to de-initialize the 3Dfx, at which point the stand-alone adapter's video signal is sent to the monitor again.
Unfortunately, with the pass-through configuration the normal video adapter's output is not visible, and this can cause problems for applications that use a windowing system such as X and want to render into a window. There is another method supported by 3Dfx adapters that can be better for applications like these, but it requires an additional monitor. If you have two monitors, you can connect one to the output of the normal stand-alone video card and the other to the output of the 3Dfx adapter. This way, one monitor always displays the video signal of your stand-alone adapter. The other monitor displays nothing until the 3Dfx adapter is initialized. Once the 3Dfx kicks in, you can use one monitor for your normal windowing system while the other provides fullscreen output from the 3Dfx.
There is a newer chipset made by 3Dfx, the 3Dfx Rush, which is able to render into a window. Video adapters with this chipset also contain a 2D chipset built alongside the Rush, and the two share a common framebuffer. Since there is currently no Linux support for it, it will not be covered here, but development is in progress.
Another limitation of the 3Dfx chipset is that it cannot reach the higher resolutions that today's stand-alone video cards can. While stand-alone video cards now support resolutions of 1280x1024, 1600x1200, and even higher, the 3Dfx video adapters generally do not go above 640x480. This is not as much of a limitation as one might think, though. With the advanced anti-aliasing and texture filtering capabilities of the 3Dfx, a large number of objects can be squeezed into a 640x480 display with virtually no pixelization. In fact, it is often difficult to identify the resolution a 3Dfx application is using just by looking at the display!
The more common base-level 3Dfx cards such as the Monster3D can only reach 640x480. I believe some can be pushed to 800x600, but I've been told that at this resolution the adapter loses the ability to do depth and alpha buffering, since the memory that would normally hold those buffers is used for the extra resolution. Higher-end 3Dfx adapters, such as the models from Quantum3D, can support 800x600 without disabling depth or alpha buffering.
3Dfx Interactive is the manufacturer of this high-performance 3D chipset; however, they do not manufacture the video adapters that use it. Other companies such as Diamond Multimedia, Orchid Technology, and Canopus Corporation all make video adapters that use the chipset. Diamond makes the Monster3D, Orchid makes the Righteous3D, and Canopus makes the Pure3D. A company by the name of Quantum3D branched off from 3Dfx and offers video adapters that use the advanced configurations of the Voodoo (multiple PixelFX and TexelFX units, more framebuffer or texture RAM, etc.). These models are known as the Obsidian 3D models. Check out the 3Dfx Interactive website (www.3dfx.com) for a full list of manufacturers that make video adapters using the 3Dfx chipset.
The Voodoo chipset can actually be viewed as an advanced, flexible graphics rendering engine made up of separate Voodoo subsystems. These subsystems can come in many combinations, but the simplest configuration is a single Voodoo subsystem. Each subsystem is made up of separate rendering processors known as PixelFX and TexelFX units. The PixelFX unit is responsible for pixel operations such as depth buffering and Gouraud shading. The TexelFX unit is responsible for texture operations such as texture filtering and projection. The two units can work together to produce effects such as lighted textures. Each unit also has its own video memory that it uses for its specialized operations: the PixelFX uses its memory to store pixels for the framebuffer, and the TexelFX uses its memory to store textures.
Each Voodoo subsystem configuration contains one PixelFX unit, but a subsystem can be configured with one, two, or three separate TexelFX units to increase the speed of texture mapping. Furthermore, a Voodoo engine can be configured with multiple subsystems and use scanline interleaving to effectively double the rendering rate of the engine. These advanced configurations are capable of providing higher performance than even high-end SGI workstations, as has been shown in test cases. Of course, they are quite a bit more expensive than the simple configuration used in the popular Voodoo video adapters and are overkill for most users.
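Glide (described below) lets a program ask how a particular board is configured, including how many subsystems and TexelFX units are present. The fragment below is only a sketch; the structure and field names are from the Glide 2.x headers as I remember them, so check glide.h in the SDK before relying on them:

#include <glide.h>
#include <stdio.h>

int main(void)
{
    GrHwConfiguration hw;

    grGlideInit();
    if (!grSstQueryHardware(&hw)) {
        printf("no Voodoo hardware found\n");
        return 1;
    }
    /* One entry per Voodoo subsystem; each reports its own TexelFX count. */
    printf("%d Voodoo subsystem(s) detected\n", hw.num_sst);
    printf("subsystem 0 has %d TexelFX unit(s)\n",
           hw.SSTs[0].sstBoard.VoodooConfig.nTexelfx);
    grGlideShutdown();
    return 0;
}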
3Dfx Interactive will not publish register-level programming documentation for their chipset, for fear of making reverse engineering of their hardware easier. Instead, they distribute an SDK, called Glide, which acts as a software "micro-layer" on top of the hardware. Glide is a set of functions organized into a software library that hides register specifics while providing a relatively easy API for programming the 3Dfx chipsets with very little overhead. The libraries are ported to the platforms (including Linux) which 3Dfx chooses to support, along with very detailed documentation for the API. Developers can then use the API to interface their own 3D applications with the 3Dfx. Unlike OpenGL or Direct3D, Glide is a very low-level graphics library. It does not provide any high-level 3D graphics capabilities like display lists or transformation functions. It merely provides a small abstraction over the registers of the 3Dfx, and only provides software functions that are directly implemented in hardware on the 3Dfx. I've spoken to the individual who ported Glide to Linux, and he says the library is very simple: basically, you pass the correct parameters to the Glide functions, and the functions merely push the parameters into registers on the card and tell the 3Dfx to render.
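To give a feel for just how low-level Glide is, here is a rough sketch of a minimal Glide 2.x program that clears the screen and draws a single Gouraud-shaded triangle. It is based on my reading of the SDK documentation rather than tested code, so the color-combine setup and the exact grSstWinOpen parameters should be double-checked against the Glide manual:

#include <glide.h>

int main(void)
{
    GrHwConfiguration hw;
    GrVertex a, b, c;

    grGlideInit();
    if (!grSstQueryHardware(&hw))
        return 1;
    grSstSelect(0);                     /* use the first Voodoo subsystem */

    /* Take over the screen at 640x480; this is the point where the
       pass-through switches away from the stand-alone card. */
    grSstWinOpen(0, GR_RESOLUTION_640x480, GR_REFRESH_60Hz,
                 GR_COLORFORMAT_ABGR, GR_ORIGIN_LOWER_LEFT, 2, 1);

    /* Tell the PixelFX to use the iterated (per-vertex) color. */
    grColorCombine(GR_COMBINE_FUNCTION_LOCAL, GR_COMBINE_FACTOR_NONE,
                   GR_COMBINE_LOCAL_ITERATED, GR_COMBINE_OTHER_NONE, FXFALSE);

    grBufferClear(0, 0, GR_WDEPTHVALUE_FARTHEST);

    /* Screen-space vertices; the application has already done all of the
       transformation and coordinate setup itself. */
    a.x = 160.0f; a.y = 120.0f; a.r = 255.0f; a.g =   0.0f; a.b =   0.0f;
    b.x = 480.0f; b.y = 120.0f; b.r =   0.0f; b.g = 255.0f; b.b =   0.0f;
    c.x = 320.0f; c.y = 360.0f; c.r =   0.0f; c.g =   0.0f; c.b = 255.0f;

    grDrawTriangle(&a, &b, &c);
    grBufferSwap(1);                    /* show the back buffer */

    grGlideShutdown();
    return 0;
}

Notice that there is no geometry processing anywhere in this code; every function maps almost directly onto a hardware operation.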
This is not to say that OpenGL or Direct3D developers cannot develop 3Dfx applications. OpenGL and Direct3D drivers have been written on top of Glide, so developers can use the OpenGL or Direct3D API and the driver will translate the high-level function calls and operations into Glide-specific operations that in turn drive the 3Dfx. This is a very fast and efficient method for development.
A driver that interfaces Mesa (the free implementation of OpenGL which runs under many operating systems) with Glide has been written, allowing OpenGL applications to run under Linux and Windows 95 with hardware support. Linux is free, the compilers for Linux are free, Mesa is free, and the Glide SDK from 3Dfx is free, so this combination provides a very cost-effective, high-performance OpenGL development system! Unfortunately, there is no Glide SDK for Linux running on Alpha or SPARC CPUs, so this only applies to the Intel 386 platform.
At the time of this writing the newest version of Mesa is 2.5, and beta versions of 2.6 are in testing. The Mesa driver is quite advanced and is capable of accelerating point, line, and polygon rendering with flat shading and Gouraud shading, as well as texture mapping, depth buffering, fogging, and blending. Although I mentioned before that the Voodoo 3Dfx chipset is not capable of anything but full-screen rendering, it is possible to get in-window rendering thanks to a small hack in the Mesa driver. This hack takes the data from the framebuffer of the 3Dfx and transfers it across the PCI bus into the video RAM of the stand-alone video card. Even though this is not as fast as fullscreen 3Dfx rendering, it is still much faster than Mesa's software rendering alone.
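One nice consequence is that an application written for Mesa needs nothing 3Dfx-specific in its source. The sketch below is an ordinary OpenGL/GLX program that draws one Gouraud-shaded triangle; whether it ends up on the Voodoo in fullscreen, in a window through the copy-back hack, or entirely in software is decided at run time by the environment variables described later. Error checking is omitted, and the library names in the compile line will vary with how Mesa was installed:

/* A plain OpenGL/GLX triangle; Mesa decides at run time how it is rendered.
   Compile with something like: gcc tri.c -o tri -lMesaGL -lX11 -lm */
#include <GL/gl.h>
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <unistd.h>

int main(void)
{
    Display *dpy;
    XVisualInfo *vi;
    GLXContext ctx;
    XSetWindowAttributes swa;
    Window win;
    XEvent ev;
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };

    dpy = XOpenDisplay(NULL);
    vi  = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    ctx = glXCreateContext(dpy, vi, NULL, GL_TRUE);

    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    swa.event_mask = ExposureMask | KeyPressMask;
    win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 640, 480, 0,
                        vi->depth, InputOutput, vi->visual,
                        CWColormap | CWEventMask, &swa);
    XMapWindow(dpy, win);

    /* Wait until the window is actually on screen before drawing. */
    do { XNextEvent(dpy, &ev); } while (ev.type != Expose);

    glXMakeCurrent(dpy, win, ctx);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_TRIANGLES);              /* one Gouraud-shaded triangle */
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.8f, -0.8f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.8f, -0.8f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.8f);
    glEnd();
    glXSwapBuffers(dpy, win);

    sleep(5);                           /* keep the image up for a moment */
    return 0;
}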
Mesa is available for download from the Mesa FTP site at ftp://iris.ssec.wisc.edu/pub/Mesa. Mesa is distributed as two separate packages. One contains the main library and include files and starts with the name MesaLib, while the other is a package of demo files and begins with MesaDemos. To install, simply untar the packages and change directory into the directory created by untarring. From here you have a few choices when building Mesa. Starting in Mesa 2.5, a few transformation routines were rewritten in Intel 386 assembly language for a speed increase. Unfortunately they are a little buggy, but they have been fixed in the Mesa 2.6 beta versions. To build Mesa with 3Dfx support but without the assembly routines, type "make linux-glide" (without the quotes, of course). To build Mesa with 3Dfx support and the assembly routines, type "make linux-386-glide" (again, don't type the quotes on the command line). Starting with Mesa 2.6 there are compiler optimizations in the makefile that produce code optimized for use with the famous GLQuake and Quake II games! If you want GLQuake optimization, use "make linux-386-quake-glide" to build Mesa.
After Mesa is done building, there are a number of ways you can install it. One option is to put the files from the include and lib directories into /usr/include and /usr/lib, or perhaps /usr/local/include and /usr/local/lib. Or you can put them in any directory you want, so long as Linux's dynamic linker can find them. The directories the dynamic linker searches are configured in the /etc/ld.so.conf file. Because Mesa is developed at such a rapid pace and I like to test betas when they are released, I keep separate directories for each version of Mesa; whichever version I want to use, I change my ld.so.conf file to include the directory containing that version. Right now I have /usr/local/Mesa-2.5 holding the version 2.5 library (/usr/local/Mesa-2.5/lib) and include (/usr/local/Mesa-2.5/include) files, and for the 2.6 betas I use /usr/local/Mesa-2.6b1 or /usr/local/Mesa-2.6b2. However you do it is totally up to you, but there is a VERY important step you should not forget: any time you install new libraries or change the directories in /etc/ld.so.conf, you must run the ldconfig utility. It goes into the library directories, sets up the correct symlinks, and does a few other things. If you would like to see which libraries the linker currently knows about, you can use the -p option with ldconfig. When I want to know which version of Mesa an application will use, I type
ldconfig -p | grep Mesa

After you install Mesa you are ready to go. In order to run a Mesa application that uses the 3Dfx, you must execute it as root. You must also have the X server you are using set to a color depth of 16 bits per pixel. If you downloaded and unpacked the demos, they should have been compiled when you built Mesa, and you can test them out. There are three different ways a Mesa application can run. First of all, everything can be done with software rendering, which does not use the 3Dfx at all; this is the default. To get the application to use the 3Dfx, you have to set an environment variable. To run the application with the 3Dfx in fullscreen mode, you set
MESA_GLX_FX=fullscreen

and the program will use the 3Dfx. Using a Mesa program can be tricky inside X, though. When you run a Mesa program, the X server does not actually know that the 3Dfx even exists. What happens is that a window is initialized and opened, and the 3Dfx is initialized and begins rendering. If the mouse cursor goes outside of the window on the X server, the Mesa program will no longer be able to accept keyboard input or mouse events. Therefore, if you're not using more than one monitor, it is advisable to use interactive window placement if your window manager supports it. Otherwise, when you start the Mesa program, the 3Dfx will kick in and you will not be able to see your X desktop in order to place the mouse cursor back inside the window. It is possible to write a Mesa program that wraps the mouse cursor so that once the cursor is inside the window it cannot move back out: when it reaches the border of the window it is placed back into the middle of the window (this is known as mouse wrap, or mouse warping).
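The technique itself is just a few Xlib calls: watch pointer motion and, whenever the pointer strays toward the window border, push it back to the center with XWarpPointer(). The fragment below is only a sketch of that idea, not code taken from Mesa or Glide; the window size and function name are assumptions:

#include <X11/Xlib.h>

/* Re-center the pointer whenever it nears the edge of a 640x480 window.
   dpy and win are assumed to be the open Display and the program's window,
   and PointerMotionMask must have been selected on the window. */
void handle_motion(Display *dpy, Window win, XMotionEvent *ev)
{
    int border = 10;                 /* how close to the edge we allow */

    if (ev->x < border || ev->x > 640 - border ||
        ev->y < border || ev->y > 480 - border) {
        /* Put the pointer back in the middle of the window. */
        XWarpPointer(dpy, None, win, 0, 0, 0, 0, 320, 240);
    }
}

The program's event loop would call handle_motion() for each MotionNotify event and track relative mouse movement itself, since the warp generates a motion event of its own.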
To make the application use the 3Dfx to render into a window you have to set some other environment variables. You must set
SST_VGA_PASS=1

Then you set MESA_GLX_FX=window. After you set these values and run the program, you should get better performance during rendering than if you used only software rendering.