ATI's Crossfire

ATI officially announced their multi-GPU solution today. It's called "Crossfire," and it's quite impressive. Here are links to two solid articles:
Crossfire has one significant advantage over Nvidia's SLI. Nvidia's solution requires support for each specific game to be written into the drivers, because the drivers "optimize" the rendering method for each supported game. That means a ton of games won't be supported at all, so that approach is great for benchmarks, but it's lousy for general gaming.
ATI's solution incorporates three possible rendering methods: AFR (alternate frame rendering), Supertiling, and Scissor. Alternate frame rendering is just what it sounds like: the GPUs render alternate frames. In Supertiling, each frame is broken up into 32x32-pixel tiles, and the two cards each render a "checkerboard" pattern of tiles that is then assembled into a complete frame. With Scissor, each GPU is given one half of the screen to render (top half/bottom half).
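To make the three load-splitting schemes concrete, here's a toy Python sketch of my own (not ATI driver code, just an illustration of how each method would decide which GPU handles a given frame or pixel):

```python
TILE = 32  # Supertiling carves the frame into 32x32-pixel tiles

def afr_gpu(frame_number):
    """Alternate Frame Rendering: the two GPUs take turns on whole frames."""
    return frame_number % 2

def supertiling_gpu(x, y):
    """Supertiling: the frame is cut into 32x32 tiles, and the GPUs take
    alternating tiles in a checkerboard pattern."""
    tile_x, tile_y = x // TILE, y // TILE
    return (tile_x + tile_y) % 2

def scissor_gpu(y, screen_height):
    """Scissor: GPU 0 renders the top half of the screen, GPU 1 the bottom."""
    return 0 if y < screen_height // 2 else 1
```

The trade-off is easy to see even from the sketch: AFR splits whole frames (great parallelism, but frame-to-frame dependencies can hurt), while Supertiling and Scissor split within a frame.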
Not all of these methods are supported in all games. Supertiling will not work for OpenGL games. There is a master database that will specify the optimal rendering method to use for each game, but (and this is an important "but") for games where there is no database entry, the default method will be AFR for OpenGL games and Supertiling for D3D games. That means all games (that use 3D acceleration) are supported and will benefit.
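The fallback behavior described above amounts to a simple lookup with a per-API default. A minimal sketch, assuming a database keyed by game title (the variable names and sample entries here are my own invention, not ATI's actual format):

```python
# Hypothetical profile database: game title -> preferred rendering method.
# The entries are made up for illustration.
PROFILE_DB = {
    "Example Game A": "Scissor",
    "Example Game B": "AFR",
}

def pick_method(game, api):
    """Return the rendering method for a game: the database entry if one
    exists, otherwise the per-API default (AFR for OpenGL, since
    Supertiling doesn't work there; Supertiling for D3D)."""
    if game in PROFILE_DB:
        return PROFILE_DB[game]
    return "AFR" if api == "OpenGL" else "Supertiling"
```

The key point is the final line: every game falls through to *some* working method, which is exactly the distinction from Nvidia's per-game approach.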
That's a real point of distinction for ATI's method. Now it may be that this is less significant than it seems, because Nvidia will probably unveil SLI 2 this fall, and surely they won't repeat their mistake. For now, though, I don't see any reason to choose Nvidia's solution over ATI's.
One additional little nugget: ATI has also added some higher-quality AA modes, up to 14X (previously, 8X was the highest level in ATI's drivers).
As a hardware geek, I love multi-GPU solutions. I still have fond memories (and believe it or not, "fond" is definitely the right word) of buying a second Voodoo 2 and using SLI for the first time. It was incredible. So I'm totally on board when it comes to this kind of hardware exotica.
However, there's a real problem with the practical application of this technology. How many games right now can even push one high-end video card (like the Radeon X850 XT)? I have an X800 XT-PE and an Athlon FX-51, and I ran Half-Life 2 at 1600x1200 with 4xAA and 16-tap AF without any trouble. The number of games that will actually benefit from this technology is a small, small subset of the PC games published each year.
Here's what I came up with from last year:
Flight simulators might be added to that list, but quite a few flight simulators are CPU bound, not GPU bound. And even with the four games on the list, I ran all of them at 1600x1200 with 16-tap AF with very few framerate hiccups (I think Far Cry had a few, but everything else was smooth). The installed PC base is so fractured in terms of specs that almost all developers target some kind of middle ground, not the high end. So I think it's legitimate to ask whether multi-GPU rendering is a solution in search of a problem.
However, I do think building a system around a multi-GPU motherboard is an excellent way to extend its useful lifespan. Instead of needing a new system every two years to stay on the high end, a three- or four-year-old system with multiple GPUs could conceivably still be faster than a top-end system with only one video card.