Vintage 3dfx Voodoo5 6000 Prototype GPU Fetches $15,000 Bid


erek ([H]F Junkie)
Mine still boots

“The Voodoo5 6000 was a marvel of a graphics card during its time, but the high production cost derailed its launch. In addition, the graphics card used a multi-chip (four VSA-100 chips) design where competing GeForce and Radeon models stuck to a single chip. Furthermore, the 128MB VRAM and external power brick didn't help the Voodoo5 6000's cause. Long story short, 3dfx eventually went bankrupt in 2002, and all that is left are these precious, unreleased Voodoo relics of the past.”



https://www.tomshardware.com/news/3dfx-voodoo5-6000-prototype-gpu-fetches-dollar5500-bid

https://hardforum.com/threads/3dfx-...rototype-210-0391-001-extremely-rare.2025388/
 
Always thought it would be an interesting case study to see how many VSA-100s could be crammed together onto a single modern die / package and what the scaling limits would be.
 

RIP that Intel PCI bridge.

Always thought it would be an interesting case study to see how many VSA-100s could be crammed together onto a single modern die / package and what the scaling limits would be.

VSA-100 officially supports up to 32 chips in SLI. Quantum3D made a few servers that I think used 32 chips and 1 or 2 GB of RAM (32 or 64 MB per VSA-100).

But more chips doesn't scale performance linearly. 3dfx's SLi is very inefficient; IIRC, every chip has to maintain a full copy of the frame buffer, meaning the geometry throughput doesn't scale at all. Only the fill rate is improved, and you can get better AA/AF the more chips you have.
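A rough back-of-the-envelope sketch of that scaling behavior (Python, with made-up geometry/fill timings, assuming the full-framebuffer-per-chip scheme described above):

```python
# Toy model of scan-line-interleave SLI scaling: every chip repeats the
# geometry work, so only the fill portion of the frame divides by N.
# The millisecond figures are hypothetical, purely for illustration.

def frame_time_ms(n_chips, geometry_ms=4.0, fill_ms=12.0):
    """Estimated frame time when each chip rasterizes every Nth scan line."""
    return geometry_ms + fill_ms / n_chips  # geometry doesn't scale, fill does

baseline = frame_time_ms(1)
for n in (1, 2, 4, 8, 32):
    t = frame_time_ms(n)
    print(f"{n:2d} chips: {t:5.2f} ms/frame, speedup {baseline / t:.2f}x")
# Speedup flattens out well below N, which is the non-linear scaling point.
```

Past a handful of chips the extra fill rate mostly goes into AA rather than raw frame rate, which lines up with the "only the fill rate is improved" point above.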
 
RIP that Intel PCI bridge.



VSA-100 officially supports up to 32 chips in SLI. Quantum3D made a few servers that I think used 32 chips and 1 or 2 GB of RAM (32 or 64 MB per VSA-100).

But more chips doesn't scale performance linearly. 3dfx's SLi is very inefficient; IIRC, every chip has to maintain a full copy of the frame buffer, meaning the geometry throughput doesn't scale at all. Only the fill rate is improved, and you can get better AA/AF the more chips you have.
Wasn't the Intel bridge the stable one, compared to the HiNT chip?

scope out this 32-way SLI VSA-100 setup:

 
Considering GPU makers are still struggling with the execution of multi-die GPUs, how well or poorly would this have functioned, and/or was it something the Glide API overcame in specifically coded games?
 
Considering GPU makers are still struggling with the execution of multi-die GPUs, how well or poorly would this have functioned, and/or was it something the Glide API overcame in specifically coded games?

You're talking about two completely different things. A multi-die ASIC is not the same as a multi-chip video card. The latter has been around since the beginning of the concept of video accelerator boards.

3dfx's SLi technology had been around since the very beginning with the Voodoo Graphics, or Voodoo1 as it was later called. The PC versions of the board didn't have SLi, but it was used in some PC-based arcade boards. It was first widely used on Voodoo2 boards, where two cards could be put in SLi. Professional high-end workstation Voodoo2 boards from Quantum 3D would put 4, 8 and I think 16 Voodoo2 chip combinations in SLi.

The beauty of 3dfx's SLi is that it was entirely done in hardware/drivers; the game didn't need to be specifically coded to use it. The downside is that it's grossly inefficient. The way that 3dfx implemented SLi makes each card/GPU maintain a full frame buffer and texture data, even if the card is only rendering every Nth line. So when you look at a 3dfx card and see it has, say, 64 MB or 128 MB of RAM, that's misleading because it's split between the GPUs. On a Voodoo5 5500, each GPU gets 32 MB, and on the 6000, each gets the same. This limits the card's performance through the smaller segmented memory pools, versus a single-GPU card with one memory pool of that size.

As for how well the Voodoo5 6000 performed, if you had a completely working stock card, it was somewhere between a Geforce 2 and 3 in performance. But since it was based on the same aging Napalm core as the 5500, it was feature limited. It still had none of the T&L functionality that both Nvidia and ATI had by that time. It also only supported DirectX 6, when Nvidia and ATI were already on 7 and working on 8. This would have been a big disadvantage since the market was rapidly moving to DX over OpenGL.

One of the fully decked out modified osckhar v5 6000s would probably put it in Geforce 3 territory in terms of performance, with the extra RAM and 200 MHz VSA100s.
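To put the memory-split point above in rough numbers (a simple sketch; the 32 MB-per-chip figure is from the post, while the 1024x768 double-buffered 16-bit framebuffer and the 128 MB single-GPU comparison card are hypothetical):

```python
# Rough arithmetic for why "128 MB" on a Voodoo5 6000 isn't 128 MB in practice.
# Per the post, each VSA-100 keeps its own full frame buffer plus a full copy
# of all textures, so unique texture capacity is per-chip, not per-card.

def usable_texture_mb(per_chip_mb, framebuffer_mb):
    """Unique texture storage left over, independent of how many chips there are."""
    return per_chip_mb - framebuffer_mb

# Hypothetical frame buffer: 1024x768, 16-bit color, front + back + Z buffers.
fb_mb = 1024 * 768 * 2 * 3 / (1024 * 1024)
print(f"frame buffer per chip: {fb_mb:.1f} MB")
print(f"Voodoo5 6000 (4 x 32 MB): ~{usable_texture_mb(32, fb_mb):.1f} MB of unique textures")
print(f"hypothetical single-GPU 128 MB card: ~{usable_texture_mb(128, fb_mb):.1f} MB")
```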
 
You're talking about two completely different things. A multi-die ASIC is not the same as a multi-chip video card. The latter has been around since the beginning of the concept of video accelerator boards.

3dfx's SLi technology had been around since the very beginning with the Voodoo Graphics, or Voodoo1 as it was later called. The PC versions of the board didn't have SLi, but it was used in some PC-based arcade boards. It was first widely used on Voodoo2 boards, where two cards could be put in SLi. Professional high-end workstation Voodoo2 boards from Quantum 3D would put 4, 8 and I think 16 Voodoo2 chip combinations in SLi.

The beauty of 3dfx's SLi is that it was entirely done in hardware/drivers; the game didn't need to be specifically coded to use it. The downside is that it's grossly inefficient. The way that 3dfx implemented SLi makes each card/GPU maintain a full frame buffer and texture data, even if the card is only rendering every Nth line. So when you look at a 3dfx card and see it has, say, 64 MB or 128 MB of RAM, that's misleading because it's split between the GPUs. On a Voodoo5 5500, each GPU gets 32 MB, and on the 6000, each gets the same. This limits the card's performance through the smaller segmented memory pools, versus a single-GPU card with one memory pool of that size.

As for how well the Voodoo5 6000 performed, if you had a completely working stock card, it was somewhere between a Geforce 2 and 3 in performance. But since it was based on the same aging Napalm core as the 5500, it was feature limited. It still had none of the T&L functionality that both Nvidia and ATI had by that time. It also only supported DirectX 6, when Nvidia and ATI were already on 7 and working on 8. This would have been a big disadvantage since the market was rapidly moving to DX over OpenGL.

One of the fully decked out modified osckhar v5 6000s would probably put it in Geforce 3 territory in terms of performance, with the extra RAM and 200 MHz VSA100s.
Hey thanks and good to know the diff.
 
One of the problems with this kind of multi-chip card technology is that it appeals to people by making them think this is the "last purchase they will ever need to make for a VERY long time." Which, as we have seen, doesn't hold up: next gen's midrange is often as fast as previous gen's top of the line. And if new features get added, well...

At the time, Voodoo 2 SLI was really something to get excited about, because 3dfx was king back then, Nvidia was still recovering from the Sega Saturn/NV1 fiasco, and SLI was brand new (and brand new things are cool). But once Nvidia dropped the TNT2, 3dfx tried to steal the show with the "green" "Voodoo bug" car giveaway at E3 (1999? I forgot), but the V3-3000 wasn't faster than V2 SLI, and it didn't support 32 bit color, which is something 3dfx could have easily chosen to support. But there were other problems, and there are plenty of articles and videos (even with the original executives) discussing what happened.

With a single card solution, you can just buy the fastest, at the earliest time, to get the most out of your money, and if SLI/Crossfire (at the time) is well supported, you can look for a sale on a second card or a used cheap card and get a drop-in upgrade. But now, for years, it's been best to just get the fastest single card solution you can afford. 3090s were great when they first came out, and now 4090s are awesome too. Of course, we all know the 4090 Ti will come out, be overpriced and probably be 20% faster than a 4090, then 6 months later, the 5090 will be out...
 
but the V3-3000 wasn't faster than V2 SLI, and it didn't support 32 bit color, which is something 3dfx could have easily chosen to support. But there were other problems, and there are plenty of articles and videos (even with the original executives) discussing what happened.

The Voodoo3 did support 32 bit rendering internally, but for whatever reason, they dithered the output to 22 bits using a box filter. 3dfx's "16 bit" color was actually better than 16 bit color from either Nvidia or ATI, because they retained more color information. The latter two used pure RGB565, which caused weird green color band artifacting because the green channel retained more color depth than the red and blue channels.

What made 3dfx's 22 bit color not great is that they used a box dithering filter that made the image soft.
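For reference, here's a minimal sketch (generic Python, not specific to any of these cards) of what RGB565 quantization does to a 24-bit color and how much precision each channel keeps:

```python
# Minimal RGB888 -> RGB565 -> RGB888 round trip: red/blue keep 5 bits
# (32 levels), green keeps 6 bits (64 levels).

def to_rgb565(r, g, b):
    """Pack an 8-bit-per-channel color into a 16-bit RGB565 word."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def from_rgb565(c):
    """Expand back to 8 bits per channel by bit replication."""
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

original = (200, 120, 57)
packed = to_rgb565(*original)
print(original, "->", hex(packed), "->", from_rgb565(packed))
```

After the round trip, red and blue can land up to roughly 7/255 away from the original, while green stays within about 3/255, which is the asymmetry being argued about here.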
 
The Voodoo3 did support 32 bit rendering internally, but for whatever reason, they dithered the output to 22 bits using a box filter.
There was a reason: they cheaped out on the framebuffer size to keep costs down. And the filtering happened post-framebuffer, right before the image was converted to analog and sent to your CRT via the RAMDAC. Thus, if you took a screenshot with a V3 you only ever saw the 16-bit final render. The alleged 22-bit filtering was staged right before converting the 16-bit digital image to what would ultimately be a lossy analog signal. One could question whether at this stage there was any "bitness" at all.

3dfx's "16 bit" color was actually better than 16 bit color from either Nvidia or ATI, because they retained more color information. The latter two used pure RGB565, which caused weird green color band artifacting because the green channel retained more color depth than the red and blue channels.
RGB565 is how they all stored their color information. The extra bit for the green channel is a good thing because human vision is more sensitive to green color than the others. The banding has nothing to do with how many bits the green channel had. The Voodoo 3 was also banding at 16-bit, go compare its framebuffer output to the others. Anyone rendering a final output to 16-bit, dithered or not, is going to have banding. The only real solution to color banding was to set your device to 32-bit output.

I've never been in the camp that thinks what 3dfx were doing with the Voodoo3's color output was anything special. They were literally trying to compensate for their inflexible framebuffer decision. The TNT2 and G400 also both render 32-bit internally then output to a dithered 16-bit. I could also argue that Matrox had them all beat with the higher quality RAMDAC but that's another topic. The difference with Nvidia/Matrox is they didn't need to resort to parlor tricks to convince their customers their render output is "just as good" because if their customers truly wanted the highest quality color output they'd just set the device output to 32-bit color depth and be done with it.

The real shame is that 3dfx was doing just fine performance-wise with the Voodoo 3, so it could have totally been a 32-bit card. But a stupid decision is a stupid decision.
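A small sketch of the dither-then-filter idea being argued about here (a generic 2x2 ordered dither and a 2x2 box average, assumed purely for illustration; not 3dfx's actual hardware filter):

```python
# Dithering to 5 bits makes neighbouring pixels alternate between adjacent
# quantization steps; averaging them back out recovers in-between values,
# which is where the "more than 16 bits of apparent depth" claim comes from,
# at the cost of the softness mentioned above.

BAYER_2X2 = [[0, 2], [3, 1]]  # generic ordered-dither matrix

def dither_quantize5(value, x, y):
    """Quantize an 8-bit value to 5 bits with a 2x2 ordered-dither bias."""
    biased = min(255, value + BAYER_2X2[y % 2][x % 2] * 2)
    return (biased >> 3) << 3

def box_filter_2x2(img, x, y):
    """Average a 2x2 block, standing in for the post-framebuffer smoothing step."""
    return sum(img[y + dy][x + dx] for dy in (0, 1) for dx in (0, 1)) / 4

value = 100  # a flat 8-bit level that 5 bits cannot represent exactly
dithered = [[dither_quantize5(value, x, y) for x in range(4)] for y in range(4)]
print("dithered rows:   ", dithered[0], dithered[1])        # alternating 96s and 104s
print("after box filter:", box_filter_2x2(dithered, 0, 0))  # back to ~100
```

Note the screenshot point above still holds: the framebuffer itself only ever contains the 16-bit values; any averaging happens on the way out to the display.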
 
RGB565 is how they all stored their color information. The extra bit for the green channel is a good thing because human vision is more sensitive to green color than the others. The banding has nothing to do with how many bits the green channel had. The Voodoo 3 was also banding at 16-bit, go compare its framebuffer output to the others. Anyone rendering a final output to 16-bit, dithered or not, is going to have banding. The only real solution to color banding was to set your device to 32-bit output.

I've already done the 16 bit quality comparison between the Voodoo3, TNT2, Rage and Matrox G200. In the games I play, all of them except the 3dfx cards have color banding. It was especially bad in Quake lineage engines, like Half-Life.

And the claim that the green channel having more bits of color depth doesn't cause green banding is false. You can force the effect to happen if you downsample an RGB888 texture into RGB565: the whole image shifts to have an ugly green tinge to it. You can also prevent it from happening by going down to 15 bit color, RGB555. I have to deal with this problem on a daily basis building game worlds and having to modify existing textures that have already been downsampled to RGB565, or compressed using DXT1 or DXT5, which essentially do the same thing, but are very lossy.
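For what it's worth, here's a minimal sketch of the 565-versus-555 asymmetry (Python, using a naive truncating conversion; a rounding conversion behaves better, which is roughly the counterpoint made in the reply below):

```python
# With naive truncation, green (6 bits) loses less than red/blue (5 bits),
# so a neutral gray comes back slightly green-heavy from RGB565.
# RGB555 treats all channels the same, so the gray stays neutral.

def truncate(value, bits):
    """Keep the top `bits` bits, then expand back to 8 bits by bit replication."""
    q = value >> (8 - bits)
    return (q << (8 - bits)) | (q >> (2 * bits - 8))

gray = (100, 100, 100)
as_565 = tuple(truncate(v, b) for v, b in zip(gray, (5, 6, 5)))
as_555 = tuple(truncate(v, b) for v, b in zip(gray, (5, 5, 5)))
print("original:          ", gray)
print("RGB565 round trip: ", as_565)  # green channel ends up relatively higher
print("RGB555 round trip: ", as_555)  # all channels equal, stays neutral
```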

I've never been in the camp that thinks what 3dfx were doing with the Voodoo3's color output was anything special. They were literally trying to compensate for their inflexible framebuffer decision. The TNT2 and G400 also both render 32-bit internally then output to a dithered 16-bit. I could also argue that Matrox had them all beat with the higher quality RAMDAC but that's another topic. The difference with Nvidia/Matrox is they didn't need to resort to parlor tricks to convince their customers their render output is "just as good" because if their customers truly wanted the highest quality color output they'd just set the device output to 32-bit color depth and be done with it.

Matrox may have had crystal clear output for the time, but their drivers were absolutely horrific. They barely supported DirectX, let alone OpenGL. It wasn't until many of their earlier cards were basically EOL that they got halfway decent 3D rendering API support.

I remember back when I had a Matrox G200, it had terrible problems. Graphical corruption was common, especially if the game tried to do translucency/transparency, which would often make the card crash and the system had to be rebooted.
 
I've already done the 16 bit quality comparison between the Voodoo3, TNT2, Rage and Matrox G200. In the games I play, all of them except the 3dfx cards have color banding. It was especially bad in Quake lineage engines, like Half-Life.
Were you comparing each via display output or framebuffer capture? Because it's easily demonstrable via framebuffer comparisons. Also, the game engine and artwork really matter, because a lot of old games used 16-bit and even 8-bit textures, which in pre-T&L days didn't help. Quake exclusively used 8-bit textures, which is why the original skybox looks like crap no matter what color depth you're working with.

And the claim that the green channel having more bits of color depth doesn't cause green banding is false. You can force the effect to happen if you downsample an RGB888 texture into RGB565: the whole image shifts to have an ugly green tinge to it. You can also prevent it from happening by going down to 15 bit color, RGB555. I have to deal with this problem on a daily basis building game worlds and having to modify existing textures that have already been downsampled to RGB565, or compressed using DXT1 or DXT5, which essentially do the same thing, but are very lossy.
This sounds like a lousy texture conversion. You'd have to royally screw the conversion process up to end up in a green tinged state. Adding (de)compression into the pipeline of that? Sure, maybe that will add artifacts. But to not muddy up the conversation, assume a non-lossy source.

Matrox may have had crystal clear output for the time, but their drivers were absolutely horrific. They barely supported DirectX, let alone OpenGL. It wasn't until many of their earlier cards were basically EOL that they got halfway decent 3D rendering API support.

I remember back when I had a Matrox G200, it had terrible problems. Graphical corruption was common, especially if the game tried to do translucency/transparency, which would often make the card crash and the system had to be rebooted.
Horrific? Nah...

Being a G400 MAX owner for years (and still currently the owner of a working unit), I can speak from first-hand experience. Their Direct3D drivers were solid. As solid as anyone else's at the time. If anything, their driver support for Direct3D was a selling point for the G400. Especially since they were the first to offer Environmental Bump Mapping. Feature-wise, the G400 was a solid Direct3D option. No driver crashing. Nowhere near the flaming dumpster you make it out to be. Sounds like you are ill-informed or missed the boat.

The pre-2000 era OpenGL ICD, eh...

Matrox's path to an OpenGL ICD was a roller coaster. They did initially provide a MiniGL (which just wrapped Direct3D calls) that supported all the popular OpenGL games, namely anything running on id Tech. And it worked pretty well for those titles. But compared to the TNT2's OpenGL support? Eh, the TNT2 in that era had the best OpenGL ICD, period. 3dfx couldn't even claim that. 3dfx didn't deliver an OpenGL ICD with the launch of the Voodoo 3 either. They offered a crappy MiniGL like Matrox did that effectively wrapped Glide and only targeted games. But Matrox also didn't have the advantage of a library of Glide-supported titles to fall back on. So 3dfx got a pass for lack of OpenGL support and Matrox got crucified for it. Eh, whatever. By the time either delivered an OpenGL ICD, neither was relevant anymore.

I never had crashing with my G400, but the early versions of the MiniGL did have graphic issues that took months to iron out. The full OpenGL ICD did fix this, but it didn't perform nearly as well as the MiniGL. So it was a pick your poison kind of deal.

If I had to go back and do it all over again? I'd have gone with a TNT2 Ultra for the superior gaming experience. Because 3dfx were not innovating anything with the introduction of the VSA-100. Avoiding 3dfx after the Voodoo 2 SLI era I do not regret one bit. I certainly don't feel like Matrox was ripping me off with the G400 MAX. I got my money's worth out of it in the end. It served as a capture card with its daughter board, so yeah, even well past its prime my G400 had more life in it.

Here is my G400 MAX still in its period appropriate PC:

(Photos attached.)
 
Sweet piece of history.
I remember buying Diablo 2 and it ran like absolute dog shit when I turned on the map overlay. Dad called Blizzard and they told him to buy a Voodoo 4 4500, something about hardware T&L. Years later I learned the card didn't support T&L... Either way, made the game run flawlessly, and that was my very first computer part upgrade. Had to work my ass off for that card. I do miss the days of sexy bitches on my GPU boxes.
I don't remember what graphics processor was in that Sony Vaio with the first gen Pentium 4...1.3GHz iirc. Looked like a PIII substrate with a PIV smashed on it. I think the Sony mobo had an ATi Rage or something on it, was terrible.
 
Were you comparing each via display output or framebuffer capture? Because it's easily demonstrable via framebuffer comparisons. Also, the game engine and artwork really matter, because a lot of old games used 16-bit and even 8-bit textures, which in pre-T&L days didn't help. Quake exclusively used 8-bit textures, which is why the original skybox looks like crap no matter what color depth you're working with.

Quake was not pure 8-bit; it used palettized textures. No single texture could use more than 256 colors, but the palette was not fixed, meaning different textures could use different palettes. Since most textures didn't have more than 256 unique colors, this wasn't really a big limitation. Half-Life was the same, except it allowed for 24-bit skyboxes via Targa or bitmap images.
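For anyone unfamiliar with indexed color, a minimal sketch of what a palettized texture is (the palette and texel data here are made up for illustration):

```python
# Each texel is an 8-bit index into a table of up to 256 full 24-bit colors,
# so "8-bit" limits the number of distinct colors per texture, not their depth.

palette = [(i, i // 2, 255 - i) for i in range(256)]   # hypothetical 256-entry RGB palette
texels = bytes([0, 17, 34, 255, 128, 64, 32, 16])      # 8-bit indices, one byte per texel

def texel_rgb(texels, palette, i):
    """Look up the 24-bit color of texel i from its 8-bit palette index."""
    return palette[texels[i]]

print([texel_rgb(texels, palette, i) for i in range(4)])
```

Whether the palette is shared or stored per texture is a format detail; the lookup works the same way either way.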

This sounds like a lousy texture conversion. You'd have to royally screw the conversion process up to end up in a green tinged state. Adding (de)compression into the pipeline of that? Sure, maybe that will add artifacts. But to not muddy up the conversation, assume a non-lossy source.

Welcome to the wonderful world of DXT texture compression. When modifying existing game textures, 99.9% of the time you don't have access to the original uncompressed texture source, so you must deal with texture degradation when making texture edits. If you plan on doing multiple edits, you make a new "master" uncompressed texture, but it will still have the same color shifting issues; it just won't cascade, since you avoid making an edit of a lossy edit of a lossy edit.
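To make the "essentially does the same thing" point concrete, here's a sketch of decoding a single DXT1 (BC1) block; the block contents below are hypothetical:

```python
# One 8-byte DXT1 block: two RGB565 endpoint colors plus sixteen 2-bit
# indices that blend between them, so DXT inherits RGB565's precision limits
# on top of its own 4x4-block artifacts.

import struct

def expand565(c):
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_dxt1_block(block):
    """Return 16 RGB texels for one 4x4 DXT1 block (opaque mode shown)."""
    c0, c1, indices = struct.unpack("<HHI", block)
    p0, p1 = expand565(c0), expand565(c1)
    if c0 > c1:  # four-color mode: two interpolated colors between the endpoints
        lut = [p0, p1,
               tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
               tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:        # three-color mode; index 3 would be transparent black
        lut = [p0, p1, tuple((a + b) // 2 for a, b in zip(p0, p1)), (0, 0, 0)]
    return [lut[(indices >> (2 * i)) & 0x3] for i in range(16)]

# Hypothetical block: red-ish and blue-ish endpoints, texels cycling through the 4 codes.
block = struct.pack("<HHI", 0xF800, 0x001F, 0b11100100_11100100_11100100_11100100)
print(decode_dxt1_block(block)[:4])
```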

Horrific? Nah...

Being a G400 MAX owner for years (and still currently the owner of a working unit), I can speak from first-hand experience. Their Direct3D drivers were solid. As solid as anyone else's at the time. If anything, their driver support for Direct3D was a selling point for the G400. Especially since they were the first to offer Environmental Bump Mapping. Feature-wise, the G400 was a solid Direct3D option. No driver crashing. Nowhere near the flaming dumpster you make it out to be. Sounds like you are ill-informed or missed the boat.

We're talking about two completely different cards. The G400 MAX is not the G200. None of what you said applies. My earlier Matrox Mystique and Millennium II cards were just as bad. And yes, I still have all three of those cards.
 
We're talking about two completely different cards. The G400 MAX is not the G200. None of what you said applies. My earlier Matrox Mystique and Millennium II cards were just as bad. And yes, I still have all three of those cards.
Yes, those early Matrox cards were horrible at 3D. But so was the entire market at that time, so I don't understand the need to single them out.
 
Quake was not pure 8-bit; it used palettized textures. No single texture could use more than 256 colors, but the palette was not fixed, meaning different textures could use different palettes. Since most textures didn't have more than 256 unique colors, this wasn't really a big limitation. Half-Life was the same, except it allowed for 24-bit skyboxes via Targa or bitmap images.
Which is why I specifically said Quake's textures were 8-bit. None of that matters because an 8-bit texture will still look like an 8-bit texture when viewed up close in Quake. It's even more pronounced on a skybox which is always at a fixed size and at a fixed position.

Welcome to the wonderful world of DXT texture compression. When modifying existing game textures, 99.9% of the time you don't have access to the original uncompressed texture source, so you must deal with texture degradation when making texture edits. If you plan on doing multiple edits, you make a new "master" uncompressed texture, but it will still have the same color shifting issues; it just won't cascade, since you avoid making an edit of a lossy edit of a lossy edit.
Texture compression was largely irrelevant in 1999 given most of the popular hardware didn't support it. Texture compression wouldn't be a thing until later mainly because games were slow to adopt it. Unreal didn't even support the S3TC texture pack until late 1999. And even that didn't make it magically work on hardware that didn't support the feature.

We're talking about two completely different cards. The G400 MAX is not the G200. None of what you said applies. My earlier Matrox Mystique and Millennium II cards were just as bad. And yes, I still have all three of those cards.
Good for you. I'm citing the video cards that the Voodoo 3 and the VSA-100 were trying to compete with. The G200 and any ancient 3D decelerator are pretty much irrelevant in this context.
 
Yes, those early Matrox cards were horrible at 3D. But so was the entire market at that time, so I don't understand the need to single them out.

Those "early" Matrox cards, like the Mystique, Millennium / Millennium II and G200, were released in the 1996-1998 time frame, which was the same time the Voodoo1, Voodoo2 and Banshee existed. Same with the ATi Rage series. Those cards had far fewer problems than the Matrox cards did from launch, and 3dfx, Nvidia and ATi also didn't take years to fix broken drivers.

Matrox doesn't get a free pass.

Which is why I specifically said Quake's textures were 8-bit. None of that matters because an 8-bit texture will still look like an 8-bit texture when viewed up close in Quake. It's even more pronounced on a skybox which is always at a fixed size and at a fixed position.

LOL. You don't know anything about the Quake engine. You're conflating perspective corrected unfiltered textures with color depth. And no, the Quake skybox was not fixed. It had two layers of parallax scrolling that could be changed for any number of different images.
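A tiny sketch of the two-layer scrolling idea (generic Python; the layer sizes and speeds are made up, not Quake's actual constants):

```python
# Two sky layers sampled with time-scrolled texture coordinates: the front
# (masked) layer drifts faster than the back layer, which reads as parallax.

def sky_uv(x, y, time, speed, size=128):
    """Scrolled texture coordinates for one sky layer at a given time."""
    return ((x + time * speed) % size, (y + time * speed) % size)

t = 10.0
back = sky_uv(40, 40, t, speed=2)    # slow background layer
front = sky_uv(40, 40, t, speed=8)   # faster foreground layer drifting over it
print("back layer UV:", back, " front layer UV:", front)
```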


Good for you. I'm citing the video cards that the Voodoo 3 and the VSA-100 were trying to compete with. The G200 and any ancient 3D decelerator are pretty much irrelevant in this context.

Good for you, winning a straw man argument against yourself that no one asked for to begin with.
 
Those "early" Matrox cards, like the Mystique, Millennium / Millennium II and G200, were released in the 1996-1998 time frame, which was the same time the Voodoo1, Voodoo2 and Banshee existed. Same with the ATi Rage series. Those cards had far fewer problems than the Matrox cards did from launch, and 3dfx, Nvidia and ATi also didn't take years to fix broken drivers.

Matrox doesn't get a free pass.



LOL. You don't know anything about the Quake engine. You're conflating perspective corrected unfiltered textures with color depth. And no, the Quake skybox was not fixed. It had two layers of parallax scrolling that could be changed for any number of different images.




Good for you, winning a straw man argument against yourself that no one asked for to begin with.
The 3dfx Voodoo 5 6000 Sold for $15,000 at Auction https://www.tomshardware.com/news/3dfx-voodoo-5-6000-sold-for-15000-dollars
 
you know what would be awesome? some context as to why you're posting this...
are you "ross" not "erek"?
seems like a fair deal, i guess.
not Ross,
but apparently LTT bought the 3dfx Voodoo and there may be a video about it
 
so does that mean my quantum3d card with 8x VSAs is worth $30k? time to sell!
 
so does that mean my quantum3d card with 8x VSAs is worth $30k? time to sell!
yes
though i think for standard eBay accounts you'll be limited to $15K max like this Voodoo was
 
but this thread isn't about those.


stop randomly quoting me for no reason, please
it wasn't at all a random quote, and just because you say it is doesn't make it a random quote either.

a part of having discussions in threads is elaborating on them, and my mentioning of the Rampage is just a conversational piece that is relevant to my argument about the title of LTT's clip calling this the "Wildest GPU Ever" :wow:
well no, there are a lot of more interesting and wilder GPUs, and that is my point in bringing the Rampage up

it is akin to the common misconception that the i740 was Intel's first discrete GPU; no, there were others before that, and yes, they were commercially offered

I believe LTT misrepresented this too during the Larrabee presentation
 