AMD Distances Itself from CrossFire with RX Vega

Because turning settings down in games isn't fucking good enough. That's why.

You would turn on an option that does nothing but reduce performance by 50%? Cool. Personally I have more interest in enjoying my games and optimizing them rather than just cranking shit up to settings that do nothing but reduce performance. If game developers actually optimized their games properly it would make more sense.
 
That is rather a self-fulfilling prophecy though. Users do not use it because devs do not develop for it because users do not use it. ;)

Yes and no... there's ALWAYS going to be a small userbase, just because of the cost. The majority of SLI users are the uber-high-end types, who have top-tier cards. That's a TINY fraction of the total user base for any given game. The other group is people running last-gen or mid-tier cards in SLI/CF to save money. Also a tiny portion of your userbase.

The time required is the key. If devs can optimize a game for SLI/CF in a couple hours of work, it's not a big deal and they'll do it to support even a small part of the userbase. If it takes weeks of time to optimize the game properly for SLI and then a few more weeks for CF, there's no way to justify the costs. All that dev time could be better spent optimizing the graphics overall, so that it runs better for the majority of their users.
 
Again, the problem isn't the APIs. It's the game developers that are at fault here. Game developers not supporting it is a multi-part problem.

Makes complete sense.

To be clear, I didn't say the APIs were the problem, just that it would be tragic to wind down or cut multi-GPU support now that APIs are more capable. One step forward with the APIs and two steps backward due to developer adoption.
 
The old Voodoo2 cards in SLI didn't require software support beyond the Glide API. They simply worked, and that's what AMD and NVIDIA need right now. I hoped that Vulkan gaining multi-GPU support would help, but unless Vulkan adoption becomes more widespread there is little hope of that.

The old Voodoo2 cards weren't programmable. Fully programmable GPUs ruined any possibility of seamless multi-GPU support.

In terms of doing this at the hardware level, just look at what's necessary for 2-CPU systems. AMD and Intel put serious effort and resources into those interconnects, and they "just" need to share L3 cache and memory access. Hell, AMD's Epyc 7000 uses more PCI-E lanes to talk between 2 CPUs than Intel's Core i9 (any version) gives you for your entire system. Doing the same with GPUs would require extending that communication to also handle scheduling and fencing (and somehow make that fencing cheap, since it's a core building block in GPU workloads). There's no way you'd ever see CrossFire or SLI going that route, as those flimsy connectors would need to be massively buffed up, which would mean different SKUs, further driving the price up.


OH they better fucking not kill off CrossFire. Same goes for SLI. With DX12 and Vulkan supposedly being more multi-GPU capable than any API that came before them, it would be utterly ironic if they cut multi-GPU support now.

DX12 and Vulkan do multi-GPU explicitly; they don't use CrossFire/SLI. As in, the developer sees that there are 2 GPUs in the system and manually issues commands to them, rather than something like CrossFire/SLI where the driver pretends to the game that there's only 1 GPU even though there are actually 2.
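For anyone curious what "explicitly" looks like in practice, here's a minimal sketch (assuming a Vulkan 1.1 loader and SDK, not any particular engine's code) that just asks which GPUs and linked device groups exist. Everything after that, splitting the frame, copying results between GPUs, synchronizing, is the application's job:

```cpp
// Minimal Vulkan 1.1 sketch: enumerate physical device groups.
// A group with 2+ devices is the "linked" case (the setup a CrossFire/SLI
// bridge used to hide behind the driver); the application drives each GPU itself.
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdio>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;   // device groups are core in 1.1

    VkInstanceCreateInfo ici{};
    ici.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ici.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);

    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups) g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i)
        printf("Device group %u: %u physical device(s)\n", i, groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

The point is that nothing in the driver does AFR for you anymore; if the game (or its engine) doesn't write that code, a second card just sits idle.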

You probably won't see that scaling well with multiple discrete GPUs, but a more realistic use of this is mixing a discrete GPU with the integrated GPU. Epic demoed this with a Titan X + Intel iGPU, running the post-processing shader on the Intel iGPU and everything else on the Titan X, netting a ~10% performance boost ( http://wccftech.com/directx-12-mult...-coherently-demo-shows-big-performance-gains/ )
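A rough idea of how that dGPU + iGPU split starts out, as a sketch of D3D12 explicit multi-adapter setup (this is not Epic's actual code, and the "small VRAM = iGPU" heuristic is my assumption; Windows 10 with d3d12.lib/dxgi.lib assumed):

```cpp
// Sketch: enumerate DXGI adapters and create a D3D12 device on each usable one.
// Build with: cl /EHsc multiadapter.cpp d3d12.lib dxgi.lib
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;   // skip WARP

        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            continue;

        // Crude heuristic (assumption): the adapter with little dedicated VRAM is
        // the iGPU, a candidate for the post-processing pass; the big one renders
        // the scene. Each device gets its own queues and command lists, and the
        // app copies the intermediate render target between them.
        wprintf(L"Adapter %u: %s, %zu MB dedicated VRAM\n",
                i, desc.Description, desc.DedicatedVideoMemory >> 20);
    }
    return 0;
}
```

The enumeration is the easy part; the real work is sharing the intermediate render target across adapters and fencing the copies, which is exactly the kind of per-game effort the rest of this thread is complaining about.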
 
On one hand, not much stuff is DX12 or Vulkan yet. I assume CrossFire and SLI are less troublesome in DX11 and older titles? I realize the future leads away from DX11 and older, but there is still a huge swath of games out there that are DX11 or older.

And it's fair to assume that existing triple-A engines will still be multi-GPU friendly, including the new UT engine that isn't even released yet and that many games will use as a base platform, the Frostbite engine, the Havok engine, etc.

It's not dead yet.
 
You would turn on an option that does nothing but reduce performance by 50%? Cool. Personally I have more interest in enjoying my games and optimizing them rather than just cranking shit up to settings that do nothing but reduce performance. If game developers actually optimized their games properly it would make more sense.

If it improves graphics quality, you bet I would. As long as I can hit a minimum of 60FPS at max settings I'm happy. If I have to throw more hardware at the problem to achieve that then so be it. The multi-GPU crowd has always been that way. I'm not going to turn down settings and sacrifice image quality just to hit a target FPS rate if I don't have to.

Also, it's not always a question of optimization. Some features are demanding. Plain and simple.
 
As a long-term SLI user who swapped to a single powerful GPU with the advent of the 980 Ti, I've seen multi-GPU as a dying technology for quite some time now.

Perhaps now developers will start coding more efficiently instead of expecting users to throw expensive, hot and noisy hardware at the problem.
 
Some games also push the boundaries, like the upcoming Hellblade, so while I would never spend the money on two top-end cards, it would be nice if the option were worth it to those that would.
 
This is totally on the developers, but that's not necessarily a bad thing or an indication of a long-term trend: another long-term trend is developers licensing engines or using engines developed within their publisher's umbrella, with examples such as Unreal Engine, which is broadly licensed, and Frostbite, which is used by basically every EA studio now.

If these game engines develop mature multi-GPU support for the emerging DX12 and Vulkan APIs, we could easily see a comeback of sorts.

And it's not like performance needs are going to stop, let alone slow down: sure, we can almost max out 4K60, but screw 60 FPS: I want 4K120 on the desktop, and next-gen VR solutions will require even more performance (say, 2x 4K90, so a similar rendering load to 4K180). A Volta Ti product will still probably fall short of 4K120 with today's games, and Vega isn't even trying, and that accounts for the next two years or so of graphics hardware availability while the games themselves aren't standing still.
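Back-of-the-envelope pixel-rate math, counting raw pixels per second only (ignoring VR-specific overhead like oversized render targets for lens distortion), shows why 2x 4K90 lands at about the same load as 4K180:

```cpp
// Quick pixel-throughput comparison for the resolutions mentioned above.
#include <cstdio>

int main() {
    const double w = 3840.0, h = 2160.0;            // 4K UHD
    const double rate_4k120 = w * h * 120.0;        // single 4K panel @ 120 Hz
    const double rate_vr    = 2.0 * w * h * 90.0;   // two 4K views @ 90 Hz
    const double rate_4k180 = w * h * 180.0;        // single 4K panel @ 180 Hz

    printf("4K @ 120 Hz   : %.2f Gpix/s\n", rate_4k120 / 1e9);  // ~1.00
    printf("2 x 4K @ 90 Hz: %.2f Gpix/s\n", rate_vr   / 1e9);   // ~1.49
    printf("4K @ 180 Hz   : %.2f Gpix/s\n", rate_4k180 / 1e9);  // ~1.49
    return 0;
}
```

So the VR target is roughly 50% more raw pixel throughput than 4K120, before per-pixel shading cost goes up at all.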
 
Tried SLI twice. Both times I was never happy with it and ended up selling the second card.
 
I finally gave up on CrossFire/SLI when I saw the performance of the AMD 295X2. I got worse performance in every game at launch, for months, until a new driver was released.
 
I can afford multicard, but for me there's no point. Nothing I play really gets an advantage from it these days and on top of that I'm really sensitive to stutter. The biggest 'benefit' would be it looks cooler in my case. Not really a reason to spend $700 extra every 18 months.

I just sit waiting for Volta instead
 
Seems dumb to me. You'd think AMD and Nvidia would use any excuse for people to purchase 2 or more expensive GPUs instead of one.

On one side you have driver/developer cost, on the other side you have increased sales. As long as it tips over to increased sales, it works financially. But I doubt it ever did, and mGPU with an exponentially higher cost simply kills it completely.

Over half the gaming PCs sold today are laptops, and that share is only going to keep increasing. It has seen 10x growth over the last 3 years.

And then we don't even have to talk about game support and user experience issues.

If these game engines develop mature multi-GPU support for the emerging DX12 and Vulkan APIs, we could easily see a comeback of sorts.

Compared to regular driver-level SLI/CF, API-level mGPU costs many times more. If the first one can't be justified financially, the second one certainly can't.
 
I would like a better understanding of why it is being neglected. People say the developers don't support it, but why? They used to... So what is the real driving force behind this trend? There is always an advantage to having the option to get more FPS or better graphics than is possible with any single card. There is always the motive to sell more graphics cards.

I also don't get why developers are such a big deal here. I mean, most developers don't build their own engines, and this type of support should be a function of the engine, not something to be messed around with on a case-by-case basis.

Maybe what is really going on is that far fewer games are being made to push performance and far more games are being pushed cross-platform, so when you have to split your resources and trim things down to work on consoles from multiple generations, you just don't want to do that.

Maybe it's the acceptance nowadays that people will pay $1,300 for a GPU like the Titan.


On another note, multi-GPU seems to be a product that comes in waves. The question I am really trying to get answered is what starts and ends the waves. That said, I am sure it will be pushed again in the future.
 
Making multi-GPU work means paying a lot of attention to what your renderer is doing and when.

Almost no game developer bothers with that level of tuning, and hell, they can barely get the content out in the first place. Better that the engine developers build the framework (which some already are) and provide methods for game developers to tune their content accordingly.
 
I've always been happy with my results with SLI.

250 GTS 512MB cards powered my ability to play Final Fantasy XIV flawlessly when it first came out.

I've used SLI since:

250 GTS
560 GTX Ti
660 GTX Ti
970 GTX FTW
1070 GTX FTW2
1080 GTX Ti SC Black Editions

Still finding SLI enjoyable... sad though that after all this they would rather destroy SLI than make it stronger. It was amazing with Maxwell...
 
If it improves graphics quality, you bet I would. As long as I can hit a minimum of 60FPS at max settings I'm happy. If I have to throw more hardware at the problem to achieve that then so be it. The multi-GPU crowd has always been that way. I'm not going to turn down settings and sacrifice image quality just to hit a target FPS rate if I don't have to.

Also, it's not always a question of optimization. Some features are demanding. Plain and simple.

But anything but 'ultra' isn't (H)ard...
 
I would like a better understanding of why it is being neglected. People say the developers don't support it, but why? They used to... So what is the real driving force behind this trend? There is always an advantage to having the option to get more FPS or better graphics than is possible with any single card. There is always the motive to sell more graphics cards.

This has all been said before. Multi-platform game development is one of the principal causes. Developers build for consoles first, and the PC is an afterthought. Tight deadlines enforced by publishers prevent developers from spending time and resources they don't have on the PC platform. Often, late in development, the PC version gets outsourced to some shitty company that specializes in PC ports. The problem is that these companies are out of touch with the PC gaming community and have a short window to make the game work on the PC. Supposedly the guys behind Arkham Knight's PC version only had a short window to work with and a staff of 12 company-wide.

I also don't get why developers are such a big deal here. I mean, most developers don't build their own engines, and this type of support should be a function of the engine, not something to be messed around with on a case-by-case basis.

No, developers rarely build their own engines, but an engine supporting a feature does not mean the feature is going to be enabled and the game optimized for it. Various game engines may support it, but ultimately it doesn't matter if you are using a graphics API that requires additional work to implement it. Unreal Engine may support it, and we know Frostbite and CryEngine support SLI/CrossFire, but that doesn't matter if the game uses DirectX 12 or Vulkan. Additional work is required by developers to support multi-GPU under those APIs. Work they aren't doing.

Maybe what is really going on is that far fewer games are being made to push performance and far more games are being pushed cross-platform, so when you have to split your resources and trim things down to work on consoles from multiple generations, you just don't want to do that.

Now you are getting it. There are other reasons why implementation is difficult, but this is the main crux of the problem. Another one is expertise. There are probably very few developers who can code explicit multi-GPU in a DX12 game. Companies often won't bother to pay for training or purchase SDKs until they need them. Some SDKs are free, but learning to use the new tools is something people are often left to do on their own. People have to teach themselves to leverage new technology as it comes out and don't always have the time for it. Largely, it comes down to money. There is so much licensing, purchasing, and training involved, and all of that time equals paying salaries. Ultimately, money is the bigger issue. A company isn't going to spend tens of thousands of dollars in man-hours to support multi-GPU on a multi-platform game that will see relatively few sales on the PC platform compared to consoles. They are unwilling to spend money on developing and utilizing a feature that only a small percentage of PC players will be able to use. They spend their money and time elsewhere.

Ultimately, I think developers may want to build the best games they can, but the people in charge at the studios and publishers call the shots. It is their job to maximize profits, not build good or quality games.

Maybe it's the acceptance nowadays that people will pay $1,300 for a GPU like the Titan.


On another note, multi-GPU seems to be a product that comes in waves. The question I am really trying to get answered is what starts and ends the waves. That said, I am sure it will be pushed again in the future.

One of the issues is that we haven't had a massive leap forward in graphics quality the way we used to. This isn't so much a technical limitation as a design choice. The difference between shooters like Duke Nukem 3D, Doom and Quake was enormous. Quake II to Quake III, or Unreal to Unreal Tournament 2004, was huge. After that, we didn't see another really big leap until Doom 3, and since then it's been slowly ramping up without major leaps forward. Games like Battlefield get away with larger leaps because their game worlds are smaller and more straightforward than something like Mass Effect Andromeda.

My point here is that game design trends are responsible for us being stuck right now. Game developers can choose to make stuff as pretty as Battlefield 1 with small game maps or worlds. Alternatively, they can choose to make it larger and more open-world, like Mass Effect Andromeda, or something in between. Unfortunately, either case only needs a single mid-range or higher card to max out at 1920x1080. There are also some software limitations in place. Frostbite was apparently limited to relatively small maps and had to be modified to work for something like Mass Effect Andromeda. Modifying engines to push boundaries takes time, expertise and money. Larger open-world games like MMOs suffer even more, with graphics fidelity taking a back seat to world size. Effectively, you can only put so many polygons on screen at once. How that budget gets allocated makes all the difference, but again, developers try to cater to the widest possible audience, which is consoles and mid-range PCs. Again, this limits that polygon/pixel budget.

Sometimes when games have really tried to push the envelope and didn't run well on a wide range of systems, the developers paid the price for it in lost sales. Crysis would have sold more copies if more people could run it. That was a game that was a couple of years ahead of its time. It took two, three or more generations of hardware before we could experience the game properly, and even then, only the highest-end PCs could handle it. Developers have been burned trying to push the envelope too far. They would rather be rewarded with profits from wide market distribution.
 
Multi-GPU has been dogshit for ages now; it's always a fucking waiting game for drivers regardless of what "side" you have cards from. I've had both, and SLI is easily as ropey as CrossFire despite what some chimps would have you believe. I think we're going to see it fade away almost entirely in the next couple of years, at least in multiple-card form. That multi-core-in-a-single-die tech seems interesting though.
This may not be entirely true, as DirectX 12 and Vulkan support multi-GPU without "SLI and CrossFire" driver support. So it may be that they see the writing on the wall that they themselves won't be involved in this particular aspect in the future. Multi-GPU may take off again in a whole different form. Only time will tell.
 
Seems dumb to me. You'd think AMD and Nvidia would use any excuse for people to purchase 2 or more expensive GPUs instead of one.

There are already people buying 4 to 16+ GPUs thanks to miners. Why waste their time with SLI/CrossFire?
 
This may not be entirely true, as DirectX 12 and Vulkan support multi-GPU without "SLI and CrossFire" driver support. So it may be that they see the writing on the wall that they themselves won't be involved in this particular aspect in the future. Multi-GPU may take off again in a whole different form. Only time will tell.

DX12/Vulkan mGPU makes adding SLI/CF support look like child's play. One wasn't worth doing, but the other, which costs multiple times more, is?

Let's be honest here: both companies are dropping multi-GPU in any format. It's all left for compute now, with NVIDIA using NVLink for GPU-to-GPU communication in the future and AMD using Infinity Fabric.

Then you may see a handful of AAA companies burning cash and time for nothing, to show off some buggy mGPU support.
 
Devs can't even properly support multithreaded CPUs (which are in 75%+ of PCs now), think they're going to give a shit about tech that 5% of PC gamers use?
 
Without SLI/CF, resolutions beyond 2560x1440 will be a challenge to drive.
We will have to turn down the settings on all games. May as well game on your console.

4K monitors "for gaming" are a waste without SLI/CF.
 
DX12/Vulkan mGPU makes adding SLI/CF support look like child's play. One wasn't worth doing, but the other, which costs multiple times more, is?

Let's be honest here: both companies are dropping multi-GPU in any format. It's all left for compute now, with NVIDIA using NVLink for GPU-to-GPU communication in the future and AMD using Infinity Fabric.

Then you may see a handful of AAA companies burning cash and time for nothing, to show off some buggy mGPU support.

Honesty has nothing to do with the future. For years, single cards couldn't deliver the performance that many users needed. That may happen again as resolutions climb and graphics clarity nears real life, and/or VR actually reaches serious levels of adoption with ultra-high resolutions. So honestly, we may be in a multi-GPU situation again very soon. At one point I had 5 cards in my main box: 4-way SLI plus PhysX. Then NVIDIA suddenly removed support for 4-way and dropped to 3-way, and I had to use hacked drivers for a few years. I have since dropped to 3 cards with PhysX, then just 3 cards, and now 2 cards. So hopefully one card will actually be able to drive my system soon, which would be great, but as the requirements on that card increase, we may find ourselves back where we started, in multi-GPU territory.
 