Fury won't beat 980Ti, much less Titan

Status
Not open for further replies.
When an engineer builds something, he builds it for the worst-case scenario, within reason of course. I see Furmark as a reasonable stress test to measure the worst-case scenario for the cooling system and power supply.

No card will ever hit that kind of stress in a real-world situation. Being told that an Nvidia card fails at 90°C and an AMD card at 100°C doesn't really help when the two cards are rated to operate at different temperatures. I just don't feel like I learn anything from these kinds of benchmark tests.
 
When an engineer builds something, he builds it for the worst-case scenario, within reason of course. I see Furmark as a reasonable stress test to measure the worst-case scenario for the cooling system and power supply.

I completely agree; that is why, when I get a new cell phone, I have a fully loaded dump truck run over it to make sure the glass will not break. If it does, then it is defective because it did not pass the worst possible scenario. /sarcasm
 
No card will ever hit that kind of stress in a real-world situation. Being told that an Nvidia card fails at 90°C and an AMD card at 100°C doesn't really help when the two cards are rated to operate at different temperatures. I just don't feel like I learn anything from these kinds of benchmark tests.

Then let's agree to disagree.

My anecdotal story: I used to have a 4770K that would pass everything (AIDA, Intel tuning tool, x264 8 threads) except Prime95 AVX2 at 4.4 GHz. Naive as I was, I believed the people on [H] saying the CPU would never see that kind of workload in the real world. One day, while I was playing Tomb Raider and encoding some MP3s in the background, the PC crashed on me. So I backed the OC down to 4.2, and Prime95 AVX2 is no longer a problem. Not a single crash due to CPU instability since. From that day on, I started testing everything in its worst-case scenario. Of course, running Furmark or Prime95 for 24 hours is stupid, but it should be at least an hour or two.
 
I completely agree; that is why, when I get a new cell phone, I have a fully loaded dump truck run over it to make sure the glass will not break. If it does, then it is defective because it did not pass the worst possible scenario. /sarcasm

I see you missed the part about "within reason," which makes sense, since you couldn't make a relevant analogy. Gorilla Glass is designed to be scratch-resistant, not unbreakable. There are specifications for a reason. Next time you drive, don't wear your seat belt, since most accidents are just fender benders; only a worst-case-scenario accident would kill you. /sarcasm
 
And AMD is the only company that severely ignores PCIe specs, as in the making of the glorious 295X2, a card that can potentially burn your house down. Ignorance is bliss; letting connectors rated for 85°C run at 90°C won't be a problem at all. Running a 550W card on connectors rated for 375W is just fine.
http://www.jonnyguru.com/forums/showthread.php?t=11497
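For what it's worth, the 375W figure above comes from adding up the nominal PCIe power sources. A minimal sketch of that arithmetic, assuming the spec's nominal values of 75 W from the x16 slot and 150 W per 8-pin connector (the 550 W draw is the worst-case figure cited in this thread, not a measured number):

```python
# Nominal PCIe power budget vs. the cited worst-case draw.
# Assumed spec values: 75 W from the x16 slot, 150 W per 8-pin connector.
SLOT_W = 75
EIGHT_PIN_W = 150

def spec_budget(num_8pin: int) -> int:
    """Total board power the spec nominally allows for."""
    return SLOT_W + num_8pin * EIGHT_PIN_W

budget = spec_budget(2)            # the 295X2 has two 8-pin connectors
print(budget)                      # 375
print(round(550 / budget, 2))      # ~1.47, i.e. ~47% over the nominal budget
```

That ratio is the whole disagreement in this thread: whether exceeding the nominal budget by roughly half matters when the PSU and connectors are up to the job.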

Has anyone had issues? Can you point to one?

lol
 
I've not heard of a single power/fire/explosion issue, despite the supposed "out of spec."
 
And AMD is the only company that severely ignores PCIe specs, as in the making of the glorious 295X2, a card that can potentially burn your house down. Ignorance is bliss; letting connectors rated for 85°C run at 90°C won't be a problem at all. Running a 550W card on connectors rated for 375W is just fine.
http://www.jonnyguru.com/forums/showthread.php?t=11497

Serious question... Has any motherboard manufacturer released a motherboard with boosted/amplified PCIe lanes?

I found this... http://www.evga.com/Products/Product.aspx?pn=100-MB-PB01-BR

Has any motherboard manufacturer built in something like that?
 
IMHO Nvidia priced the 980Ti at $650 to make sure AMD has no breathing room for Fiji. The one thing everyone knows is that Fiji + HBM will be expensive to make. And with the very real possibility that Fiji is stuck as a 4GB card, even if it performs on par with a full Titan X, Nvidia made sure that AMD could not find a more comfortable spot between the $550 980 and the $1000 Titan X. Now AMD has to try to fit it at or below the 980Ti at $650. Unless it can outperform the Titan X by like 20%, I see very little hope of it selling in decent numbers if it's over $600 with 4GB, and that will really hurt AMD.

An interesting side note, though, is that the main battle where AMD must sell well is with the 390 line, since it'll compete with the lower-priced 970 and 980, both stuck with 4GB of RAM. It looks like they are gearing up for a battle over RAM in that segment. Though if AMD pushes RAM in the 390 line's marketing, they sabotage their Fiji line if it is limited to 4GB. It's an interesting conundrum for AMD.
This is again logic that only people with an Nvidia marketing background could come up with as an excuse. If you have wanted to sell video cards from March 17 until now and have the best option available, why would you waste resources making a niche card, then undercut it a few months later with something that comes close but is way cheaper?

The Titan vs. 980Ti issue, I am sure, is a planned strategy, even if hurried a bit by rumored AMD release dates.

Pretty sure everyone who "really" wants a new Titan has one. Nvidia released it first knowing it could ride the wave of ultra-fanboy/Scrooge McDuck-swimming-in-his-money-vault hobbyist sales. The premium alone on this puppy probably covers the manufacturing cost. With the number of full big-Maxwell dies surely being limited, Nvidia wouldn't rely on this halo product to drive the bulk of their sales. Now the people who truly need massive SP performance will be the primary customers, and Nvidia won't have to worry so much about "yeah, it's the best, if you can find one for sale and want to take out a second mortgage on your house."

Then they drop the 980Ti. They don't need the massive halo markup (but can still keep a decent fanboy premium), the yield requirements are less stringent, and half the RAM cuts production cost. The PR on the price alone will drive sales. Most of these sales will not cannibalize the Titan, as those buyers wouldn't have spent that much on a single card anyway, or would have waited for Pascal.

Deciding to launch when they did was probably threading the supply-chain needle. Any earlier and it would likely have been considered a paper launch. It ended up being perfect, though. They got the jump on AMD and will now have 15 days in the spotlight, with apparently not so much as a peep of a rebuttal from the competition. People are now hammering F5 to check availability to actually BUY a 980Ti, while on the AMD side people are just checking once a day or so to see if there are any new rumors about the Fury.

At this point Nvidia is probably kicking themselves for spending as much as they did on market research and planning. AMD has pretty much defeated themselves. Nvidia could have done pretty much nothing but release their cards whenever and still won.

Re: the D3D12 unified-memory thing... the API doesn't do that automatically; it simply allows for it. It is up to each individual game developer to implement it.
Your whole argument does not make sense, since Nvidia wants sales. If you can sell video cards without any opposition, why would you make an expensive version to start with? There is little to gain from doing it at this point in time. Unless you believe you can only buy video cards in a two-week period in June...

The people waiting for AMD will still want an AMD card; the people who wanted Nvidia will buy an Nvidia one.

But several months have gone past. Nvidia could have made sure no cash went AMD's way from March this year, and yet they spring it as a surprise, while in reality the only product that comes close in performance is Nvidia's...
This is no surprise. AMD can't get their drivers to perform now... what makes you think the next-gen cards are going to be any better?

Blind faith? Is their new GM going to perform some GPU driver miracles? ...This I've got to see.

You mean the Nvidia ones that blow up your card, that were not made by Nvidia? And yet work(ed) on Nvidia hardware. Have you heard of Google? :)
 
To play devil's advocate here: you could release the big card first to gather binned silicon to use in the cut-down version.

Titan rejects go to the 980Ti bin.
 
I see you missed the part about "within reason," which makes sense, since you couldn't make a relevant analogy. Gorilla Glass is designed to be scratch-resistant, not unbreakable. There are specifications for a reason. Next time you drive, don't wear your seat belt, since most accidents are just fender benders; only a worst-case-scenario accident would kill you. /sarcasm

It was a great analogy for anyone who understands that Furmark is nowhere close to real-world usage scenarios. I would say the same for NV and AMD products. Furthermore, your anecdote was about operating something OUTSIDE its designed specifications. So why can't I expect my Gorilla Glass to protect against something outside its designed specifications as well?

Show me a review where those connectors went over spec, or where the PCIe slot went over its spec. All the reviews I saw said the VRM and the actual PCB were hot on the 295X2, but the connectors and PCIe slot did not exceed specifications. Both AMD and NV have had cards that came up against their design limitations, but they leave some room for user-induced faults (crappy ventilation, poor-quality PSU, etc.) and a general safety buffer from failure.
 
To play devil's advocate here: you could release the big card first to gather binned silicon to use in the cut-down version.

Titan rejects go to the 980Ti bin.

That makes more sense than what others propose. But still, selling more cards would mean more profit...
 
So a ~70W difference, only a bit off your almost "100W."

That's nice. Nvidia TDP != power consumption.
They were a bit better about it back in the Kepler days than they are with Maxwell, but it still holds true.


That's why Nvidia throttles Furmark, right?
Since you are not good at math, let me do it for you: 659 - 577 = 82. Hope that helps.

No, Nvidia no longer throttles FurMark. With everything stock, my old 780 Ti still ran Furmark at stock clocks, just without any boost. Same for the 980 and Titan X, unlike the reference 290X, which throttles in every game.
AMD and Nvidia both use the same kind of dynamic clock and power management mechanism, so both can throttle; it's AMD's failure to adhere to the specs that causes the problem. They could just add a third power connector like the Asus ARES, or stack them like Nvidia did with the reference 680.
 
It was a great analogy for anyone who understands that Furmark is nowhere close to real-world usage scenarios. I would say the same for NV and AMD products. Furthermore, your anecdote was about operating something OUTSIDE its designed specifications. So why can't I expect my Gorilla Glass to protect against something outside its designed specifications as well?

Show me a review where those connectors went over spec, or where the PCIe slot went over its spec. All the reviews I saw said the VRM and the actual PCB were hot on the 295X2, but the connectors and PCIe slot did not exceed specifications. Both AMD and NV have had cards that came up against their design limitations, but they leave some room for user-induced faults (crappy ventilation, poor-quality PSU, etc.) and a general safety buffer from failure.
I posted the link above, and it clearly shows that the card is out of spec in terms of both power and heat. If you're too lazy to read, don't ask someone else to shove the data in your face.
 
Who cares?

It obviously hasn't caused any problems YET.
Fixed it for you.
And when problems arise, it's going to be because of the PSU, right? How convenient that AMD guys just blame AMD's incompetence on somebody else.
 
And AMD is the only company that severely ignores PCIe specs, as in the making of the glorious 295X2, a card that can potentially burn your house down. Ignorance is bliss; letting connectors rated for 85°C run at 90°C won't be a problem at all. Running a 550W card on connectors rated for 375W is just fine.
http://www.jonnyguru.com/forums/showthread.php?t=11497

You know this isn't an issue; it's just like buying any other high-end card. There is a list of requirements; if you don't have a proper power supply, don't be surprised if bad things happen.

It's not up to AMD to pick which power supply you use. You buy the card knowing that you need a quality PSU. AMD states that you need a PSU capable of supplying 28A on a single rail.

Even the most anti-AMD site out there, PCPer, didn't think this was a problem at all, and actually applauded AMD for having the strict hardware requirements needed to bring such a powerful GPU to market.

It's been over a year since Jonnyguru enlightened the world with his safety concerns, and there hasn't been one report of a 295X2 going on fire.

And lastly, the 6990 also went way over the PCIe spec, and nothing ever happened to those cards either. Oh, and Anandtech did a nice little section on it in their review of the 6990.

You are calling people ignorant without having any facts yourself, apart from Jonnyguru, who obviously doesn't have any facts either.

Read this.

http://www.anandtech.com/show/4209/amds-radeon-hd-6990-the-new-single-card-king/5
 
Fixed it for you.
And when problems arise, it's going to be because of the PSU, right? How convenient that AMD guys just blame AMD's incompetence on somebody else.

Who will this problem realistically affect? Who is going to buy a supposed $850 card and run it on a 430W power supply, or any other shit power supply for that matter?

And that is why PSUs won't be a problem. As for power draw: if you don't own an 80+ Platinum power supply, you are not running efficiently anyway, so why complain about efficiency?
 
I posted the link above, and it clearly shows that the card is out of spec in terms of both power and heat. If you're too lazy to read, don't ask someone else to shove the data in your face.

Man, so defensive. I read your link, and nowhere in all 4 pages did it have any exact measurements of the plugs. Instead, people guessed based on the graph in the video, which went from 20 to 95°C. The probe target was set on the VRM. So I ask again: show me something that specifically measured the plugs (please read your own link more carefully next time; don't just quote the thread's thesis and be lazy).

Here is an example: measured no more than 58°C for the plugs. While not an ideal source, as it does not exactly show him measuring the plugs, at least he references an exact number. All I asked was that you show me a single source that backs up your claim. It is not up to me to find data to back up YOUR claim that you saw in someone else's thread.

Yes, the TDP of the 295X2 did go over the PCIe power spec, and anyone buying that card knew that and was prepared with a solid PSU. Even the OEMs that offered it as an option bundled it with a strong PSU. I suppose someone could have just waltzed into a store, picked up the card, and figured "Hey, this looks good!", although given the price range, both initial and current, I find that unlikely.
 
Man, so defensive. I read your link, and nowhere in all 4 pages did it have any exact measurements of the plugs. Instead, people guessed based on the graph in the video, which went from 20 to 95°C. The probe target was set on the VRM. So I ask again: show me something that specifically measured the plugs (please read your own link more carefully next time; don't just quote the thread's thesis and be lazy).

Here is an example: measured no more than 58°C for the plugs. While not an ideal source, as it does not exactly show him measuring the plugs, at least he references an exact number. All I asked was that you show me a single source that backs up your claim. It is not up to me to find data to back up YOUR claim that you saw in someone else's thread.

Yes, the TDP of the 295X2 did go over the PCIe power spec, and anyone buying that card knew that and was prepared with a solid PSU. Even the OEMs that offered it as an option bundled it with a strong PSU. I suppose someone could have just waltzed into a store, picked up the card, and figured "Hey, this looks good!", although given the price range, both initial and current, I find that unlikely.

His arguments are null and void, because the company responsible for the specification says it doesn't matter; it was a specification written when nobody thought GPUs would use over 300W, and they don't have an issue with going over that.
 
Pretty much the last few guys just jump to conclusions without even reading. It's not about the PSU; it's about the cable and connector specs that the 295X2 is exceeding. And it seems like the AMD guys are just a gang with no concern for safety, thinking that letting their cables and connectors run at 80-90°C is fine. I don't think it's necessary for me to discuss the matter anymore.
 
Link me to a single person who caught fire/exploded/burned their house down with their 295X2.

No?

OK then...

Non-issue/red herring/concern troll.
 
Pretty much the last few guys just jump to conclusions without even reading. It's not about the PSU; it's about the cable and connector specs that the 295X2 is exceeding. And it seems like the AMD guys are just a gang with no concern for safety, thinking that letting their cables and connectors run at 80-90°C is fine. I don't think it's necessary for me to discuss the matter anymore.

I care about safety; too bad my GTX 780 has had caps burn on me twice. That card was drawing way too much power and the temps were through the roof! I am so glad there was not an electrical fire. Oh wait, there are safety features in PSUs, on motherboards, and on graphics cards, and I have external surge protectors on top of that. This technology is astounding; it must have cost a fortune, lol.

When was the last time you heard of cables and connectors catching fire in a gaming environment? And who would not have the brains to turn off the computer if they saw sparks, that is, if the computer had not already done it for you?
 
Pretty much the last few guys just jump to conclusions without even reading. It's not about the psu, it's about the cable and connectors' specs that the 295X2 is exceeding. And seems like AMD guys are just a bunch of gangs who have no concerns over safety thinking that letting their cables and connectors running at 80-90 deg. C is fine, I think it's not necessary for me to discuss the matter anymore.

Nope, you know you have nothing, and that's why you can't discuss this anymore.

The specifications of the cables and the connectors are what's been discussed; I have even linked to a discussion Anandtech had with the group responsible for said specifications. It wasn't an issue with the 6990, and it's not an issue now.

And you labelling everybody who disagrees with you as part of an AMD gang is ridiculous, well, just like your argument, in fact.
 
Man, so defensive. I read your link, and nowhere in all 4 pages did it have any exact measurements of the plugs. Instead, people guessed based on the graph in the video, which went from 20 to 95°C. The probe target was set on the VRM. So I ask again: show me something that specifically measured the plugs (please read your own link more carefully next time; don't just quote the thread's thesis and be lazy).

Here is an example: measured no more than 58°C for the plugs. While not an ideal source, as it does not exactly show him measuring the plugs, at least he references an exact number. All I asked was that you show me a single source that backs up your claim. It is not up to me to find data to back up YOUR claim that you saw in someone else's thread.

Yes, the TDP of the 295X2 did go over the PCIe power spec, and anyone buying that card knew that and was prepared with a solid PSU. Even the OEMs that offered it as an option bundled it with a strong PSU. I suppose someone could have just waltzed into a store, picked up the card, and figured "Hey, this looks good!", although given the price range, both initial and current, I find that unlikely.
https://www.youtube.com/watch?v=gVDnQomkkaI
And this is an example of the plug getting close to 90°C...
 
Nope, you know you have nothing, and that's why you can't discuss this anymore.

The specifications of the cables and the connectors are what's been discussed; I have even linked to a discussion Anandtech had with the group responsible for said specifications. It wasn't an issue with the 6990, and it's not an issue now.

And you labelling everybody who disagrees with you as part of an AMD gang is ridiculous, well, just like your argument, in fact.
Again, ignorance is bliss. If letting cables run at 80-90°C is comfortable for you, then of course it's not an issue at all.

Why did it not catch fire, then?
Because no one has let it run like that for more than a few hours.
 
His arguments are null and void, because the company responsible for the specification says it doesn't matter; it was a specification written when nobody thought GPUs would use over 300W, and they don't have an issue with going over that.

I wouldn't say null and void. To my knowledge, the PCIe specs were never updated, so on a technicality he is right that the 295X2's TDP was over spec. Now, the relevance of the 300W cap in the spec is certainly debatable (with facts, of course, not subjective charts). I give credit where it is due, and for people with a low risk threshold it can be a safety concern. Just like some people swear off CLC coolers because of the danger of a possible leak or pump failure. Risk vs. reward; everyone has a different level of acceptable risk for a given reward.

All I wanted was an exact number for the parts he was claiming were overheating. His source had a chart that made it pretty subjective, depending on your monitor's coloring/your eyes' color sensitivity/etc. In the TPU review they never stated a temperature for the plugs, only that they were "hot," as was the radiator (at 60°C).
 
I care about safety; too bad my GTX 780 has had caps burn on me twice. That card was drawing way too much power and the temps were through the roof! I am so glad there was not an electrical fire. Oh wait, there are safety features in PSUs, on motherboards, and on graphics cards, and I have external surge protectors on top of that. This technology is astounding; it must have cost a fortune, lol.
A surge protector won't be able to protect your connectors from melting due to heat. No component, as far as I know, has a sensor to monitor temperature on a cable, so you have ZERO protection if your connector/cable melts.

When was the last time you heard of cables and connectors catching fire in a gaming environment? And who would not have the brains to turn off the computer if they saw sparks, that is, if the computer had not already done it for you?
I'm using my computer for F@H; I can't be home 24/7 to stare at it while it's crunching F@H, right? And you know F@H runs great on GPUs, right?
 
Again, ignorance is bliss. If letting cables run at 80-90°C is comfortable for you, then of course it's not an issue at all.


Because no one has let it run like that for more than a few hours.

So it is safe, then. Come on, man, in this day and age, with this argument: can you post a link to anyone who has had this problem with a card running at these specs? Or is this a danger scenario like "we could all get nuked tomorrow"?
 
I could get run over by a bus tomorrow; doesn't mean I worry about it.
 
A surge protector won't be able to protect your connectors from melting due to heat. No component, as far as I know, has a sensor to monitor temperature on a cable, so you have ZERO protection if your connector/cable melts.


I'm using my computer for F@H; I can't be home 24/7 to stare at it while it's crunching F@H, right? And you know F@H runs great on GPUs, right?

Once again, what are the real-world chances of this happening without other safety features interfering, or of it happening at all for that matter? Has it even happened?

Everyone with a 980Ti had better watch out; there is a chance that your whole house could burn down if you leave your computer on. As a matter of fact, better cut all electrical use; you might get a fire in your house. That is the relevance of your argument.
 
I wouldn't say null and void. To my knowledge, the PCIe specs were never updated, so on a technicality he is right that the 295X2's TDP was over spec. Now, the relevance of the 300W cap in the spec is certainly debatable (with facts, of course, not subjective charts). I give credit where it is due, and for people with a low risk threshold it can be a safety concern. Just like some people swear off CLC coolers because of the danger of a possible leak or pump failure. Risk vs. reward; everyone has a different level of acceptable risk for a given reward.

All I wanted was an exact number for the parts he was claiming were overheating. His source had a chart that made it pretty subjective, depending on your monitor's coloring/your eyes' color sensitivity/etc. In the TPU review they never stated a temperature for the plugs, only that they were "hot," as was the radiator (at 60°C).

The parts aren't overheating. Here is a Guru3D review showing an overclocked 295X2 under full load. Notice the area around the connectors: just 50 degrees. That doesn't even get near the outdated PCIe specification.

http://www.guru3d.com/articles-pages/amd-radeon-r9-295x2-review,34.html

And yes, his arguments are null and void, because AMD put the connectors on the card. They would have chosen connectors to suit the card. They also stress a good power supply.
 
And yes, his arguments are null and void, because AMD put the connectors on the card. They would have chosen connectors to suit the card. They also stress a good power supply.
So AMD built the Molex connectors on the card differently than on other cards, and the PSUs that qualify to power the 295X2 have specially built power connectors too? Are you living in wonderland? Most connectors just come from a few suppliers, and that is it. A connector on an R9 270X is the same as those found on the 295X2, except for the pin count.
And if TechPowerUp's video doesn't work on your computer, I can download it and upload it to Vimeo for you.
 
I wouldn't say null and void. To my knowledge, the PCIe specs were never updated, so on a technicality he is right that the 295X2's TDP was over spec.

There is one more thing: the specification wasn't made for safety reasons; it was just a figure they came up with. It wasn't a safety feature. It wasn't a limit. It was just a figure they decided on for the OEM market, not for high-end graphics chips.

Using the specification as some kind of tested safety figure is daft.
 
Hmmm... I wonder if those PCIe power connectors on ASRock's motherboard would actually help a 295X2's performance...

Just speculation, but it would seem that the 295X2 is only rated to draw 375W while it can use 500W+, so it would stand to reason that there is still some grunt in that card that isn't utilized.

I would love to see a benchmark of identical setups, ASRock vs. non-ASRock.

(Powercolor made a 4x8-pin 295X2, but the thing is 3 slots and air-cooled, so it is hot as balls and likely throttles...)
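To get a rough feel for what extra connectors would buy, here is a hedged per-pin current estimate. The assumptions are mine, not from the thread: a 12 V rail, three current-carrying +12 V pins per 8-pin connector, and the slot supplying its nominal 75 W.

```python
# Estimate the current each +12 V pin carries for a given board power,
# split across the card's 8-pin connectors (slot assumed to supply 75 W).
VOLTAGE = 12.0       # assumed +12 V rail
PINS_PER_CONN = 3    # assumed current-carrying +12 V pins per 8-pin connector
SLOT_W = 75          # nominal slot contribution

def amps_per_pin(board_w: float, num_conns: int) -> float:
    """Current per +12 V pin, in amps, under the assumptions above."""
    per_conn_w = (board_w - SLOT_W) / num_conns
    return per_conn_w / VOLTAGE / PINS_PER_CONN

# Two 8-pin connectors at ~500 W board power vs. four at the same draw:
print(round(amps_per_pin(500, 2), 1))  # ~5.9 A per pin
print(round(amps_per_pin(500, 4), 1))  # ~3.0 A per pin
```

Typical Mini-Fit Jr.-style terminals are rated somewhere around 8-9 A per pin depending on the terminal and wire gauge, so this is a headroom sketch, not a safety analysis; doubling the connectors mostly buys thermal margin rather than performance by itself.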
 
So AMD built the Molex connectors on the card differently than on other cards, and the PSUs that qualify to power the 295X2 have specially built power connectors too? Are you living in wonderland? Most connectors just come from a few suppliers, and that is it. A connector on an R9 270X is the same as those found on the 295X2, except for the pin count.

No, I am saying that they would have put parts on the card suitable for the purpose. And seeing as there are no reports of any 295X2 going on fire, they must have been correct, because something like that would show up pretty quickly. If those parts are the same parts as on a 270X, then so be it; they must be good enough for the job.

They also stress that you need a quality power supply, and power supplies rated to the spec that AMD recommends can handle the kind of load the 295X2 requires.

You are basing your whole argument on one site, Jonnyguru, whoever the hell that is. If this were any kind of issue, more tech sites would have jumped on it. But it isn't.

It's you flogging a dead horse to make AMD look bad, and it's a pretty pathetic attempt.
 