Does the 750 Ti spell doom for AMD?

Toms? LOL.
Explain the specific problem with their power-reading test method. You may not agree with their benchmark methods for game performance, or with their associations with the industry, but just scoffing at any review site that doesn't garner respect is nothing but elitist bullshit rather than properly questioning their methods at each step.
 
I didn't see where the 265 used less power than the 750 Ti.

That's because it doesn't - in fact, it uses the same amount of power as the 270 it is based on - LegitReviews itself points this out.

The only real area where this card trumps GM107 is memory bus bandwidth (256-bit vs. 128-bit for GM107). In terms of total system power consumption, R7-265 uses only 24W less at load than GTX650Ti - so where does that leave it compared to GTX750Ti? Second, R7-265 still requires a 6-pin aux power feed - GTX750Ti does not.

The TDP of R7-265 is 150W - more than twice that of GTX750Ti (last page of LegitReviews' review). The bigger news here is the price cut for the rest of R7 - seems that Maxwell caught AMD a bit flat-footed, so now they are cutting prices to avoid marketshare losses.
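To put that gap in numbers, here's a quick back-of-the-envelope sketch using the TDP figures quoted above (TDPs, not measured board power):

```python
# Back-of-the-envelope TDP ratio using the figures quoted above.
r7_265_tdp = 150.0   # W, per the last page of LegitReviews' review
gtx750ti_tdp = 60.0  # W, Nvidia's stated TDP for the GTX 750 Ti

ratio = r7_265_tdp / gtx750ti_tdp
print(f"R7-265 TDP is {ratio:.1f}x that of the GTX 750 Ti")  # prints 2.5x
```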
 
I was under the impression analog has been abandoned.

Dongles won't help you if the DVI out is DVI-D.

Only DVI-A and DVI-I have analog.

And the single DVI port on every nV GTX has been DVI-I - dual-DVI cards get one DVI-I and one DVI-D. (Even AMD has not completely eschewed DVI-I, simply due to the upgrade market and the longevity of Trinitron- and Diamondtron-type CRTs.)
 
I was under the impression analog has been abandoned.

Dongles won't help you if the DVI out is DVI-D.

Only DVI-A and DVI-I have analog.

Not yet. There has been talk from the industry of abandoning analog support by 2015 in favor of digital. AMD and Intel have mentioned this, but not Nvidia. Not sure what the current situation is.

The phase-out plan is likely to start with integrated solutions (for both Intel and AMD). With this in mind, there is the possibility that Maxwell will be the last Nvidia GPU to support analog out natively as well.
 
Well, if you had seen my original post (to which he replied, and I replied back), I said TDP does not matter to a gamer - a gamer is not going to buy a 750 Ti over a 780. Nor was he talking about casuals; he's talking about people looking to overclock, which is not your average Joe with a PSU lacking an auxiliary power cable.

While TDP doesn't matter to a gamer, just think about it:

If they managed to get 750 Ti performance in a 60W TDP, what performance will a 250W Maxwell 880 Ti bring?

And then there's a market which cares more than any other - gaming notebooks.
 
While TDP doesn't matter to a gamer, just think about it:

If they managed to get 750 Ti performance in a 60W TDP, what performance will a 250W Maxwell 880 Ti bring?

And then there's a market which cares more than any other - gaming notebooks.

Exactly.

I didn't dismiss those hungering for more powerful cards - I am simply saying that not everybody needs to go there. Also, of what need is there - for nVidia or even gamers - to go there just yet?

The issues (with GTX760) are more that there was nothing in the 7-series below it, and it is therefore relatively power-hungry for its current place - GTX750 and GTX750Ti fix that.
If you need more power than GTX750Ti, there is, after all, GTX760 - some designs, such as ASUS' DirectCU II, have relatively minimal draws above GTX750Ti.

The other issue is that overlong overhang of GTX6xx as a whole - nVidia had nothing to replace it with. It is now likely that GTX8xx (Maxwell-based) can kick GTX7xx (all of it) downrange in terms of price - just as GTX6xx did with GTX5xx.

This is merely the opening salvo in the launch of Maxwell - messily and nastily disruptive as it is. That is also why the whole YEAR will be interesting, as more Maxwell-based GPUs launch.
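The naive arithmetic behind that 880 Ti question, by the way, is just linear power scaling (a pure extrapolation - real chips never scale performance linearly with power, so treat the result as a ceiling, not a prediction):

```python
# Naive linear extrapolation of the "250W Maxwell 880 Ti" question above.
# Real GPUs don't scale performance linearly with power; this is just a ceiling.
ref_tdp = 60.0      # W, GTX 750 Ti
target_tdp = 250.0  # W, hypothetical big-Maxwell flagship
scale = target_tdp / ref_tdp
print(f"at best ~{scale:.1f}x GTX 750 Ti performance")  # ~4.2x
```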
 
Explain the specific problem with their power-reading test method. You may not agree with their benchmark methods for game performance, or with their associations with the industry, but just scoffing at any review site that doesn't garner respect is nothing but elitist bullshit rather than properly questioning their methods at each step.

I won't trust any test done by Tom's, especially the ones including Nvidia and AMD cards. All of you are confusing TDP with power consumption - that's why you are all wrong.
 
These are great cards, for real. For mining, though, you can get a 7870 undervolted to around 80-90W and still get over 425 kH/s.

Nvidia still needs to catch up a bit, but they are getting close. CANNOT wait to see big-die Maxwell. Imagine what a big CUDA-core-count card could mine!

Curious as to the brand, settings, and voltage used as well as the hardware the card is currently used in.

In my experience, the 7870s have been lackluster at best, struggling to get more than 350. I couldn't imagine over 400 at under half the load TDP of a factory card.
 
I personally think they're going to nerf it and keep this chip around for a couple of years. There's no way a company will release a card which performs 100% better just one generation later. I personally think we'll see 50% better performance at the same level on an 880 Ti and Titan 2 (or whatever), then next year they'll release the 980 with another 50%, more or less.

I don't think it has ever been done in one generational leap - what was the biggest leap?

Not sure, but maybe from the last 7x00 Nvidia card to the 8800 GTX - that card was the king for over a year.
 
Curious as to the brand, settings, and voltage used as well as the hardware the card is currently used in.

In my experience, the 7870s have been lackluster at best, struggling to get more than 350. I couldn't imagine over 400 at under half the load TDP of a factory card.

I can't remember where I read about it, but a lot of people undervolt for mining. I run my 280Xs at 1.065V, 1100/1800 for mining. Now, I have no idea how much power I'm drawing, but it's not as much as some reviews show, because those cards usually hit 1.25V (I think) when gaming. So mining draw would be a bit different than what gaming reviews report - that's why it's hard to compare the two when properly tuned.

But all in all, a serious miner wouldn't waste a PCIe slot on a 750 Ti. PCIe slots can be spendy when added up.
 
I can't remember where I read about it, but a lot of people undervolt for mining. I run my 280Xs at 1.065V, 1100/1800 for mining. Now, I have no idea how much power I'm drawing, but it's not as much as some reviews show, because those cards usually hit 1.25V (I think) when gaming. So mining draw would be a bit different than what gaming reviews report - that's why it's hard to compare the two when properly tuned.

Oh, I don't disagree with you about undervolting cards for mining; it makes a lot of sense for prolonging the life of the card. I just thought this was something you experienced personally with a 7870. If it's something you read, I'd call bullshit. I'll eat crow if I'm wrong, but 425 kH/s at 90W on a 7870 is pretty ridiculous. It would hands-down be the best card for mining, bar none.
 
Oh, I don't disagree with you about undervolting cards for mining; it makes a lot of sense for prolonging the life of the card. I just thought this was something you experienced personally with a 7870. If it's something you read, I'd call bullshit. I'll eat crow if I'm wrong, but 425 kH/s at 90W on a 7870 is pretty ridiculous. It would hands-down be the best card for mining, bar none.

IMO it could be done, but it would have to be one magical 7870.
 
One very magical 7870. The two I have were a complete pain in the ass to get to 400 kH/s, and I had to flash the BIOS on both to get there. After three days of tweaking I got them to 405 kH/s, but that was it. If I raise or lower the core or memory even 5MHz, the kH/s plummets over 100 points.
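For what it's worth, the dispute boils down to kH/s per watt. A rough sketch - the 425/90 figures come from the claim above, while the ~130W draw for a "typically tuned" card is purely my assumption for comparison, not a measured number:

```python
# Hash-per-watt comparison for the disputed undervolted-7870 claim.
claimed_khs, claimed_watts = 425, 90  # the claim being questioned above
tuned_khs, tuned_watts = 405, 130     # assumed draw for a typically tuned card

claimed_eff = claimed_khs / claimed_watts  # ~4.7 kH/s per W
tuned_eff = tuned_khs / tuned_watts        # ~3.1 kH/s per W
print(f"claimed: {claimed_eff:.2f} kH/s/W vs assumed typical: {tuned_eff:.2f} kH/s/W")
```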
 
I don't think most people who have computers with no 6-pin know what a 750Ti is.

Yet.

That is precisely the point of items like PC Perspective's piece on using it as a one-item upgrade for OEM desktops.

Other than troubleshooting, what keeps the wallet full (for me) is doing upgrades of existing PCs - being able to upgrade more PCs, especially when all I need is a screwdriver and a single GPU, brings additional work, which means additional money (I charge a flat $20 per item upgraded - the customer supplies the parts). Such easy-peasy upgrades mean a win for the customer, and a win for me.
 
Yet.

That is precisely the point of items like PC Perspective's piece on using it as a one-item upgrade for OEM desktops.

Other than troubleshooting, what keeps the wallet full (for me) is doing upgrades of existing PCs - being able to upgrade more PCs, especially when all I need is a screwdriver and a single GPU, brings additional work, which means additional money (I charge a flat $20 per item upgraded - the customer supplies the parts). Such easy-peasy upgrades mean a win for the customer, and a win for me.
Yeah. I used to work for Geek Squad (yeah, hate on me all you want :p). We would routinely do video card upgrades on machines straight from HP, Dell, Acer, etc. Customers definitely were never happy when we told them we would have to upgrade their PSU in order to upgrade their video card, which was an instant +$60 plus labor.

Although quite honestly, if the video card's box still lists a high PSU requirement, we would probably still recommend a PSU upgrade anyway for warranty purposes, regardless of the fact that the existing PSU would probably be fine.
 
Just out of curiosity, how many years did you work there, and how many customers would you say passed through wanting GPU upgrades?
 
Well, the recent techpowerup.com review of the ASUS 750 Ti vs. the 265 is a little different than Tom's.

http://www.techpowerup.com/reviews/ASUS/GTX_750_Ti_OC/23.html

They measure the same way, but they use different ways of tasking the GPU, so it's bound to be a bit different - yet all the numbers make sense from one site to the other once you read how each tasks the GPUs for their respective graphs.
Yes, and they show nowhere near 3x lower.
Two-and-some-change times lower is still a large margin, considering it's the same old 28nm.
 
Just out of curiosity, how many years did you work there, and how many customers would you say passed through wanting GPU upgrades?
I worked in GeekSquad for five years. I'd say we'd probably do GPU upgrades once a week on some client's machine. I usually didn't deal with the computer repairs too much, so I can't say how many times they installed a new video card as some sort of repair process.
 
[Image: FarCry3-FR.png]


I won't trust any test done by Tom's, especially the ones including Nvidia and AMD cards. All of you are confusing TDP with power consumption - that's why you are all wrong.

Just cause I found it funny.
 
Two-and-some-change times lower is still a large margin, considering it's the same old 28nm.

One is on a more mature process than the other, while one targeted a specific TDP and the other was targeting a specific performance range.
 
One is on a more mature process than the other, while one targeted a specific TDP and the other was targeting a specific performance range.
The 265 isn't a targeted performance range; it's basically leftover, crippled, bad dies from when they spin the die. It's not targeting a specific performance range - it's utilizing all the silicon you buy from the foundries instead of just throwing away non-perfect dies. And the 750 Ti isn't a specific TDP but a specific wattage range. TDP describes the heat-dissipation properties of the cooler, not the actual power consumption of the video card - it hints at that, but it's wildly different for each cooler. So it's not even an applicable thing to compare between Nvidia and AMD, or Intel and AMD, etc.

It's not process - you're suggesting TSMC just made 28nm better, but such an improvement in spinning dies would carry over to AMD cards as well. This is an architecture-level improvement. Although I don't know if AMD spins that specific die at GF or TSMC.
 
The 265 isn't a targeted performance range; it's basically leftover, crippled, bad dies from when they spin the die. It's not targeting a specific performance range - it's utilizing all the silicon you buy from the foundries instead of just throwing away non-perfect dies. And the 750 Ti isn't a specific TDP but a specific wattage range. TDP describes the heat-dissipation properties of the cooler, not the actual power consumption of the video card - it hints at that, but it's wildly different for each cooler. So it's not even an applicable thing to compare between Nvidia and AMD, or Intel and AMD, etc.

It's not process - you're suggesting TSMC just made 28nm better, but such an improvement in spinning dies would carry over to AMD cards as well. This is an architecture-level improvement. Although I don't know if AMD spins that specific die at GF or TSMC.

Correct, so you agree it is targeting a specific performance range under its fully enabled brother.

What creates the heat that needs to be dissipated? That's right, power consumption.

As a process matures, certain qualities and specifications are changed over time, i.e. the process is tuned. Over the span of a year or two that tuning can make quite a difference as well as the knowledge and experience you have gained from having other ASICs manufactured on the same process. You take that knowledge and experience, as well as the updated libraries, to release your new ASIC using a new architecture to take advantage of the tuned process.

Why do you think Nvidia did a B1 spin on GK110?

If you don't know if AMD is manufacturing a GPU at GF or TSMC, you really shouldn't be trying to BS me.
 
Yeah. I used to work for Geek Squad (yeah, hate on me all you want :p). We would routinely do video card upgrades on machines straight from HP, Dell, Acer, etc. Customers definitely were never happy when we told them we would have to upgrade their PSU in order to upgrade their video card, which was an instant +$60 plus labor.

Although quite honestly, if the video card's box still lists a high PSU requirement, we would probably still recommend a PSU upgrade anyway for warranty purposes, regardless of the fact that the existing PSU would probably be fine.

I don't hate on Geek Squad - that would be hypocritical, since I do the same sort of thing (the difference is that I'm not connected to a big-box retailer and I charge less - otherwise, that's it).

GTX750/Ti kicks the upgrade door wide-open - in fact, wider open than it has been since the introduction of PCI Express, if not since the introduction of AGP.

1. OS support - GTX750/Ti are supported by Windows as far back as XP - basically the beginnings of PCI Express.
2. (Lack of) Power Draw - how many desktops with G31 (the first Intel chipset for desktops to support PCI Express) have otherwise AT-type power supplies (not only no PCI-Express 6-pin, let alone 8-pin, power connectors, but in some cases no SATA power connectors, either)? Every such system is a candidate for GTX750/Ti - even those with G31.
3. Lack of difficulty - the lack of an auxiliary power requirement makes GTX750/Ti a "screwdriver upgrade"; if you can use a screwdriver safely, you can do the upgrade.

Nastily and messily disruptive? Heck yes. Doom? Hardly - simply because there are still those that won't take advantage of the simplicity of such an upgrade, for whatever reason.
 
I would imagine they're somewhat worried. The performance per watt is pretty damn spectacular with the 750 Ti, and if that translates over to the higher-end parts later in the year (plus 20nm), those parts could be monsters.
 
What are your thoughts on the 750 Ti reviews so far? From what I have gathered, NVIDIA has achieved major improvements in performance per watt with Maxwell. Considering that heat is the major limiting factor in video card performance these days, generating half the heat would allow future NVIDIA cards to reach up to twice the performance of same-size chips today. :)

This is hilarious considering it gets smoked by its AMD counterpart at $150. LOL.

You know, maybe try to be competitive at sub-budget level before shooting for the moon?

Also, with the new Mantle driver benches today, an R9 290X is destroying a 780 Ti.

http://abload.de/img/battlefield_4_second_asao5.png

Also, I'm sure AMD has their own new architecture that will be out before too long, with performance gains of their own. Besides that, AMD still has headroom at 28nm, unlike Nvidia (IIRC Tahiti is ~420mm², GK110 is ~550mm², near the max reticle size - Nvidia physically cannot go bigger; AMD can).

If anything, Maxwell was a major letdown. People have been talking about it like it's a monster for years now; it comes out only as a low-end card and, worse yet, gets demolished in the sub-$150 price segment! What happened? This is what I've been hearing about for the past 5 years, over and over and over?

There are major improvements in Maxwell, but Nvidia's unwillingness lately to do anything besides count on fanboys who will happily pay more for less performance and quality, as long as Nvidia is stamped on the side, is preventing them from making gains, IMO.
 
This is hilarious considering it gets smoked by its AMD counterpart at $150. LOL.

You know, maybe try to be competitive at sub-budget level before shooting for the moon?

Also, with the new Mantle driver benches today, an R9 290X is destroying a 780 Ti.

http://abload.de/img/battlefield_4_second_asao5.png

Also, I'm sure AMD has their own new architecture that will be out before too long, with performance gains of their own. Besides that, AMD still has headroom at 28nm, unlike Nvidia (IIRC Tahiti is ~420mm², GK110 is ~550mm², near the max reticle size - Nvidia physically cannot go bigger; AMD can).

If anything, Maxwell was a major letdown. People have been talking about it like it's a monster for years now; it comes out only as a low-end card and, worse yet, gets demolished in the sub-$150 price segment! What happened? This is what I've been hearing about for the past 5 years, over and over and over?

There are major improvements in Maxwell, but Nvidia's unwillingness lately to do anything besides count on fanboys who will happily pay more for less performance and quality, as long as Nvidia is stamped on the side, is preventing them from making gains, IMO.

Smoked? Hardly.

If you are referring to the R7-265, it has two and one-half times the power draw (it's a 150W TDP part) in addition to twice the memory bandwidth - if anything, it should be beating GTX750Ti by far more than what it is. The reason why it isn't is for the same reason that GTX660 isn't, either - greater efficiency in the case of GTX750Ti means less power lost. (What good is having all that power if less of it is actually usable? That is, in fact, the issue with GTX Titan, if not all of pre-Maxwell Kepler, not to mention all GPU architectures - from everyone - prior to the changing of focus toward efficiency as opposed to raw power.)

Quite aside from the lack of smokage that R7-265 brings to the table, GTX750/Ti - in and of itself - solves the lack of a genuine mainstream part in the GTX7xx range.

Worse (for AMD), R7-265 is a gimped HD7870 (gimped for any of a variety of reasons) and was designed to take advantage of the hole that used to exist below GTX760. However, thanks entirely to GTX750/Ti, the hole is no longer there. Due to their short length (and low power draw), GTX750/Ti can even compete head-to-head with all those AMD low-TDP-yet-discrete GPUs from the HD5xxx/6xxx/7xxx series, given merely case space, as they draw no more power than any of them. R7-260X can't go there - which means that R7-265 can't go there either.

Face facts - R7-265 is late, and aimed for a hole that is no longer there.
 
This is hilarious considering it gets smoked by its AMD counterpart at $150. LOL.
Not sure what you're on about; AMD has no counterpart to this card as far as I'm aware.

Can you point me in the direction of an AMD card that performs as fast as the 750 Ti, for $150, that requires no auxiliary power connector?

Also, with the new Mantle driver benches today, an R9 290X is destroying a 780 Ti.

http://abload.de/img/battlefield_4_second_asao5.png
The Mantle graph only has average framerate. Minimum framerate (the important figure) is totally missing...
 
Besides that, AMD still has headroom at 28nm, unlike Nvidia (IIRC Tahiti is ~420mm², GK110 is ~550mm², near the max reticle size - Nvidia physically cannot go bigger; AMD can).
Er... isn't the 750 Ti proof that Nvidia has plenty of headroom on 28nm?

They just proved that they could nearly double their performance-per-watt figures on 28nm without making the die any larger.

If anything, Maxwell was a major letdown. People have been talking about it like it's a monster for years now; it comes out only as a low-end card
The 750 Ti is a prototype (notice it's not called an 800-series card?). It's most of what Maxwell is intended to be, ported to 28nm lithography.

Even with that handicap, it's still roughly two times more efficient than Kepler, so I'm not sure how that's any kind of letdown. It looks like a totally successful proof-of-concept release to me.
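That "roughly two times more efficient" figure is easy to sanity-check at equal performance. In this sketch, the 60W figure is the 750 Ti's TDP; the ~110W Kepler figure is my assumption for a similarly performing Kepler card, not a measured number:

```python
# Sanity check on the perf-per-watt claim: similar performance, different power.
# The Kepler wattage here is an assumption for illustration, not a measurement.
kepler_watts = 110.0   # assumed draw of a similarly performing Kepler card
maxwell_watts = 60.0   # GTX 750 Ti TDP

gain = kepler_watts / maxwell_watts  # perf/W improvement at equal performance
print(f"~{gain:.1f}x perf per watt")  # ~1.8x, i.e. "roughly two times"
```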

What happened? This is what I've been hearing about for the past 5 years over and over and over?
Not really, no. The 750 Ti is a between-generations part. It's certainly not Kepler, but it also isn't the final iteration of Maxwell.

There are major improvements in Maxwell, but Nvidia's unwillingness lately to do anything besides count on fanboys who will happily pay more for less performance and quality, as long as Nvidia is stamped on the side, is preventing them from making gains, IMO.
Pay more for less performance? The 750 Ti is the fastest-performing card that runs off slot-power alone. Not really surprising that it comes at a slight price premium when it can go in places that similarly-performing AMD cards simply can't :rolleyes:
 