GTX 880 and 800 Series to be More Powerful But Cheaper than the 700 Series

Hah.
Closer to 20%.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/27.html

Roughly half the gains of the other launches. HALF!
It takes both the 680 and 780 gains to match one of those cards.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780/26.html
Lol, look at some of the games being CPU-limited in that TechPowerUp review, though, and it still came out 24% faster.

Overall, in MOST reviews the 680 was about 35% faster than the 580, and I thought that was pretty well known.

 
You're correct.
I crunched HardOCP's numbers and they got about 30% as well but a lot of them were over 100fps anyway.

I take back everything I said in my last few posts.
 
For me, I want something more out of Maxwell than just the typical performance bump and power-efficiency improvements. Not to discount the hard work that goes into making those things happen, but I'm really eager for some kind of innovation in this industry.

I don't necessarily set the bar so high as to expect something earth-shattering that will revolutionize the industry, but for a long time now Nvidia has been talking about all kinds of things other than just simple frames per second... and I'd like to see something delivered for a change.

I dunno... I'm just meh about performance bumps these days.
 
For me, I want something more out of Maxwell than just the typical performance bump and power-efficiency improvements. Not to discount the hard work that goes into making those things happen, but I'm really eager for some kind of innovation in this industry.

I don't necessarily set the bar so high as to expect something earth-shattering that will revolutionize the industry, but for a long time now Nvidia has been talking about all kinds of things other than just simple frames per second... and I'd like to see something delivered for a change.

I dunno... I'm just meh about performance bumps these days.

From what I have read (and if they don't gut it later on) Pascal is supposed to be a new way of doing things...but I'll believe it when I see it.
 
From what I have read (and if they don't gut it later on) Pascal is supposed to be a new way of doing things...but I'll believe it when I see it.


I thought that was Maxwell with unified memory and then Volta with stacked memory. Oh wait, now Pascal comes along and takes both their features. "Believe it when I see it" couldn't be any closer to the truth.
 
Aren't big Maxwell cards supposed to come with an ARM CPU to offload some tasks from the system CPU?
 
The problem with "unified memory" is "unified memory with what"? They already have virtual unified memory with a driver update, and it's nothing of consequence (just a programmer's convenience). And they can't concretely "unify" any memory with an Intel or AMD CPU, so what CPU are they going to unify memory with? The whole idea was contingent on this Project Denver, which seems to have mysteriously disappeared from the roadmap and been renamed Tegra TK1.
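To show what I mean by "programmer's convenience": this is roughly what that driver-level managed memory looks like from the CUDA side (a minimal sketch with a throwaway kernel and made-up sizes, not anything from Nvidia's actual driver). The pointer works on both sides, but the data still has to migrate over the bus behind the scenes.

```cuda
// Minimal sketch of CUDA 6-style "unified" (managed) memory.
// One allocation is usable from both host and device code; the driver
// migrates the data for you -- convenient, but the bytes still move
// over PCIe, so it isn't true shared memory with the CPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;          // made-up size for illustration
    float *data = nullptr;

    cudaMallocManaged(&data, n * sizeof(float));   // single managed allocation
    for (int i = 0; i < n; ++i) data[i] = 1.0f;    // touched on the host

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // touched on the device
    cudaDeviceSynchronize();                        // wait before host reads

    printf("data[0] = %f\n", data[0]);              // prints 2.0
    cudaFree(data);
    return 0;
}
```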
 
Screw power efficiency, give me power. I have never seen someone go, "Oh hey, I saved this much $$ on my power bill this month."

Give me as much power as possible, screw power efficiency. (For a desktop PC.)

Laptop is a different story.
 
Better power efficiency means they can push the GPU harder, right?
More OC headroom as well, I suppose.

Just compare the 780 to the 290. The 780 OCs so well it beats OC'd 290Xs.
 
Screw power efficiency, give me power. I have never seen someone go, "Oh hey, I saved this much $$ on my power bill this month."

Give me as much power as possible, screw power efficiency. (For a desktop PC.)

Laptop is a different story.

Have you priced larger quality power supplies lately?

Screw that, I want efficient cards so I and others don't have to spend $3514658465 on a small nuclear reactor that is going to heat the room up to power the damn thing.
 
The problem with "unified memory" is "unified memory with what"? They already have virtual unified memory with a driver update, and it's nothing of consequence (just a programmer's convenience). And they can't concretely "unify" any memory with an Intel or AMD CPU, so what CPU are they going to unify memory with? The whole idea was contingent on this Project Denver, which seems to have mysteriously disappeared from the roadmap and been renamed Tegra TK1.


I think that was the point of Project Denver and the ARM cores: to separate the GPU from the CPU as much as possible. Since Nvidia couldn't license x86, they were forced down the ARM route (which later blew up). It was supposed to be a CPU on the GPU; basically, that little ARM core would be the middle man between the GPU and the system CPU, as well as taking care of some tasks without the latency of having to talk to the system CPU.

Or some shit like that. I mean, who knows exactly. This hasn't yet really worked or been beneficial for AMD, Nvidia, or Intel, and at this point it's just a magical unicorn that may or may not ever come into existence.

Screw power efficiency, give me power. I have never seen someone go, "Oh hey, I saved this much $$ on my power bill this month."

Give me as much power as possible, screw power efficiency. (For a desktop PC.)

Laptop is a different story.


I'm all for bigger and better by any means profitable, but thank god both companies have started to think more about energy consumption. Not because energy costs a fortune (which it pretty much does everywhere outside the U.S.), but because with that come heat and noise issues.

I'm one of those people that isn't a big fucking baby when it comes to noise from a PC. Others prefer to have all their gear as close to 0C as they can get. Still others want the power to go out on the block when they plug in their beast. All three are wrong. Balance is the key. You know the average PC is using more energy when, five years ago, a high-end build would run just fine, and now simply turning on the heat or AC knocks out half the house's power.
 
For me, I want something more out of Maxwell than just the typical performance bump and power-efficiency improvements. Not to discount the hard work that goes into making those things happen, but I'm really eager for some kind of innovation in this industry.

I don't necessarily set the bar so high as to expect something earth-shattering that will revolutionize the industry, but for a long time now Nvidia has been talking about all kinds of things other than just simple frames per second... and I'd like to see something delivered for a change.

I dunno... I'm just meh about performance bumps these days.

Nvidia has tried and done so with things like PhysX, G-Sync, GameWorks (available on all vendors), ShadowPlay, and more, but the PR war from AMD continues instead of them beefing up their own competitive offerings. Nvidia has given us cooler and quieter reference cards as well. Maxwell is poised to be a big speed bump + power reduction, which inherently enables better graphics. Yet you see tons of people playing tribal wars between vendors and whining about PR statements instead of checking the facts. See the GameWorks whining for the latest... it runs equally on both vendors, even.
 
And liger88, it has nothing to do with being a "pansy" as you call it. People have different preferences and tolerance for noise. People are not identical.
 
Better power efficiency means they can push the GPU harder, right?
More OC headroom as well, I suppose.

Just compare the 780 to the 290. The 780 OCs so well it beats OC'd 290Xs.

Yeah, not so much, as even shown in the latest review... Nvidia's more power efficient but can't keep up, even with 100MHz extra on the core and much faster/more memory.
 
And liger88, it has nothing to do with being a "pansy" as you call it. People have different preferences and tolerance for noise. People are not identical.


I was referring to people being babies about noise, one of the three things people are usually worried about regarding GPUs (noise, heat, and power consumption), all of which go hand in hand. Everybody chooses at least one, but usually two, out of those that they are willing to sacrifice.

However, despite people's preferences, we always have to consider the other choices we overlook, as all three affect one another directly or indirectly. It's why AMD and Nvidia try to balance them and say to hell with those saying they don't care about power limits, noise, or cooling efficiency. At least on their reference designs.
 
I was referring to people being babies about noise, one of the three things people are usually worried about regarding GPUs (noise, heat, and power consumption), all of which go hand in hand. Everybody chooses at least one, but usually two, out of those that they are willing to sacrifice.

However, despite people's preferences, we always have to consider the other choices we overlook, as all three affect one another directly or indirectly. It's why AMD and Nvidia try to balance them and say to hell with those saying they don't care about power limits, noise, or cooling efficiency. At least on their reference designs.

Gotcha, I misunderstood your point then :). That is a good one, actually.
 
Better power efficiency means they can push the GPU harder, right?
More OC headroom as well, I suppose.

Just compare the 780 to the 290. The 780 OCs so well it beats OC'd 290Xs.

No, chip designs still hit their speed walls regardless of how hot they get. AMD used to hit their walls well before throttle temps; the 780s do as well.
 
Have you priced larger quality power supplies lately?

Screw that, I want efficient cards so I and others don't have to spend $3514658465 on a small nuclear reactor that is going to heat the room up to power the damn thing.

I own two 1200W Corsair PSUs.


I'm a gamer, I want the most power I can get. And as many FPS as I can get.


Do you think people care about gas when they drag race? No, they want as much horsepower as they can get.
 
Do you think people care about gas when they drag race? No, they want as much horsepower as they can get.
They might not care about gas, but they also don't want to burst into flames. :rolleyes:

Better efficiency equals less power, sure, but it also equals less heat. Less heat means less cooling, less noise, potentially smaller cards. All good things.
 
Nvidia has tried and done so with things like PhysX, G-Sync, GameWorks (available on all vendors), ShadowPlay, and more, but the PR war from AMD continues instead of them beefing up their own competitive offerings.

What are you talking about? He is talking about innovation and you are using PhysX and GameWorks as examples? PhysX was invented by Ageia for a start, and never mind the fact that PhysX can be run on the CPU. GameWorks isn't really anything special either, certainly not something that is going to revolutionize the industry.

Kepler itself is just an evolution of Fermi. Heck, it's still loosely based on the architecture of the G80.

but the PR war from AMD continues instead of them beefing up their own competitive offerings.

Why bring a statement like this into a question about Maxwell? To think the PR war is just AMD is laughable; both companies are just as bad as each other. They are competitors, and both are trying to make themselves look good and the other look bad. Only a fanboy would think otherwise.

Maxwell is poised to be a big speed bump + power reduction, which inherently enables better graphics.

This is something I hope comes to pass. But I am not expecting anything major performance-wise until the 16nm die shrink. Guess we shall see in October.


Yet you see tons of people playing tribal wars between vendors and whining about PR statements instead of checking the facts. See the GameWorks whining for the latest... it runs equally on both vendors, even.

What has this got to do with anything? Why have several digs at AMD in a thread about Maxwell? You spent most of your post "whining" rather than responding to the person you quoted.
 
I'm a gamer, I want the most power I can get. And as many FPS as I can get.
Power consumption and frame rate are orthogonal. You do not want one for the benefit of the other: you just want the other (in this case, the frame rate).

What you really should have said is "I'm a gamer and I want the highest frame rate I can get." Unless you're genuinely interested in the engineering implications, power consumption shouldn't even really enter into the equation.
 
Screw power efficiency, give me power. I have never seen someone go, "Oh hey, I saved this much $$ on my power bill this month."

Give me as much power as possible, screw power efficiency. (For a desktop PC.)

Laptop is a different story.

Better power efficiency is something that everyone wants, because it means that a card of a given performance level uses less power. Since there's basically a 300W practical limit on TDP unless you go with closed-loop coolers out of the box like the 295X2, better power efficiency means that a card of a given TDP level will be able to provide more performance.

What you probably meant is that you don't care about TDP, and I agree. I'd prefer they just stick to a 300W TDP for the top-end cards and cram as much performance as possible into that power envelope, but they won't do that, because they make more money just giving you 25% more performance every time rather than providing as much as they possibly can.
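Just to put toy numbers on that (the 35% figure below is purely an assumption for illustration, not a Maxwell spec): at a fixed power budget, every point of perf/W goes straight into performance.

```cuda
// Toy numbers, host-only: why perf-per-watt matters even if all you care
// about is absolute performance inside a fixed ~300W power envelope.
#include <cstdio>

int main() {
    const double tdp_watts      = 250.0;  // assumed high-end power budget
    const double old_perf_per_w = 1.00;   // old architecture, normalized
    const double new_perf_per_w = 1.35;   // assume a 35% efficiency gain

    const double gain_pct = (new_perf_per_w / old_perf_per_w - 1.0) * 100.0;
    printf("Same %.0fW budget -> %.0f%% more performance\n", tdp_watts, gain_pct);
    return 0;
}
```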
 
I hope the 880 has something larger than a 256-bit memory bus. The 680 got away with it because it was using memory clocked 50% higher than the 580's. But how much further can they clock GDDR5? They'd have to clock it at 10GHz or higher to match the memory bandwidth the GK110 cards offer. I don't see this happening, so if they are indeed 256-bit, I'm curious to see whether the performance gain of GM204 over GK110 will hold up at higher resolutions...
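For anyone who wants to sanity-check that, here's the back-of-the-envelope math. The GK110 numbers are the published 780 Ti specs; the 256-bit data rate is just what a hypothetical GM204 would need to match it, not anything confirmed.

```cuda
// Host-only back-of-the-envelope bandwidth check.
#include <cstdio>

// Peak GB/s = (bus width in bits / 8 bits per byte) * effective data rate in GT/s
static double peak_bw_gbs(int bus_bits, double data_rate_gts) {
    return (bus_bits / 8.0) * data_rate_gts;
}

int main() {
    printf("GK110 (780 Ti), 384-bit @ 7.0 GT/s : %.0f GB/s\n",
           peak_bw_gbs(384, 7.0));   // 336 GB/s
    printf("Hypothetical 256-bit @ 10.5 GT/s   : %.0f GB/s\n",
           peak_bw_gbs(256, 10.5));  // 336 GB/s -- what it would take to match
    return 0;
}
```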
 
We don't know if it's needed at all.
256-bit never seemed limiting at Eyefinity resolutions on [H] with the 680 vs. the 7970, and Maxwell has increased cache.
 
If the video RAM caching mechanism is efficient, there is no need for a wider memory bus. The gains would be minimal, as the on-die cache is much faster than any GDDR5 available on the market.
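A toy way to see it, with completely made-up numbers: only cache misses have to be serviced by GDDR5, so a larger or smarter on-die cache directly cuts the external bandwidth (and therefore bus width) the GPU actually needs.

```cuda
// Toy model, host-only: every number here is made up for illustration.
// Only L2 misses generate GDDR5 traffic, so a better on-die cache lowers
// the external bandwidth a GPU has to provide.
#include <cstdio>

static double dram_traffic_gbs(double requested_gbs, double l2_hit_rate) {
    return requested_gbs * (1.0 - l2_hit_rate);  // misses only
}

int main() {
    const double demand = 400.0;  // GB/s of requests from the shader cores (made up)
    printf("50%% L2 hits -> %.0f GB/s of GDDR5 bandwidth needed\n",
           dram_traffic_gbs(demand, 0.50));  // 200 GB/s
    printf("75%% L2 hits -> %.0f GB/s of GDDR5 bandwidth needed\n",
           dram_traffic_gbs(demand, 0.75));  // 100 GB/s
    return 0;
}
```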
 
A wider memory bus has much higher latency; this is why my 256-bit GPUs always felt snappier than 384- and 512-bit GPUs.

I think Nvidia learned this a long time ago when they tried 512-bit 3-4 generations ago, never to use it again.

I also think that 384/512-bit GDDR5 memory was always about bragging rights, or the wow factor, with no advantage in terms of performance; if anything, worse than 256-bit at 7GHz with low timings.

Wake me up when GDDR6 comes out capable of running 12GHz+.
 
A wider memory bus has much higher latency; this is why my 256-bit GPUs always felt snappier than 384- and 512-bit GPUs.

I think Nvidia learned this a long time ago when they tried 512-bit 3-4 generations ago, never to use it again.

I also think that 384/512-bit GDDR5 memory was always about bragging rights, or the wow factor, with no advantage in terms of performance; if anything, worse than 256-bit at 7GHz with low timings.

Wake me up when GDDR6 comes out capable of running 12GHz+.
Whole post looks like nonsense, but this stands out. Hell, why stop there when we could have a super-snappy 64-bit card? :rolleyes:
 
A wider memory bus has much higher latency; this is why my 256-bit GPUs always felt snappier than 384- and 512-bit GPUs.

I think Nvidia learned this a long time ago when they tried 512-bit 3-4 generations ago, never to use it again.

I also think that 384/512-bit GDDR5 memory was always about bragging rights, or the wow factor, with no advantage in terms of performance; if anything, worse than 256-bit at 7GHz with low timings.

Wake me up when GDDR6 comes out capable of running 12GHz+.

Higher memory speeds mean the timings are looser and the latency is worse.
A wider bus gives similar or more bandwidth with lower clocks, at lower latency.

Latency isn't a huge issue with VRAM and GPUs, though...
 
Whole post looks like nonsense, but this stands out. Hell, why stop there when we could have a super-snappy 64-bit card? :rolleyes:

I think 384-bit would be perfect for GDDR6, because memory timings would be much tighter and the memory would run faster.

I didn't know 64-bit GDDR5 memory existed, ohhh wait, it doesn't, ahaha.
 
I think 384-bit would be perfect for GDDR6, because memory timings would be much tighter and the memory would run faster.

I didn't know 64-bit GDDR5 memory existed, ohhh wait, it doesn't, ahaha.
What? You can put GDDR5 on ANY memory bus. Again, your comments are nothing but nonsense. You live in a world of your own, claiming 256-bit cards feel snappier than 384-bit cards. So again, using your logic, go get a 128-bit or 64-bit card for the best snappiness.
 
What? You can put GDDR5 on ANY memory bus. Again, your comments are nothing but nonsense. You live in a world of your own, claiming 256-bit cards feel snappier than 384-bit cards. So again, using your logic, go get a 128-bit or 64-bit card for the best snappiness.

Well, I could be wrong, but I do not believe any cards have ever come with GDDR5 and a 64-bit bus.

GDDR5 has only been put on 128-bit bus widths so far (but please, correct me if I am wrong, because I could be). GDDR3 was used on 64-bit during the time GDDR5 has been in use. Now that's not to say it can't work, but no manufacturers have used that combination yet.
 
Well, I could be wrong, but I do not believe any cards have ever come with GDDR5 and a 64-bit bus.

GDDR5 has only been put on 128-bit bus widths so far (but please, correct me if I am wrong, because I could be). GDDR3 was used on 64-bit during the time GDDR5 has been in use. Now that's not to say it can't work, but no manufacturers have used that combination yet.

Exactly, his post was full of fail and pure nonsense.

He doesn't know how to look at this stuff because he only goes by numbers.

It is obvious that GDDR5 on 256-bit is pretty much maxed out, so going wider does not give any noticeable improvement, hence the reason Nvidia turned away from a 512-bit bus for good with GDDR5.

We need next-gen memory like GDDR6 or whatever to take better advantage of 512-bit.
 
Exactly, his post was full of fail and pure nonsense.

He doesn't know how to look at this stuff because he only goes by numbers.

It is obvious that GDDR5 on 256-bit is pretty much maxed out, so going wider does not give any noticeable improvement, hence the reason Nvidia turned away from a 512-bit bus for good with GDDR5.

We need next-gen memory like GDDR6 or whatever to take better advantage of 512-bit.
The GT 640 is a 64-bit card with GDDR5. :rolleyes:
http://www.geforce.com/hardware/desktop-gpus/geforce-gt640/specifications

Wow, you managed to find a way to post even more crap that you are just pulling out of your ass. You really should stop talking, as the more you post, the more foolish you look.
 
A wider memory bus has much higher latency; this is why my 256-bit GPUs always felt snappier than 384- and 512-bit GPUs.

I think Nvidia learned this a long time ago when they tried 512-bit 3-4 generations ago, never to use it again.

I also think that 384/512-bit GDDR5 memory was always about bragging rights, or the wow factor, with no advantage in terms of performance; if anything, worse than 256-bit at 7GHz with low timings.

Wake me up when GDDR6 comes out capable of running 12GHz+.

WHAT? My old 8800 GTX was significantly faster than my old 7800 GTX at high resolution gaming.
 
Exactly, his post was full of fail and pure nonsense.

He doesn't know how to look at this stuff because he only goes by numbers.

It is obvious that GDDR5 on 256-bit is pretty much maxed out, so going wider does not give any noticeable improvement, hence the reason Nvidia turned away from a 512-bit bus for good with GDDR5.

We need next-gen memory like GDDR6 or whatever to take better advantage of 512-bit.

I don't agree with you. 256-bit certainly doesn't max out GDDR5, and neither does 384-bit or 512-bit for that matter. The problem is, GDDR5 becomes flaky once it passes a certain clock-speed threshold (or more expensive for binned GDDR5). Case in point: the GTX 780 Ti actually has more memory bandwidth than the 512-bit 290X at default memory clocks. The GTX 780 Ti has 336GB/s while the 290X has 320GB/s, if memory serves. Neither of these bus widths is maxing out GDDR5. The limiting factor is the GDDR5 itself, which cannot pass certain memory clocks without becoming unreliable.

Aside from this, memory bus width is not the end-all, be-all metric to judge a video card. The stock GTX 680 outperformed the stock 7970 at launch despite the 256-bit bus. If you look at paper specification lists, the 7970 should come out on top. But the GTX 680, when it launched, was a good deal faster than the stock 7970 despite having fewer cores and a narrower memory bus. Why? Because the architecture was better optimized and more efficient. The 750 Ti outperforms the older GK107 despite having a lower bus width. I would agree that bus width means jack shit, while architecture and cache efficiency are far more important. There are lower-level design considerations which cannot be put on a specification list and which matter far more to actual performance than memory bandwidth.

But no, GDDR5 will not be maxed out regardless of the bus width. And it really doesn't fucking matter: if anyone thinks that bus width is the only metric which affects video card performance, they're wrong. Lower-level design optimizations and architecture efficiency are far more important than bus width; bus width makes a difference, but it is a small factor among many when it comes to final performance levels.
 