GTX 1080 benchmarked

3DMark isn't useful for determining whether a card is fast enough to play any particular game. But that's OK, because that's not its purpose. Its job is to be a benchmark of "relative" performance.

Back in the day, 3DMark would also provide a glimpse of what was possible with the latest rendering APIs and algorithms, and that was a very useful and exciting thing to have. The last few versions have just been an ugly, unimpressive mess with no value whatsoever. 3DMark Vantage was a joke and the others haven't done much better since.

Considering it as a benchmark of relative performance is dubious at best. I can tell you that system A gets 11 units of performance and system B gets 18. What the fuck does that mean? It doesn't translate to anything. Sure, you can look at the system specifications, but they don't tell you how to get more units of performance or which system components determine that level of performance. X units of performance do not necessarily translate into faster Handbrake encode times or better performance in the Doom Beta.

3D Mark is arbitrary nonsense.
 
Considering it as a benchmark of relative performance is dubious at best. I can tell you that system A gets 11 units of performance and system B gets 18. What the fuck does that mean? It doesn't translate to anything. Sure, you can look at the system specifications, but they don't tell you how to get more units of performance or which system components determine that level of performance. X units of performance do not necessarily translate into faster Handbrake encode times or better performance in the Doom Beta.

3D Mark is arbitrary nonsense.
Why would you use the overall 3dmark score as a performance metric for encoding?

Why would you use the overall 3dmark score as a performance metric in games?

The overall score is pretty useless in that sense. I can match your overall score with a graphics card twice as powerful and a CPU slow enough to lower the score to match yours, yet my system will perform better in GPU-bound situations, and the graphics score will reflect this.
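To put rough numbers on that, here's a minimal sketch of how a 3DMark-style overall score blends GPU and CPU sub-scores; the weighted-harmonic-mean shape and the weights are illustrative assumptions, not Futuremark's published formula:

```python
# Illustrative only: a 3DMark-style overall score as a weighted harmonic mean of
# a graphics (GPU) sub-score and a physics (CPU) sub-score. Weights are made up.

def overall_score(graphics, physics, w_graphics=0.85, w_physics=0.15):
    return (w_graphics + w_physics) / (w_graphics / graphics + w_physics / physics)

# System A: mid-range GPU, fast CPU
a = overall_score(graphics=10000, physics=12000)
# System B: a GPU twice as fast, paired with a CPU slow enough to drag the total down
b = overall_score(graphics=20000, physics=2750)

print(round(a), round(b))  # ~10256 vs ~10304: nearly identical overall scores,
                           # yet system B is far faster in GPU-bound games
```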
 
Considering it as a benchmark of relative performance is dubious at best. I can tell you that system A gets 11 units of performance and system B gets 18. What the fuck does that mean? It doesn't translate to anything. Sure, you can look at the system specifications, but they don't tell you how to get more units of performance or which system components determine that level of performance. X units of performance do not necessarily translate into faster Handbrake encode times or better performance in the Doom Beta.

3D Mark is arbitrary nonsense.

You could say the same for most benchmarks if you use that criteria. They usually provide some metric that is only relevant for comparing performance in that particular workload.
 
The graphics score in 3dmark will reflect the relative performance over the whole set of 100 games better than any single one of them will.

No it doesn't. 3D Mark doesn't use a game engine and the results it spits out aren't the same as testing and then averaging performance across multiple games or game engines. As I mentioned before, we've seen cards like ATI's Radeon 2900XT that tested well in 3D Mark despite being slower in actual games most if not all the time. In other words: Would that 20% increase in 3D Mark video card performance translate into a 20% average increase in actual games? If it does that would be nice, but I have doubts that this is the case.

Maybe one of you benchmark whores can shed some light on that. I don't think it will but I've been wrong before. :cool:
 
You could say the same for most benchmarks if you use that criteria. They usually provide some metric that is only relevant for comparing performance in that particular workload.

And we've seen how those synthetic workloads aren't necessarily indicative of actual, real world performance. I'm not saying synthetic tests aren't useful or interesting in some way. I'm simply saying that 3D Mark performance has never been something I've considered when shopping for a video card. It doesn't tell me anything because it can't be translated into something more tangible like actual applications.
 
Would that 20% increase in 3D Mark video card performance translate into a 20% average increase in actual games? If it does that would be nice, but I have doubts that this is the case.

No idea but if it does then it would be validated as a useful benchmark. Most review sites don't even attempt to correlate 3dmark performance with games any more.
 
No it doesn't. 3D Mark doesn't use a game engine and the results it spits out aren't the same as testing and then averaging performance across multiple games or game engines. As I mentioned before, we've seen cards like ATI's Radeon 2900XT that tested well in 3D Mark despite being slower in actual games most if not all the time. In other words: Would that 20% increase in 3D Mark video card performance translate into a 20% average increase in actual games? If it does that would be nice, but I have doubts that this is the case.

Maybe one of you benchmark whores can shed some light on that. I don't think it will but I've been wrong before. :cool:


No, I don't expect that 20% to effectively translate into 20% better performance, just that on average over a large test set it'll be a better quantifier of the relative performance of two cards than choosing one game and basing everything on it.

Like, ashes of the singularity is compute heavy, very little geometry work

(insert game here) is geometry heavy, not much compute work

Neither of those two will give you a satisfying answer; 3dmark is better than either.
 
A reference 980 has 2048 cores that run at 1250MHz. This card has 2560 cores running at 1800MHz. That is technically 80% faster if it scaled perfectly on the same architecture. Instead we are only getting about 50% better performance, which would be piss poor even if it were still Maxwell, and is even more ridiculous considering this is a supposedly better architecture.
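For anyone checking the arithmetic behind that 80% figure, it's just shader count multiplied by clock speed, a back-of-the-envelope upper bound that assumes perfect scaling on an identical architecture:

```python
# Back-of-the-envelope shader throughput: core count x clock (MHz).
# Assumes perfect scaling and identical per-core work, which real GPUs never achieve.

gtx_980  = 2048 * 1250   # reference 980
gtx_1080 = 2560 * 1800   # the figures quoted above for the 1080

print(gtx_1080 / gtx_980)  # 1.8, i.e. ~80% more theoretical throughput
```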

What % increase do we typically see generation to generation?
 
What % increase do we typically see generation to generation?

780ti to Titan X was 50-60% IIRC. That didn't even have a node shrink.

What I am worried about is if this architecture is gimped in any way due to the focus on half precision for deep learning applications. I don't see this as a likely risk, but it's in my mind. This is why I'd wait to see how Polaris does before jumping on the 1080 bandwagon, which is a smart thing to do anyways.
 
What % increase do we typically see generation to generation?

Compare 580 to 680, 680 to 780, 780 to 980, 980 to 1080

780ti to Titan X was 50-60% IIRC.

What I am worried about is if this architecture is gimped in any way due to the focus on half precision for deep learning applications. I don't see this as a likely risk, but it's in my mind. This is why I'd wait to see how Polaris does before jumping on the 1080 bandwagon.


Huh?

The 16-bit feature won't affect 32-bit performance; I don't see why it would at all.

In fact, I highly suspect they will remove the double 16-bit throughput from the consumer lineup because it would compete with their pro cards (the latest Maxwell Quadro, for example).
 
Compare 580 to 680, 680 to 780, 780 to 980, 980 to 1080

I checked 580 to 680 and posted it in another thread, but forgot what I found. I don't think it was 80% though. I don't even think it was 50%.

The reason I compared Fermi to Kepler is because it was the last time we had both an architecture update and node shrink.
 
I checked 580 to 680 and posted it in another thread, but forgot what I found. I don't think it was 80% though. I don't even think it was 50%.

The reason I compared Fermi to Kepler is because it was the last time we had both an architecture update and node shrink.

Yeah I know, that's why everyone compares fermi to kepler and it makes perfect sense

However (xD)...

The 780 to 980 was a big jump and it was on the same node.

Maxwell is the big architectural change from Kepler, pascal is an incremental improvement of maxwell on a new process
 
No, I don't expect that 20% to effectively translate into 20% better performance, just that on average over a large test set it'll be a better quantifier of the relative performance of two cards than choosing one game and basing everything on it.

Like, ashes of the singularity is compute heavy, very little geometry work

(insert game here) is geometry heavy, not much compute work

Neither of those two will give you a satisfying answer; 3dmark is better than either.

I agree with you in that you can't base video card performance or relative video card performance on a single game or game engine. I get that. I can't agree with 3D Mark being better than either, because its performance doesn't translate to gaming performance of any kind. If you wanted to know the average relative performance of one card vs. another across a variety of games, you would need a larger sample size and then average the performance of each in those games. Review sites aren't willing to take the time and spend the money testing dozens of games or more and then doing the math to figure out which card(s) are truly faster more often than not. Due to resource constraints they take the most popular games at the time and run a half dozen or more of them to figure out relative performance. 3D Mark as a substitute for proper testing and research is misleading at best.
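As a rough illustration of the kind of math that implies, here's a minimal sketch that averages per-game speedups with a geometric mean; the games and FPS numbers are made up purely for illustration:

```python
# Made-up FPS numbers for two hypothetical cards across a small sample of games.
from math import prod

fps = {  # game: (card_a_fps, card_b_fps)
    "Doom Beta":                (88, 112),
    "Ashes of the Singularity": (45,  61),
    "Game C":                   (72,  90),
    "Game D":                   (54,  70),
}

ratios = [b / a for a, b in fps.values()]     # per-game speedup of card B over card A
geomean = prod(ratios) ** (1 / len(ratios))   # geometric mean of the speedups
print(f"Card B is ~{(geomean - 1) * 100:.0f}% faster on average over this sample")
```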

You can't use a pregnancy test to determine blood alcohol levels in a traffic stop. It's simply not the right tool for the job. I could execute PowerShell commands to perform a set task and then time it. I could then repeat that on another system. What would that have to do with video card performance? Not a fucking thing, just like 3D Mark. Oh, you can be sure that the system with the best specs and better video card is faster than another system with lower end hardware. At best 3D Mark tells us what we already know from other tests conducted in other ways. Where 3D Mark isn't useful is in that its numbers have nothing to do with games. You are comparing apples to eggplants when talking about 3D Mark as an indicator of relative video card performance.
 
Yeah I know, that's why everyone compares fermi to kepler and it makes perfect sense

However (xD)...

The 780 to 980 was a big jump and it was on the same node.

Maxwell is the big architectural change from Kepler, pascal is an incremental improvement of maxwell on a new process

The 780Ti to GTX 980 wasn't a massive jump. You might be thinking of the 780Ti to the 980Ti in which case that's fair. The GTX 980 was a more midrange offering.
 
The 780Ti to GTX 980 wasn't a massive jump. You might be thinking of the 780Ti to the 980Ti in which case that's fair. The GTX 980 was a more midrange offering.

No no, I was thinking of the 780 vs 980

I compare products based on their placement

By all means though, the more accurate comparison would be 770vs980

Gk104 vs gm204
 
I am leery of any published article which doesn't state the date and time the article was published, among other things. Taken with a grain of salt.

Dan_D: Separating the bullshit from the real experience. (y)
 
I think that's the new shroud design. I personally preferred the older one.

As do I, up until my 980Ti purchase I've always bought my cards as soon as they were released so reference was my only option. Now that I've seen what the ACX coolers can do, I won't be buying reference any longer.
 
As do I, up until my 980Ti purchase I've always bought my cards as soon as they were released so reference was my only option. Now that I've seen what the ACX coolers can do, I won't be buying reference any longer.
The acx coolers are silent but their performance is quite mediocre

The windforce coolers at max fan generate so much thrust they counteract gpu sag ;)

Hidden feature
 
Also something to think about. Once Pascal is out Nvidia will ignore Maxwell like they did Kepler. So the performance delta will be even greater once that happens.

Lack of attention to detail is already happening with their latest drivers. I'm now having to run my 144hz monitor at 120hz in desktop mode to keep my 980Ti from ramping up to 3d base clock. This was an issue with some earlier drivers that was fixed and is back again. The FPS ticker in GeForce experience also isn't working for me with their latest drivers.
 
The acx coolers are silent but their performance is quite mediocre

The windforce coolers at max fan generate so much thrust they counteract gpu sag ;)

Hidden feature

I've been happy with it. It's not chill, but stays cool enough that I can maintain my boost clocks throughout my gaming session, and that's with the default fan profile at work.
 
Curious what AMD's strategy is...

Fury could be rebranded, but I doubt they can match the price of a 1070, which will likely outperform or match it and have double the VRAM.
 
I'd love to see expanded offerings of stock watercooled cards, similar to the EVGA Hybrid approach. Less power consumption, quieter, cooler, boo-yah! That's almost surely the aftermarket space; the price difference between aftermarket air and water cooling appears to be narrowing.
 
Curious what AMD's strategy is...

Fury could be rebranded, but I doubt they can match the price of a 1070, which will likely outperform or match it and have double the VRAM.
Either they price it appropriately versus the 1070 (which is possible, if the 1070 comes in at $500) or, if they can't afford to do that because the 1070 comes in at $329 or whatever, they just let it sit, non-competitive. Remember AMD will still have the single-card performance crown with the Radeon Pro Duo.
 
No no, I was thinking of the 780 vs 980

I checked some benchmarks on AT and it looks like around a 30-40% improvement with a few games at 50%-ish. Somewhat similar to Fermi -> Kepler but without the node shrink. Of course they removed just about all FP64 in Maxwell.

I think misterbobby expecting to see 80% is unrealistic, based on historical precedent. Of course I know misterbobby could never be happy with anything nvidia does. Jen-Hsun could get up there on stage and squeeze out a diamond-encrusted golden turd and he still wouldn't be impressed.
 
Either they price it appropriately versus the 1070 (which is possible, if the 1070 comes in at $500) or, if they can't afford to do that because the 1070 comes in at $329 or whatever, they just let it sit, non-competitive. Remember AMD will still have the single-card performance crown with the Radeon Pro Duo.

You'll have to excuse me if I don't take the Radeon pro seriously as a gaming card ;)
 
I checked some benchmarks on AT and it looks like around a 30-40% improvement with a few games at 50%-ish. Somewhat similar to Fermi -> Kepler but without the node shrink. Of course they removed just about all FP64 in Maxwell.

I think misterbobby expecting to see 80% is unrealistic, based on historical precedent. Of course I know misterbobby could never be happy with anything nvidia does. Jen-Hsun could get up there on stage and squeeze out a diamond-encrusted golden turd and he still wouldn't be impressed.

Yes but aside from the fp64 units being removed there's also a considerable performance difference per ALU
 
Why not, it's a fantastic gaming card. Price/performance is way off, obviously.
In virtually every game that has been released till now, those 4GB won't be enough to even push the resolution to a point where I'm stressing 8192 ALUs.

Then there's the price

And there's the fact that multi-gpu is finicky, and I don't like finicky

The one exception is VR, it's good for VR.

At least if you buy two cards you can then sell one

Of course you can argue the future and DX12 and multi-GPU, yeah. If 1/2 of the games released in the next two years support it (optimistic I'd say) you'll have a great experience on par with a $500 single GPU solution :p (Vega, GP100, Volta, Navi)

By the time it's relevant (if ever) it will be outmatched and out priced
 
so it's being unveiled tonight, does that mean that [H] is going to have some benches ready for us?
Nobody that knows can talk about it.

Originally today was supposed to be the NVIDIA "editor's day", when they show a bunch of powerpoint slides then sample hardware to press with an embargo of a week or two before they can post reviews. So most likely [H] and all the others don't have hardware yet-- they'll get it tonight.

Even if it is embargoed I would expect to see a bunch of much more substantial leaks before the time's up.

But who knows? Maybe every site has reviews ready to go, scheduled to post at 6:01PM PT tonight. It's possible.
 
Dan, that's what I meant actually. It does not correlate to actual performance nowadays, but you can still use it as a general metric if you choose to (i.e. for a website to compare numbers). It really is much more meaningless than it used to be in the mid-2000s.
 
Why do you guys expect high performance from the new cards? That's not what Nvidia is after. It's a business. They are interested in lasting as long as possible and taking as much money as possible. Designing new, extremely powerful cards for low cost ain't going to happen, ever.
I know it's hard to swallow, but the 1080 is probably 20-25% faster at a cost of at least $600+. It's an effective way to milk people for their money.
Once people buy the 1080 they will release a new Titan which is significantly faster than the 1080. Once the rich people jump on the Titan bandwagon they will release a "cheaper, slower Titan aka Ti" to milk those who didn't want to buy/can't afford the Titan.
It's not just nVidia's business plan, it's how businesses work.

If anything is going to change that, it's not me, you, our petitions or nVidia, but AMD. So let's hope they deliver this time. And for all our sakes, may they stay on par with nVidia as long as possible. Or else our wallets are fucked.
 
I haven't been watching the rumor mill too closely, but if this is the Gxx04 part then it's replacing the 980 and the 970. Those are the cards we should see the significant performance increase from, not the 980ti and Titan X. I'm a bit lost by some of you expecting a midrange (admittedly upper-midrange) part to completely decimate the bigger top of the line part from the previous generation.

All I know is that my 670 is getting a bit long in the tooth and my gsync monitor means that I'm hoping for good things in the $350 to $400 range from NVIDIA within the next few months. If not, there is always the For Sale/Trade forum and Freesync.

Of course with other things I've read about 14nm not bringing cost per transistor down, I might be waiting a bit longer...
 
I haven't been watching the rumor mill too closely, but if this is the Gxx04 part then it's replacing the 980 and the 970. Those are the cards we should see the significant performance increase from, not the 980ti and Titan X. I'm a bit lost by some of you expecting a midrange (admittedly upper-midrange) part to completely decimate the bigger top of the line part from the previous generation.

All I know is that my 670 is getting a bit long in the tooth and my gsync monitor means that I'm hoping for good things in the $350 to $400 range from NVIDIA within the next few months. If not, there is always the For Sale/Trade forum and Freesync.

Of course with other things I've read about 14nm not bringing cost per transistor down, I might be waiting a bit longer...

I agree with you. I can't find the statement, but I recall NVIDIA stating that they were going to release GPUs based on new processes and technologies primarily in the mid-range first, and then release full scale parts later. They've been doing this for some time now.
 
Why do you guys expect high performance from the new cards? That's not what Nvidia is after. It's a business. They are interested in lasting as long as possible and taking as much money as possible. Designing new, extremely powerful cards for low cost ain't going to happen, ever.
I know it's hard to swallow, but the 1080 is probably 20-25% faster at a cost of at least $600+. It's an effective way to milk people for their money.
Once people buy the 1080 they will release a new Titan which is significantly faster than the 1080. Once the rich people jump on the Titan bandwagon they will release a "cheaper, slower Titan aka Ti" to milk those who didn't want to buy/can't afford the Titan.
It's not just nVidia's business plan, it's how businesses work.

If anything is going to change that, it's not me, you, our petitions or nVidia, but AMD. So let's hope they deliver this time. And for all our sakes, may they stay on par with nVidia as long as possible. Or else our wallets are fucked.
AMD the last bastion of hope? Please... as a business they're after the same thing. AMD have been trying to steer the direction of computing technology for a long time in the guise of helping the community, while all they're really doing is trying to push people to buy their hardware. And honestly I don't blame them when their marketshare in both the CPU and discrete GPU markets is now in the 30% range and continues to fall.

People working within these companies may be trying to do good, but businesses at the end of the day are always serving their own self interest. The sooner you realize this the happier you'll be.
 
Is there any chance they might go on sale this weekend?

Yes, there is a chance they could be available for preorder or as an initial shipment this weekend. Likely a low chance :p
 
AMD the last bastion of hope? Please... as a business they're after the same thing. AMD have been trying to steer the direction of computing technology for a long time in the guise of helping the community, while all they're really doing is trying to push people to buy their hardware. And honestly I don't blame them when their marketshare in both the CPU and discrete GPU markets is now in the 30% range and continues to fall.

People working within these companies may be trying to do good, but businesses at the end of the day are always serving their own self interest. The sooner you realize this the happier you'll be.

You are right, AMD would be the same, but without AMD it's a monopoly, and a monopoly is not good for us. We need AMD to be on par. I am a customer, and I want faster and cheaper cards. That can only be achieved when more companies fight for the customers, not when one company can dictate the price and performance.
 