Nvidia Ampere Purportedly 50% Faster Than Turing At Half The Power Consumption


Hold your horses, wait for proper benchmarks.

As things stand, RDNA2 is 50% slower and moved to 32nm, and poor Ampere will be made on 120nm, 150% slower than Volta. The RDNA news is straight from my secret source at NV, and the other bit from Intel HQ.
 
If this holds up, I'm very much looking forward to the "3070" being a full 4K 60 fps card. Very cool.
 
This is the source article. There's nothing of substance in it; it's just a single line meant to get investors salivating.

http://www.taipeitimes.com/News/biz/archives/2020/01/02/2003728557

I'm guessing the 50% number was pulled out of their ass because 7nm is, like, half of 16nm amirite?
Yeah, 7nm isn't a magical node. If they ported the 2080Ti to 7nm it might get an efficiency increase, but lots of other wonky shit might happen... It definitely wouldn't magically become 50% faster...
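
For fun, here's a back-of-envelope sketch of why the node name alone tells you very little about a straight port. Every scaling factor below is an illustrative assumption, not an official TSMC or Nvidia figure:

```python
# Illustrative back-of-envelope math; none of these factors are official figures.

def name_implied_density(old_nm: float, new_nm: float) -> float:
    """Density gain you'd naively infer from the node names alone."""
    return (old_nm / new_nm) ** 2

# "16nm" -> "7nm" sounds like a ~5x shrink on paper...
print(f"name-implied density: {name_implied_density(16, 7):.1f}x")

# ...but node names are marketing. Suppose (purely as assumptions) a straight
# port of an existing design gets something closer to:
assumed_clock_gain = 1.10   # modest frequency uplift at iso-power
assumed_power_gain = 1.40   # i.e. ~0.7x power at iso-performance

# A straight 2080Ti port would then be ~10% faster, or the same speed at ~70%
# of the power, not 50% faster at half the power.
print(f"ported: ~{assumed_clock_gain:.2f}x perf, or ~{1 / assumed_power_gain:.2f}x power at iso-perf")
```

Getting to a genuine +50% would need architectural changes on top of the shrink, which is exactly the part you can't infer from a one-line claim.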

This product is still 6+ months away from actually going into my PC. I predict that NVIDIA will release it at the same price as the 2000 series, or 15-25% higher. If Big Navi ever comes out and mops the floor with the 2080Ti (as it's supposed to... cough), then maybe, just maybe, Nvidia drops the price. AMD hasn't been knocking shit out of the park yet, as far as I'm concerned. Ngreedia will continue to gouge my pocketbook; I just hope they don't release another "1%" test escape nightmare. It took me three cards to get a solid one at release.
 
If this holds up, I'm very much looking forward to the "3070" being a full 4K 60 fps card. Very cool.

No doubt. The 2080Ti can swing that 75% of the time (not even using G-Sync). Even if these numbers aren't true, I still expect that.
 
3070 will have 2080S performance.

There's no reason for them to have a 50% performance increase in a single cycle when they can just have small increments and people will gobble them up.
 
50% faster at less power. Sure sounds great. lol

Anyone remember the last time a GPU company found 50% more performance in one generation? I'll believe it when I see the benchmarks.

Pretty sure these aren't going to be much faster than Turing. They will, however, be much smaller... they should use less power and, mainly, be much cheaper to fab. Which is what NV really needs: cheaper chips.
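
Rough sketch of why a smaller die compounds into cheaper chips: you get more candidates per wafer and a larger fraction of them yield. The wafer cost and defect density below are placeholder assumptions, not foundry numbers (and a 7nm wafer actually costs more than a 12nm one, which this ignores):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    """Common approximation for gross dies per wafer."""
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2) / die_area_mm2 \
        - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Simple Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

def cost_per_good_die(die_area_mm2: float, wafer_cost: float, defects_per_mm2: float) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_mm2)
    return wafer_cost / good_dies

# Example: a ~750 mm^2 Turing-class die vs. a hypothetical ~450 mm^2 shrink.
# $10k per wafer and 0.001 defects/mm^2 are purely illustrative.
for area in (750, 450):
    print(f"{area} mm^2 -> ~${cost_per_good_die(area, 10_000, 0.001):.0f} per good die")
```

The real per-wafer price on a newer node is higher, so the gap narrows in practice, but the direction is the same.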
 
3070 will have 2080S performance.

There's no reason for them to have a 50% performance increase in a single cycle when they can just have small increments and people will gobble them up.

I'd drop $500-600 on a 3070 with 2080S specs no problem at all. Hopefully with better RTX performance as well.
 
"On a lesser scale, Asus and Gigabyte will probably profit from Nvidia's future Ampere offerings as well. Both brands harvest up to 30% of their revenues from gaming products."

What is so important about this quote?
 
I don't expect something like this, but if it's true...my wallet is ready.
 
This would be pretty stunning if both halves of the statement were concurrently true (my money is squarely on Not). It would be a true game changer for mobile use, since it would bring 2080Ti-level performance down to about an 80W TDP.

I think what's much more likely is that both halves of the statement are true, but not at the same time: at maximum output, compute is 50% better than the 2080 Ti with practically the same TDP, while in some low-compute use case like idle/web surfing, the TDP is halved.
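
Taken literally, the two halves together would mean a 3x perf-per-watt jump, which is the real reason to be skeptical. Quick arithmetic on the claim (the 250 W baseline is just an assumed 2080 Ti-class board power):

```python
# Arithmetic on the headline claim, not a prediction.
baseline_perf = 1.0      # normalized 2080 Ti performance
baseline_power = 250.0   # watts, assumed 2080 Ti-class board power

claim_perf = baseline_perf * 1.5      # "50% faster"
claim_power = baseline_power * 0.5    # "at half the power"

gain = (claim_perf / claim_power) / (baseline_perf / baseline_power)
print(f"implied perf/W gain if both hold at once: {gain:.1f}x")   # 3.0x

# That 3x is also where the ~80 W figure above comes from:
# 2080 Ti-level performance at a third of the power.
print(f"2080 Ti perf at ~{baseline_power / gain:.0f} W")

# The more plausible split: each half holds at a different operating point.
print(f"+50% at iso-power      -> {1.5:.1f}x perf/W")
print(f"iso-perf at half power -> {1.0 / 0.5:.1f}x perf/W")
```

A 3x perf/W jump in a single generation would be far beyond what recent node-plus-architecture transitions have delivered, which is why the "different operating points" reading seems more plausible.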
 
Intel wasn't waiting around. They screwed the pooch with their process tech. IF they had 10nm as they expected, we wouldn't be having this conversation. Sucks to be them.

Even without the node issues, I think Intel would have continued to release 4 core chips on the desktop for as long as they possibly could.
 
I imagine it'll have 50% the maximum clock speed, too. o_O
 
Even without the node issues, I think Intel would have continued to release 4 core chips on the desktop for as long as they possibly could.

IIRC their 10nm release schedule had 6- and 8-core CPUs on the desktop.
 
Sounds ambitious. I'll believe it when I see it. I've learned not to pay too much attention to the rumour mill when it comes to these things, because it often turns out to be BS, especially this early in the game.
 
Intel wasn't waiting around. They screwed the pooch with their process tech. IF they had 10nm as they expected, we wouldn't be having this conversation. Sucks to be them.

Can't blame 10nm for that. AMD had 8-core flagship desktop CPUs on 14nm and Intel was still flogging 4-core i9's.
 
Sure, it's 50% faster..... at ray tracing, because 75% of the die is taken up by RT cores. ;)
 
The lowest-core-count i9 was the 8950HK, with 6C/12T.

I am sure he meant i7's, but, *shrug*, Intel is having major issues, and even if they had been successful with 10nm, that would not have made them any better than they are now. It would still be the same architecture they have used for a long time now, more or less.
 
I am sure he meant i7's, but, *shrug*, Intel is having major issues, and even if they had been successful with 10nm, that would not have made them any better than they are now. It would still be the same architecture they have used for a long time now, more or less.

Given his reply to my post, I think he meant i9. He probably just forgot how few "mainstream" i9s there have been. Intel would still be Intel, but I don't know if they would have kept 4 cores as long as they did at 14nm. If anything, adding more cores could have given them more reasons to increase prices. They would definitely have continued giving mainstream chipsets low lane counts and constantly switching sockets, though.
 