Discussion in 'Video Cards' started by Snowdog, Aug 27, 2019.
Amazing. You heard it here first folks. There is no 7nm wafer cost.
So that bunch of random nonsense has convinced you that NVidia transistors will magically be bigger than AMD transistors on the same process?
I don't have words...
Do we really need this nonsense disrupting the community?
That’s not actually economy of scale....
It was YOUR problem.
The rest of the world already knows the relative costs of moving to a new node... it's been done throughout semiconductor history. But now that you do know... you can't pretend not to understand anymore.
It seems math is magic for you?
You are only putting words into your own mouth, because nobody said any such thing. It is really amazing that I am giving 5th grade lessons on percentages.
Do you know what a wafer is..?
Please explain your "logic" on how NVidia's ~10 billion transistor part (TU106) would be significantly larger than AMD's ~10 billion transistor part, on the same 7nm process.
Equal transistors on the same process means equal size.
A couple of pictures of AMD dies doesn't change simple math facts.
I have an idea... why don't you guys try and refute me..?
Go ahead, do some research and find out how much die space a 7nm Nvidia Turing GPU would take. Try to combat/debate/argue my facts with your own counter-facts, instead of attacking the messenger.
Because I laid out a whole bunch of facts you are not trying to refute, so it must mean I am winning the debate... and the cheerleaders are left name-calling.
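Since nobody in the thread actually did the arithmetic: here is the back-of-envelope version of the "equal transistors, equal size" claim. The density figure below is derived from Navi 10's published specs, so treat the result as a rough approximation, not an official number:

```python
# Back-of-envelope die-size estimate from transistor count.
# Density is derived from Navi 10's public specs (~10.3B transistors
# in ~251 mm^2 on TSMC 7nm); treat it as a rough approximation.

NAVI10_TRANSISTORS = 10.3e9
NAVI10_AREA_MM2 = 251.0
density_7nm = NAVI10_TRANSISTORS / NAVI10_AREA_MM2  # transistors per mm^2

# TU106 has ~10.8B transistors on 12nm; hypothetically
# ported to 7nm at Navi 10's density:
TU106_TRANSISTORS = 10.8e9
est_area = TU106_TRANSISTORS / density_7nm

print(f"Estimated 7nm TU106 die size: {est_area:.0f} mm^2")  # ~263 mm^2
```

By this crude estimate, a 7nm TU106 would land in roughly the same 250-270 mm² ballpark as Navi 10, which is the whole point being argued.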
Why are you using the Radeon VII’s full die size when it’s a cut down card?
Here is what is known as wafer yield.
Imagine the one on the left as the big TU-102 and the one on the right as Navi10. (Obviously those aren't exact, but close enough that you will understand how the size of a GPU matters.)
Because AMD pays per wafer....not per chip. And so does Nvidia.
Yellow marks the bad chips, grey the good chips.
Do you now understand how Economy of Scale comes into play here? AMD can mass-produce their Navi 10 & Navi 20 chips to a scale that would be mass market. Not niche.
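For the skeptics, the chips-per-wafer argument can be put in rough numbers. This is a toy sketch using a standard dies-per-wafer approximation and a Poisson yield model; the 300mm wafer, the die areas, and the defect density are illustrative assumptions, not TSMC data:

```python
import math

# Toy dies-per-wafer and yield sketch. Wafer size, die areas, and
# defect density are illustrative assumptions, not foundry figures.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Standard approximation for gross dies per wafer:
    wafer area / die area, minus an edge-loss term."""
    r = wafer_diameter_mm / 2
    wafer_area = math.pi * r ** 2
    return int(wafer_area / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2, defects_per_cm2=0.2):
    """Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

for name, area in [("big die (TU102-ish)", 754.0), ("Navi 10", 251.0)]:
    gross = dies_per_wafer(area)
    good = gross * yield_fraction(area)
    print(f"{name}: {gross} gross, ~{good:.0f} good dies/wafer")
```

Even with a made-up defect density, the shape of the result holds: a ~750 mm² die gets you roughly a tenth as many good dies per wafer as a ~250 mm² die, since the big die both fits fewer times and catches more defects per die.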
That’s not what economy of scale means. That’s just yield.
Sorry, I guess your sarcasm detector is broken. As others have told you already, you don’t understand what economies of scale means. And it has nothing to do with chips per (expensive 7nm) wafer.
It’s a very intriguing combination of excessive arrogance and bewildering ignorance.
A 7nm wafer is more costly than 12nm FinFET, but OKAY.
Yes I like waffles and chicken.
The prices of GPUs these days are fucking depressing.
I'm pretty sure he does what he does for them.
With the shrinking dGPU market each quarter, these companies have to maintain their high margins to satisfy investors so the prices will probably continue to creep up. Now with 3 players crowding the space, someone will inevitably get squeezed out and I'm guessing that will be AMD as they don't have the money to compete with either NVIDIA or Intel. In the future (3+ years from now), I can see the dGPU market looking something like this: NVIDIA (55%), INTEL (40%), AMD (5%). Techspot has a really nice article about Intel's upcoming GPU and how they can scale even the existing Gen 11 architecture up to compete at 2080 Ti specs: https://www.techspot.com/article/1898-intel-xe-graphics-preview/ If Intel uses EMIB, they will already have a leg up on both NVIDIA and AMD:
"In April Intel confirmed to Anandtech that they intended to use EMIB to support their GPUs soon, so that is something to look forward to."
"May 1, 2019: Jim Jeffers, senior principal engineer and director of the rendering and visualization team, announces Xe's ray tracing capabilities at FMX19. In addition, Intel has continued to hire talent away from the competition."
From Anandtech: https://www.anandtech.com/show/14211/intels-interconnected-future-chipslets-emib-foveros
"Xe will range from integrated graphics all the way up to enterprise compute acceleration, covering through the consumer graphics and gaming markets as well.
Intel stated at the time that the Xe range will be built on two different architectures, one of which is called Arctic Sound, and the other has not yet been made public. The goal is to create a platform for Xe relating the hardware, the software, the drivers, the platform, and the APIs all into a single mission, which Intel calls 'The Odyssey'. Introducing EMIB and Foveros technologies as part of the Xe strategy seems to be very much part of Intel's plan, and it will be interesting to see how it develops."
If anyone should be shitting their pants, it's Dr. Lisa Su and her very late and unimpressive Chinese-designed RDNA. If we want to talk about economy of scale, Intel has their own fabs and we know their in-house 10nm will be used for CPUs, FPGAs, GPUs, etc., so they will be able to pump out higher volume and lower prices than both NVIDIA and AMD while maintaining higher margins than AMD. We also know NVIDIA will have Ampere ready in 2020, which will likely surpass the current 2080 Ti easily by at least 30% if not more at 7nm, so AMD will be in Intel's crosshairs if Intel decides to push out anything remotely like what's speculated above.
If the 5800/5900 do not have ray tracing and can't match/exceed 2080 Ti in performance, AMD will be dead in the water in 2020. Intel will have a dGPU for AIBs and they'll stick Xe in every desktop/notebook they can and where will that leave AMD? Dead. Hell even NVIDIA is in trouble in the notebook market because Intel will be selling a full ecosystem to manufacturers. RDNA isn't even a factor, it's an architecture that should've been released in 2016, at 7nm it isn't impressive one bit.
IIRC the move from 14nm to 7nm gave the Radeon VII about 25% increased performance at the same power.
So let's assume a straight die shrink of Turing would give it a 20-30% performance increase (taking into account that Turing is already very efficient on 12nm).
That alone makes me pretty excited about Ampere, since it's designed for 7nm. I think Nvidia could at least repeat (and more likely substantially exceed) the 30%+ increase that the 2080 Ti has over the 1080 Ti.
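To be clear about what's being projected here: it's just compounding percentages. A quick sketch, where every figure is this thread's speculation rather than a benchmark:

```python
# Compounding the speculative gains quoted in this thread.
# Every percentage here is a guess from the discussion, not a measurement.

baseline = 1.00        # 2080 Ti = 100% reference performance
shrink_gain = 0.25     # assumed 7nm shrink benefit (the Radeon VII anecdote)
arch_gain = 0.15       # assumed architectural uplift for Ampere (pure guess)

ampere_estimate = baseline * (1 + shrink_gain) * (1 + arch_gain)
print(f"Speculative Ampere uplift over 2080 Ti: {ampere_estimate - 1:.0%}")
# prints ~44% with these assumed inputs
```

The point is only that shrink and architecture gains multiply, so even modest assumptions compound into the 40%+ numbers being tossed around; change either guess and the total moves accordingly.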
I don't even think Ampere will be a die shrink of Turing but something new and more efficient. So if you take into account a new, more efficient architecture + 7nm, I can see it hitting 40%+ over the 2080 Ti at the top end. AMD is completely fucked in 2020 and I'm going to laugh when the viral cheerleader on this forum quietly disappears around that time. What I find even more hilarious reading over this thread is that a certain someone seems to think TSMC has some special relationship with AMD for 7nm. If anything, AMD HAD to go to 7nm just to compete with Intel and NVIDIA, who are using older and cheaper processes, and now that TSMC has refined 7nm, NVIDIA can take advantage of it for their GPUs. Intel will be coming with an in-house 10nm which should be as good as or better than TSMC's 7nm, so that leaves AMD as a huge target.
I would argue the majority of the performance gain from VII came from increased memory bandwidth, not the 7nm shrink. Vega also clocked well and could hit 1700+ MHz, but performance did not scale with core clock; it scaled with bandwidth. VII also has four fewer CUs than Vega 64, further evidence that bandwidth was the primary contributor to VII's performance increase.
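The bandwidth point checks out against the spec sheets. The numbers below are the published Vega 64 and Radeon VII figures:

```python
# Sanity-checking the bandwidth argument with published specs:
# Vega 64: ~484 GB/s HBM2, 64 CUs; Radeon VII: ~1024 GB/s HBM2, 60 CUs.

vega64_bw_gbps, vii_bw_gbps = 484.0, 1024.0
vega64_cus, vii_cus = 64, 60

bw_increase = vii_bw_gbps / vega64_bw_gbps - 1   # roughly +112%
cu_change = vii_cus / vega64_cus - 1             # -6.25%

print(f"Radeon VII vs Vega 64: bandwidth {bw_increase:+.0%}, CUs {cu_change:+.0%}")
```

More than double the bandwidth with fewer CUs, yet only ~25% more performance: consistent with the argument that VII's uplift tracks bandwidth rather than the shrink alone.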
You're guessing Ampere will be a huge step over Turing.
TSMC will likely have 7nm EUV by the time Intel actually has functional 10nm.
I will also argue that TSMC producing AMD's chips is a huge reason why they are currently competitive in the manner they are; GlobalFoundries was trash.
At the risk of sounding like gamerx (shoot me please), Navi likely has a lot of room left on the table with an increase in memory bandwidth/CUs. I think it would be silly to assume that Navi can't hit 2080 Ti performance. A further refined Navi will compete with Ampere/Intel stuff fine, IMHO.
I don't think we should count any chickens before they hatch, not from Intel/NVidia, nor AMD.
Ampere could mainly be a process bump, just like Pascal was. Intel is a complete unknown until they deliver something. Big Navi has unknown parameters and an unknown date, and will need RT hardware, so it will pay some form of RT tax.
Nvidia would LOVE for Ampere to get the performance gains Pascal got over Maxwell.
I don't think the 5800/5900 will have RT, and that will leave AMD at a large disadvantage. Furthermore, if they only hit 2080 Ti performance in 2020 with Ampere out, they will be behind a generation again and will need to compete with low prices and margins. I think Ampere will be a sizeable jump over Turing given it will be at 7nm + a likely refined/new architecture. We got a 30-40% gain over the 1080 Ti with the 2080 Ti, so I don't think this is a stretch at all. With regards to Intel, they are coming out with some really impressive technologies in 2020, so I see them being competitive right out of the gate. They're still keeping a lot of information secret, and what they've let out so far with Foveros and EMIB seems very promising, not to mention Xe will have RT, so AMD is absolutely required to have it in 2020.
Yeah, as I said above, I think Ampere will be a pretty large jump over Turing. With Intel's first-gen Xe, even if they don't match NVIDIA's top end, they will certainly compete with AMD in the midrange notebook/desktop market, and given their marketing $$ + in-house fabs, they will be super competitive, which is why I say 2020+ is going to be a dark time for AMD.
If I seem like I'm cheerleading Intel, it's because I'm genuinely excited about what they will be doing in the market late 2020 and on. They will be shaking up the CPU/GPU market IMO and if they deliver with Foveros/EMIB tech for consumer GPUs in the future (whether it's first gen Xe or later), it could rattle both AMD and NVIDIA if they don't have their own MCMs ready. We know Intel has been siphoning engineering talent from both AMD and NVIDIA so I can't wait to see what they bring us in these next few years--I predict the GPU market will look completely different than it does today. I'll certainly be buying Intel stock pretty soon with the bet they will see a $10+ jump in 2020-2021.
No one gives a shit about die size. As long as a GPU comes to us at a reasonable price and we can deal with the heat and power consumption, no one cares. What we do care about is price and performance. AMD is way behind on performance and only moves units based on cost in certain price points. I don't see that changing any time soon but go ahead and keep living in your fantasy world.
Even I've grown tired of this thread and I'll normally go round and round with people like you for hours on end for my own entertainment.
Yeah Nvidia may have something up their sleeves but we have to wait and see. RDNA addressed a lot of GCNs shortcomings and now perf/flop is in the same ballpark as Turing.
If AMD manages to get power consumption under control it could be a very competitive generation. Of course there’s still the wildcard of what AMD will do with DXR. Fun times ahead.
You are not pushing back, you are at the forefront. No big deal; your claims will change nothing, since AMD is now doing better than Nvidia.
Except die hard fans, of course
Sorry, but are you drunk?
I think he's talking about the Jon Peddie release for this quarter that showed AMD gained 10%, but that includes APUs. Overall the dGPU market shrunk.
Nope. Why, are you?
I wish i was after reading this thread.
LOL! Yeah, AMD has a solid product with a solid future built on that architecture and good pricing, and it makes a difference. The fact is, I have an RX 5700 and it is working great as a 1440p card on my Freesync 1440p 144Hz monitor. I also saved at least $250 over what it would have cost for a 2070S, and I am better off for it.
No one said the 5700 isn't a good product, it's just not the second coming of Jesus.
Did you... read the thread?
No need to read the thread, and I did not say it was the second coming either. However, the OP insinuates that the 5700 is not a good product, but hey... $250 saved in my wallet and a card that is not far off from a 2070S either.
It seems a tad disingenuous to compare the 5700 (not XT) pricing to the 2070S, when even the 2060S outperforms it, and is obviously the more appropriate comparison.
What other questionable details are you hiding to manufacture that $250 price difference?
Yup, it is how they've tried to justify the price increases on Turing. Die size, perf/watt, or price/perf. Round and round it goes.
It is not disingenuous at all, and there is no manufactured price difference. I paid $250 or so less than a 2070S would usually cost, and it is not far off from the 2070S, which is the subject of this thread. Just because you do not like my assessment does not mean I am wrong with said assessment.
The 5700 is 25% off from a 2070S. That’s like me comparing a 5700 to a 1660, saying the performance isn’t far off and you can save $100-150.