NVIDIA reportedly wants to cut TSMC 5nm wafer orders for next-gen RTX 40 GPUs amid lower demand

Even in the video you posted... the guy said as much: the M1 draws 1/4 the power from the wall while scrubbing around Adobe Premiere, and the same is true for stuff like ZBrush etc. Does it heat up like crazy if you try to render... of course, it's not breaking the laws of physics, but that just isn't something anyone making money on that stuff would be doing, so who cares.
One thing I'd like to point out here is that Apple bought up all of TSMC's 5nm capacity back in 2020. Who in the x86 world uses 5nm? The laptop from the video uses an Intel i9 12900H, which is 10nm. The RTX 3080 Ti Mobile uses Samsung 8nm. You can't tell me that a lot of the power savings you see from Apple isn't down to their 5nm. Not only does 5nm give Apple a power efficiency advantage, it also lets them pack in more transistors. Hopefully all of this shakes out after the pandemic silicon shortage. AMD's 6000 series mobile chips are now on 6nm, while Intel has bought two years of TSMC's 3nm. Considering Nvidia is going to make the next-generation Switch SoC, I can see them wanting to go 3nm as well. And since Nvidia has worked with Samsung in the past, I can see them going with Samsung for 3nm.

This is why I compare AMD laptop parts to the Apple M series rather than the Intel+Nvidia combo. AMD's laptop APUs consume a lot less power than Intel's while offering good GPU performance. Add a discrete AMD GPU and you save a lot more power compared to Nvidia. As it stands, AMD is on 6nm for mobile and 7nm for their GPUs. Once Apple goes 3nm next year with the M3, their efficiency advantage will be even bigger. By then I would think Intel will start selling their 3nm chips, because they did buy that capacity for the next two years. By the end of this year AMD will be on 5nm, and AMD Zen 5 will be 3nm... by 2024.
 
No 3D model artist gives two shakes how fast any laptop PC or Mac renders anything.
I was not talking about the rendering part, I was talking about the modelling bench (again, who would use a laptop to do Maya/Blender work? I really don't get the appeal of the small Apple Studio, or the MacBook Pro in general, for almost all of these workloads for almost everyone either). From my understanding of the bench, the 3080 Ti mobile + 12900H was much better at it, and Blender has a native port, so what is the claim that the M1 Pro is significantly better at Maya based on?

In Blender, during modeling (not rendering), performance is much better on the PC laptop, so why would it be by far the reverse in Maya, which isn't even native? If I walk into Pixar or Lucas Art SFX, will I see people working on laptops? Or anyone who cares much about power draw? I cannot imagine someone seriously interested in ZBrush doing that work on a laptop, ever (why would they?)

The only people that would be rendering on any laptop... are students and YouTubers with fewer than 10k subs creating cheesy 20-second 3D title cards. lol
Outside of the rare documentary maker on the road or a scientist on location, doesn't that sound about as true of anyone doing any of this work on any laptop?

If we're talking 3D modeling on a laptop, then yes, we're probably talking about that: students/enthusiasts or quick work on location, not people making dinosaurs for the latest Jurassic Park movie. Same for editing: probably YouTubers, not people cutting the latest Jurassic Park movie.
 
One thing I'd like to point out here is that Apple bought up all of TSMC's 5nm capacity back in 2020. Who in the x86 world uses 5nm? The laptop from the video uses an Intel i9 12900H, which is 10nm. The RTX 3080 Ti Mobile uses Samsung 8nm. You can't tell me that a lot of the power savings you see from Apple isn't down to their 5nm. Not only does 5nm give Apple a power efficiency advantage, it also lets them pack in more transistors. Hopefully all of this shakes out after the pandemic silicon shortage. AMD's 6000 series mobile chips are now on 6nm, while Intel has bought two years of TSMC's 3nm. Considering Nvidia is going to make the next-generation Switch SoC, I can see them wanting to go 3nm as well. And since Nvidia has worked with Samsung in the past, I can see them going with Samsung for 3nm.

This is why I compare AMD laptop parts to the Apple M series rather than the Intel+Nvidia combo. AMD's laptop APUs consume a lot less power than Intel's while offering good GPU performance. Add a discrete AMD GPU and you save a lot more power compared to Nvidia. As it stands, AMD is on 6nm for mobile and 7nm for their GPUs. Once Apple goes 3nm next year with the M3, their efficiency advantage will be even bigger. By then I would think Intel will start selling their 3nm chips, because they did buy that capacity for the next two years. By the end of this year AMD will be on 5nm, and AMD Zen 5 will be 3nm... by 2024.

I am not disagreeing with you... process matters, of course. But also, why do customers really care... I'm not putting my CPUs and GPUs under a microscope for YouTube clicks. All I care about is performance and efficiency. Apple has a product no one else has. IMO it's probably a 33/33/33 situation where, sure, process is 1/3 of the solution. The ISA (ARM) is another 1/3 (because no matter what anyone says, full-fat ARM is still lighter and more efficient than full-fat x86), and 1/3 is the other aspect of Apple's solution: built-in accelerators, decode blocks, etc. (I don't care how the sausage is made... and professionals doing 8K video scrubbing on site out of a RED camera etc. don't care either.)
 
I think, it is probably a case of Apple's compiler being optimized for their hardware architecture. Intel/AMD/Microsoft can't do this level of vertical integration.
Very good point... I'd probably work that into my 33/33/33 comment somewhere if I had a do-over. 33/33/33/33 lmao :)
 
I was not talking about the rendering part, I was talking about the modelling bench (again, who would use a laptop to do Maya/Blender work? I really don't get the appeal of the small Apple Studio, or the MacBook Pro in general, for almost all of these workloads for almost everyone either). From my understanding of the bench, the 3080 Ti mobile + 12900H was much better at it, and Blender has a native port, so what is the claim that the M1 Pro is significantly better at Maya based on?

In Blender, during modeling (not rendering), performance is much better on the PC laptop, so why would it be by far the reverse in Maya, which isn't even native? If I walk into Pixar or Lucas Art SFX, will I see people working on laptops? Or anyone who cares much about power draw? I cannot imagine someone seriously interested in ZBrush doing that work on a laptop, ever (why would they?)

Outside of the rare documentary maker on the road or a scientist on location, doesn't that sound about as true of anyone doing any of this work on any laptop?

If we're talking 3D modeling on a laptop, then yes, we're probably talking about that: students/enthusiasts or quick work on location, not people making dinosaurs for the latest Jurassic Park movie. Same for editing: probably YouTubers, not people cutting the latest Jurassic Park movie.
Good points... and you're right, the bulk of any real 3D work isn't ever going to be done on a laptop. Also, no professional really cares about Blender for that matter. I mean, I enjoy messing with Blender, but very few companies are using Blender in any real workflows. It gets better and better, so perhaps some day that will be true of a few places... and there may even be a few shops in India today using Blender. Anyway.

The 3D artists I have met that have M1-based Macs... it's not about doing all their work on them. It's about being able to take models that are already in progress home and do SOME work on them. Where I am in Canada we aren't that far removed from people being sent home. This is why many, many shows and movies have been so delayed. The effects people can't really do much at home... some of the big houses have actually sent some crazy workstation-type hardware home, but the files still need to come back physically at some point for render work or to be passed on to the next stage. That is how at least one of the people I spoke to was using his Apple... he was taking roughed models and doing detail work at home. No, not final renders or crazy renders.

So no, if you go to a big production house you won't see people in there working on laptops. BUT you will have people that have been taking small bits home. At least for now... to be fair, most people doing that are using Windows laptops. There are people using M1s though... and it really, really depends on what the workload is. 100% the audio folks are using M1 Macs (but pro audio has been heavy on Apple for years now). There are a lot of jobs that just can't be done away from the office though... color work requires specialized monitors and setups etc.
 
I hope gamers... say "good enough" with what they've got for a good solid year or even two.

Perhaps the best way to punish the greedy bastards is to hit them where it hurts. Let the miners sit on their shitty, beat-up, firmware-flashed junk... and refuse to pay Nvidia one dime more than an AMD/Intel option.

All the virtue signaling... the "we don't like miners", "we are going to hobble our cards", "we designed these cards for gamers" stuff was exactly that: a big fat VS. We all know they were happy to make huge bank and sell massive lots to crypto farms.

I say screw 'em for a while. I mean, I'm gaming on a 5700 XT... I don't have ray tracing, and I can't push 4K over 100fps. But you know what... it pushes my wide (2K?) monitor to a very respectable and steady 75 FPS in most stuff... and the very few games that drop into the 60s are quite FreeSync stable. I haven't seen anything on the horizon from game developers that will really punish my setup. So I'm going to skip this next gen as well. RT doesn't impress me enough to move... and even if the next-gen cards can push all that for real this time, I'm not sure I care enough about the 5 or 6 games that actually implement it well anyway. (Cyberpunk being a huge bust is bad for NV and AMD.) IMO, if you're running a decent NV 2000/3000 or AMD 5700/6700+, saying no thank you isn't really all that hard.

Agree x2..

If we all got on board with saying "No".. they would get the message... (maybe lol)
 
I think, it is probably a case of Apple's compiler being optimized for their hardware architecture. Intel/AMD/Microsoft can't do this level of vertical integration.
That's certainly part of it. Apple maintains very tight vertical integration and an incredibly narrow product stack. They also collect (yet never share) an absolute metric shit ton of usage data. They know what their users are using their products for, how long they're doing it for, and whether they are on battery or not. That further lets them tailor their products to match the bulk of their users' workflow.

Also, no professional really cares about Blender for that matter. I mean, I enjoy messing with Blender, but very few companies are using Blender in any real workflows.
It has started to be used from time to time for some elements:
https://www.blender.org/news/hardcore-henry-using-blender-for-vfx/
https://www.blendernation.com/2014/...reviz-for-captain-america-the-winter-soldier/

We do in my branch of work, because sometimes we have very simple needs, and some Blender tools have become very powerful over time while staying free, easy to use, and easy to script/automate via their API, while still allowing human intervention through the GUI when needed.
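To give a concrete feel for that kind of scripting, here is a minimal sketch using Blender's Python API (bpy); the file paths, the script name in the comment, and the decimate step are made-up placeholders for illustration, not an actual pipeline:

```python
# Minimal sketch of automating Blender through its Python API (bpy).
# Run headless with:  blender --background --python batch_decimate.py
# The paths below are placeholders, not a real pipeline.
import bpy

INPUT_BLEND = "/tmp/scene_in.blend"    # hypothetical source file
OUTPUT_BLEND = "/tmp/scene_out.blend"  # hypothetical destination

bpy.ops.wm.open_mainfile(filepath=INPUT_BLEND)

# Add a decimate modifier to every mesh object in the scene.
for obj in bpy.data.objects:
    if obj.type == 'MESH':
        mod = obj.modifiers.new(name="AutoDecimate", type='DECIMATE')
        mod.ratio = 0.5  # keep roughly half the geometry

bpy.ops.wm.save_as_mainfile(filepath=OUTPUT_BLEND)
```

The same pattern (open a file, loop over bpy.data, save) is what makes it easy to batch simple jobs while still letting someone open the result in the GUI and intervene by hand.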

I think, it is probably a case of Apple's compiler being optimized for their hardware architecture. Intel/AMD/Microsoft can't do this level of vertical integration.

Not so sure why Microsoft's C++ compiler could not be optimized for x64; the STL stuff compiled for x86/64 seems extremely optimized to me, but I do not know enough.

https://www.phoronix.com/scan.php?page=article&item=gcc-12-alderlake&num=1
On top of new C/C++ language features and various optimization improvements, there is updated tuning for Intel's new Alder Lake processors. Here are some early GCC 11.2 vs. GCC 12 development benchmarks looking at the performance on a Core i5 12600K.

Intel® C++ Compiler Classic Developer Guide and Reference​


https://www.intel.com/content/www/u...er-options/code-generation-options/march.html

https://devblogs.microsoft.com/cppblog/avx-512-auto-vectorization-in-msvc/
In Visual Studio 2019 version 16.3 we added AVX-512 support to the auto-vectorizer of the MSVC compiler. This post will show some examples and help you enable it in your projects.

It is done by more different people, but it is still done, I think. There is a lot of money to be made by a lot of people from code performing better without having to change much, if anything, just by using better compiler optimisation options. And on the x86 side, compilers have gotten extremely good over time; they are really, really hard to beat by hand a lot of the time.

I could imagine the gains coming more from not having to carry hardware support for 1970s-era programs, and from more specific hardware for some high-profile, heavily used modern tasks, than from better compiler optimization; C++ compilers on x86 platforms have been optimized to death for decades, I would suspect.
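As a rough analogy for the "hard to beat an optimized code path by hand" point (this is Python/NumPy standing in for what an auto-vectorizing C++ compiler does to a plain loop; it is not the MSVC/GCC tooling discussed above):

```python
# Rough analogy only: a hand-written Python loop vs. an optimized,
# vectorized code path (NumPy). The takeaway mirrors the compiler point
# above: mature optimized code paths are very hard to beat by hand.
import timeit
import numpy as np

data = np.random.rand(1_000_000)

def hand_rolled(values):
    total = 0.0
    for v in values:
        total += v
    return total

loop_time = timeit.timeit(lambda: hand_rolled(data), number=10)
vector_time = timeit.timeit(lambda: data.sum(), number=10)

print(f"hand-rolled loop: {loop_time:.3f}s  vectorized sum: {vector_time:.3f}s")
```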
 
I am not disagreeing with you... process matters, of course. But also, why do customers really care... I'm not putting my CPUs and GPUs under a microscope for YouTube clicks. All I care about is performance and efficiency.
This seems like something someone would say to sidestep a conversation. Most customers have no idea how anything in their computer works, but we're not talking about that. Fact is, Apple bought up TSMC's 5nm back in 2019 and locked themselves in during a period when we had a chip shortage. Over the next couple of years we're going to see some serious changes from Intel, who actually outbought Apple for TSMC's 3nm. Apple still has TSMC 3nm as well for the next couple of years, but Apple won't be alone. What's sad is that AMD will be stuck at 5nm unless they make a deal with Samsung.
Apple has a product no one else has. IMO it's probably a 33/33/33 situation where, sure, process is 1/3 of the solution.
That's not how CPUs evolve. Look at the M2: it's still 5nm, though supposedly an improved version, and it eats more power than the M1. Most of the gains are from clock speed and cache. Whereas AMD has had a nearly 10% increase in IPC for every new Zen architecture they've made since they've been on 7nm.
The ISA (ARM) is another 1/3 (because no matter what anyone says, full-fat ARM is still lighter and more efficient than full-fat x86)
I have, and I've shown that while on battery the power consumption and performance are nearly the same on an AMD 6900HS. Keep in mind that x86 also keeps changing, so you can't say this is an x86 vs ARM issue. In most simple tasks the Apple M series is still better on battery, but then again that's when the computer mostly idles. This is why Intel went to a big/little design like Apple did, and it shows. They're still on 10nm, but design can only get you so far.
and 1/3 is the other aspect of Apple's solution: built-in accelerators, decode blocks, etc. (I don't care how the sausage is made... and professionals doing 8K video scrubbing on site out of a RED camera etc. don't care either)
Technically the GPU is a 3D accelerator, so yeah, but Apple obviously made an accelerator just for video editing. It's still faster to render video on a PC.
I think, it is probably a case of Apple's compiler being optimized for their hardware architecture. Intel/AMD/Microsoft can't do this level of vertical integration.
Boasters and boys with weak self-images like to have potential, because potential is a value that doesn't have to be proven. Oh, you have the potential to be a gaming machine. You know you have the potential to be a powerful computer. You have the potential to get great benchmarks. Potential is great because you can play Minecraft all day without ray tracing and never have to cash it in. You won't cash it in. You won't cash in your potential, and I mean, what would happen if you posted up next to a stock RTX 2060, got native Metal and ARM support, and cashed all that potential energy in for numbers, only to find that in reality you're losing to a dad's computer?
https://youtube.com/clip/UgkxF47b7jzffunOm1UZG3VT-uVDkeMVSmM_
 
I'm not sure process really matters like it once did, Duke. Yes, there are gains obviously... but it's not like the old days where every die shrink instantly = massive performance bumps. They are there, but it's not the main deciding factor anymore. All you need to do is look at current Intel... they have stretched a couple of processes out for years. I'm not convinced a magic die shrink is going to result in massive gains anymore. Gains, sure, but the die shrink alone isn't going to be all Intel (or anyone else) needs to do anymore.

You can list cut-down x86 chips with good efficiency all day long. The facts are the facts: ARM's ISA is more efficient. I'm not saying Apple can't include more involved prefetch or cache units etc. and eat into that... and sure, AMD and Intel can strip things out of cores (Intel has little cores after all) and sip less power. But there is no denying that at its core ARM is a slimmer, less power-hungry ISA. It's just the facts... individual products are just that.

I wasn't suggesting Apple isn't making a 3D GPU... I'm saying they have zero need to compete with AMD or Nvidia on supercomputer stuff or video games. They need their GPU to perform basic 3D (basic by today's standards) and do some lifting with GPU compute. Their M chips do both of those things very well. Apple isn't at this point going to pay anyone for GPU tech to find 10-20% more performance (and that would be a stretch if an AMD/NV solution had to live in the same power envelope). Let's all get a little real... Apple's M1 Max 3D performance isn't actually that far behind a 3080 in renders either... the last benches I saw that seemed reputable had the M1 Max rendering a complex scene in 1:30 while the 3080 was just under 1 min. Of course that is faster... but for what people would be rendering on a laptop, not a big deal either. (Frankly it puts them ahead of AMD... so your suggestion that they will go back to paying AMD is off; they have already sort of passed AMD in terms of GPU rendering anyway.)

We agree, I believe... Apples are more specialized devices. They are still general-use computers... which invites the comparisons, sure. Still, no one expects an Apple to play AAA games... or do it well, anyway. They are also not GPU compute monsters for the hobby 3D folk (because, again, the M chips do what 3D professionals expect in a portable... more reason they don't need AMD anymore).

It's easy for us PC folks to rag on the Apple GPU because it can't do 200 FPS in a shooter... but frankly that is more to do with Apple's Metal-only approach, and them not caring about that market at all.
 
I'm not sure process really matters like it once did, Duke. Yes, there are gains obviously... but it's not like the old days where every die shrink instantly = massive performance bumps. They are there, but it's not the main deciding factor anymore. All you need to do is look at current Intel... they have stretched a couple of processes out for years. I'm not convinced a magic die shrink is going to result in massive gains anymore. Gains, sure, but the die shrink alone isn't going to be all Intel (or anyone else) needs to do anymore.
Intel and Apple bought out all the 3nm from TSMC for the next two years. In my opinion this may be the reason Nvidia cut their TSMC orders: Nvidia may want to be on 3nm as well. To give you an idea of how much this impacts things, TSMC's 6nm helped AMD increase transistor count from 10.7 billion for the previous-gen Ryzen 5000 series Mobile to 13.1 billion for the Ryzen 6000 series Mobile. Your plain M1 has 16 billion. The M1 Pro has 33.7 billion. The M1 Max has 57 billion transistors. To give you an idea how insane that is, the Radeon 6900 XT has 26.8 billion. If that's not a big difference I don't know what is. The sad thing is that a Ryzen 6900HS with a 6900 XT is just far faster than an M1 Pro while still using fewer transistors. The 6900HS alone is competitive against the M1 Pro, and it's using half the transistor count.

As for power savings, you can see that here. Compared to AMD's TSMC 7nm, Apple's 5nm gives a 20% savings in power consumption, 15% more performance, and a 45% area reduction. Now think about the gap with Intel on their 10nm. It's a big deal.
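To put those quoted figures in perspective, here is a back-of-the-envelope sketch using only the 20%/15%/45% numbers above; note that foundries state the power and performance figures as alternatives (iso-performance vs. iso-power), not as gains you get at the same time:

```python
# Back-of-the-envelope view of the quoted 7nm -> 5nm figures.
# "-20% power" is at the same performance, "+15% performance" is at the
# same power (alternatives, not cumulative), "-45% area" is the same design.
# Illustrative arithmetic only, not measured silicon.
power_scale = 0.80   # iso-performance: 80% of the power
perf_scale = 1.15    # iso-power: 115% of the performance
area_scale = 0.55    # same design: 55% of the die area

iso_perf_efficiency_gain = 1.0 / power_scale   # ~1.25x perf-per-watt
iso_power_perf_gain = perf_scale               # ~1.15x throughput
density_gain = 1.0 / area_scale                # ~1.82x transistors per mm^2

print(f"perf/W at the same speed: ~{iso_perf_efficiency_gain:.2f}x")
print(f"speed at the same power:  ~{iso_power_perf_gain:.2f}x")
print(f"logic density:            ~{density_gain:.2f}x")
```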
You can list cut-down x86 chips with good efficiency all day long. The facts are the facts: ARM's ISA is more efficient. I'm not saying Apple can't include more involved prefetch or cache units etc. and eat into that... and sure, AMD and Intel can strip things out of cores (Intel has little cores after all) and sip less power. But there is no denying that at its core ARM is a slimmer, less power-hungry ISA. It's just the facts... individual products are just that.
You know what else had a better ISA than x86? PowerPC, and look where that ended up. If only IBM had done more to engineer better chips and use better manufacturing. That's one of the reasons Apple went with Intel: at the time, Intel had by far the best manufacturing. You talk about prefetch, but everyone had the Spectre bug, including ARM chips. Look up a guy named Jim Keller, who has worked for AMD, Apple, and Intel. I'm not saying he's responsible for Spectre, but you'll find that a handful of engineers tend to move around between these companies and bring similar tech with them, along with bugs.
I wasn't suggesting Apple isn't making a 3D GPU... I'm saying they have zero need to compete with AMD or Nvidia on supercomputer stuff or video games. They need their GPU to perform basic 3D (basic by today's standards) and do some lifting with GPU compute. Their M chips do both of those things very well.
GPU tech does more than just play games, like rendering videos, where the M1 series does worse. GPU performance matters so much that Intel is finally making a good GPU. Well, will be making a good GPU once they release theirs.
 
What's sad is that AMD will be stuck at 5nm unless they make a deal with Samsung.
They can't; Samsung's sub-8nm nodes aren't doing well. They do well enough for small, low-power chips, but with anything large or hot they fall flat on their face. Samsung has opened up its 3nm process, but I was reading it currently has a hard 15W limit; past that, chips fail.

https://wccftech.com/samsung-3nm-gaa-worse-yields-than-4nm/amp/

That's just one of many (top Google search), but Samsung's 5, 4, and 3nm processes are reporting 40% yield rates on the optimistic side and as low as 15% on the pessimistic side.
At this stage Samsung is not viable for mass consumer chips below the 8nm process, which Nvidia showed wasn't great either. Silicon is too expensive to be throwing away more than half the product.
 
Intel and Apple bought out all the 3nm from TSMC for the next two years. In my opinion this may be the reason Nvidia cut their TSMC orders: Nvidia may want to be on 3nm as well. To give you an idea of how much this impacts things, TSMC's 6nm helped AMD increase transistor count from 10.7 billion for the previous-gen Ryzen 5000 series Mobile to 13.1 billion for the Ryzen 6000 series Mobile. Your plain M1 has 16 billion. The M1 Pro has 33.7 billion. The M1 Max has 57 billion transistors. To give you an idea how insane that is, the Radeon 6900 XT has 26.8 billion. If that's not a big difference I don't know what is. The sad thing is that a Ryzen 6900HS with a 6900 XT is just far faster than an M1 Pro while still using fewer transistors. The 6900HS alone is competitive against the M1 Pro, and it's using half the transistor count.

As for power savings, you can see that here. Compared to AMD's TSMC 7nm, Apple's 5nm gives a 20% savings in power consumption, 15% more performance, and a 45% area reduction. Now think about the gap with Intel on their 10nm. It's a big deal.

You know what else had a better ISA than x86? PowerPC, and look where that ended up. If only IBM had done more to engineer better chips and use better manufacturing. That's one of the reasons Apple went with Intel: at the time, Intel had by far the best manufacturing. You talk about prefetch, but everyone had the Spectre bug, including ARM chips. Look up a guy named Jim Keller, who has worked for AMD, Apple, and Intel. I'm not saying he's responsible for Spectre, but you'll find that a handful of engineers tend to move around between these companies and bring similar tech with them, along with bugs.

GPU tech does more than just play games, like rendering videos, where the M1 series does worse. GPU performance matters so much that Intel is finally making a good GPU. Well, will be making a good GPU once they release theirs.
The M1s are all single units... of course AMD can beat them with a discrete GPU. Again though who cares... Apple isn't going after the gaming GPU market.

Transistor count isn't completely the result of die shrinks though. All the die shrink does is allow them to get more chips out of a wafer. You can build a 50 billion transistor chip on a 5nm wafer as easily as you can on a 3nm wafer. The chip will be larger... but that isn't always a terrible thing. Heat management and power management have to be worked into the design. Part of what chip designers have had to do for a long time now is make sure specific calculating bits and storage bits have separation so interference doesn't cause errors. Designs are getting so tight that novel ideas of where to slot cache have been worked in for almost a decade now.

The gains from each shrink have been diminishing for a while now. A lot of manufacturers skipping 5nm wasn't just because Apple bought up all the capacity... that makes for great marketing for TSMC and Apple. The truth is most companies were not hot on the jump to 5nm. The density increase was really only 15%... with a higher defect rate and a 25% cost premium (making the real cost premium much higher for complicated parts like GPUs).
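A quick sanity check on why that combination was unattractive, using only the 15% density increase and 25% wafer cost premium quoted above and ignoring yield entirely:

```python
# Cost-per-transistor arithmetic using only the numbers quoted above:
# ~15% more density but a ~25% wafer cost premium. Yield is ignored,
# which only makes the picture worse for big dies. Illustrative only.
density_gain = 1.15        # 15% more transistors in the same area
wafer_cost_premium = 1.25  # wafer costs 25% more

cost_per_transistor_ratio = wafer_cost_premium / density_gain
print(f"cost per transistor vs. the old node: ~{cost_per_transistor_ratio:.2f}x")
# ~1.09x -> roughly 9% more expensive per transistor before defects,
# which is why the jump wasn't automatically a win for everyone.
```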

There is nothing stopping anyone from making larger chips on the same process... it's what Apple did with the M2. They claim 20% more transistors for the M2 on the same process. The M1 is a 119mm² chip... so far all we have is marketing fluff on the M2, but Apple's own material shows the chip is physically about 20% larger, so it's likely a 130-140mm² (or so) chip. If you want to go from 16 billion transistors to 20 billion, all you need to do is make a somewhat larger chip.

I predict 3nm is going to be a mess that may take years to yield working product. I know Apple and Intel are looking to get parts going... but the shift to 3nm is very complicated. It's not 7 -> 5.
Theoretically FinFET hits its limit at 5nm... to get down to 3nm they need to drop to a single fin and up the power delivery to cancel noise. They plan to end FinFET after 3nm, switching to nanosheets. I don't know if they are having issues with 3nm... but I strongly suspect 3nm might as well just be called 5nm+ (as I highly doubt many designs can really up the signal on every connection without noise becoming a major issue). I strongly suspect very high transistor count chips at 3nm are going to have terribly high defect rates due to the required change to the FinFET setup. They are at the edge of physics with it right now... which is why this is the last process that can use it (and I'm not convinced it's going to really work at 3nm all that well).

IMO the wise move is probably to design on 5nm and wait for TSMC and Samsung to get nanosheets nailed down and 2nm fabs up. 3nm is probably going to be a bust.
 
I predict 3nm is going to be a mess that may take years to yield working product. I know Apple and Intel are looking to get parts going... but the shift to 3nm is very complicated. It's not 7 -> 5.
Theoretically FinFET hits its limit at 5nm... to get down to 3nm they need to drop to a single fin and up the power delivery to cancel noise. They plan to end FinFET after 3nm, switching to nanosheets. I don't know if they are having issues with 3nm... but I strongly suspect 3nm might as well just be called 5nm+ (as I highly doubt many designs can really up the signal on every connection without noise becoming a major issue). I strongly suspect very high transistor count chips at 3nm are going to have terribly high defect rates due to the required change to the FinFET setup. They are at the edge of physics with it right now... which is why this is the last process that can use it (and I'm not convinced it's going to really work at 3nm all that well).
3nm is where you have to switch over to GAA and can't use FinFET, or at least nobody is; GAA has completely different design rules, so it's going to take a while to get those kinks out for sure.
 
They can't; Samsung's sub-8nm nodes aren't doing well. They do well enough for small, low-power chips, but with anything large or hot they fall flat on their face. Samsung has opened up its 3nm process, but I was reading it currently has a hard 15W limit; past that, chips fail.

https://wccftech.com/samsung-3nm-gaa-worse-yields-than-4nm/amp/

That's just one of many (top Google search), but Samsung's 5, 4, and 3nm processes are reporting 40% yield rates on the optimistic side and as low as 15% on the pessimistic side.
At this stage Samsung is not viable for mass consumer chips below the 8nm process, which Nvidia showed wasn't great either. Silicon is too expensive to be throwing away more than half the product.
The next two years are going to be interesting, since that means AMD will likely be behind Intel in manufacturing and thus use more power than Intel. Guess that means AMD is back to being the budget CPU. Right now Intel is the budget CPU.
The M1s are all single units... of course AMD can beat them with a discrete GPU. Again though who cares... Apple isn't going after the gaming GPU market.
Again, the GPU does help with the final render of a video. A two-minute savings is a lot if all you do is video editing.
Transistor count isn't completely the result of die shrinks though. All the die shrink does is allow them to get more chips out of a wafer. You can build a 50 billion transistor chip on a 5nm wafer as easily as you can on a 3nm wafer. The chip will be larger... but that isn't always a terrible thing.
If anyone remembers AdoredTV, he explained the problems with making bigger chips. The problem is that you're more likely to get a defect on a bigger chip than on a smaller one. This is why AMD went chiplet, as it means they can get more functional chips that can clock higher. Apple going 5nm does play a significant role in how many transistors they added, but it's also likely that Apple doesn't have the engineering to go chiplet like AMD did. Even Intel plans to do this. There's a very good chance that Apple is losing a lot of money trying to bin working M chips compared to Intel and AMD. If Apple put cheaper SSDs in their M2 13" MacBook Pros, then maybe the cost of the M2 is eating into Apple's profits.
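To see why bigger dies hurt so much, here is a toy version of that argument using the classic Poisson die-yield model; the die areas and defect density below are made-up example values, not real figures for the M1/M2 or any AMD chiplet:

```python
# Toy illustration of why bigger dies yield worse, using the simple
# Poisson yield model: yield = exp(-die_area * defect_density).
# The areas and defect density are hypothetical example values.
import math

defect_density = 0.1  # defects per cm^2 (hypothetical)

dies = {
    "small chiplet (~0.8 cm^2)": 0.8,
    "mid-size SoC (~1.5 cm^2)": 1.5,
    "large monolithic die (~4.0 cm^2)": 4.0,
}

for name, area_cm2 in dies.items():
    good_fraction = math.exp(-area_cm2 * defect_density)
    print(f"{name}: ~{good_fraction:.0%} of dies defect-free")

# A big die collects more defects per die, so splitting a design into
# several small chiplets throws away far less silicon per wafer.
```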
There is nothing stopping anyone from making larger chips on the same process... it's what Apple did with the M2. They claim 20% more transistors for the M2 on the same process. The M1 is a 119mm² chip... so far all we have is marketing fluff on the M2, but Apple's own material shows the chip is physically about 20% larger, so it's likely a 130-140mm² (or so) chip. If you want to go from 16 billion transistors to 20 billion, all you need to do is make a somewhat larger chip.
It's not entirely the same process, as it's an improved 5nm. You can just add more transistors, but if the Apple M2 were on Intel's 10nm process it would be at least 100% bigger, use far more power, and run at a slower clock speed too.
I predict 3nm is going to be a mess that may take years to yield working product. I know Apple and Intel are looking to get parts going... but the shift to 3nm is very complicated. It's not 7 -> 5.
Theoretically FinFET hits its limit at 5nm... to get down to 3nm they need to drop to a single fin and up the power delivery to cancel noise. They plan to end FinFET after 3nm, switching to nanosheets. I don't know if they are having issues with 3nm... but I strongly suspect 3nm might as well just be called 5nm+ (as I highly doubt many designs can really up the signal on every connection without noise becoming a major issue). I strongly suspect very high transistor count chips at 3nm are going to have terribly high defect rates due to the required change to the FinFET setup. They are at the edge of physics with it right now... which is why this is the last process that can use it (and I'm not convinced it's going to really work at 3nm all that well).
Might be the reason Apple delayed their 3nm chip. I don't know if 3nm is going to be a mess, but one of the reasons I think Intel bought out TSMC's 3nm was probably to piss off Apple. I think Intel is still pissed at Apple and is trying to mimic their M series chips with big/little cores and even a good media accelerator like the Apple M chips. Going back to Nvidia, I think they don't want to go back to Samsung at 8nm if Lakados is right about Samsung's situation. It might be that AMD has also bought out all the 5nm they could get. As it stands right now, if consumers are going to buy up all the used GPUs instead of new ones from Nvidia, then Nvidia is just leaving money on the table. I don't doubt that Nvidia wants to avoid lowering prices, because that's just not what big corporations want to do, but I also believe that Nvidia doesn't have to worry about AMD lowering prices, because I think AMD and Nvidia are working together. Intel, though, could be using that TSMC 3nm they bought to just make a lot of cheap Arc GPUs.
 
Might be the reason Apple delayed their 3nm chip. I don't know if 3nm is going to be a mess, but one of the reasons I think Intel bought out TSMC's 3nm was probably to piss off Apple.
TSMC was forced to delay their 3nm launch and make changes to their design-requirement documents to fix production and yield issues. It shouldn't take Intel and Apple more than a few months to make the needed design changes; the big question is what TSMC needs to do, which they haven't been too vocal about.

I'm looking forward to the Intel mobile lineup produced at TSMC. Those should be absolute Excel beasts.
 
3nm is where you have to switch over to GAA and can't use FinFET, or at least nobody is; GAA has completely different design rules, so it's going to take a while to get those kinks out for sure.
I understood TSMC was still planning to use a version of FinFET at 3nm, but that the connections were getting thin and had to be compensated for with more signal (voltage). However, I could be wrong on that; I haven't been paying a ton of attention the last year, and my info could be speculation from a while back.

Anyway, indeed... FinFET has been the driver for the last number of years, and I predict trying to transition away from it is going to be messy. I would love to be wrong.
 
The next two years are going to be interesting, since that means AMD will likely be behind Intel in manufacturing and thus use more power than Intel. Guess that means AMD is back to being the budget CPU. Right now Intel is the budget CPU.

Again, the GPU does help with the final render of a video. A two-minute savings is a lot if all you do is video editing.

If anyone remembers AdoredTV, he explained the problems with making bigger chips. The problem is that you're more likely to get a defect on a bigger chip than on a smaller one. This is why AMD went chiplet, as it means they can get more functional chips that can clock higher. Apple going 5nm does play a significant role in how many transistors they added, but it's also likely that Apple doesn't have the engineering to go chiplet like AMD did. Even Intel plans to do this. There's a very good chance that Apple is losing a lot of money trying to bin working M chips compared to Intel and AMD. If Apple put cheaper SSDs in their M2 13" MacBook Pros, then maybe the cost of the M2 is eating into Apple's profits.

It's not entirely the same process, as it's an improved 5nm. You can just add more transistors, but if the Apple M2 were on Intel's 10nm process it would be at least 100% bigger, use far more power, and run at a slower clock speed too.

Might be the reason Apple delayed their 3nm chip. I don't know if 3nm is going to be a mess, but one of the reasons I think Intel bought out TSMC's 3nm was probably to piss off Apple. I think Intel is still pissed at Apple and is trying to mimic their M series chips with big/little cores and even a good media accelerator like the Apple M chips. Going back to Nvidia, I think they don't want to go back to Samsung at 8nm if Lakados is right about Samsung's situation. It might be that AMD has also bought out all the 5nm they could get. As it stands right now, if consumers are going to buy up all the used GPUs instead of new ones from Nvidia, then Nvidia is just leaving money on the table. I don't doubt that Nvidia wants to avoid lowering prices, because that's just not what big corporations want to do, but I also believe that Nvidia doesn't have to worry about AMD lowering prices, because I think AMD and Nvidia are working together. Intel, though, could be using that TSMC 3nm they bought to just make a lot of cheap Arc GPUs.

AMD isn't going to be budget, no. They have a very, very different design... they will be leaning into chiplet technology (in their GPUs as well). If the rumors are true about what they have planned, I think it's Intel that should be worried. They are betting yet again on unproven processes. TSMC has been banging away at it for a few years now, sure... but 3nm is not as small a jump as the 5 -> 3 naming would suggest. They are either going to have to push existing tech to its actual limit (FinFET isn't really supposed to work past 5nm at all; it will have serious cross-contamination of signal issues... which perhaps they have a workaround for, but it's a gamble IMO), or they will switch to GAA (nanowire)... which will also be a first and a bit of a gamble at commercial production scales (I suspect defect rates are going to be insane). [Lakados says they are going GAA... I know a year or two ago AnandTech was saying they had found a FinFET workaround, but I don't know... it could explain the delay though if their planned workaround failed and they are bringing GAA forward.]
AMD may be wise to stick with 5nm... turn out a ton of solid chiplets, and die-shrink and beef up their controller chips (which I believe is their plan). Having working parts vs. very low supply out of TSMC 3nm may work out pretty darn well.

Again, on video rendering... where are you getting that Apple is losing on 2D video rendering???? Apple is destroying in that. Sure, I have seen PCs win on a few codecs Apple isn't hardwired for... but those aren't popular production codecs; those are accelerated by the M1/M2 (and again, large projects aren't being rendered on laptop hardware anyway).

On the "bigger chips are more likely to have errors" stuff, it really, really depends on what we are talking about. AMD uses chiplets. YES, that is one of their biggest advantages. AMD is also planning to chiplet their GPUs; AMD is in a great position. I have zero doubt that AMD's next-gen 5nm chiplets will be 1/4 the size of a 3nm Intel/Apple chip. Production has been the biggest advantage of AMD's Ryzen designs. I mean, 1/3 of the 5000 series is 12nm. :)

I agree with you on the Intel gamesmanship theory... I'm sure pissing off Apple was just a happy add-on for the deal. One other crazy theory... hey, if Arc gets blasted by gamers for being < AMD and Nvidia, Intel can blame the new fab process for having to clock lower/cut cache or whatever excuse they want to use. Heck, they can even use it as an excuse to drop supply to almost zero if it's a real embarrassment (while still getting enough silicon to take care of their very profitable supercomputer wins).
 
I understood TSMC was still planning to use a version of FinFET at 3nm, but that the connections were getting thin and had to be compensated for with more signal (voltage). However, I could be wrong on that; I haven't been paying a ton of attention the last year, and my info could be speculation from a while back.

Anyway, indeed... FinFET has been the driver for the last number of years, and I predict trying to transition away from it is going to be messy. I would love to be wrong.
Yeah, it looks like it is, but from what I read they are struggling with fin pinch, which is what Intel was struggling with on their 10nm.
Here's hoping they get it sorted faster.

But on the TSMC site all their nodes mention the transistor type; that is now missing from the TSMC page.

https://www.tsmc.com/english/dedicatedFoundry/technology/logic
 
Might be the reason Apple delayed their 3nm chip. I don't know if 3nm is going to be a mess, but one of the reasons I think Intel bought out TSMC's 3nm was probably to piss off Apple.
Apple's 3nm capacity buy was for their A-series mobile chips. Those may be much smaller than desktop and notebook chips, but Apple sells about 250 million of them every year across their phone, tablet, and watch lineup. Intel has never competed in this area, and the combination of high- and low-power cores had been commercialized by an Android chipmaker a number of years before Intel's 12th gen launched. Even before that, Apple had used the same concept but on separate chips: their MacBook Pros had Intel CPUs but also early versions of Apple's SoCs to run background tasks like checking email while the system was asleep. It's not a new concept.
 
Again, on video rendering... where are you getting that Apple is losing on 2D video rendering???? Apple is destroying in that. Sure, I have seen PCs win on a few codecs Apple isn't hardwired for... but those aren't popular production codecs; those are accelerated by the M1/M2 (and again, large projects aren't being rendered on laptop hardware anyway).
Not sure if the Puget Adobe Premiere Pro 2022 export scores are representative of what you are talking about, but a 3080 Ti mobile seems to be around 60% higher; not sure I would call that losing, considering the delta in power.

Live playback is faster as well, x265 encoding, etc...
 
If the rumors are true about what they have planned, I think it's Intel that should be worried. They are betting yet again on unproven processes.
Intel is still working on their 7nm process. They're still going to use it even though it's inferior to 3nm.
Again, on video rendering... where are you getting that Apple is losing on 2D video rendering???? Apple is destroying in that. Sure, I have seen PCs win on a few codecs Apple isn't hardwired for... but those aren't popular production codecs; those are accelerated by the M1/M2 (and again, large projects aren't being rendered on laptop hardware anyway).
Codecs change all the time, and on anything the Media Engine doesn't cover the M series can render much slower. Realistically, H.264 and H.265 are probably going to be around for a while. HandBrake currently doesn't support the Media Engine on the M series, but it is ARM-native and will use the GPU.
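To see how much a hardware encoder block changes things in practice, here is a rough timing sketch that compares a software x265 encode against Apple's VideoToolbox hardware encoder through ffmpeg; it assumes an ffmpeg build with libx265 and VideoToolbox support and a local input.mov test clip, both of which are placeholders:

```python
# Rough timing sketch: software x265 vs. Apple's VideoToolbox hardware
# HEVC encoder, both driven through ffmpeg. Assumes an ffmpeg build with
# libx265 and videotoolbox enabled and a local "input.mov" (placeholder).
import subprocess
import time

ENCODERS = {
    "software_x265": ["-c:v", "libx265", "-preset", "medium"],
    "videotoolbox_hevc": ["-c:v", "hevc_videotoolbox"],
}

for name, codec_args in ENCODERS.items():
    cmd = ["ffmpeg", "-y", "-i", "input.mov", *codec_args, f"out_{name}.mp4"]
    start = time.time()
    subprocess.run(cmd, check=True, capture_output=True)
    print(f"{name}: {time.time() - start:.1f}s")
```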




I agree with you on the Intel gamesmanship theory... I'm sure pissing off Apple was just a happy add-on for the deal. One other crazy theory... hey, if Arc gets blasted by gamers for being < AMD and Nvidia, Intel can blame the new fab process for having to clock lower/cut cache or whatever excuse they want to use. Heck, they can even use it as an excuse to drop supply to almost zero if it's a real embarrassment (while still getting enough silicon to take care of their very profitable supercomputer wins).
I think Arc will be terrible, but also terribly cheap. Intel needs mindshare to get people away from Nvidia, so that's one way to do it. Linus Tech Tips (yes, them again) did a video on Arc, and it does have a media engine similar to the Apple M series. It's still nowhere near as good as Apple's, but it shows that Intel is kind of mimicking Apple. It would be interesting to see whether Nvidia responds to Apple's media accelerator.

 
Can't have those companies make a profit for their stockholders. I think all those big companies should be charities. /s

Well, "shareholders" would imply they care about Nvidia's future and aren't just hopping from one uptrend to another.

Well, "shareholders" would imply they care about Nvidia's future and aren't just hopping from one uptrend to another.
Yeah, shareholders don't exactly care about the long-term success of a company when they can just log into Robinhood, sell their shares, and move on to the next big venture. They care even less about customer satisfaction with said company.
 
Yeah, shareholders don't exactly care about the long-term success of a company when they can just log into Robinhood, sell their shares, and move on to the next big venture. They care even less about customer satisfaction with said company.
Those are two jokes, right?

For an easy example: for almost anything you buy, in almost every jurisdiction a company has absolutely no obligation to offer a refund/exchange on a working, as-advertised article. No 30 days, no 15 days, nothing. Yet they virtually all offer an excellent return system (Amazon, for example, being extraordinary). Why do they, do you think?

If it is only 20% a joke, it still feels like cheap redirection. Shareholders care about long-term success (to the point that they do not mind, and even like, giant companies running ultra-long R&D projects into quantum computing and so on) because it helps the current stock price, and they care about customer satisfaction only because it helps the current stock price by helping the perception of future success.
 
Those are two jokes, right?

For an easy example: for almost anything you buy, in almost every jurisdiction a company has absolutely no obligation to offer a refund/exchange on a working, as-advertised article. No 30 days, no 15 days, nothing. Yet they virtually all offer an excellent return system (Amazon, for example, being extraordinary). Why do they, do you think?

If it is only 20% a joke, it still feels like cheap redirection. Shareholders care about long-term success (to the point that they do not mind, and even like, giant companies running ultra-long R&D projects into quantum computing and so on) because it helps the current stock price, and they care about customer satisfaction only because it helps the current stock price by helping the perception of future success.

Seven months ago, Nvidia shareholders thought it was worth $346. Now they think it's worth $140. Did their perception of the company's future change that much in seven months?

Most NVDA shareholders don't even know they own it; it's a component in hundreds of ETFs and mutual funds, many of which aren't even tech funds.
 
Those are two jokes, right?

For an easy example: for almost anything you buy, in almost every jurisdiction a company has absolutely no obligation to offer a refund/exchange on a working, as-advertised article. No 30 days, no 15 days, nothing. Yet they virtually all offer an excellent return system (Amazon, for example, being extraordinary). Why do they, do you think?
In America your right to a refund depends on the state. In Europe it's something like 14 days. Also, all products must have a 1-year warranty minimum in America, versus 2 years in Europe. And any business that doesn't have a return policy would likely go out of business. Can you imagine how quickly Wish.com would go bankrupt without a return policy?
If it is only 20% a joke, it still feels like cheap redirection. Shareholders care about long-term success (to the point that they do not mind, and even like, giant companies running ultra-long R&D projects into quantum computing and so on) because it helps the current stock price, and they care about customer satisfaction only because it helps the current stock price by helping the perception of future success.
That's the most idealized view of what investors want, but in reality they care far more about short-term gains. If they didn't, publicly traded companies wouldn't fire employees immediately after their stock drops significantly in value, then shortly afterward open up the very same positions they just cut.
 
Seven months ago, Nvidia shareholders thought it was worth $346. Now they think it's worth $140. Did their perception of the company's future change that much in seven months?

Most NVDA shareholders don't even know they own it; it's a component in hundreds of ETFs and mutual funds, many of which aren't even tech funds.
The people that manage said mutual funds very much do know. And yes, the perception of the company's future (and of tech companies in general, and of crypto) did change for many, I would assume. People that do not know they own Nvidia shares usually will not trigger sales or buys; their opinion does not matter that much.

In America your right to a refund depends on the state. In Europe it's something like 14 days. Also, all products must have a 1-year warranty minimum in America, versus 2 years in Europe. And any business that doesn't have a return policy would likely go out of business. Can you imagine how quickly Wish.com would go bankrupt without a return policy?
You are talking about something completely different. I am talking about a perfectly working product, for which the warranty is irrelevant and there is absolutely no right to a refund. Maybe such a right exists in some state, but I have never heard of one.

Yes, exactly: a company nowadays, in many sectors, could not operate without very high client satisfaction, and clients pretty much demand a refund policy now. Shareholders care a lot about a company not going bankrupt, which makes them care a lot about its clients' satisfaction.

That's the most idealized view of what investors want, but in reality they care far more about short-term gains. If they didn't, publicly traded companies wouldn't fire employees immediately after their stock drops significantly in value, then shortly afterward open up the very same positions they just cut.
The average holding period of a large investor is around 4.5 years. You have the whole private-equity sector, which tends to always be long term, and you have the world's worker retirement funds, which have very long horizons. Yes, there are many short-term, fast-turnover investors as well, but not all of them are.

Look at the biggest shareholders of, say, Nvidia right now, those with 3.38 billion or more invested:
https://finance.yahoo.com/quote/nvda/holders/

Holder | Shares | Date Reported | % Out | Value
Vanguard Group, Inc. (The) | 198,462,079 | Mar 30, 2022 | 7.94% | 31,472,116,851
Blackrock Inc. | 180,881,694 | Mar 30, 2022 | 7.24% | 28,684,219,365
FMR, LLC | 145,063,653 | Mar 30, 2022 | 5.80% | 23,004,194,358
State Street Corporation | 96,099,151 | Mar 30, 2022 | 3.84% | 15,239,403,541
Price (T.Rowe) Associates Inc | 53,129,332 | Mar 30, 2022 | 2.13% | 8,425,249,565
Geode Capital Management, LLC | 43,390,541 | Mar 30, 2022 | 1.74% | 6,880,872,071
Bank of America Corporation | 28,150,687 | Mar 30, 2022 | 1.13% | 4,464,135,996
Northern Trust Corporation | 27,866,371 | Mar 30, 2022 | 1.11% | 4,419,049,164
Bank Of New York Mellon Corporation | 23,540,033 | Mar 30, 2022 | 0.94% | 3,732,978,476
Norges Bank Investment Management | 21,349,893 | Dec 30, 2021 | 0.85% | 3,385,666,071

Look six years ago:
https://www.annualreports.com/HostedData/AnnualReportArchive/n/NASDAQ_NVDA_2017.pdf

The biggest ones were the exact same three:
FMR LLC | 69,928,236 shares | 11.96%
The Vanguard Group, Inc. | 34,983,002 shares | 5.98%
BlackRock, Inc. | 33,570,738 shares | 5.74%


When you look at who asks questions and gets answers on the earnings call year after year, it is often the same institutions. You cannot easily move 20-plus billion into something else (and you do not use Robinhood for it); those people can easily stay invested in Nvidia for a decade, it is not uncommon (look at Berkshire's positions in 2000 and 2022 and you will see a lot of the same names).

And publicly traded companies far from always fire employees immediately after their stock drops significantly in value.

Just think about your own stock positions: do you keep them for a very long time or not? I think my average holding is over a decade, and for many of them I am not sure I will ever sell.
 

You know Vanguard is almost exclusively passive funds. There is no manager buying and selling NVDA because of its fundamentals; it's purely based on market cap, so as its market cap grows, the funds have to buy more of it. This is why, when NVDA starts to sell off, Vanguard funds start dumping it automatically. Purely momentum trading.

By the way I agree with your points, but I think much of the market has stopped looking at fundamentals and is just doing momentum trading or passive buy and HODLing.
 
You know Vanguard is almost exclusively passive funds
They have many passive funds (4 of them are listed among the top mutual fund holders, which will account for most of an institution's holdings in a top US name), but they also have active mutual funds, for example:

https://www.vanguard.ca/en/investor/products/products-group/mutual-funds/VIC400
or
https://www.vanguard.ca/en/investor/products/products-group/mutual-funds/VIC600
or
https://www.vanguard.ca/en/investor/products/products-group/mutual-funds/VIC300
The fund invests mainly in large- and mid-capitalization companies primarily in the United States whose stocks are considered by sub-advisor to be undervalued. Undervalued stocks are generally those that are out of favour with investors and that the sub-advisor believes are trading at prices that are below average in relation to measures, such as earnings and book value

Once they go above a 0.3% fee they tend to have managers buying and selling, with analysts making suggestions; they would not need nearly 20,000 employees just for pure passive funds.
 
With a recession and high inflation for some time to come, the used market will be king for deals over the next year or so. Who needs the 4000 series when used 3000 cards will be half the price or less?
 
Thanks. Means it will be easier for the rest of us to get cards.

This is [H] though, you might be in the wrong forum.

Pretty sure [H] isn't a GPU-only or gaming-only forum. ;) I didn't say I had zero plans to build... just planning to keep my feet out of this next GPU gen. Gaming isn't my sole (nor my main) computing activity. (Who knows, there is always some crazy outside chance Intel manages something decent... can't blame them for crypto profiteering, not that they wouldn't have loved to.) lol
 
Not buying things you don't need doesn't make you non-H :rolleyes:.
The wallet warriors have invaded. ;) lol I joke... I've noticed the idea in more and more game communities as well over the last few years.

Breaking out the credit card every 6 months does not make you any more [H] than the kid running up grandma's card with $1,000 in lockbox P2W items every week.

When I discovered [H] years ago... being [H] meant taking Celerons and other budget chips and upping the voltage, creating the craziest cooling solution you could imagine... breaking out the Dremel to cut blowholes in your cheap-find full-tower case. Making a couple grand worth of parts cry while yours perform like they didn't come out of the value section. Buying the latest, greatest pre-built AIO... or the latest and greatest GPU that costs 20% more because it has a 40-pound cooling solution strapped to it sort of makes you un-[H]. IMO anyway, I don't mean to speak for anyone else. lol
 
According to the WAN Show, Nvidia can't back out. There are going to be a lot of Nvidia 4000-series chips.

Finally back on topic. Really strange period for Nvidia. What is the use case for the 4090 Ti? For gaming it sounds so overkill and pointless, and DLSS makes the need from a gaming standpoint almost moot. A 4090 Ti for professional work can make sense, but that's a very limited market. Hell, even the 4080 appears to be overkill for gaming, but it makes more sense.

Then Nvidia will be competing against AMD, who can make their cards cheaper, and those cards may well perform better at lower power.

I can see why Nvidia wants to cut the order down; it could be a downright bloodbath for them.
 