NVIDIA reportedly wants to cut TSMC 5nm wafer orders for next-gen RTX 40 GPUs amid lower demand


Comixbooks

The consumer electronics market is slowing down rapidly. Interest in new TVs, mobile phones, and PCs has declined as quickly as the pandemic has ‘ended’ and inflation has struck global markets.

Both AMD and NVIDIA are revising their TSMC orders, reports DigiTimes. The website, citing its industry sources, claims that Apple, AMD and NVIDIA all wish to change their orders. AMD reportedly wants to lower its 7/6nm wafer orders, while NVIDIA is now facing an oversaturated GPU market and possibly lower demand for next-gen GPUs.

NVIDIA now has a large stock of GeForce RTX 30 graphics cards for sale, yet the company is not willing to lower prices just yet. This is despite the second-hand market now being inflated by mining cards which are no longer profitable to use and keep. There are simply too many cards now, which paints a grim outlook for RTX 40 demand.


www.videocardz.com
 
With mining being basically dead it makes sense, the secondary market will be getting flooded with used cards, just like when the last mining bubble popped.

Still don't like reading it though after such a shortage and scalpers having a field day prior.
 
Not only is there going to be a glut of used cards, but the price of wafers is up substantially and with current inflation numbers for everything the demand for expensive GPUs (and other electronics/commodities) is going to dry up. Buy a new GPU or put gas in the car to go to work. Not a tough decision to make.
 
Not only is there going to be a glut of used cards, but the price of wafers is up substantially and with current inflation numbers for everything the demand for expensive GPUs (and other electronics/commodities) is going to dry up. Buy a new GPU or put gas in the car to go to work. Not a tough decision to make.
You and your... Logic. (sticks nose in the air) :D.
 
Not sure how "reportedly, everyone in electronic toys wants to cut down their TSMC orders" ended up as "specifically, NVIDIA reportedly wants to." Really great news for cars and the rest of the industry that uses some of those chips.

That seems a bit like how a recession starts: everyone fears they will end up with stock on their hands, reduces production, reduces employment, and then it comes true.
 
With mining being basically dead it makes sense, the secondary market will be getting flooded with used cards, just like when the last mining bubble popped.

Still don't like reading it though after such a shortage and scalpers having a field day prior.
It doesn't make sense to let the secondary market take away money. Nvidia isn't making a lot of new cards because Nvidia doesn't want to devalue their products. Nvidia can get away with this because Nvidia's mind share is still amazing. I'd like to see their attitude once Intel releases their products into the market. I have a feeling that Intel will price competitively compared to AMD and Nvidia. AMD at this point just doesn't care anymore and will release products to make investors happy.
 
The move makes sense, they've all gouged themselves into a deep pile of shit. Forget spiraling power consumption.

Imagine a $10,000 PC that needs two dedicated 110V circuits... Nah.

The Mac Mini and Mac Studio look more like the future of computing than anything that's coming from nVidia right now. That's probably why they wanted to buy ARM.
 
I hope gamers... say good enough with what they got for a good solid year or even two.

Perhaps the best way to punish the greedy bastards is to hit them where it hurts. Let the miners sit on their shitty, beat-up, firmware-flashed junk.... and refuse to pay Nvidia one dime more than an AMD/Intel option.

All the virtue signaling... the "we don't like miners", "we are going to hobble our cards", or "we have designed these cards for gamers" was exactly that: a big fat VS. We all know they were happy to make huge bank and sell massive lots to crypto farms.

I say screw em for awhile. I mean I'm gaming on a 5700XT... I don't have ray tracing, and I can't push 4K over 100 fps. But you know what... it pushes my wide-angle (2K?) monitor to a very nice, respectable and steady 75 FPS in most stuff... and the very few games that drop into the 60s are quite FreeSync stable. I haven't seen anything on the horizon from game developers that will really punish my setup. So I'm going to skip this next gen as well. RT doesn't impress me enough to move... and even if the next-gen cards can push all that for real this time, I'm not sure I care enough about the 5 or 6 games that actually implement it well anyway. (Cyberpunk being a huge bust is bad for NV and AMD.) IMO if you're running a decent NV 2000/3000 series or AMD 5700/6700+, saying no thank you isn't really all that hard.
 
Last time I bought a video card at a reasonable price, it was the GTX 1060, which had just come out. Paid under $200; that was 6 years ago. I bought a GTX 1070 Ti not long after that and paid close to $800. Haven't looked at video cards since. Since then I have bought CPUs with built-in graphics though…. good value.
 
Demand might be lower overall but gamer demand might be higher. People who waited 1 1/2 years to get a GPU seem to largely be waiting for the new generation of cards. Makes a lot of sense. Why spend $400-800 now when you can wait a few months and get a faster card?

I think demand from gamers will be high.
 
Right, but then there's less stuff in my case emitting RGB. Sounds like a bad value.
You're right. On second thought I'm hoping I can get a good deal on an entry-level card... with a heat sink 4x bigger than my 5700XT's, with assignable RGB on the front, back and SIDES. Or perhaps I'll just wrap it with an LED light strip. lol :)
 
Honestly I really hope Broadcom gets more allocation from AMD/Apple/Nvidia cutting their orders (if allowed), would help a great ton in the networking/routing space. Those Jericho 2/2c+ (among others) shortages are annoying as heck (although I suppose only the 7nm lines would really matter)
 
You're right. On second thought I'm hoping I can get a good deal on an entry-level card... with a heat sink 4x bigger than my 5700XT's, with assignable RGB on the front, back and SIDES. Or perhaps I'll just wrap it with an LED light strip. lol :)
I'll be honest, the RGB on my FTW3 is pretty tits. I have it pink all year long for breast cancer awareness support.
 
Can't have those companies make a profit for their stockholders. I think all those big companies should be charities. /s
Those stockholders are the reason we're in this mess to begin with. If you're a publicly traded company like Nvidia then you have two groups of people to please: customers like you and me, and stockholders. The problem is that the stonk makes more money than selling products does, and shareholders want that wealth funneled back to them through stock buybacks. It's not a matter of making money but of making more money. Stockholders want infinite growth, which is not possible to achieve. This is why Nvidia isn't focused on making GPUs for gamers but is now expanding into crypto. Great for Nvidia's stockholders, even though they got upset because the growth was faked. Even cutting TSMC orders is about satisfying stockholders, as they don't want Nvidia competing and lowering prices. Ya know, something that benefits customers.

I personally can't wait for the stock market to crash so those stockholders can get what they paid for. Maybe this time they'll make stock buybacks illegal again.
 
The Mac Mini and Mac Studio look more like the future of computing than anything that's coming from nVidia right now. That's probably why they wanted to buy ARM.
If the Mac Mini and Mac Studio are the future then I don't want to live on this planet anymore. Besides the closed nature of Apple products, you also have to deal with the limited amount of software available for ARM, as well as its closed nature. While APUs would be a way to get great value, upgrading the graphics means replacing the whole CPU. CPUs age a lot better than GPUs do, so you could hold onto an aging CPU while upgrading the GPU. The Ryzen 7 5700G is pretty nice for less than $300 but it comes with Vega 8 graphics, which is like a weak GPU from 2016. For that much money I might as well buy a 6600 XT and keep my Ryzen 2700X.

Also, the reason Nvidia wants to buy ARM is that Intel isn't selling them an x86 license. You can be certain that Nvidia would make x86 chips and sell them with their graphics technology if they could, but they can't. AMD and Intel can push Nvidia out of the market with their APUs, which is why Nvidia has explored ARM and failed at it for the past 13 years. Nvidia wanted to buy ARM because ARM is failing. As much as people wanna believe that ARM is a great success because of the Apple M series, you gotta remember that the ARM company itself is broke. ARM released 64-bit back in 2013 and Apple was the first to use it with the A7, compared to x86 which has had 64-bit since 2003. Nvidia wants to buy ARM to improve their design, and probably to sell more GPUs through ARM since ARM has Mali graphics. Nvidia can sell ARM-based chips without owning ARM but that means nothing Nvidia does will be standard in ARM. Also, the industry hates dealing with Nvidia and would rather build their own ARM-based SoCs like Apple does to save money.
 
If the Mac Mini and Mac Studio are the future then I don't want to live on this planet anymore. Besides the closed nature of Apple products, you also have to deal with the limited amount of software available for ARM, as well as its closed nature. While APUs would be a way to get great value, upgrading the graphics means replacing the whole CPU. CPUs age a lot better than GPUs do, so you could hold onto an aging CPU while upgrading the GPU. The Ryzen 7 5700G is pretty nice for less than $300 but it comes with Vega 8 graphics, which is like a weak GPU from 2016. For that much money I might as well buy a 6600 XT and keep my Ryzen 2700X.

Also, the reason Nvidia wants to buy ARM is that Intel isn't selling them an x86 license. You can be certain that Nvidia would make x86 chips and sell them with their graphics technology if they could, but they can't. AMD and Intel can push Nvidia out of the market with their APUs, which is why Nvidia has explored ARM and failed at it for the past 13 years. Nvidia wanted to buy ARM because ARM is failing. As much as people wanna believe that ARM is a great success because of the Apple M series, you gotta remember that the ARM company itself is broke. ARM released 64-bit back in 2013 and Apple was the first to use it with the A7, compared to x86 which has had 64-bit since 2003. Nvidia wants to buy ARM to improve their design, and probably to sell more GPUs through ARM since ARM has Mali graphics. Nvidia can sell ARM-based chips without owning ARM but that means nothing Nvidia does will be standard in ARM. Also, the industry hates dealing with Nvidia and would rather build their own ARM-based SoCs like Apple does to save money.
The thing with that argument is thus....

Nvidia isn't barred from making ARM processors. In fact they have made (and still do make) ARM processors. https://www.nvidia.com/en-us/data-center/grace-cpu/

Nothing about buying ARM for Nvidia... had to do with making their own products. They could make a super ARM processor with a built-in 4090 tomorrow (which I mean they are, for supercomputer customers)... or they could make a powerful, efficient ARM chip with co-processors and their most efficient laptop iGPU to take on Apple. They have an architecture license.... they can build and sell any ARM product they want. The only thing buying ARM would have given them... would have been Nvidia GPUs forced into every low-cost ARM device in existence. Also potentially the ability to renegotiate all those ARM licenses in general. (I mean not Apple's, they have a special status.) Even other architecture license holders like Samsung and Qualcomm wouldn't have been immune forever from Nvidia altering those deals.

I agree it would suck to lose the modular nature of the PC, but it's also hard to argue against: for the vast majority of computing, machines using things like Apple's M chips are pretty hard to deny. Apple is at a pretty early point of their switch to ARM really. It has been mostly smooth for the majority of Apple users. Another couple generations... by like M4 or M5, Apple users will think of Apple x86 in the same way they remember (if they are old enough) Apple PPC. The future of Nvidia CPUs is actually very bright. They are going to attack the supercomputer/high-end server markets pretty aggressively. Will we ever see a big ARM consumer push from them? I wouldn't rule it out; they never needed to buy ARM to do that.

For PC I'm hopeful some of the next-gen AMD rumors are true... x86 needs to continue innovating hard or Apple will leave them behind. I'm not sure Nvidia really wants to invite direct comparison with Apple. IMO there is a reason their big swing is Grace, a market Apple isn't in. Unfortunately for them, I believe if they tried to build an M2 killer... the comparison may not favor them. Can they compete with Apple in the same power envelope? I'm not so sure they can (yet).
 
The entire global market is on the edge of collapse. There is far less money available in your average home for non-essentials. This is something GPU makers are going to have to deal with. If they attempt to maintain high prices, it will make it even less likely people will upgrade. Why spend hundreds of dollars for a few extra fps?
 
If the Mac Mini and Mac Studio are the future then I don't want to live on this planet anymore. Besides the closed nature of Apple products, you also have to deal with the limited amount of software available for ARM, as well as its closed nature. While APUs would be a way to get great value, upgrading the graphics means replacing the whole CPU. CPUs age a lot better than GPUs do, so you could hold onto an aging CPU while upgrading the GPU. The Ryzen 7 5700G is pretty nice for less than $300 but it comes with Vega 8 graphics, which is like a weak GPU from 2016. For that much money I might as well buy a 6600 XT and keep my Ryzen 2700X.

Also, the reason Nvidia wants to buy ARM is that Intel isn't selling them an x86 license. You can be certain that Nvidia would make x86 chips and sell them with their graphics technology if they could, but they can't. AMD and Intel can push Nvidia out of the market with their APUs, which is why Nvidia has explored ARM and failed at it for the past 13 years. Nvidia wanted to buy ARM because ARM is failing. As much as people wanna believe that ARM is a great success because of the Apple M series, you gotta remember that the ARM company itself is broke. ARM released 64-bit back in 2013 and Apple was the first to use it with the A7, compared to x86 which has had 64-bit since 2003. Nvidia wants to buy ARM to improve their design, and probably to sell more GPUs through ARM since ARM has Mali graphics. Nvidia can sell ARM-based chips without owning ARM but that means nothing Nvidia does will be standard in ARM. Also, the industry hates dealing with Nvidia and would rather build their own ARM-based SoCs like Apple does to save money.

X86 is a completely closed duopoly; ARM is the closest thing to an open system we have today. Don't blame the closed nature of Android and iOS on ARM, that's a deliberate effort by Apple and Google to lock down their software. There are upsides to that closed system for nubs; it's nerds like us that those ecosystems harm. Also, team green can make ARM right now; their goal was to control access to ARM to harm their competition. Let's not forget, among big tech companies nVidia stands out as a particularly foul turd in that tank. They make Apple and Google look welcoming and friendly just by the way they treat customers and deal with scrutiny.

Coincidentally, AMD makes a pretty damned good X86 SOC(ish). It's not as tightly engineered as Apple's M1 and M2 but they're pretty damned good and they could be as good as if not better than Apple's silicon if they decided to make the package tighter.

In fact I would bet on AMD being able to build a better SOC than Apple very quickly if they focused on it. I genuinely think that they will, I could be wrong, Lord knows I'm wrong more often than not but I honestly think they will.

One more thing: Windows is getting worse and worse. For half of what I do, Linux is so much more stable and capable than Windows. For the other half... Linux is just. fucking. awful. Never mind that, though.

I think costs and power consumption might drive a shift in personal computing.
 
X86 is a completely closed duopoly; ARM is the closest thing to an open system we have today. Don't blame the closed nature of Android and iOS on ARM, that's a deliberate effort by Apple and Google to lock down their software. There are upsides to that closed system for nubs; it's nerds like us that those ecosystems harm. Also, team green can make ARM right now; their goal was to control access to ARM to harm their competition. Let's not forget, among big tech companies nVidia stands out as a particularly foul turd in that tank. They make Apple and Google look welcoming and friendly just by the way they treat customers and deal with scrutiny.

Coincidentally, AMD makes a pretty damned good X86 SOC(ish). It's not as tightly engineered as Apple's M1 and M2 but they're pretty damned good and they could be as good as if not better than Apple's silicon if they decided to make the package tighter.

In fact I would bet on AMD being able to build a better SOC than Apple very quickly if they focused on it. I genuinely think that they will, I could be wrong, Lord knows I'm wrong more often than not but I honestly think they will.

One more thing: Windows is getting worse and worse. For half of what I do, Linux is so much more stable and capable than Windows. For the other half... Linux is just. fucking. awful. Never mind that, though.

I think costs and power consumption might drive a shift in personal computing.

I don't worship at the altar of Jim Keller in any way... but recently he said the stupidest thing AMD has done was cancel K12. There is part of me that agrees.... part of me that knows AMD needed to focus on something and win at the time. He isn't wrong though: if AMD had released the ARM version of Ryzen alongside the very first Ryzen, it's hard to imagine Apple would be the king of ARM.... also, if Nvidia does actually score a massive hit with Grace in the server/supercomputer space, it's going to be even more painful knowing AMD abandoned a project with a 5-7 year head start.
 
Not only is there going to be a glut of used cards, but the price of wafers is up substantially and with current inflation numbers for everything the demand for expensive GPUs (and other electronics/commodities) is going to dry up. Buy a new GPU or put gas in the car to go to work. Not a tough decision to make.
Daddy is getting a new GPU!
 
Nothing about buying ARM for Nvidia... had to do with making their own products. They could make a super ARM processor with a built-in 4090 tomorrow (which I mean they are, for supercomputer customers)... or they could make a powerful, efficient ARM chip with co-processors and their most efficient laptop iGPU to take on Apple. They have an architecture license.... they can build and sell any ARM product they want. The only thing buying ARM would have given them... would have been Nvidia GPUs forced into every low-cost ARM device in existence.
ARMv8 was introduced back in 2013 as 64-bit and that was the last major change done to ARM. ARMv9 was released last year and we still don't see any devices using it. If you're going to improve ARM then you need to do something drastic like adding instructions or reworking it until near incompatibility. Nvidia could do these things on their own, but who would use it when Nvidia is the only one? That's why buying ARM made sense for Nvidia.
Also potentially the ability to renegotiate all those ARM licenses in general. (I mean not Apple's, they have a special status.) Even other architecture license holders like Samsung and Qualcomm wouldn't have been immune forever from Nvidia altering those deals.
Oh yea, that's a big one. Nvidia can do so much to alter the deal.
I agree it would suck to lose the modular nature of the PC, but it's also hard to argue against: for the vast majority of computing, machines using things like Apple's M chips are pretty hard to deny. Apple is at a pretty early point of their switch to ARM really. It has been mostly smooth for the majority of Apple users. Another couple generations... by like M4 or M5, Apple users will think of Apple x86 in the same way they remember (if they are old enough) Apple PPC.
Apple M chips are not amazing and will get worse for Apple. The lack of 3nm for the M2 was a huge blow to Apple. What people need to understand is that the M1 was 5nm while AMD was 7nm and Intel was barely 10nm. That's a pretty big difference in terms of performance and power consumption. I've already proven that the AMD 6900HS is nearly on par with Apple's M1 Pro in terms of power consumption, with the exception of single-threaded performance still using 4x more power. When AMD introduces the 7000 series using 5nm then we'll see some serious shit. Also, Intel's power consumption is surprisingly good, except their GPU performance is still bad.


I give it a couple of years before Apple is either going to AMD or Nvidia for GPU tech. They'll probably continue their ARM trek but I can't imagine their customers will be happy with the lack of software and poor performance. x86 is not in danger here; if anything, it is the danger. The reason people don't see this is that hardly anyone benchmarks Apple products outside of websites and YouTubers who exclusively deal with Apple products, and those people can't benchmark. Linus Tech Tips is one of the few, and their benchmarks are still awful.

The entire global market is on the edge of collapse. There is far less money available in your average home for non-essentials. This is something GPU makers are going to have to deal with. If they attempt to maintain high prices, it will make it even less likely people will upgrade. Why spend hundreds of dollars for a few extra fps?
GPU makers are like everything else now in that they cater to the wealthy. Hardly anyone makes products that are affordable. This is going to be a mistake.
 
X86 is a completely closed duopoly; ARM is the closest thing to an open system we have today. Don't blame the closed nature of Android and iOS on ARM, that's a deliberate effort by Apple and Google to lock down their software.
While ARM and x86 don't have to be in open or closed systems, they both have a different history in terms of how they came about. x86 got its start with IBM, and through it, IBM compatibles. The entire concept of the IBM PC was created and designed from the start to be open. Even AMD getting the license to make x86 chips was done by IBM to prevent Intel from having a monopoly.

While anyone can make their own ARM chip, there's no unified computer system that allows anyone to do what they want with this hardware. Not Apple, not Google, not Raspberry Pi; nobody has a standard for how to load software onto an ARM device. None of the companies I listed even has a standard within itself. Other licensing models similar to ARM's were MIPS and PowerPC, which have died because the singular company behind them made little improvement over time. Other companies could improve them but they're not hardware companies in the same sense. AMD and Intel have a unique relationship in that they have to constantly compete or lose market share. ARM went bankrupt because their licensing deals pay jack.
There are upsides to that closed system for nubs; it's nerds like us that those ecosystems harm.
It harms everyone equally. Nubs have no benefits other than being oblivious.
Coincidentally, AMD makes a pretty damned good X86 SOC(ish). It's not as tightly engineered as Apple's M1 and M2 but they're pretty damned good and they could be as good as if not better than Apple's silicon if they decided to make the package tighter.
Apple's SoC is not the same kind of engineering as what AMD does. Apple simply took ARMv8 and gave it more bandwidth and 5nm manufacturing, along with fixed-function hardware for video encoding and decoding. Their GPU is an awful imitation of PowerVR. AMD is already light years ahead of Apple, but didn't get TSMC's 5nm back in 2020 like Apple did. Also, for some stupid reason, AMD made Zen3 and RDNA2 back in 2020 but didn't use both in laptops until this year. While Apple is moving onto ARMv9 this year, AMD designed Zen4 and 3D V-Cache technology. You really can't compare the two companies' engineering. AMD has and will continue to out-engineer Apple by light years. The same can be said about Intel, as they are at least making big changes in how the company functions. Intel poaching Raja Koduri was a smart move, as was finally taking the GPU industry seriously.

The truth is Apple saw how dysfunctional AMD and Intel were in 2020, paid big money for 5nm, and made a very efficient chip. AMD has their head on better than Intel does right now, but I think AMD needs to turn their head a few times more. AMD is going to release Zen4-based chips with RDNA2 on 5nm later this year, but will they also do this for laptops? Intel is still trying to get their manufacturing below 10nm. Until Intel fixes this, their CPUs will always be at a big disadvantage. Also, I still don't see their Arc GPUs. I know Intel GPUs will suck but I don't care because the market needs competition.
 
ARMv8 was introduced back in 2013 as 64-bit and that was the last major change done to ARM. ARMv9 was released last year and we still don't see any devices using it. If you're going to improve ARM then you need to do something drastic like adding instructions or reworking it until near incompatibility. Nvidia could do these things on their own, but who would use it when Nvidia is the only one? That's why buying ARM made sense for Nvidia.

Oh yea, that's a big one. Nvidia can do so much to alter the deal.

Apple M chips are not amazing and will get worse for Apple. The lack of 3nm for the M2 was a huge blow to Apple. What people need to understand is that the M1 was 5nm while AMD was 7nm and Intel was barely 10nm. That's a pretty big difference in terms of performance and power consumption. I've already proven that the AMD 6900HS is nearly on par with Apple's M1 Pro in terms of power consumption, with the exception of single-threaded performance still using 4x more power. When AMD introduces the 7000 series using 5nm then we'll see some serious shit. Also, Intel's power consumption is surprisingly good, except their GPU performance is still bad.

I give it a couple of years before Apple is either going to AMD or Nvidia for GPU tech. They'll probably continue their ARM trek but I can't imagine their customers will be happy with the lack of software and poor performance. x86 is not in danger here; if anything, it is the danger. The reason people don't see this is that hardly anyone benchmarks Apple products outside of websites and YouTubers who exclusively deal with Apple products, and those people can't benchmark. Linus Tech Tips is one of the few, and their benchmarks are still awful.


GPU makers are like everything else now in that they cater to the wealthy. Hardly anyone makes products that are affordable. This is going to be a mistake.

It's really not the instruction set that is the issue. The instruction set itself is fine. Chip design improvements for any ISA are more about cache systems and pipeline/prefetch setups. Apple's ARM chip is heavily modified and yet it still uses the same instruction set. Instruction set is mostly unimportant. (The only reason recent changes mattered much is that ARM was still transitioning from 32-bit to 64-bit instructions.) Take x86 as an example... there is nothing really all that different between a P4/Bulldozer/Pentium Pro/Ryzen/i9 instruction set... in fact they all share the exact same instruction set. Sure there may be a few extensions supported by one and not the other, but the core instruction set is identical. What makes them all different is first pipeline depth, then prefetch units (which are the #1 driver of single-thread core IPC), and cache layouts and implementations. The exact same instruction set can be found in a P4 as in a current i9... not counting things like AVX-512, code that runs on an i9 will run on a P4.
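(To make the extension point concrete, here's a minimal sketch, my own illustration rather than anything from this thread, of how code typically treats the base ISA as a given and only probes for optional extensions like AVX-512 at run time. It assumes GCC/Clang's __builtin_cpu_supports() on an x86 target.)

[CODE]
/* Minimal sketch: pick a code path based on an optional x86 extension.
 * The base instruction set is assumed; only the extension (AVX-512
 * Foundation, "avx512f") is probed at run time, using GCC/Clang builtins. */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init(); /* populate the compiler's CPU feature cache */

    if (__builtin_cpu_supports("avx512f"))
        puts("AVX-512F present: take the wide-vector path");
    else
        puts("AVX-512F absent: fall back to the SSE2/AVX2 path");

    return 0;
}
[/CODE]

The point being: the shared base ISA is what lets one binary run across generations, with extensions handled as optional fast paths.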

On the GPU end I don't expect Apple will ever go backwards now and outsource GPUs. They have made it plain even their "pro" machines are going to be running M chips... they will just add more of them. They are already doing that with M1 Pros and M1 maxes. Apple is not going to all of a sudden decide to go after the render farm business and they have shown little interest in going after AAA gaming.

On Nvidia I think we can agree to disagree on what their plans for ARM were. I mean they already had Grace v9 cores locked down... they will be shipping shortly. The design was already complete and they already had early silicon when they started the "let's buy ARM" push. v9 has no need of changes for a long time.... v9 was required to update the ISA to be a proper 64-bit ISA. v8 had 64-bit registers, but in many ways it was hobbled by design for max compatibility with v7. A v10 update may actually never be needed... unless ARM decides to add ideas similar to extensions added by Apple/Nvidia etc. for use by the pleb license holders like MediaTek.

That is where ARM is different from x86... x86 has 2 manufacturers and that is it, and they both have a perpetual cross-license after years of court battles. Which means if Intel adds an AVX-1024 extension... AMD can implement it. If AMD adds an SSE44 (joking), Intel can also implement it. That isn't the case in ARM land... in ARM land you have Apple, Nvidia, Qualcomm, Samsung, Intel, and AMD all with architecture licenses, which means they are free to add extensions all they like. However, other companies don't have cross-license setups, nor does ARM. (This is why companies don't like open source things... having to let other companies use your work isn't popular.) In practicality that means when Qualcomm added Hexagon Vector instructions (HVX)... Samsung can't just add them to their chip. It also means ARM can't add extensions to their v8.6 ISA for the MediaTeks/Teslas etc. to use. However, what ARM can do... is take the idea and incorporate similar but non-infringing function into their ISA. (If they can find a way to actually do that.)

IMO that is the biggest weakness of ARM. Sure you have lots of companies making ARM processors... but they all start adding their own extensions. Those extensions become exclusive to them and you end up with software splits, etc. For ARM themselves it means they are constantly trying to find a way to add new functionality to their core ISA instead of just approving extensions... also imagine how much of a PITA it is for ARM engineers. They aren't employed to develop cool new extensions... or come up with radical new ways of implementing prefetch systems etc. They are employed mostly to back-engineer other companies' ARM chips... and find a way to create Qualcomm-like vector instructions, Apple's efficient cache design, or Nvidia's improved GPU interconnects. All while not stepping on patents. It's run like Cyrix was, on a bigger scale.... they spend their days trying to figure out how to do the same thing in a different (legal) way.

On the power usage stuff... I'm not sure it's really fair to use benchmarks like Cinebench. It's a pure bench, and has no real-world application. Apple's argument is simple... no one is using an M1/M2-powered laptop or Mini machine to do production 3D rendering. Who cares what the M1 scores in Cinebench... no one would ever be buying an M1 Mac to batch run Maya or Houdini renders. (Having said that, the M1 Pro/Max is very capable of editing scenes better than any other laptop on the planet.) Need a machine to do on-site 8K video editing? Good luck finding any x86 machine that does half as good a job. Need a machine to do on-site (or even studio) multi-track audio? Again there is no PC that gets anywhere close. The problem with the "benchmark" crowd IMO is they try and measure every machine in every task. No one is going to buy a Mac Mini to render 3D projects... so I don't care what Cinebench says. Can the M1 stuff do pre-render work... YES, very very very well. Nothing x86 can touch the M1 Pro/Max for real 3D pre-work and video/audio editing. Apple's GPUs do what needs done. I have seen an M1 Max machine running ZBrush... the artist using it had no issues detailing a 60m-poly model. His take was it was by far the best laptop he ever used... and he told me it was also far better than his 3-year-old high-end desktop. No doubt his 3-year-old desktop probably would pump out final renders faster, but that isn't what he would ever be doing.... so who cares. What he needed it to do (let him take a model home and mess with it), the Apple laptop bettered his desktop.
 
It's really not the instruction set that is the issue. The instruction set itself is fine. Chip design improvements for any ISA are more about cache systems and pipeline/prefetch setups. Apple's ARM chip is heavily modified and yet it still uses the same instruction set.
So heavily modified that Apple's M chips also have the same PACMAN flaw as all the others.
Instruction set is mostly unimportant.
My dick is hard for AVX-512 coming to Ryzen.
On the GPU end I don't expect Apple will ever go backwards now and outsource GPUs. They have made it plain even their "pro" machines are going to be running M chips... they will just add more of them. They are already doing that with M1 Pros and M1 maxes. Apple is not going to all of a sudden decide to go after the render farm business and they have shown little interest in going after AAA gaming.
Apple GPUs suck and I fully expect them to integrate AMD's GPU tech into their M series line of products at some point. Apple does not have the engineering capability to make a good GPU. They had to make a multiyear agreement with Imagination to have access to their GPU tech. Imagination couldn't even compete with ATI, 3dfx, and Nvidia back in the day. Now, with Imagination left nearly destroyed by Apple (and still helping Apple make their GPUs), do you really think they can compete with the likes of AMD and Nvidia? People can't read GPU benchmarks, where the M1 Pro and Max are still destroyed by the RTX 3060.
That is where ARM is different from x86... x86 has 2 manufacturers and that is it, and they both have a perpetual cross-license after years of court battles. Which means if Intel adds an AVX-1024 extension... AMD can implement it. If AMD adds an SSE44 (joking), Intel can also implement it. That isn't the case in ARM land... in ARM land you have Apple, Nvidia, Qualcomm, Samsung, Intel, and AMD all with architecture licenses, which means they are free to add extensions all they like. However, other companies don't have cross-license setups, nor does ARM. (This is why companies don't like open source things... having to let other companies use your work isn't popular.) In practicality that means when Qualcomm added Hexagon Vector instructions (HVX)... Samsung can't just add them to their chip. It also means ARM can't add extensions to their v8.6 ISA for the MediaTeks/Teslas etc. to use. However, what ARM can do... is take the idea and incorporate similar but non-infringing function into their ISA. (If they can find a way to actually do that.)
What that means is that over time ARM will fragment while x86 won't. Which is great if you think developers want to take the time to code for the various differences that could come out of it.
On the power usage stuff... I'm not sure it's really fair to use benchmarks like Cinebench. It's a pure bench, and has no real-world application. Apple's argument is simple... no one is using an M1/M2-powered laptop or Mini machine to do production 3D rendering. Who cares what the M1 scores in Cinebench... no one would ever be buying an M1 Mac to batch run Maya or Houdini renders.
Ok then here's an M1 Pro playing Fortnite and only lasting 1 hour and 36 minutes from a full charge. It's not power efficient when you make these machines work.
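(Rough back-of-the-envelope on that number, assuming the 16-inch model's roughly 100 Wh battery, which is my assumption here rather than something stated above:

100 Wh ÷ 1.6 h ≈ 62 W average system draw while gaming

So the efficiency story does change a lot once the GPU is actually loaded.)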

Need a machine to do on-site (or even studio) multi-track audio? Again there is no PC that gets anywhere close.
Why wouldn't a PC be able to do multi-track audio? I get the video editing thing since Apple has fixed-function hardware for that, but audio?

Anyway, one thing I should point out here is that Intel and Apple have both bought the shit out of TSMC's 3nm, to the point where AMD couldn't even think of making a 3nm chip at TSMC until 2024. So there's a good possibility that Nvidia was just pushed out of 3nm, and that's why they are cutting TSMC orders. Intel especially paid a lot for the 3nm, so I do expect Intel to go hard on Apple for the next couple of years. Can you imagine what Intel could do with 3nm instead of their 10nm? Also, Samsung got their 3nm working just fine, so who knows if Nvidia might look towards Samsung for manufacturing. Samsung already makes Nvidia's 3000 series so they may have a relationship there. One thing's for sure: Apple made it clear that good manufacturing can make a world of difference in power efficiency, and even Intel is buying up as much 3nm manufacturing as it can get.
 
Maya or Houdini renders. (Having said that, the M1 Pro/Max is very capable of editing scenes better than any other laptop on the planet.)
Not sure; even on Blender running native:


Is that true for Maya scene editing? Is it more capable than a 12900H + mobile 3080 Ti? Source?

Need a machine to do on-site 8K video editing? Good luck finding any x86 machine that does half as good a job.

 
Not sure; even on Blender running native:


Is that true for Maya scene editing? Is it more capable than a 12900H + mobile 3080 Ti? Source?





Well it would seem you are proving my point... All these benchmarks looking for cons are irrelevant. No 3D model artist gives two shakes how fast any laptop, PC or Mac, renders anything. So getting excited about Cinebench wins is stupid. Even when it comes to rendering large video file projects... professionals are not doing that on laptops. lol (Not that the M1 doesn't win there anyway... it's just not going to be a big part of any real-world workflow.) Most of these YouTube benches are done by people that have zero clue about what they are benching. Adobe has a benchmark mode... and "I know how to make cool GFX for my YouTube chan" doesn't make you a pro. lol No laptop hardware from anyone is suited to rendering shit for real money. But being able to edit on the go... or quickly scrub through some footage etc., that is useful. M1 wins easily for that use.

In Adobe 2D video rendering... winner: M1 Max, by a pretty nice bit. 18s isn't nothing. And that is vs a 3080 Ti. What is more important to most pros is how smooth it is scrubbing timelines etc... and in that regard the M1 wins.

Where Apple loses, again, is FINAL renders, in 3D workflows. (In 2D workflows the M1 wins easily in all but obscure Windows-centric codecs.) No professional 3D modeler will be rendering on a laptop. All of the YouTube benchmark folks get this 100% wrong. No 3D pro cares what their laptop render times are because they won't ever be doing final renders on that hardware. (Perhaps the odd single-frame pre-render to check something... but final renders, no.) The only people that would be rendering on any laptop... are students and YouTubers with less than 10k subs creating cheesy 20s 3D title cards. lol
I have worked for professional 3D houses (not large ones... but large enough). Yes, a few of those guys have M1 Macs and they love them for model and scene work.... no, they don't render on them. Why would they... they send their work to the office's render farm. They do love that they don't need a chunky laptop that sucks 300 watts from the wall and will burn them if they pick it up. Even in the video you posted... this guy said as much: the M1 draws 1/4 the power from the wall while scrubbing around Adobe Premiere, and the same is true for stuff like ZBrush etc. Does it heat up like crazy if you try to render... of course, it's not breaking the laws of physics; that just isn't something anyone making money on that stuff would be doing, so who cares.

I agree that Apple is not a gaming machine... but who is expecting Macs to do AAA gaming at this point. Apple pretty much gave up any pretense of going after AAA gaming imo. If gaming is an important part of your computing Apple isn't (nor has it been since Apple IIe days) the go to.

This is also why Apple isn't going to buy GPUs from AMD or Nvidia or anyone else. Their GPUs actually handle people's real M1 workloads just fine. Are they render beasts... hell no. But they don't need to be.
 