Alleged Apple M1X Processor Specifications Surface

They say it's a 12-core, but it's only got 8 or 4 usable cores. You can't use all 12 at once.
On standard big.LITTLE ARM CPUs, you would be correct.
With the M1, though, as 1_rick and the article he posted show, all 8 cores (performance and efficiency) can be used simultaneously.

From the article:
All cores can be used together for intensive tasks and Apple claims triple the performance compared to the MacBook with an Intel Core i7.
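
For anyone who wants to verify this themselves on an M1 machine, here is a minimal C sketch. It assumes the hw.perflevel0/hw.perflevel1 sysctl names that recent macOS exposes on Apple Silicon (treat their availability as an assumption for your OS version); the busy-loop is purely illustrative:

```c
// Minimal sketch: ask the kernel for the P/E core split, then load every core.
// Assumes the hw.perflevel* sysctls that recent macOS exposes on Apple Silicon.
#include <stdio.h>
#include <pthread.h>
#include <sys/sysctl.h>

static void *spin(void *arg) {
    (void)arg;
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 2000000000UL; i++)
        x += i;                       // pure busy work to keep the core pegged
    return NULL;
}

int main(void) {
    int pcores = 0, ecores = 0;
    size_t len = sizeof pcores;
    sysctlbyname("hw.perflevel0.physicalcpu", &pcores, &len, NULL, 0); // P-cores
    len = sizeof ecores;
    sysctlbyname("hw.perflevel1.physicalcpu", &ecores, &len, NULL, 0); // E-cores
    printf("P-cores: %d, E-cores: %d\n", pcores, ecores);

    int total = pcores + ecores;      // 4 + 4 = 8 on the M1
    pthread_t t[64];
    for (int i = 0; i < total && i < 64; i++)
        pthread_create(&t[i], NULL, spin, NULL);
    for (int i = 0; i < total && i < 64; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

With this running, Activity Monitor's CPU history should show all eight cores, performance and efficiency alike, loaded at once.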
 
The conversation was about whether they are in the same realm of performance under 25 watts or in a different one. Having more cores obviously helps in multi-core workloads (but makes it harder to stay under/around 25 watts); it is not so much about losing/winning as about whether they are in the same performance tier. If the Ethernet controller, RAM, etc. make the comparison ridiculous (that's possible), I am sorry for mixing them in.

If you look at the reviews you quote:
and battles it with AMD’s new Zen3, winning some, losing some.
In my initial comment:
Is a 4800U Renoir mobile APU in such a different class?

That seems to make the question legitimate.

A Zen3 core can reach the performance of the mobile Apple chip only because AT tested a HEDT (high-end desktop) Zen3 at 4.9GHz:
While AMD’s Zen3 still holds the leads in several workloads, we need to remind ourselves that this comes at a great cost in power consumption in the +49W range while the Apple M1 here is using 7-8W total device active power.

Your question was answered twice in this thread. Here it is a third time: neither AMD nor Intel has a mobile chip that can compete in performance within the same power budget.
 
A Zen3 core can reach the performance of the mobile Apple chip only because AT tested a HEDT Zen3 at 4.9GHz:


Your question was answered twice in this thread. Here it is a third time: neither AMD nor Intel has a mobile chip that can compete in performance within the same power budget.

OK, but this is an M1X thread (not mobile). If you suppress any set of factors, you can find a metric that shines light in the direction you want.

The 4700U competes quite well within the power budget. My bad, I forgot Cinebench isn't a metric. When are they porting UserBenchmark to Apple? Asking for a friend.

You keep Don Quixoting everything that attempts to distill the relative performance of the various players in the game: power, memory, thermals, frequency, etc.

By that logic, you can easily reverse the dynamic: Apple's M chips are built on cutting-edge 5nm and can barely keep up with yesterday's previous-generation 7nm technology. (This is an example of how stupid this single-metric game is.)
 
A Zen3 core can reach the performance of the mobile Apple chip only because AT tested a HEDT Zen3 at 4.9GHz:
Maybe I am completely misreading AnandTech, but looking at it again I do not see any HEDT Zen3 (those are way over 100 watts); they seem to have been testing mobile chips, the 4800U and the 4900HS, not a 5950X.

Neither AMD nor Intel has a mobile chip that can compete in performance within the same power budget.
For sure, at the under-9-watt mark; we were talking about 25 watts.
 
On standard big.LITTLE ARM CPUs, you would be correct.
With the M1, though, as 1_rick and the article he posted show, all 8 cores (performance and efficiency) can be used simultaneously.

From the article:
That would be interesting, though I would like to see benchmarks. There should be some out by now, I think.
 
Maybe I am completely misreading AnandTech, but looking at it again I do not see any HEDT Zen3 (those are way over 100 watts); they seem to have been testing mobile chips, the 4800U and the 4900HS, not a 5950X.
[AnandTech single-thread benchmark charts]
For sure, at the under-9-watt mark; we were talking about 25 watts.

You said that the 4800U beat the M1, "both 15 watts". But the M1 beats the 4800U while using only about one third of the power.
If the M1 beats the 4800U, the M1X will increase the gap.
 


You said that the 4800U beat the M1, "both 15 watts". But the M1 beats the 4800U while using only about one third of the power.
If the M1 beats the 4800U, the M1X will increase the gap.
What? The M1 beats the 4800U with 1/3 the watts? The above graphs, single-threaded for more emphasis, are for a Mac mini, which is a 25W system. Either the 4800U uses 75W or your math is bogus.

If you look at the multi-threaded graphs, the next page in the article, the M1 falls between the 4900HS (35W) and the 4800U (15W).

[AnandTech multi-threaded benchmark chart]

(single thread for life)
 
I am happy about these chips and I expect some pretty good things from them. Apple tends not to half-ass their launches, and I would expect a level of hardware/OS integration that is more akin to consoles than traditional PCs. So going forward, I think we will see many applications that seem to perform better than they should, simply due to the degree of low-level optimization that Apple can do that you just can't do on a PC.

You can argue on a number of fronts, but the fact that it seems to trade blows with the latest and greatest AMD and Intel mobile parts might not make it the absolute best mobile part; by no means, though, can you call its performance bad. If the M1 is what we can expect at their 10W power target, then the M1X at 45W or 65W should put on a pretty good show: maybe not absolute best in class, but by no means a slouch, and probably better than anything Intel can offer at the moment.

And given AMD's supply constraints, there is no way they could feed Apple's requirements for a launch. So if the worst we can say about these chips is "well, AMD bests them in these tests", then I think we can say with a pretty good degree of certainty that it's a good enough part for the intended job set.
 
You said that the 4800U beat the M1, "both 15 watts". But the M1 beats the 4800U while using only about one third of the power.
If the M1 beats the 4800U, the M1X will increase the gap.
You are having multiple conversations at the same time.

I never said the 4800 beat the M1 (I quoted you reviews showing them trading blows); the statement I was responding to was:
having that much CPU and GPU within a sub-25 watt TDP is virtually unheard of, and nothing in the x86-64 camp even comes close.

I look at a mobile APU from AMD, advertised as a 15-watt affair, running HandBrake twice as fast, running Cinebench faster, and, judging by the graph you just posted, sitting in the same tier, etc., and I ask: are they really not even close if you make them run at 25 watts?
 
I'm not sure why everyone is harping on the TDP. That's the power envelope they are targeting.

What I want to know is how a 95W iMac version or a 125W Mac Pro version would perform. If you raised its TDP to what desktop or HEDT parts run at now and did a more direct comparison, I think a lot of this [H] hoopla would quiet down a bit.

It's Apple, so no one here really cares, but it's still impressive IMO.
 
What? The M1 beats the 4800U with 1/3 the watts? The above graphs, single-threaded for more emphasis, are for a Mac mini, which is a 25W system. Either the 4800U uses 75W or your math is bogus.

If only you had followed the discussion... One is an SoC, and the other is an APU. So when comparing the power of the systems, you cannot count only the APU. Moreover, I also wrote "about", which you have ignored.

If you look at the multi-threaded graphs, the next page in the article, the M1 falls between the 4900HS (35W) and the 4800U (15W).


(single thread for life)

First, that graph is ordered by integer performance only. The M1 has superior float performance. If you average integer and float, the performance of the M1 is above the 4900HS.

Second, that graph compares 8C+SMT on Zen2 to both the 4C and 4C+4C configurations of the M1 chip.

4C+4C is about 33% faster than 4C, which is more or less the gain from adding SMT to Zen2 cores. The Icestorm cores in the M1 perform as if you had added SMT to the Firestorm cores. So x86 needs 8C+SMT to achieve the throughput of what is essentially the equivalent of 4C+SMT on the ARM side. Said another way: if you disabled SMT on the 4900HS, its performance would fall to the level of the M1 (4-core) result. That is, you need about two Zen2 cores (2 threads total) to match the throughput of a single Firestorm core (1 thread).

Moreover, you need a Zen3 core @4.9GHz to match the single thread performance of a Firestorm core @3.2GHz.
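
The arithmetic behind these claims can be written out explicitly. A back-of-the-envelope sketch in C, using only the figures quoted in this thread (the ~33% Icestorm uplift and the 4.9GHz vs 3.2GHz clocks; the ~30% "typical" SMT gain is my assumption in the same ballpark):

```c
// Back-of-the-envelope check of the claims above, using figures from the post.
#include <stdio.h>

int main(void) {
    // 4C+4C is ~33% faster than 4C alone, comparable to a typical SMT uplift,
    // so the four Icestorm cores act roughly like SMT on the four Firestorms.
    double m1_4c = 1.00;            // normalized 4-core (Firestorm-only) throughput
    double m1_8c = m1_4c * 1.33;    // +33% from the four Icestorm cores
    double smt_uplift = 1.30;       // assumed typical Zen2 SMT gain, same ballpark
    printf("M1 4C+4C vs 4C: %.2fx (vs ~%.2fx for SMT)\n", m1_8c / m1_4c, smt_uplift);

    // Single thread: a Zen3 core needs ~4.9 GHz to match Firestorm at ~3.2 GHz,
    // implying roughly 4.9/3.2 = 1.53x the per-clock work on the Apple side.
    printf("Implied per-clock (IPC) ratio: %.2fx\n", 4.9 / 3.2);
    return 0;
}
```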
 
I'm not sure why everyone is harping on the TDP. That's the power envelope they are targeting.

What I want to know is how a 95W iMac version or a 125W Mac Pro version would perform. If you raised its TDP to what desktop or HEDT parts run at now and did a more direct comparison, I think a lot of this [H] hoopla would quiet down a bit.

It's Apple, so no one here really cares, but it's still impressive IMO.
If the rumor of 8+4C is correct and clocks are maintained, then the M1X would take the #1 position in the graph posted by schmide. The TDP of this M1X would be about 2x that of the M1. So the M1X would beat the 10900K in both multithread and single thread while using about 1/3 of the power.
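
To make that projection concrete, here is a naive scaling sketch. The 8+4 configuration, unchanged clocks, and ~2x TDP come from the rumor discussed above; the assumption that the P-cores contribute ~3/4 of the M1's multicore throughput is mine (it follows from the ~33% Icestorm uplift mentioned earlier), as is the linear-scaling simplification:

```c
// Naive projection of the rumored M1X from M1 numbers; every input is either
// from the rumor (8P+4E, same clocks, ~2x TDP) or a simplifying assumption.
#include <stdio.h>

int main(void) {
    double m1_mt = 1.00;     // normalized M1 multicore throughput (4P+4E)
    double p_share = 0.75;   // assumption: P-cores give ~3/4 of M1 throughput
    // Doubling P-cores (4 -> 8), keeping E-cores, assuming linear scaling:
    double m1x_mt = m1_mt * (p_share * 2.0 + (1.0 - p_share));
    double m1_tdp = 1.0, m1x_tdp = 2.0;   // rumored ~2x TDP

    printf("Projected M1X multicore: %.2fx the M1 at %.1fx the power\n",
           m1x_mt, m1x_tdp / m1_tdp);
    printf("Perf/W vs M1: %.2fx\n", (m1x_mt / m1x_tdp) / (m1_mt / m1_tdp));
    return 0;
}
```

On these assumptions the M1X lands around 1.75x the M1's multicore throughput, which is where the "#1 position in the graph" projection comes from.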
 
For those that are interested, Puget Systems benchmarked the M1 in their Adobe production tests.

https://www.pugetsystems.com/labs/a...n-for-Adobe-Creative-Cloud-1975/#Introduction

TL;DR: performance is on par with a 6600K or 6700K with a GTX 1060 or 1070, depending on the app. This is substantially slower (as much as 300-500%) than current x86/RTX hardware. Puget did test against $2,500 workstations, though, which were $700 more than the Mac.

It's impressive considering these are non-native apps, but Apple has a long way to go to match the R5 5600 / RTX 3070 combo you can get for about $1,600 right now, or the R9-4xxx laptops with RTX 2060s you can get for $1,100 as of this writing.


“ From a performance standpoint, the new Apple M1 MacBooks do fairly well considering that they are using a complete processor based around the ARM instruction set and software that is likely not fully optimized yet. But compared to a typical desktop workstation from Puget Systems that is around 2-3x faster on average (albeit at a higher cost), they certainly can't keep up.” -Puget
 
If only you had followed the discussion... One is an SoC, and the other is an APU. So when comparing the power of the systems, you cannot count only the APU. Moreover, I also wrote "about", which you have ignored.

I'm pretty sure SoC (system on a chip) and APU (accelerated processing unit) are synonymous. I guess if you remove the GPU you can have an SoC that isn't an APU, but both of these chips have GPUs, so the point is moot.

But there is a point to be made here. With regard to power, the 4800U system exposes a lot more than the M1 (more PCIe, memory, display, USB, etc.). Those traces to peripheral components drain power. The M1 having integrated RAM saves quite a bit.

So as much as you argue power, it really is an apples-to-other comparison.

First, that graph is ordered by integer performance only. The M1 has superior float performance. If you average integer and float, the performance of the M1 is above the 4900HS.

Second, that graph compares 8C+SMT on Zen2 to both the 4C and 4C+4C configurations of the M1 chip.

4C+4C is about 33% faster than 4C, which is more or less the gain from adding SMT to Zen2 cores. The Icestorm cores in the M1 perform as if you had added SMT to the Firestorm cores. So x86 needs 8C+SMT to achieve the throughput of what is essentially the equivalent of 4C+SMT on the ARM side. Said another way: if you disabled SMT on the 4900HS, its performance would fall to the level of the M1 (4-core) result. That is, you need about two Zen2 cores (2 threads total) to match the throughput of a single Firestorm core (1 thread).

I'm starting to believe you don't get the dynamics at play here, or you are willing to ignore reality to push your agenda. For one, you can't just average two distinct metrics equally when one is the dominant factor. Integer represents most of the common code path.

BTW, I don't discount the FPU performance. I actually think it's a bit over-provisioned for a single-threaded core, but it does shine.

If all the above were true, the M1 would dominate everything, which it doesn't. It trades blows.

Moreover, you need a Zen3 core @4.9GHz to match the single thread performance of a Firestorm core @3.2GHz.

So, equal on single thread, and Zen scales to 16 while the M1 scales to 4 big + 4 small = 6?

When the M1X or whatever comes out, that will be something to watch.
 
It's because it's Apple. If this were anybody else, they would be cheering and jumping up and down.
True; if it were AMD, there would be scores of people cheering and agreeing that x86 is on its deathbed.
 
I'm pretty sure SoC (system on a chip) and APU (accelerated processing unit) are synonymous. I guess if you remove the GPU you can have an SoC that isn't an APU, but both of these chips have GPUs, so the point is moot.
juanrga is correct - an SoC is not the same as an APU.
An APU (modern CPU + GPU) is not as complex, nor does it have as many components on-package, as an SoC (which has nearly everything, save for some external/proprietary controllers, and sometimes even includes RAM and disk storage).

So when comparing the TDP of an SoC to that of an APU, the additional power consumption of the extra components on the APU's motherboard would need to be included and factored into the performance metrics to be more apples-to-apples.
AMD themselves make both APUs and SoCs, and while they may use the same CPU cores in both, they are definitely not the same overall packages.
 
juanrga is correct - an SoC is not the same as an APU.
An APU (modern CPU + GPU) is not as complex, nor does it have as many components on-package, as an SoC (which has nearly everything, save for some external/proprietary controllers, and sometimes even includes RAM and disk storage).

So when comparing the TDP of an SoC to that of an APU, the additional power consumption of the extra components on the APU's motherboard would need to be included and factored into the performance metrics to be more apples-to-apples.
AMD themselves make both APUs and SoCs, and while they may use the same CPU cores in both, they are definitely not the same overall packages.
[FP6 (Renoir) and M1 package diagrams]

Looks like Renoir has way more on-chip.

Apple did get rid of the T2 chip though.

Edit:

ASRock Industrial 4X4 BOX-V1000M

Fewer chips: two power chips, a multimedia chip, gigabit LAN, and a wireless PCIe module.

[ASRock 4X4 BOX-V1000M board shot]
 
I'm pretty sure SoC (system on a chip) and APU (accelerated processing unit) are synonymous.

They aren't. If you read the AnandTech review, the author comments that you cannot directly compare the TDP of the M1 to the TDP of the APUs, because the SoC includes components that the APUs do not.

So, equal on single thread, and Zen scales to 16 while the M1 scales to 4 big + 4 small = 6?

16T Zen2 to match 8T from the M1.
 
juanrga is correct - an SoC is not the same as an APU.
An APU (modern CPU + GPU) is not as complex, nor has as many components on-package, as an SoC (has nearly everything, save for some external/proprietary controllers, and sometimes even includes RAM and disk storage).

So when comparing the TDP of an SoC to an APU, the TDP and additional power consumption of the extra components on the APU's motherboard would need to be included and factored into the performance metrics to be more apples-to-apples.
AMD themselves makes both APUs and SoCs, and while they may use the same CPUs in both, they are definitely not the same overall packages.

From the AT review:

We noted that although Apple doesn’t really publish any TDP figure, we estimate that the M1 here in the Mac mini behaves like a 20-24W TDP chip.

We’re including Intel’s newest Tiger Lake system with an i7-1185G7 at 28W, an AMD Ryzen 7 4800U at 15W, and a Ryzen 9 4900HS at 35W as comparison points. It’s to be noted that the actual power consumption of these devices should exceed that of their advertised TDPs, as it doesn’t account for DRAM or VRMs.
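
To put rough numbers on that caveat, here is a sketch of the adjustment. The 15W TDP and the 20-24W M1 estimate come from the quotes above; the DRAM and VRM figures are purely illustrative assumptions:

```c
// Rough platform-power accounting per the AT caveat above: an APU's advertised
// TDP excludes DRAM and VRM losses, while the M1 estimate already covers its
// on-package DRAM. TDPs are from the quotes; DRAM/VRM numbers are guesses.
#include <stdio.h>

int main(void) {
    double apu_tdp = 15.0;   // Ryzen 7 4800U advertised TDP (W)
    double dram_w  = 2.0;    // assumed external DRAM power (W)
    double vrm_eff = 0.90;   // assumed VRM efficiency
    double apu_platform = (apu_tdp + dram_w) / vrm_eff;

    double m1_est = 22.0;    // midpoint of AT's 20-24W estimate for the mini

    printf("4800U platform estimate: %.1f W vs M1 estimate: %.1f W\n",
           apu_platform, m1_est);
    return 0;
}
```

On those (admittedly hand-wavy) numbers, the two power envelopes land closer together than a bare 15W-vs-20-24W comparison suggests, which is exactly the point AT was making.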
 
They aren't. If you read the AnandTech review, the author comments that you cannot directly compare the TDP of the M1 to the TDP of the APUs, because the SoC includes components that the APUs do not.

But the Renoir APU does more. It provides more interfaces on-chip. Other than the neural engine, they have basically the same internals, aside from LLC placement and size.

Branding the APU as something other is wrong. It's as much an SoC as the M1.

16T Zen2 to match 8T from the M1.

So threads are now a bad thing? Less is the new more, I guess.
 
Saw one of my favorite music programs, Logic Pro, running on the M1, and it's pretty impressive for an ARM chip. I can definitely see the potential as Apple continues to iterate the chip to M2, M3, etc.

Yet another paradigm shift in computing that Microsoft will miss, because they are still coasting on the inertia of 30-year-old spaghetti code and a mentality to match.
 
Since this is basically the M thread.

Gary Explains - MacBook Air (M1) Real World Performance and Thermal Throttling Test



Even if it isn't the best metric, seeing two systems, a 9980HK (8C/16T, 2.4-5GHz) and an M1 (4+4, 3GHz), stutter and burst through the varying workloads is quite telling.
 
Since this is basically the M thread.

Gary Explains - MacBook Air (M1) Real World Performance and Thermal Throttling Test



Even if it isn't the best metric, seeing two systems, a 9980HK (8C/16T, 2.4-5GHz) and an M1 (4+4, 3GHz), stutter and burst through the varying workloads is quite telling.

The battery pausing charging when it got too hot was both cool and troubling. My next question would be: when the battery starts getting low, will it finally throttle down so charging can resume, will it simply run the battery dry, or will it just pull straight wall power and stop dealing with the battery altogether?
 
Saw one of my favorite music programs, Logic Pro, running on the M1, and it's pretty impressive for an ARM chip. I can definitely see the potential as Apple continues to iterate the chip to M2, M3, etc.

Yet another paradigm shift in computing that Microsoft will miss, because they are still coasting on the inertia of 30-year-old spaghetti code and a mentality to match.
Microsoft's biggest problem is that they can't get many of their developers to update their tool sets and programming practices. Apple, on the other hand, just tells theirs: "this is your new set of APIs, you have 48 hours to become compliant, you will not be told again."

Apple developers shrug and get to work, while Microsoft's fight tooth and nail to keep their 40-year-old code base relevant.
 
I also wanted to note: everything is on the chip and nothing is upgradeable. RAM: you buy 8GB, you get 8GB forever, and it currently only goes up to 16GB.

I think the SSD is also soldered down on the board, so no upgrading on the new machines. Period.

This was the main reason I didn't pull the trigger on one yet.

I mentioned that in the other Apple topic as well, and some person responded (in a tech enthusiast forum, mind you) to the effect of: "that is plenty for what it does".
Everything is soldered onto the board. You also pay MASSIVELY inflated costs to upgrade. That $699 Mac mini? If you want a larger SSD, add $200 JUST for a slightly larger SSD (to go from 256GB to 512GB), a part Apple probably gets for MAYBE $50. Add another $200 to upgrade to 16GB. Apple really has people over a barrel by nullifying user upgrades. It's absurd how much they charge for such paltry upgrades. Apple has been doing this for decades, but at least in the past savvy people could do the upgrades themselves. Now they have no other options.
 
Apple's upgrade prices are ridiculous. Completely agreed.

But considering that people are paying $1000 for cell phones these days (both Android and iPhone devices) and replacing them every other year (or maybe even yearly?), I think most of their customers don't care. Especially now if you figure they are more closed-box devices like an iPhone or iPad than a general purpose PC. Want the 256GB iPad? Pay extra. Want the 512GB Macbook Air? Pay extra.

Anyone who wants to self-upgrade their system probably wouldn't be looking at a (for instance) Mac mini anyway. Sure, you could upgrade the 2018 Mac mini's RAM yourself, but... it's such a pain in the ass to do.
 
I mentioned that in the other Apple topic as well, and some person responded (in a tech enthusiast forum, mind you) to the effect of: "that is plenty for what it does".
Everything is soldered onto the board. You also pay MASSIVELY inflated costs to upgrade. That $699 Mac mini? If you want a larger SSD, add $200 JUST for a slightly larger SSD (to go from 256GB to 512GB), a part Apple probably gets for MAYBE $50. Add another $200 to upgrade to 16GB. Apple really has people over a barrel by nullifying user upgrades. It's absurd how much they charge for such paltry upgrades. Apple has been doing this for decades, but at least in the past savvy people could do the upgrades themselves. Now they have no other options.
On top of what deruberhanyok said about a self-contained box... what would you like Apple to do, exactly? It sounds like you're aghast that Apple would dare make more than a tiny profit.

That's not to excuse all of Apple's rates as they are, but it can develop in-house silicon (among other things) in part because it refuses to operate on razor-thin margins for most products. I'm not a big fan of Windows PC vendors who price so aggressively that they either sacrifice money for other elements (like support or long-term R&D) or are perpetually a stone's throw away from operating at a loss.
 
On top of what deruberhanyok said about a self-contained box... what would you like Apple to do, exactly? It sounds like you're aghast that Apple would dare make more than a tiny profit.

That's not to excuse all of Apple's rates as they are, but it can develop in-house silicon (among other things) in part because it refuses to operate on razor-thin margins for most products. I'm not a big fan of Windows PC vendors who price so aggressively that they either sacrifice money for other elements (like support or long-term R&D) or are perpetually a stone's throw away from operating at a loss.
Race-to-the-bottom pricing only works when you can move volume, and most PC manufacturers shave every corner they can to get there. Apple could not do most of what they do operating that way. I mean, I don't see Acer developing their own OS, storefronts, ad campaigns, silicon... And they move roughly the same number of units a year.
 
Race-to-the-bottom pricing only works when you can move volume, and most PC manufacturers shave every corner they can to get there. Apple could not do most of what they do operating that way. I mean, I don't see Acer developing their own OS, storefronts, ad campaigns, silicon... And they move roughly the same number of units a year.
Exactly. Part of the problem with licensing someone else's OS and chips is that it becomes extremely difficult to stand out in any other way besides pricing (including value for money). You see Windows PC and Android phone makers starving themselves for half a point of market share — because they sure as hell can't rely on their devices being unique.

Again, that's not to excuse Apple's every pricing decision. But it's a bit ironic to see people complain about Apple's supposedly extortionate pricing in a thread where people are marvelling at how the company's first in-house chip is already outperforming Intel's latest. If some folks had their way and turned Apple into just another cut-every-corner PC maker, we wouldn't be in this position.
 
If some folks had their way and turned Apple into just another cut-every-corner PC maker, we wouldn't be in this position.

Apple has insane profits in every market they are in... I'm sure they cut every possible corner that needs to be cut to maximize that profit. This new direction with their processors is surely a way for them to continue maximizing profit by making everything proprietary.

Apple just doesn't represent good value. Almost every person I know who likes Apple products only does so because "Android is for poors" and they've got to keep up their image.
 
Apple has insane profits in every market they are in... I'm sure they cut every possible corner that needs to be cut to maximize that profit. This new direction with their processors is surely a way for them to continue maximizing profit by making everything proprietary.

Apple just doesn't represent good value. Almost every person I know who likes Apple products only does so because "Android is for poors" and they've got to keep up their image.
I can't agree with that; I can assure you that Apple does indeed represent phenomenal value over time. I manage hundreds of devices, and the Apple ones have my second-lowest TCO, falling right behind the ChromeOS devices, once all management and licensing factors are included over a 5-year span. If I can wrangle the budget, then in 3 years' time, when the next staff refresh is due, Apple will be a serious contender for the bid. This may change in the coming months once I get Intune online; it should ease up management and let me ditch some of the more costly security software licensing, so I will be able to get 3 years of numbers on that before making the decision. I may end up taking those savings and investing them in higher-end Dell Latitudes instead; it will probably come down to some committee vote, but as of right now I can safely say that the long-term numbers put Apple near the forefront for total costs.
 
I can't agree with that; I can assure you that Apple does indeed represent phenomenal value over time. I manage hundreds of devices, and the Apple ones have my second-lowest TCO, falling right behind the ChromeOS devices, once all management and licensing factors are included over a 5-year span. If I can wrangle the budget, then in 3 years' time, when the next staff refresh is due, Apple will be a serious contender for the bid. This may change in the coming months once I get Intune online; it should ease up management and let me ditch some of the more costly security software licensing, so I will be able to get 3 years of numbers on that before making the decision. I may end up taking those savings and investing them in higher-end Dell Latitudes instead; it will probably come down to some committee vote, but as of right now I can safely say that the long-term numbers put Apple near the forefront for total costs.

Yeah, I agree with you. I'm no Apple fanboy, but from a productivity standpoint Apple devices tend to last longer than the various flavors of Windows-based mobile devices. This is simply because you can't go out and buy a "cheaply" made flavor of Apple laptop. The majority of the population doesn't go out and buy the most expensive machines, especially ones that are purchased for basic productivity or employer-issued. The cheapest MacBook will outlast the cheapest Dell/HP/Acer POS; everyone knows this. I've had Dell XPS machines for work for a number of years, and my newest employer gave me a 15" MBP with the i9; I would never go back to a Windows-based laptop.
 
Apple has insane profits in every market they are in... I'm sure they cut every possible corner that needs to be cut to maximize that profit. This new direction with their processors is surely a way for them to continue maximizing profit by making everything proprietary.

Apple just doesn't represent good value. Almost every person I know who likes Apple products only does so because "Android is for poors" and they've got to keep up their image.
I'm sorry, but there's a big difference between charging high prices and, say, building a $400 laptop with flimsy plastic shells and shoddy batteries that virtually guarantee calls to (predictably poor) tech support. And can you please not confuse anecdotes with reality? I use Macs because of their tight ecosystem integration and how well they suit my workflow, and I'm pretty sure I'm far from alone.

Profit matters for this, I'm sure, but there's really no mystery to this switch — it's about getting past Intel's stagnant chip lineup and giving Macs a meaningful way to stand out. Apple wants to avoid situations where another company determines its fate, whether it's Intel, Adobe or Microsoft, and it was clearly being held back in this case.
 
I wonder if NVIDIA will be able to build something like the M1X when they eventually take control of ARM. Since they'll be in charge of future roadmaps for the architecture, we might see GeForce integration in some manner on most SoCs. But I want NVIDIA to pursue the desktop space, and hopefully they can get big gaming publishers to join them.

As a hypothetical: how many of you would switch away from x86 if NVIDIA built a desktop-class ARM processor with Vulkan support, big-name publishers like EA, Crytek, Epic, CDPR, etc. on board, 15-20% better performance than its x86 counterparts, and decent virtualization for Windows x86 apps? I think I'd be one of the first to switch over.
 
I wonder if NVIDIA will be able to build something like the M1X when they eventually take control of ARM. Since they'll be in charge of future roadmaps for the architecture, we might see GeForce integration in some manner on most SoCs. But I want NVIDIA to pursue the desktop space, and hopefully they can get big gaming publishers to join them.
Isn't the SoC in the Nintendo console already quite close to what you describe?

https://en.wikipedia.org/wiki/Tegra
https://en.wikipedia.org/wiki/Tegra#Xavier

I really do not know; it is not a rhetorical question. It is just that they are called SoCs and have Maxwell/Pascal/Volta GPUs in them, with a new one with an Ampere GPU incoming.
 
Yeah, NVIDIA has been building SoCs like these for a long time. They're just not widely used. I think the last "big name" use for them (before the Switch) that wasn't NVIDIA hardware was... the Surface 2? Or the original Nexus 7 tablet?

I'd consider the Switch, which uses something like the Tegra X1 with its Maxwell GPU tech (also used in the NVIDIA Shield Android TV box in 2015), probably their biggest mainstream use.

The Tegra X2, which uses Pascal GPU tech, is only found in NVIDIA development boards.

"Xavier", which uses Volta, was announced in 2016, made available nearly 3 years later, and is only in use in NVIDIA development boards. It has 8 CPU cores of NVIDIA's own design ("Carmel").

"Orin" was announced in 2018 and on paper it is a monster - 12 ARM Hercules cores (Cortex A78-based) and an Ampere-derived GPU, but no further information is known (after two years).

So they've got the tech sitting around, or at least they've designed it, but it doesn't seem like anyone wants to use it. I have a feeling that Xavier-based systems would be pretty snappy (as would "Orin", if they ever release it), but you can't even really find GPU benchmarks to show how fast they'd be at playing games, for instance. I don't know if that's for lack of interest or some kind of user agreement required to own the things. The CPU seems pretty good compared with its contemporaries, though:

https://www.phoronix.com/scan.php?page=article&item=nvidia-xavier-carmel&num=1

https://www.phoronix.com/scan.php?page=article&item=nvidia-jetson-xavier&num=1

I'd be interested, though as an Ubuntu guy I'm not a fan of NVIDIA's handling of GPU drivers.
 