Apple to Announce its own Mac Processor

Where were you when they pulled actual boxes of FCP 7 off the shelves? That's not how pro or enterprise operations live. FCPX was dogshit.
People literally still edit videos on FCP 7 to this day, even on films that actually get distribution (as another fun fact the article brings up: the Academy Award-winning film Parasite was also edited on FCP 7). So I'm not sure what your point is. If your problem is that FCPX was, and I reiterate *was*, dogshit, then there isn't really a problem for said pros, considering the previous program still works just fine to this day.

Among other things it expected you to maintain multiple clients' assets in your personal media library.
Aaaand? Sounds like what every editor does with every NLE?

People were put out when Itanium was killed, but it wasn't nearly as popular as half the stuff Apple takes to the woodshed when they get bored of it.
Aaaand? This post doesn't even make sense. The Xserve, as an example, didn't sell well. Straight up. Even though it was a great product, axing it was a business decision; it didn't make sense to keep selling things no one was buying. In other words: NOT popular. The 17" MacBook Pro also fits into this category: 17" laptops saw steadily declining sales. You know the size that was, and continues to be, the most popular? Their smaller laptops, most notably the 13". The 13" MacBook Pro is the highest-volume Mac Apple makes. Period. And it's been that way for a decade plus. So yeah, 17" = not popular.
So, to reiterate the cases you're mad about: things Apple continued to support for half a decade plus, and stuff no one bought. And in the case of FCP you had the option to keep using the same program or move to another program that eventually got all the same features. Great. Be mad at them for no reason.
 
Where were you when they pulled actual boxes of FCP 7 off the shelves? That's not how pro or enterprise operations live. FCPX was dogshit. Among other things it expected you to maintain multiple clients' assets in your personal media library.
FCP 7 was released in 2009, right around when digital-only releases were taking off.
It has been nearly a decade since I, and many others I know, bought software in a physical box in a physical store.

Digital distribution is how enterprise operations live, and has been for over a decade now - so what's your point?
Seriously, who buys a physical box of software these days, especially for enterprise...??? :meh:

People were put out when Itanium was killed, but it wasn't nearly as popular as half the stuff Apple takes to the woodshed when they get bored of it.
Who in their right mind, Intel aside, was "put out" when Itanium was killed?
Apple doesn't get rid of products because they're "bored" of them; they get rid of products that don't sell - much like the Xserve servers, as idiomatic mentioned.

Apple ditching Intel's x86-64 buggy 14nm++++ garbage is a win-win for Apple, and is eventually going to pave the way for 3rd-party ARM software and OS support.
 
Nah, I am referring to this article which I forgot to link in my original post above.

It's a few years old now, but they compared:
- Intel Sandy Bridge (C2700)
- AMD Bobcat (Zacate E-240)
- Intel Atom (N450)
- ARM Cortex-A15 MPCore
- ARM Cortex-A9 (OMAP4430)
- ARM Cortex-A8 (OMAP3530)
- Loongson STLS2F01

The interesting chart is Fig. 10, as it is the best measure of the actual energy used to perform a given task. Some architectures fare better than others, but ISA is not the defining characteristic. In some tasks Intel's x86 chips do best, in others the ARM chips do best, but in no case can you predict performance based on which ISA the chip uses.

Their conclusion is that ISA certainly matters on some very small (1-2 mm^2 or sub-milliwatt) designs, where RISC really pays off, but outside of that the microarchitecture of the chip is much more important than the ISA.
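To make the Fig. 10 point concrete: energy-to-solution is average power times runtime, so a frugal core that takes far longer on a task can still burn *more* energy than a hungry fast one. A minimal sketch - every number below is invented for illustration and comes from neither the paper nor any real chip:

```python
# Energy to complete a fixed task = average power (W) * runtime (s).
# Both chips and all figures below are hypothetical.
chips = {
    "big_ooo_core": {"power_w": 35.0, "runtime_s": 10.0},        # fast, power-hungry
    "small_inorder_core": {"power_w": 2.0, "runtime_s": 240.0},  # frugal but slow
}

for name, c in chips.items():
    energy_j = c["power_w"] * c["runtime_s"]
    print(f"{name}: {energy_j:.0f} J to finish the task")

# Here the "efficient" 2 W core uses 480 J versus 350 J for the 35 W core:
# microarchitecture and runtime, not ISA, decide energy-to-solution.
```

That's why power draw alone (the usual ARM-vs-x86 talking point) tells you nothing until you multiply by how long the task takes.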

I posted a response in the comments section of that article about six years ago:

Is this another gem from the same "University of Wisconsin" team that wrote a plainly wrong paper that has been debunked in many places? The former paper was based on incorrect methodology, invalid data, and unsupported conclusions. In that paper the team did everything possible to favor Intel over ARM. I recall being so perplexed when I read it that I searched for more info about the authors and found that the senior author was closely tied to Intel (one of his coworkers is an Intel lab guy, several students have Intel grants...).

This time you don't give a link to their updated paper, but reading some of the info you disclose, I can see that they are repeating the same mistakes again.

Everyone knows that the ARM ISA is more efficient than the x86 ISA. This is a fact, which has been measured lots of times.
 
I agree ARM has been shown to perform well a few times... but those earlier chips always fizzled out against the competition. They showed promise, and by the time they reached production they were no longer all that attractive.

Production chips have been tested before and showed advantages over x86 competition. I already mentioned TX2.

Fair, but for supercomputer-style highly parallelized loads the sky is really the limit. You can just keep adding CPUs of any architecture until you reach the performance level you desire.

Not really. If that were the case they could just use tons of phone SoCs or their former server chips instead of designing a new CPU with SVE and HBM.
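The "just keep adding CPUs" assumption only holds while the workload scales near-perfectly; any serial fraction (or interconnect/memory bottleneck) caps it hard, which is Amdahl's law. A quick sketch - the 2% serial fraction is an arbitrary illustrative value, not a measurement of any real workload:

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n),
# where s is the serial (non-parallelizable) fraction of the work.

def amdahl_speedup(n_cpus: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

s = 0.02  # arbitrary: 2% of the work can't be parallelized
for n in (16, 256, 4096, 65536):
    print(f"{n:>6} CPUs -> {amdahl_speedup(n, s):6.1f}x speedup")

# Even with 98% parallel work, speedup saturates near 1/s = 50x no matter
# how many CPUs you add - which is why per-node efficiency still matters.
```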

but, but, I thought ARM was so efficient!

Don't most of these supercomputers use GPUs and the like to achieve their performance numbers, rather than CPUs?

All the computers in the top list (except Fugaku) are accelerated: they use either GPUs or some other kind of accelerator. Fugaku is a CPU-only system. A similar system with only x86 CPUs would be two or three times less efficient than Fugaku.
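The efficiency comparison above is a FLOPS-per-watt metric, Green500-style. A minimal sketch of the arithmetic - both systems and all numbers below are hypothetical stand-ins, not Fugaku's or any real machine's figures:

```python
# Green500-style metric: sustained GFLOPS per watt.
# 1 PFLOPS = 1e6 GFLOPS and 1 MW = 1e6 W, so the 1e6 factors cancel.

def gflops_per_watt(sustained_pflops: float, power_mw: float) -> float:
    return sustained_pflops / power_mw

# Hypothetical systems: same sustained performance, different power draw.
systems = {
    "cpu_only_arm_system": {"pflops": 400.0, "mw": 28.0},
    "cpu_only_x86_system": {"pflops": 400.0, "mw": 70.0},
}
for name, s in systems.items():
    print(f"{name}: {gflops_per_watt(s['pflops'], s['mw']):.1f} GFLOPS/W")

# Same performance at 2.5x the power -> 2.5x worse efficiency, which is
# the shape of the "two or three times less efficient" claim.
```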
 
All the computers in the top list (except Fugaku) are accelerated: they use either GPUs or some other kind of accelerator. Fugaku is a CPU-only system. A similar system with only x86 CPUs would be two or three times less efficient than Fugaku.
A similar system would be like Intel's Xeon Phi, though, not something like Epyc, don't you agree?
 
A similar system would be like Intel's Xeon Phi, though, not something like Epyc, don't you agree?


Not really. Xeon Phi was never all that efficient, and was only produced because Larrabee was such a failure (and Intel was looking for something to do with their densely packed Atom cores).

Also, with the exception of the first generation, you couldn't run an OS image on the Phi. You can run an OS image on the A64FX. It can also run standard ARM Scalable Vector Extension (SVE) code (with a 128-bit NEON fallback), while the Phi needed some special loving to get anything competitive out of those vector units.

The Phi was x86 in name only.
 