Apple Turns Its Back on Customers and NVIDIA with macOS Mojave

Megalith

Mac users who prefer NVIDIA cards are calling on Apple to relinquish its iron grip on the green team’s drivers and on which GPUs are supported. The company’s latest OS, macOS 10.14 Mojave, reduced official support to just two cards, the Quadro K5000 and GeForce GTX 680, but that’s just part of the problem: “NVIDIA currently cannot release a driver unless it is approved by Apple,” so users encountering problems must wait even longer for a solution.

“Developers using Macs with NVIDIA graphics cards are reporting that after upgrading from 10.13 to 10.14 (Mojave) they are experiencing rendering regressions and slow performance. Apple fully controls drivers for Mac OS. Unfortunately, NVIDIA currently cannot release a driver unless it is approved by Apple.” Additional posts on the site say NVIDIA is working with Apple on Mojave support, but no clear timetable is mentioned.
 
I am on 10.13.6 for my GT 650M and GTX 1070.

The GT 650M is a discrete GPU inside my MacBook from 2012. Is that somehow not supported?

I bet my Mac will work fine on 10.14, but I will not be able to install Nvidia's own driver.

On 10.13 (using Nvidia's driver), there is an Nvidia Control Panel that lets users toggle which driver they want to use: Nvidia's or Apple's.

Using Apple's driver turns my 1070 into something slower than integrated graphics, while using Nvidia's boosts gaming performance even on the 'supported' GT 650M.
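In case it helps anyone, here's a rough sketch of how to check which driver is actually loaded. The kext bundle IDs mentioned in the comments are from memory and may differ on your install, so treat them as assumptions:

import subprocess

# List loaded kernel extensions and keep anything that looks NVIDIA-related.
out = subprocess.run(["kextstat"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "nvda" in line.lower() or "nvidia" in line.lower() or "geforce" in line.lower():
        print(line)

# If I remember right, the web driver shows up with com.nvidia.web.* bundle IDs,
# while Apple's stock driver loads com.apple.* GeForce/NVDA kexts.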

I regret getting a 1070 over a 980 Ti because of macOS.

I had to wait over a year just for NV and Apple to allow Pascal support, whereas Maxwell was supported on earlier versions of macOS.

The Hackintosh world is severely affected, and AMD is becoming the preferred choice. Having only one GPU vendor as an option really sucks.
 
Hey Microsoft, this is an example of what NOT to do. Before you get any ideas.

Macs are complete PCs; they are not something you simply customize. It's Apple's configuration and that's how you use it, and not a single person should buy one unless they explicitly accept that nothing except Apple hardware will work, because it's always been that way.

Apple hates Nvidia, and if you are a dev, just buy a damn PC! Devs who develop for Apple should be using an AMD GPU or Intel, because that is what Apple uses. Problem solved :)
 
Which machine can you actually put a GPU in that is supported by this OS? Sounds like this is aimed more at the Hackintosh community than anything else.
 
Which machine can you actually put a GPU in that is supported by this OS? Sounds like this is aimed more at the Hackintosh community than anything else.

That’s my thought as well, but external GPU support will be the excuse.
 
That’s my thought as well, but external GPU support will be the excuse.

A large part of the community still uses the cMP (the 'cheese grater' Mac Pro 4,1 and 5,1), as they're still very capable machines with quite a bit of life left in them. Basically the best Macs ever made.

As can safely be assumed, Apple doesn't like this and would rather they spent $2,500 on a Mac Mini!
 
Wow, what's amazing is the Macs that are still working after 6 years. Besides, didn't Nvidia drop support for those cards anyway?
 
For all two models where you can replace the GPU, which are about 15 years old by now.

Or the 14 people who are running OS X with an eGPU adapter...
 
For all two models where you can replace the GPU, which are about 15 years old by now.

Or the 14 people who are running OS X with an eGPU adapter...

I'm still running a dual LGA1366 system, no need to upgrade. 12C/24T and tonnes of RAM are great.
 
I'm still running a dual LGA1366 system, no need to upgrade. 12C/24T and tonnes of RAM are great.
Can't agree more. Really happy I bought the mid-2012 with 128GB from OWC.

Put in X5690s earlier this year, and I couldn't be happier.

The only thing that sucks about the dual LGA1366 system is that the Intel Math Kernel Library (MKL) has a cutting-edge eigensolver that (it seems) cannot split across physical CPUs, as I could only ever hit about 50% CPU load.

Maybe they've fixed it by now, but I couldn't imagine this being a priority when I originally inquired 5 or so years ago.

Small gripe, but worth sharing.
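For anyone who wants to poke at the same behaviour, here's a minimal sketch, assuming an MKL-built NumPy; the matrix size and thread settings are made up for illustration:

import os
# These must be set before NumPy/MKL is imported to take effect.
os.environ.setdefault("MKL_NUM_THREADS", "24")    # both sockets' worth of threads
os.environ.setdefault("KMP_AFFINITY", "scatter")  # ask Intel OpenMP to spread threads across packages

import time
import numpy as np

n = 8000
a = np.random.rand(n, n)
a = (a + a.T) / 2  # symmetrize so the symmetric eigensolver applies

t0 = time.time()
w = np.linalg.eigvalsh(a)  # dispatches to MKL's symmetric eigensolver on MKL builds
print(f"{n}x{n} eigvalsh took {time.time() - t0:.1f}s")

Watch overall CPU utilization while it runs; if it tops out around 50% no matter what you set, the solver is effectively staying on one socket.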
 
This is targeted at all the die-hard pro users using eGPUs. Why won't they take the hint? Apple isn't abusing you; you are abusing yourself.
 
You don't buy Apple for high-end desktop or gaming, the use cases where the GPU matters...

If you bought one for those use cases, you chose... poorly.
 
Apple hasn't forgiven Nvidia for that catastrophe with the G84 and G86 mobile GPUs that pretty much crippled every single laptop that had them worldwide.
 
You don't buy Apple for high-end desktop or gaming, the use cases where the GPU matters...

If you bought one for those use cases, you chose... poorly.

I think this is a bit over the top.

Quite a few users of certain creative applications have always preferred Apple as their platform of choice over anything else. In the past, it was actually how Apple built its desktop market share. In many of these scenarios, the GPU is a very important and heavily utilized component.

Sure, you wouldn't buy a Mac for gaming, but then I see very few people actually gaming on Macs.
 
Can't agree more. Really happy I bought the mid-2012 with 128GB from OWC.

Put in X5690s earlier this year, and I couldn't be happier.

The only thing that sucks about the dual LGA1366 system is that the Intel Math Kernel Library (MKL) has a cutting-edge eigensolver that (it seems) cannot split across physical CPUs, as I could only ever hit about 50% CPU load.

Maybe they've fixed it by now, but I couldn't imagine this being a priority when I originally inquired 5 or so years ago.

Small gripe, but worth sharing.

Bear in mind that the cMP 4,1/5,1 isn't NUMA capable; each CPU has its own memory, isolated from the other. My particular LGA1366 platform is NUMA capable, so memory can be pooled and accessed by both processors.

I'm not too sure if this has an effect on your particular issues or not.
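Purely as a sanity check, and assuming the dual-socket box is running Linux (macOS doesn't expose this), here's a quick sketch for listing how memory is split across nodes:

from pathlib import Path

# Each NUMA node directory lists the CPUs and memory local to that node.
nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"))
if len(nodes) < 2:
    print("Only one node reported - memory is presented as a single flat pool")
for node in nodes:
    cpus = (node / "cpulist").read_text().strip()
    mem = (node / "meminfo").read_text().splitlines()[0].split()
    print(f"{node.name}: CPUs {cpus}, local memory {mem[-2]} {mem[-1]}")

Two nodes with separate CPU lists and memory totals means the OS really is treating the machine as NUMA rather than one flat SMP pool.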
 
Bear in mind that the cMP 4,1/5,1 isn't NUMA capable; each CPU has its own memory, isolated from the other. My particular LGA1366 platform is NUMA capable, so memory can be pooled and accessed by both processors.

I'm not too sure if this has an effect on your particular issues or not.

It would make a lot of sense. I really appreciated the design of the mid-2010. The X58 with two sockets was always going to have some drawbacks, I guess.

In many use cases, I could saturate memory and CPU without issue, but I think the eigensolver is a bit more strict about the working memory allocation it uses to find the optimal solution.

And since the optimal solution is only deemed as such after evaluating many possibilities, where the state space of candidate solutions is typically big (for my real-world cases), I think you're correct that the lack of NUMA probably relegates it to one CPU.

The northbridge being on a separate PCB from the southbridge still gives cool feels, though.

Edit: found this interesting
 
It would make a lot of sense. I really appreciated the design of the mid-2010. The X58 with two sockets was always going to have some drawbacks, I guess.

In many use cases, I could saturate memory and CPU without issue, but I think the eigensolver is a bit more strict about the working memory allocation it uses to find the optimal solution.

And since the optimal solution is only deemed as such after evaluating many possibilities, where the state space of candidate solutions is typically big (for my real-world cases), I think you're correct that the lack of NUMA probably relegates it to one CPU.

The northbridge being on a separate PCB from the southbridge still gives cool feels, though.

Edit: found this interesting

I love the 4,1/5,1 cMP. A beautifully designed machine that still holds its own even today. Even if the EFI supported NUMA, I don't think the OS does? Although I could be wrong.

EDIT:

Depending on the application, there are pros and cons to NUMA vs SMP. However, these days, with applications coded the way they are, NUMA generally holds a slight performance advantage as we move toward more cores over outright clock speed.

Having said that, the advantage is minimal.
 