Apple's M2 looks like a beast.

Apple also gives you control over that data, including during the setup process. Microsoft's approach is "will you let us share a ton of data, or just some data?"

There is no way to turn it off entirely that I know of. Could you tell me how?

I got bored and read the terms and conditions of my iPhone yesterday before updating it.
 
You'll note the ad, personalization, and other tracking. You can turn it all off and also change your advertising ID.

No, you can't; you can only turn off some of it, see what is shared, or disable or delete your account.


And that's just for apps; they do other stuff with your Wi-Fi, GPS, Bluetooth, and personal information.
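If anyone wants to poke at this themselves, here's a minimal sketch (mine, not from any post above) that sets the documented AllowTelemetry policy via the registry. The key path and value name are Microsoft's documented policy; the script itself is just an illustration and assumes you're on Windows with admin rights.

import winreg

# Documented policy location for Windows diagnostic data collection.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = Basic/Required diagnostic data. 0 (Security) is only honored on
    # Enterprise/Education editions -- which is exactly the point above:
    # on Home/Pro you can reduce the data sent, but not turn it off entirely.
    winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, 1)

On Home and Pro, that's about as low as it goes.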
 
Do you need to own something to say something about it? I really doubt anyone who owns a BMW knows anything about the car beyond "gas goes in, car goes vroom." The same applies to Apple M1 owners, as I've proven them wrong as a non-M1 owner.

It is so funny; your arguments always have the same problem, whether it's cars or computers. Your assumptions are always wrong because you lack actual experience. I used to own a BMW M2 before I ditched it for a Porsche, and I was an engineering partner that helped the early aftermarket companies (specifically CSF Racing) bring track-focused parts to market. The car was torn apart in my garage with 200 miles on it while I laser-scanned the radiator, intercooler, oil cooler, and DCT cooler to help develop parts that could handle track duty (the M2 overheated if you were skilled enough to drive it fast, unlike what all the magazine drivers claimed). I personally tore into every subsystem on that car, including the suspension, brakes, and cooling system, and developed coding methods to disable the intrusive electronic nannies that interfered with track use.

So yeah, I don't doubt that you assume people who drive BMWs and use Apple products don't know what they are doing, because you don't know what you are doing and it seems a logical assumption that everyone is just like you. But if you ever actually get out on the track with someone who does, you're going to realize very, very quickly that reality doesn't jibe with your assumptions.

For what it's worth, though I no longer track a BMW (I feel the brand has gone in a different direction, focused on luxury and sustainability instead of driving enjoyment), they are the second most common cars I see at track events behind Porsches, and maybe tied with Miatas. A LOT of BMW owners know a LOT about how their cars work, perhaps more than those of almost any other brand.

But of course, you wouldn't realize that you picked perhaps the worst possible brand to try to make your point, because you don't own a BMW and don't drive on the track.
 
It is so funny; your arguments always have the same problem, whether it's cars or computers. Your assumptions are always wrong because you lack actual experience. I used to own a BMW M2 before I ditched it for a Porsche, and I was an engineering partner that helped the early aftermarket companies (specifically CSF Racing) bring track-focused parts to market. The car was torn apart in my garage with 200 miles on it while I laser-scanned the radiator, intercooler, oil cooler, and DCT cooler to help develop parts that could handle track duty (the M2 overheated if you were skilled enough to drive it fast, unlike what all the magazine drivers claimed). I personally tore into every subsystem on that car, including the suspension, brakes, and cooling system, and developed coding methods to disable the intrusive electronic nannies that interfered with track use.

So yeah, I don't doubt that you assume people who drive BMWs and use Apple products don't know what they are doing, because you don't know what you are doing and it seems a logical assumption that everyone is just like you. But if you ever actually get out on the track with someone who does, you're going to realize very, very quickly that reality doesn't jibe with your assumptions.

For what it's worth, though I no longer track a BMW (I feel the brand has gone in a different direction, focused on luxury and sustainability instead of driving enjoyment), they are the second most common cars I see at track events behind Porsches, and maybe tied with Miatas. A LOT of BMW owners know a LOT about how their cars work, perhaps more than those of almost any other brand.

But of course, you wouldn't realize that you picked perhaps the worst possible brand to try to make your point, because you don't own a BMW and don't drive on the track.
You are an outlier; for every one of you there are 10,000 people like my friend Rashed. Rashed leases BMWs and knows nothing about them other than that they're expensive.
 
You are an outlier; for every one of you there are 10,000 people like my friend Rashed. Rashed leases BMWs and knows nothing about them other than that they're expensive.

Of course, but that is not my point. My point was that Duke's lack of experience and propensity for grandiose statements led him to choose an example that was not really correct. While most car owners in general lease their cars and know nothing about them, those numbers are far lower for performance-oriented brands like BMW and Porsche. For example, if he had made that statement about Mercedes or Buicks, I would have no leg to stand on. But as an overall owner group, BMW owners tend to be among the most knowledgeable about how their cars work and perform (the entire reason BMWs gained prominence was their performance, not their luxury), and they have among the highest rates of participation in racing and working on their own cars. Ultimately, Duke's lack of experience in this area led him to make a relatively foolish statement. I'm not saying you need to own products or have experience in an area before commenting on it, but it certainly helps.
 
It is so funny; your arguments always have the same problem, whether it's cars or computers. Your assumptions are always wrong because you lack actual experience. I used to own a BMW M2 before I ditched it for a Porsche, and I was an engineering partner that helped the early aftermarket companies (specifically CSF Racing) bring track-focused parts to market. The car was torn apart in my garage with 200 miles on it while I laser-scanned the radiator, intercooler, oil cooler, and DCT cooler to help develop parts that could handle track duty (the M2 overheated if you were skilled enough to drive it fast, unlike what all the magazine drivers claimed). I personally tore into every subsystem on that car, including the suspension, brakes, and cooling system, and developed coding methods to disable the intrusive electronic nannies that interfered with track use.
How is any of this proving me wrong? You did all this, which is great, but do you really think most people do this? BTW, I too am a car nut, and here's one of many videos I made showing me fixing the Porsche 928 I rebuilt entirely. The sad thing is that the temp sensor housing I designed and printed had already been done by someone else on Rennlist, which makes my accomplishment seem lesser and means I wasted time I could have saved had I known. So tell me again how I lack actual experience, not that it matters at all to this discussion.


So yeah, I don't doubt that you assume people who drive BMWs and use Apple products don't know what they are doing, because you don't know what you are doing and it seems a logical assumption that everyone is just like you. But if you ever actually get out on the track with someone who does, you're going to realize very, very quickly that reality doesn't jibe with your assumptions.
Wow. You know what they say when you assume: you make an ass out of u and me. You think all Apple users are computer science majors, which, by the way, I am, and I'm a network engineer. I understand that most people who use a computer believe it's a box filled with magic fire. Same goes for cars, except the engine is filled with horses. The fact that you're upset that I even said this is amazing. End users are ignorant of how computers work. Most people who use computers, including Apple products, are functionally illiterate. For what it's worth, here's my Corvette C5 with a "mild" cam, long-tube headers, ported and polished 241 heads, and a tune, all done by me. The transmission was bad in the video, but I have since replaced the trans and torque converter, which took me 11 hours straight. You think I'm only active on this forum? You'll find me all over the place. I recently had to tell off some people at Benzworld because they thought a Mercedes 300E had two fuel relays when it just has one built into the MAS relay. They couldn't read a wiring diagram, but I have since fixed the issue by reflowing the solder joints on the MAS.


For what it's worth, though I no longer track a BMW (I feel the brand has gone in a different direction, focused on luxury and sustainability instead of driving enjoyment), they are the second most common cars I see at track events behind Porsches, and maybe tied with Miatas. A LOT of BMW owners know a LOT about how their cars work, perhaps more than those of almost any other brand.

But of course, you wouldn't realize that you picked perhaps the worst possible brand to try to make your point, because you don't own a BMW and don't drive on the track.
I have a friend who bought a BMW 650i in 2020; I told him not to get the car because BMWs are shit. I've never owned a BMW, but like Apple products, I have worked on them. A quick Google shows they're known for valve seals going bad, and that gets expensive fast to repair. Guess what happened? Guess who's going to fix it for him this year when I have time? I repair Apple products as well, and I have stories about the shit designs I've seen and had to repair. I don't own Apple products for the same reason I don't own BMWs: because I know they're shit. That doesn't mean I need to own them to critique them. We do have the internet, where people can make that mistake for you, tell you not to, and give good reasons. Technical reasons, which is the best kind of correct.

 
Leaked benchmarks are trickling in: 11.5% single-core, 19.5% multi-core, and a 45% increase in GPU performance over the M1, according to Geekbench 5.

I mean yeah it’s purely synthetic, but that’s not a bad set of numbers.
Leaked from where? Link?
 
That's about Ryzen 3700X multi-core speed and 12700K single-thread speed from a low-watt 4+4-core laptop chip.
 
Leaked benchmarks are trickling in: 11.5% single-core, 19.5% multi-core, and a 45% increase in GPU performance over the M1, according to Geekbench 5.

I mean yeah it’s purely synthetic, but that’s not a bad set of numbers.

https://browser.geekbench.com/v5/cpu/15482594

Here’s a MacRumors article that contains links to the leaker.
https://www.macrumors.com/2022/06/15/m2-geekbench-benchmark/
The single-thread 11.5% is a little disappointing, since the core clock speed bumped from 3.2 to 3.49 GHz, a 9% jump. That means most of the single-thread gain is from the clock speed bump and not from any IPC improvement.
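Quick back-of-the-envelope (my own arithmetic, using only the leaked numbers quoted above) to show how little is left over for IPC once you factor out the clock bump:

# Rough split of the leaked M2 single-thread gain into clock vs. IPC.
st_gain = 1.115           # +11.5% Geekbench 5 single-core, M2 vs. M1 (leaked)
clock_gain = 3.49 / 3.2   # ~1.09, i.e. the ~9% clock bump

ipc_gain = st_gain / clock_gain
print(f"Implied IPC improvement: {(ipc_gain - 1) * 100:.1f}%")   # ~2.2%

So only around 2% of it would be architectural, if the leak holds up.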
 
The single-thread 11.5% is a little disappointing, since the core clock speed bumped from 3.2 to 3.49 GHz, a 9% jump. That means most of the single-thread gain is from the clock speed bump and not from any IPC improvement.
Hoping we at least see some decent power and thermal results from this.
 
The single-thread 11.5% is a little disappointing, since the core clock speed bumped from 3.2 to 3.49 GHz, a 9% jump. That means most of the single-thread gain is from the clock speed bump and not from any IPC improvement.
That'll be alright if there isn't a significant hit to efficiency. Obviously it'd be nice to have a more substantial jump after the better part of two years, but Apple was starting from a good place. I'm just wondering how M2 stacks up to comparable 12th-gen Core and Ryzen 6000 chips outside of synthetic tests.
 
That'll be alright if there isn't a significant hit to efficiency. Obviously it'd be nice to have a more substantial jump after the better part of two years, but Apple was starting from a good place. I'm just wondering how M2 stacks up to comparable 12th-gen Core and Ryzen 6000 chips outside of synthetic tests.

What I'm curious about is whether Apple can get single-threaded IPC much higher. They made big wins by going wider than everybody else, using large, fast caches, and using high-bandwidth memory interfaces. These are all very logical choices, but I wonder if they are tapped out; their last two CPU releases have been kinda weak in the IPC department. When Intel and AMD both go wider with Zen 5 / 15th gen (if I remember correctly), I'm assuming IPC will be pretty comparable across all three brands, and then it's just a question of clock speeds and power efficiency, and it's not obvious to me where everybody will be at in two years. Apple might be 1 GHz faster by then; AMD and Intel might be substantially more power efficient. Apple will probably still have better manufacturing nodes at TSMC, though, so I assume they'll be ahead overall, at least in laptop/mobile form factors. It also doesn't seem like Intel and AMD are willing to make fast integrated graphics like Apple does, so I suspect x86 laptops will still be using discrete GPUs, with the pros and cons that come with them (way faster, way higher power consumption).
 
What I'm curious about is whether Apple can get single-threaded IPC much higher. They made big wins by going wider than everybody else, using large, fast caches, and using high-bandwidth memory interfaces. These are all very logical choices, but I wonder if they are tapped out; their last two CPU releases have been kinda weak in the IPC department. When Intel and AMD both go wider with Zen 5 / 15th gen (if I remember correctly), I'm assuming IPC will be pretty comparable across all three brands, and then it's just a question of clock speeds and power efficiency, and it's not obvious to me where everybody will be at in two years. Apple might be 1 GHz faster by then; AMD and Intel might be substantially more power efficient. Apple will probably still have better manufacturing nodes at TSMC, though, so I assume they'll be ahead overall, at least in laptop/mobile form factors. It also doesn't seem like Intel and AMD are willing to make fast integrated graphics like Apple does, so I suspect x86 laptops will still be using discrete GPUs, with the pros and cons that come with them (way faster, way higher power consumption).
I suspect Apple can improve single-threaded performance, but it may take some time and depend on more efficient nodes. And yes, I suspect Apple will still have manufacturing advantages that AMD and Intel can't easily challenge. I am curious to see how the GPU situation evolves; both AMD and Intel have talked a lot about improving integrated graphics, but the truth is still that I'd rather have Apple's GPU tech if I couldn't have dedicated hardware, all other things being equal.

The main thing to remember: Apple has been consistently iterative in a way other chip designers typically haven't. The iPhone now generally outperforms all Android phones in part because Apple keeps delivering notable improvements each year. Qualcomm might provide yearly SoC updates, but it struggles more to provide tangible upgrades (Snapdragon 8 Gen 1, for example). While Apple isn't guaranteed to repeat history, it has a real chance when it's up against x86 rivals that tend to go years between significant design upgrades or get mired in process problems (it's not clear Intel is out of the woods yet).
 
I suspect Apple can improve single-threaded performance, but it may take some time and depend on more efficient nodes. And yes, I suspect Apple will still have manufacturing advantages that AMD and Intel can't easily challenge. I am curious to see how the GPU situation evolves; both AMD and Intel have talked a lot about improving integrated graphics, but the truth is still that I'd rather have Apple's GPU tech if I couldn't have dedicated hardware, all other things being equal.

The main thing to remember: Apple has been consistently iterative in a way other chip designers typically haven't. The iPhone now generally outperforms all Android phones in part because Apple keeps delivering notable improvements each year. Qualcomm might provide yearly SoC updates, but it struggles more to provide tangible upgrades (Snapdragon 8 Gen 1, for example). While Apple isn't guaranteed to repeat history, it has a real chance when it's up against x86 rivals that tend to go years between significant design upgrades or get mired in process problems (it's not clear Intel is out of the woods yet).
The M1 and the A14 share a lot of design characteristics. The M2 seems to share many of the changes made in the A15, with some noticeable tweaks to caching and such.
Looking at the performance changes going from the A14 to the A15, the CPU side of the M1-to-M2 jump is probably going to be pretty similar; the GPU is the outlier. The M1 performs about the same as a 1650 Ti, so if the leaked 43% increase is accurate, that at least brings it up to around the 2060 mobile edition, which is at least usable.
Obviously, tasks that are optimized for Apple Silicon's GPU are going to run damned smooth (gaming is not among those optimized tasks, though).
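A tiny sanity check on that scaling claim; note the relative figure for the two Nvidia mobile parts is my own rough assumption for illustration, not a measurement from this thread:

# Rough check: does +43% on a ~1650 Ti-class GPU land near a 2060 mobile?
m1_gpu = 1.00            # treat M1 ~ GTX 1650 Ti mobile as the baseline
m2_gpu = m1_gpu * 1.43   # the leaked +43% GPU uplift

# Assumed ballpark: a 2060 mobile is very roughly 1.4-1.5x a 1650 Ti mobile
# in raster workloads (illustrative figure, not a benchmarked value).
rtx_2060_mobile = 1.45

print(f"M2 GPU vs. 2060 mobile: {m2_gpu / rtx_2060_mobile:.0%}")  # ~99%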
 
The M1 and the A14 share a lot of design characteristics. The M2 seems to share many of the changes made in the A15, with some noticeable tweaks to caching and such.
Looking at the performance changes going from the A14 to the A15, the CPU side of the M1-to-M2 jump is probably going to be pretty similar; the GPU is the outlier. The M1 performs about the same as a 1650 Ti, so if the leaked 43% increase is accurate, that at least brings it up to around the 2060 mobile edition, which is at least usable.
Obviously, tasks that are optimized for Apple Silicon's GPU are going to run damned smooth (gaming is not among those optimized tasks, though).
If that kind of GPU performance holds up, no wonder Apple is leaning into gaming-friendly features like Metal 3. Having the equivalent of RTX 2060 mobile graphics in an ultraportable like the MacBook Air would be pretty damn nice, even though it's not up to the latest hardware. Not that I'm expecting Apple to buy major game developers or pitch Macs as hardcore gaming rigs, as I've touched on in another thread... but I would get a kick out of knowing I could play recent 3D games at solid frame rates on a fanless laptop only slightly thicker than my phone.
 
I suspect Apple can improve single-threaded performance, but it may take some time and depend on more efficient nodes.
Single-threaded performance is hard to improve because there aren't many ways to do it without sacrificing efficiency. Remember that CPUs are not GPUs; CPUs are specifically about processing things in order. You can increase the clock speed, which is what Apple seems to have done here, or build better branch prediction, which Apple didn't because they carried over the PACMAN flaw from ARM, or add more cache, which Apple did do. A clock speed increase will eat more power, along with the bigger cache, so do expect battery life to go down a bit with the M2.
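To put a rough number on the "clock speed eats power" point, a minimal sketch using the textbook dynamic-power relation P ~ C * V^2 * f; the 5% voltage bump is purely an assumed figure for illustration, not anything Apple has published:

# Illustrative dynamic-power scaling: P ~ C * V^2 * f.
freq_gain = 3.49 / 3.2     # ~9% clock increase (from the leaked specs)
voltage_gain = 1.05        # ASSUMED bump needed to sustain the higher clock

power_gain = freq_gain * voltage_gain ** 2
print(f"Dynamic core power up roughly {(power_gain - 1) * 100:.0f}%")  # ~20%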
And yes, I suspect Apple will still have manufacturing advantages that AMD and Intel can't easily challenge.
Apple's only advantage is 5 nm, and AMD is going to use 5 nm for Zen 4. AMD has chiplets and Intel has 3D stacking, so what advantage does Apple have?

If that kind of GPU performance holds up, no wonder Apple is leaning into gaming-friendly features like Metal 3. Having the equivalent of RTX 2060 mobile graphics in an ultraportable like the MacBook Air would be pretty damn nice, even though it's not up to the latest hardware. Not that I'm expecting Apple to buy major game developers or pitch Macs as hardcore gaming rigs, as I've touched on in another thread... but I would get a kick out of knowing I could play recent 3D games at solid frame rates on a fanless laptop only slightly thicker than my phone.
The M1s, and probably the M2s, won't be a match for an RTX 2060. The RTX 3060 performs nearly the same as the 2060 with faster ray tracing, and speaking of which, what ray tracing does the M1 have? Before someone says it does, please show me a game using it. The reason Apple is adding anything gaming-related to the Metal API is that Apple woke up and realized that people do play games on their computers. Sony certainly has, with all the ports they're bringing to PC.

The M1 performs about the same as a 1650 Ti, so if the leaked 43% increase is accurate, that at least brings it up to around the 2060 mobile edition, which is at least usable.
Obviously, tasks that are optimized for Apple Silicon's GPU are going to run damned smooth (gaming is not among those optimized tasks, though).
Comparing the M1 to a 1650 Ti is more accurate, but the 1650 Ti is also based on old Nvidia tech. Saying it's equivalent to a 2060 mobile is like saying it has ray tracing, because that's the big feature of the 2060s. That should give you an idea of how woefully behind Apple's GPU tech is.
 
so what advantage does Apple have?

They're at least a generation ahead with their accelerators, and they have a closed ecosystem. Maybe not as closed as, say, a Playstation or Xbox, but you can squeeze a lot of performance out of the hardware when you've got a lot of control.

But in raw performance terms, only Apple fans will say it's overall better and faster when you compare things in earnest.

Now I do expect Apple to be a power-sipper. Not just the chips, but the devices as a whole. Of course, that also comes with being in complete control; AMD and Intel can and probably do make chips that are as efficient, but they can't tell laptop and tablet companies how to set up their wireless stuff and all the other parts that they don't provide.
 
Comparing the M1 to a 1650 Ti is more accurate, but the 1650 Ti is also based on old Nvidia tech. Saying it's equivalent to a 2060 mobile is like saying it has ray tracing, because that's the big feature of the 2060s. That should give you an idea of how woefully behind Apple's GPU tech is.
Yeah, but if you're looking at pure raster performance, it's the closest thing I could find.
Metal 3 and all the features there are going to swing things one way or another, but straight raster should be close. And let's be honest, nobody is running a 2060 mobile with ray tracing and blah blah blah thinking that it's a good 1080p performer.
 
Single-threaded performance is hard to improve because there aren't many ways to do it without sacrificing efficiency. Remember that CPUs are not GPUs; CPUs are specifically about processing things in order. You can increase the clock speed, which is what Apple seems to have done here, or build better branch prediction, which Apple didn't because they carried over the PACMAN flaw from ARM, or add more cache, which Apple did do. A clock speed increase will eat more power, along with the bigger cache, so do expect battery life to go down a bit with the M2.
The early reviews of the 13-inch MBP M2 I've seen suggest there is a significant increase in single-threaded speed, although that might be due to clock speed and cache as you suggested. Battery life is roughly as good as it was with the M1 model... I've seen a bit more or less runtime depending on the media outlet and their benchmarking procedures.


The M1s, and probably the M2s, won't be a match for an RTX 2060. The RTX 3060 performs nearly the same as the 2060 with faster ray tracing, and speaking of which, what ray tracing does the M1 have? Before someone says it does, please show me a game using it. The reason Apple is adding anything gaming-related to the Metal API is that Apple woke up and realized that people do play games on their computers. Sony certainly has, with all the ports they're bringing to PC.
As Lakados said, the RTX 2060 comparison is more in terms of raw general-purpose performance. Ray tracing support would be nice, to be clear, but you're not about to play a ray-traced game with other integrated graphics. Apple seems to be in an odd middle ground where it's likely outperforming Xe and AMD's APUs by a wide margin, but doesn't (yet) have the feature set to fully compete with modern dedicated GPUs.

I do think Apple is acknowledging computer gaming. The key, though, is that it's unlikely to mimic the Windows route and doesn't need to for this to prove successful. Apple hasn't had to court 'serious' gamers (the ones with pricey PCs and mile-long Steam libraries) to thrive; it'll probably be more than thrilled if it gets a student to buy a MacBook Air that can play the odd online shooter, or a family to get an iMac that plays recent kid-friendly titles.
 
They're at least a generation ahead with their accelerators, and they have a closed ecosystem. Maybe not as closed as, say, a Playstation or Xbox, but you can squeeze a lot of performance out of the hardware when you've got a lot of control.
Consoles are tied to the clock speed of the CPU, so it's nowhere near the same as Apple's hardware. The fact that most developers are using MoltenVK shows that it isn't even a huge factor.
Now I do expect Apple to be a power-sipper. Not just the chips, but the devices as a whole. Of course, that also comes with being in complete control; AMD and Intel can and probably do make chips that are as efficient, but they can't tell laptop and tablet companies how to set up their wireless stuff and all the other parts that they don't provide.
Did you watch the video I posted a page or two ago where someone ran Fortnite on an M1 Max and got 1 hour and 36 minutes of battery out of it? It's not efficient when you're not using the media engine, which is used for video editing and playback.
Yeah, but if you're looking at pure raster performance, it's the closest thing I could find.
Metal 3 and all the features there are going to swing things one way or another, but straight raster should be close. And let's be honest, nobody is running a 2060 mobile with ray tracing and blah blah blah thinking that it's a good 1080p performer.
Again, I posted a video from Linus Tech Tips where an RTX 3060 destroys M1 products in pure raster performance. Since the RTX 2060 is about the same in raster performance, the 2060 would still destroy an M1 machine. I don't know about the M2, but I seriously doubt it closes the gap. Do I need to post those videos again to show this? Only when video editing is involved do the M1s show their potential, and that's mainly from the media engine. Also, DLSS was created so games can be played at reasonable frame rates with ray tracing enabled, including on the RTX 2060. Minecraft is barely playable on the Apple M1, and it certainly can't do ray tracing like an RTX 2060; I do ray tracing in Minecraft on my Vega 56 in Linux. You can blame that on Minecraft using OpenGL, but that's still Apple's fault for having shit OpenGL drivers. Do you know how many applications still use OpenGL and have no plans to go to Vulkan, let alone Metal?



The reason Apple M1 apologists say the M1 is closer to a 2060 or 3060 is Tomb Raider, which seems to perform about the same on an M1 and a 3060. Once you move to other games, the differences become huge. Borderlands 3, which is available on Mac, has to run on low settings to get a playable frame rate, while on the 3060 it happily runs at max settings with no problems. Fortnite is native to Mac and gets about 120 fps, while a Ryzen 3 3200 with Vega 8 graphics can get the same frame rate. An RTX 3060 will get well over 200 fps in Fortnite, assuming everything is at 1080p. I've not found many gaming benchmarks of the M1 vs. PC, but the few I've found usually only cover Tomb Raider, and if they include other games, they're usually downvoted because Apple apologists didn't like the results. It's closer to a 1650 Ti in both raster performance and capabilities.
 
Linus Tech Tips just released this, and it starts with gaming benchmarks. Only World of Warcraft nearly matched the RTX 3090 in performance, and even then with graphical glitches.

 
When you look at the 744-second mark of that video (8YjMIjLLIwA),

with the power meter and the size of everything in view, it is a bit ridiculous how close to (or even above) a full 12900K + RTX 3090 some of the numbers are.

The fact that the scaling is close to 100% for many things is really impressive; it makes you wonder how high it could go if you just put more of it in.

The ending about the lack of advantage over a MacBook Pro does seem to make sense: if Apple makes its chips that efficient, one could imagine wanting much more IO on the desktop to make the difference attractive.
 
When you look at the 744-second mark of that video (8YjMIjLLIwA),

with the power meter and the size of everything in view, it is a bit ridiculous how close to (or even above) a full 12900K + RTX 3090 some of the numbers are.

The fact that the scaling is close to 100% for many things is really impressive; it makes you wonder how high it could go if you just put more of it in.

The ending about the lack of advantage over a MacBook Pro does seem to make sense: if Apple makes its chips that efficient, one could imagine wanting much more IO on the desktop to make the difference attractive.
The things to take away from the video are these:
  1. Dolphin, which uses ARM code and the Metal API, still runs slowly compared to PC.
  2. Civ 6 runs like dirt on the M1.
  3. CSGO is bad because it uses OpenGL and Apple gave up on OpenGL.
  4. Warhammer 3 requires an M1 but runs through Rosetta 2 because it needs Intel features.
  5. World of Warcraft was nearly as fast as the RTX 3090, but with visual artifacts.
  6. Video editing is still the Apple M1's strength.
  7. The M1 Ultra runs really hot, though the fan speed doesn't change.
 
The things to take away from the video are these:
I have the feeling everyone gave up on OpenGL?

I feel one of the main takeaways missing is how many things managed to nearly double in performance from the regular chip to the Ultra (some, like WoW, even more). I feel all of your points other than 5 and 7 were already true on the regular M1.
 
You get those artifacts on PC as well; it's a long-standing issue.
I never played Shadowlands so I can't confirm. I'm only in TBC.
I have the feeling everyone gave up on OpenGL?
Yes and no. OpenGL hasn't been a big thing on Windows for nearly two decades, with some exceptions here and there. OpenGL was the thing on Mac until Apple introduced Metal and basically left it to bit-rot. It's still the #1 API for Linux and Android; Vulkan is gaining traction, but OpenGL is still the default.
I feel one of the main takeaways missing is how many things managed to nearly double in performance from the regular chip to the Ultra (some, like WoW, even more). I feel all of your points other than 5 and 7 were already true on the regular M1.
It did double, but for the price you're paying you aren't getting the best possible performance outside of video editing.
 
Yes and no. OpenGL hasn't been a big thing on Windows for nearly two decades, with some exceptions here and there. OpenGL was the thing on Mac until Apple introduced Metal and basically left it to bit-rot. It's still the #1 API for Linux and Android; Vulkan is gaining traction, but OpenGL is still the default.
I mean, not really. GL is not the default for most Windows stuff, but it is still used a lot. Intel, AMD, and Nvidia all supply current accelerated OpenGL drivers. Can't say how good they are for AMD/Intel, but Nvidia's are extremely fast. Pro software is still heavily OpenGL, things like AutoCAD and such. I imagine in the long run it'll keep fading, but on Windows it is still very much a supported thing.
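If anyone wants to see what their driver actually exposes, here's a minimal sketch (mine, not from the post above) that creates a GL context and prints the vendor/renderer/version strings; it assumes the glfw and PyOpenGL packages are installed:

import glfw
from OpenGL.GL import glGetString, GL_VENDOR, GL_RENDERER, GL_VERSION

# Create a hidden window just to get a current OpenGL context.
glfw.init()
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)
window = glfw.create_window(64, 64, "gl-probe", None, None)
glfw.make_context_current(window)

# These strings come straight from the installed driver.
for name, enum in (("Vendor", GL_VENDOR), ("Renderer", GL_RENDERER),
                   ("Version", GL_VERSION)):
    print(f"{name}: {glGetString(enum).decode()}")

glfw.terminate()

Current Nvidia and AMD drivers on Windows still report an OpenGL 4.6 context, which backs up the "still very much supported" point.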
 
I never played Shadowlands so I can't confirm. I'm only in TBC.
Blizz introduced ray-traced shadows in Shadowlands. Nvidia eventually (recently) patched the issue, but on a Mac you have to turn shadows to low or off in the graphics settings. I don't know anybody playing with an AMD GPU, so I can't say with any certainty whether it happens there or not.
 
Pro software is still heavily OpenGL, things like AutoCAD and such. I imagine in the long run it'll keep fading, but on Windows it is still very much a supported thing.
Yes, but I feel it is pretty much all legacy/large momentum rather than genuine investment in OpenGL; the last OpenGL release will be five years old next month, and everyone seems to be shifting to Vulkan:
https://develop3d.com/visualisation/solidworks-vulkan-ray-tracing/

At work we are still full OpenGL, but for the same reason we are still on OpenGL 3.x: not because of any faith in the future of OpenGL, but because we have no need for anything fancy rendering-wise (which will be common in the industrial/chemistry/etc. rendering world). Having good OpenGL drivers does not mean much effort has been put into them in the last three years, I feel.
 
Yes, but I feel it is pretty much all legacy/large momentum rather than genuine investment in OpenGL; the last OpenGL release will be five years old next month, and everyone seems to be shifting to Vulkan:
https://develop3d.com/visualisation/solidworks-vulkan-ray-tracing/

At work we are still full OpenGL, but for the same reason we are still on OpenGL 3.x: not because of any faith in the future of OpenGL, but because we have no need for anything fancy rendering-wise (which will be common in the industrial/chemistry/etc. rendering world). Having good OpenGL drivers does not mean much effort has been put into them in the last three years, I feel.
For sure, it is basically legacy at this point; I'm just making the point that it is still very much supported and used. It isn't gone, contrary to what Apple might think.
 
Blizz introduced ray-traced shadows in Shadowlands. Nvidia eventually (recently) patched the issue, but on a Mac you have to turn shadows to low or off in the graphics settings. I don't know anybody playing with an AMD GPU, so I can't say with any certainty whether it happens there or not.
I have an RX 6800 and Shadowlands on Alder Lake. I haven't played in a while but I can log in later tonight and look, if you can tell me what settings cause the issue.
 
I have an RX 6800 and Shadowlands on Alder Lake. I haven't played in a while but I can log in later tonight and look, if you can tell me what settings cause the issue.
Shadows on higher settings using DX12 while moving the camera around. You have to get the angle right so you're clipping between the light and the object.
 
Shadows on higher settings using DX12 while moving the camera around. You have to get the angle right so you're clipping between the light and the object.
Ok, I haven't logged in in a few months, but there's a social contract now? *Scoffs vulgarly*
 
Shadows on higher settings using DX12 while moving the camera around. You have to get the angle right so you're clipping between the light and the object.
Can you suggest a place that isn't that raid, because I haven't done it yet? I'm wandering around Oribos a bit, and I can see the difference with ray-traced shadows; there's a place near the bank with a forge that throws orange light and demonstrates the dynamic lighting pretty well, casting most of my character orange. But I haven't yet found a place where I can try to repro what's in the video.
 
Can you suggest a place that isn't that raid, because I haven't done it yet? I'm wandering around Oribos a bit, and I can see the difference with ray-traced shadows; there's a place near the bank with a forge that throws orange light and demonstrates the dynamic lighting pretty well, casting most of my character orange. But I haven't yet found a place where I can try to repro what's in the video.
It's situational; you can trigger it in Revendreth pretty easily in the wild or flying around. It's been months since I've played, but that's where I remember it being worst.
 