Video RAM vs CUDA Cores

Peat Moss

Limp Gawd
Joined
Oct 6, 2009
Messages
368
I'm trying to decide between an RTX 3060 and 3060 Ti for my next build. The 3060 has more memory, but the 3060 Ti has more CUDA cores.

Just wondering what programs would benefit from more CUDA cores, and which applications would benefit from more memory? What are the trade-offs?
 

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
7,029
I'm trying to decide between an RTX 3060 and 3060 Ti for my next build. The 3060 has more memory, but the 3060 Ti has more CUDA cores.

Just wondering what programs would benefit from more CUDA cores, and which applications would benefit from more memory? What are the trade-offs?
This isn't directly answering your question, but I think you're going about it backwards. It's much better to figure out and know what your use case is, then figure out what is best for your use case rather than the other way around.

If you're going to be rendering all day, as an example, then you probably know which pieces of software you need to run, and you can look up benchmarks for yourself to see which performs better running that software. The same goes for photo/video work or AI work or mining or whatever your use case is.
(Forgive me if this is more just a theoretical question, I understand some people really just like theoretical knowledge - I'm a bit more pragmatic in my approach).


The Ti is, generally speaking, faster in all games. Memory only has an effect on texture sizes, which generally aren't relevant unless you're playing at 4k+ resolutions. It's likely that texture sizes will increase in the near future, but it still probably won't matter for those playing at 2560x1440 or 1920x1080. There is generally a "texture slider" for a reason, and if a game has "8k textures" it won't benefit users playing at lower resolutions. In other words, you'd be able to lower the texture slider away from "maximum" or "extreme", gaining performance without any visual penalty while also lowering vRAM usage at those lower resolutions.
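To put rough numbers on the texture point, here's a back-of-envelope sketch (purely illustrative: real games use compressed formats like BCn that cut these figures by 4-8x, and the 4 bytes/pixel and mipmap overhead are assumptions, not measured values):

```python
# Rough VRAM cost of a single uncompressed color texture.
# Assumptions: 4 bytes per pixel (RGBA8) and a full mipmap chain,
# which adds about one third on top of the base level.

def texture_vram_mb(width: int, height: int, bytes_per_pixel: int = 4,
                    mipmaps: bool = True) -> float:
    """Approximate VRAM use of one texture, in MiB."""
    base = width * height * bytes_per_pixel
    total = base * 4 / 3 if mipmaps else base
    return total / (1024 ** 2)

for size in (2048, 4096, 8192):
    print(f"{size}x{size}: ~{texture_vram_mb(size, size):.0f} MB")
# 2048x2048: ~21 MB, 4096x4096: ~85 MB, 8192x8192: ~341 MB
```

Stepping the slider down from "8k" to "4k" textures cuts the per-texture cost by 4x, which is why lowering it frees vRAM with no visual cost on lower-resolution displays.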



EDIT: Also, if you see either card in stock and you can buy it at MSRP, then you probably should just do so. Most people aren't struggling with the decision of "which card to buy"; they're struggling with just "trying to buy a card".
 

Sir Beregond

Limp Gawd
Joined
Oct 12, 2020
Messages
228
Yeah what UnknownSouljer said is perfect.

If you want to know which one is faster in gaming, that's easy...the 3060 Ti. Higher VRAM is useful for high-quality textures at high resolutions, but it doesn't exist in a vacuum. Higher VRAM should really be paired with higher-end GPUs. To me, I doubt 12GB serves the 3060 much. I would have rather seen a 12GB 3070.

If you are asking for non-gaming use cases then you need to provide what software/rendering/compute stuff you would be using.
 

x509

2[H]4U
Joined
Sep 20, 2009
Messages
2,491
[...]

If you are asking for non-gaming use cases then you need to provide what software/rendering/compute stuff you would be using.
How about photo editing applications like Adobe Photoshop or Lightroom. Problem is, I can't just get 3-4 different cards to try them out, and then return all except the one I want to keep.
 

LukeTbk

[H]ard|Gawd
Joined
Sep 10, 2020
Messages
1,070
Just wondering what programs would benefit from more CUDA cores, and which applications would benefit from more memory? What are the trade-offs?
https://www.pugetsystems.com/recommended/Recommended-Systems-for-Adobe-Photoshop-139/Hardware-Recommendations#:~:text=How much VRAM (video card,of VRAM should be plenty.

Apparently:
unless you have multiple 4K displays, even 4GB of VRAM should be plenty

Lightroom:
Since Lightroom Classic does not heavily use the GPU, VRAM is typically not a concern. If you have a 4K display we recommend having at least 6GB of VRAM, although all the video cards we currently offer for Lightroom have at least 8GB of VRAM.

Because they sell systems, they could be exaggerating how much VRAM is needed a little bit, but it gives an idea.
 

Peat Moss

Limp Gawd
Joined
Oct 6, 2009
Messages
368
This isn't directly answering your question, but I think you're going about it backwards. It's much better to figure out and know what your use case is, then figure out what is best for your use case rather than the other way around.

If you're going to be rendering all day, as an example, then you probably know which pieces of software you need to run, and you can look up benchmarks for yourself to see which performs better running that software. The same goes for photo/video work or AI work or mining or whatever your use case is.
(Forgive me if this is more just a theoretical question, I understand some people really just like theoretical knowledge - I'm a bit more pragmatic in my approach).


The Ti is, generally speaking, faster in all games. Memory only has an effect on texture sizes, which generally aren't relevant unless you're playing at 4k+ resolutions. It's likely that texture sizes will increase in the near future, but it still probably won't matter for those playing at 2560x1440 or 1920x1080. There is generally a "texture slider" for a reason, and if a game has "8k textures" it won't benefit users playing at lower resolutions. In other words, you'd be able to lower the texture slider away from "maximum" or "extreme", gaining performance without any visual penalty while also lowering vRAM usage at those lower resolutions.



EDIT: Also, if you see either card in stock and you can buy it at MSRP, then you probably should just do so. Most people aren't struggling with the decision of "which card to buy"; they're struggling with just "trying to buy a card".

Thanks, you're right, I should have specified what work apps I use:

Adobe Photoshop
Adobe Premiere Elements (thinking of switching to DaVinci Resolve)
Navisworks 3D viewer
I sometimes have more than a dozen browser tabs open, plus Premiere Elements, plus Photoshop.

Just one 4K monitor 27"

I don't game.
 

x509

2[H]4U
Joined
Sep 20, 2009
Messages
2,491
https://www.pugetsystems.com/recommended/Recommended-Systems-for-Adobe-Photoshop-139/Hardware-Recommendations#:~:text=How much VRAM (video card,of VRAM should be plenty.

Apparently:
unless you have multiple 4K displays, even 4GB of VRAM should be plenty

Lightroom:
Since Lightroom Classic does not heavily use the GPU, VRAM is typically not a concern. If you have a 4K display we recommend having at least 6GB of VRAM, although all the video cards we currently offer for Lightroom have at least 8GB of VRAM.

Because they sell systems, they could be exaggerating how much VRAM is needed a little bit, but it gives an idea.
Thanks for that link. The chart on that web page shows me that a 3060 is within a few percent of a 3090. The money I "save" by not getting a 3090 would buy me a very nice 4K display. ;)
 

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
7,029
For the most part, in your use case, you'd benefit more from the faster card (Ti).

Thanks, you're right, I should have specified what work apps I use:

Adobe Photoshop
Photoshop doesn't push the GPU that much. There are a few filters that will use the GPU, but it's primarily a CPU-intensive app. You can more or less use any mid-range graphics card from the past 10 years and have it run fine with Photoshop.

Where vRAM does come into play is if you work with incredibly large files. If you work with medium format GFX100 or PhaseOne 100MP files, OR you are a Photoshop artist who tends to have hundreds of layers, then there would be a benefit to higher vRAM. Basically, as long as your entire file can load into the vRAM the video card has, you're fine. More vRAM beyond what's needed to keep your file in memory and drive your display doesn't give any increased performance benefit. Even 100MP files don't tend to be more than 400MB a pop - so if you had 10 or so open and they each had multiple layers and adjustments to push you past 8GB of vRAM, then it would start to make a difference. Again, for the majority of use cases, the faster card is more beneficial here (in this case the Ti). For me, I generally work with 42MP files and they're 80MB a pop. After I finish all my retouching and other adjustment layer work, they get up to between 250MB-450MB. It never gets close to filling up my vRAM. Generally the slow spinning-rust HDDs are a much bigger slowdown (saving/loading) - my GPU for the most part is taking a nap.
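To make that "does the file fit in vRAM" arithmetic concrete, here's a rough sketch. It assumes 8-bit RGBA (4 bytes/pixel) and, pessimistically, treats every layer as a full pixel layer; adjustment layers are nearly free, so real documents come in well under these numbers:

```python
# Back-of-envelope decompressed size of an open layered document,
# to gauge when it would outgrow an 8 GB card.

def open_file_mb(megapixels: float, layers: int,
                 bytes_per_pixel: int = 4) -> float:
    """Approximate in-memory size in MiB, assuming full pixel layers."""
    return megapixels * 1e6 * bytes_per_pixel * layers / (1024 ** 2)

# A 42 MP file with a handful of layers stays well under 1 GB...
print(f"42 MP, 5 layers:   {open_file_mb(42, 5) / 1024:.2f} GB")
# ...while a 100 MP composite with dozens of pixel layers does not.
print(f"100 MP, 50 layers: {open_file_mb(100, 50) / 1024:.2f} GB")
```

The numbers line up with the point above: only the extreme combination of 100MP sources and many full pixel layers pushes past an 8GB frame buffer.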

Lightroom tends to be a bigger problem if you're trying to scroll through a lot of images quickly. But if your library has smaller previews set up, it's a non-issue. Again, the problem there being that all my files are on slow HDDs. The HDDs are the bottleneck, not my graphics card. As soon as the image fully loads in vRAM, it's smooth again. But LR, like PS, also isn't a particularly optimized program.
Adobe Premiere Elements (thinking of switching to DaVinci Resolve)
Premiere again doesn't properly utilize GPUs. The Mercury Playback Engine is ancient. Premiere continues to be a mostly CPU-limited, single-core-limited program. It does have CUDA acceleration for some things, but again, you aren't likely to be bottlenecked by vRAM in this application. Take the faster card.

Resolve is much more interesting. Resolve actually is coded properly and will use every drop of CPU, GPU, RAM, and disk resources you have. For that app you get gains everywhere: the faster the components you can use, the faster it will go. It still will benefit more from the faster card than from more vRAM. The only case where the increased vRAM could come into play is if you need a big frame buffer, like for 8k footage. I think with 6k and under you'll be fine with 8GB of vRAM. However, I would suggest that you check around the web yourself for people benchmarking Resolve with various cards.
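One way to sanity-check those resolution cut-offs is to compute raw frame sizes. The 4 bytes/pixel figure and the 6k raster below are assumptions (NLEs use a variety of internal buffer formats), so treat this as an order-of-magnitude sketch:

```python
# Uncompressed frame sizes at common resolutions, and how many whole
# frames would fit in a given amount of vRAM.

RESOLUTIONS = {
    "4K UHD":  (3840, 2160),
    "6K 16:9": (6144, 3456),   # assumed 16:9 6k raster
    "8K UHD":  (7680, 4320),
}

def frames_in_vram(width: int, height: int, vram_gb: float,
                   bytes_per_pixel: int = 4) -> int:
    """How many uncompressed frames fit in vram_gb of memory."""
    frame_bytes = width * height * bytes_per_pixel
    return int(vram_gb * 1024 ** 3 // frame_bytes)

for name, (w, h) in RESOLUTIONS.items():
    mb = w * h * 4 / 1024 ** 2
    print(f"{name}: {mb:.0f} MB/frame, {frames_in_vram(w, h, 8)} frames in 8 GB")
```

At 8k an 8GB card holds only a few dozen raw frames before the working set (cache, grades, effects) spills out of vRAM, which is why the frame buffer starts to matter at that resolution.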

Also, for what it's worth, I'd move to Resolve as fast as you can - it's a much better program than Premiere in virtually every way, except in perhaps two areas: 1.) Premiere has more third-party plugins and third-party support, 2.) fewer people know how to use Resolve versus Premiere (if you need to collaborate with others). I will say that both of those issues are rapidly shrinking, though. Resolve is the standard now for color grading (and has been for a while), and more editors are also starting to move to it as an NLE. It's only a matter of time before more folks are using its built-in After Effects-like features as well. A lot of the major third-party plugins that I care about are already making versions for Resolve, and most of the specialty plugins, Blackmagic has already made themselves. Their noise reduction "node", as an example, is excellent - it's not necessary to use something like Neat Noise Reduction.
Navisworks 3D viewer
This I can't really respond on. You'll have to do research, but I would tend to think that it would be similar to video editing in that faster card equals faster render. The only way vRAM could play a part is if more needs to be placed in vRAM while rendering than the card can offer. I would imagine that that is a pretty small problem or use case as well, but perhaps you're doing renders with massive texture sizes at ultra high resolutions, necessitating more vRAM.
I sometimes have more than a dozen browser tabs open, plus Premiere Elements, plus Photoshop.
This is tough to answer. I think we all have browsers and apps open at the same time. Again, Photoshop and Premiere won't be particularly taxing on your video card. Ironically, it might be Chrome (if you use that browser) that slows down your machine the most. Again, literally the only case in which having more vRAM over the faster card will matter is if you're using apps that actually use more vRAM than you have, leading to a big slowdown as the vRAM has to swap with the HDD (or hopefully SSD). For everything else, the faster card is faster. Apps needing more vRAM are generally more niche, but again, if you're dealing with massive files, then that's the very small, very limited use case that would benefit from more vRAM.
Just one 4K monitor 27"

I don't game.
 

x509

2[H]4U
Joined
Sep 20, 2009
Messages
2,491
For the most part, in your use case, you'd benefit more from the faster card (Ti).


Photoshop doesn't push the GPU that much. There are a few filters that will use the GPU, but it's primarily a CPU-intensive app. You can more or less use any mid-range graphics card from the past 10 years and have it run fine with Photoshop.

Where vRAM does come into play is if you work with incredibly large files. If you work with medium format GFX100 or PhaseOne 100MP files, OR you are a Photoshop artist who tends to have hundreds of layers, then there would be a benefit to higher vRAM. Basically, as long as your entire file can load into the vRAM the video card has, you're fine. More vRAM beyond what's needed to keep your file in memory and drive your display doesn't give any increased performance benefit. Even 100MP files don't tend to be more than 400MB a pop - so if you had 10 or so open and they each had multiple layers and adjustments to push you past 8GB of vRAM, then it would start to make a difference. Again, for the majority of use cases, the faster card is more beneficial here (in this case the Ti). For me, I generally work with 42MP files and they're 80MB a pop. After I finish all my retouching and other adjustment layer work, they get up to between 250MB-450MB. It never gets close to filling up my vRAM. Generally the slow spinning-rust HDDs are a much bigger slowdown (saving/loading) - my GPU for the most part is taking a nap.

Lightroom tends to be a bigger problem if you're trying to scroll through a lot of images quickly. But if your library has smaller previews set up, it's a non-issue. Again, the problem there being that all my files are on slow HDDs. The HDDs are the bottleneck, not my graphics card. As soon as the image fully loads in vRAM, it's smooth again. But LR, like PS, also isn't a particularly optimized program.

[...]
UnknownSouljer You seem more knowledgeable than most people about Adobe programs, so maybe you can confirm or deny this "rumor." The "rumor" is that Adobe optimizes its programs for CUDA and Nvidia, in preference to AMD.
 

Peat Moss

Limp Gawd
Joined
Oct 6, 2009
Messages
368
For the most part, in your use case, you'd benefit more from the faster card (Ti).


Photoshop doesn't push the GPU that much. There are a few filters that will use the GPU, but it's primarily a CPU-intensive app. You can more or less use any mid-range graphics card from the past 10 years and have it run fine with Photoshop.

Where vRAM does come into play is if you work with incredibly large files. If you work with medium format GFX100 or PhaseOne 100MP files, OR you are a Photoshop artist who tends to have hundreds of layers, then there would be a benefit to higher vRAM. Basically, as long as your entire file can load into the vRAM the video card has, you're fine. More vRAM beyond what's needed to keep your file in memory and drive your display doesn't give any increased performance benefit. Even 100MP files don't tend to be more than 400MB a pop - so if you had 10 or so open and they each had multiple layers and adjustments to push you past 8GB of vRAM, then it would start to make a difference. Again, for the majority of use cases, the faster card is more beneficial here (in this case the Ti). For me, I generally work with 42MP files and they're 80MB a pop. After I finish all my retouching and other adjustment layer work, they get up to between 250MB-450MB. It never gets close to filling up my vRAM. Generally the slow spinning-rust HDDs are a much bigger slowdown (saving/loading) - my GPU for the most part is taking a nap.

Lightroom tends to be a bigger problem if you're trying to scroll through a lot of images quickly. But if your library has smaller previews set up, it's a non-issue. Again, the problem there being that all my files are on slow HDDs. The HDDs are the bottleneck, not my graphics card. As soon as the image fully loads in vRAM, it's smooth again. But LR, like PS, also isn't a particularly optimized program.

Premiere again doesn't properly utilize GPUs. The Mercury Playback Engine is ancient. Premiere continues to be a mostly CPU-limited, single-core-limited program. It does have CUDA acceleration for some things, but again, you aren't likely to be bottlenecked by vRAM in this application. Take the faster card.

Resolve is much more interesting. Resolve actually is coded properly and will use every drop of CPU, GPU, RAM, and disk resources you have. For that app you get gains everywhere: the faster the components you can use, the faster it will go. It still will benefit more from the faster card than from more vRAM. The only case where the increased vRAM could come into play is if you need a big frame buffer, like for 8k footage. I think with 6k and under you'll be fine with 8GB of vRAM. However, I would suggest that you check around the web yourself for people benchmarking Resolve with various cards.

Also, for what it's worth, I'd move to Resolve as fast as you can - it's a much better program than Premiere in virtually every way, except in perhaps two areas: 1.) Premiere has more third-party plugins and third-party support, 2.) fewer people know how to use Resolve versus Premiere (if you need to collaborate with others). I will say that both of those issues are rapidly shrinking, though. Resolve is the standard now for color grading (and has been for a while), and more editors are also starting to move to it as an NLE. It's only a matter of time before more folks are using its built-in After Effects-like features as well. A lot of the major third-party plugins that I care about are already making versions for Resolve, and most of the specialty plugins, Blackmagic has already made themselves. Their noise reduction "node", as an example, is excellent - it's not necessary to use something like Neat Noise Reduction.

This I can't really respond on. You'll have to do research, but I would tend to think that it would be similar to video editing in that faster card equals faster render. The only way vRAM could play a part is if more needs to be placed in vRAM while rendering than the card can offer. I would imagine that that is a pretty small problem or use case as well, but perhaps you're doing renders with massive texture sizes at ultra high resolutions, necessitating more vRAM.

This is tough to answer. I think we all have browsers and apps open at the same time. Again, Photoshop and Premiere won't be particularly taxing on your video card. Ironically, it might be Chrome (if you use that browser) that slows down your machine the most. Again, literally the only case in which having more vRAM over the faster card will matter is if you're using apps that actually use more vRAM than you have, leading to a big slowdown as the vRAM has to swap with the HDD (or hopefully SSD). For everything else, the faster card is faster. Apps needing more vRAM are generally more niche, but again, if you're dealing with massive files, then that's the very small, very limited use case that would benefit from more vRAM.


Thanks so much for your detailed reply.

One last question: What about future proofing? Would 8 GB of vRAM last 5 or 6 years? What is the pace for doubling vRAM?
 

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
7,029
UnknownSouljer You seem more knowledgeable than most people about Adobe programs, so maybe you can confirm or deny this "rumor." The "rumor" is that Adobe optimizes its programs for CUDA and Nvidia, in preference to AMD.
When Premiere first started really getting used in the late 00's and early 10's, it was definitely more optimized for nVidia hardware. Adobe still hasn't really optimized the Mercury Playback Engine since that time literally over 10 years ago - which is one of the reasons it continues to run like trash and also has terrible bugs that they're just now finally getting around to fixing.
My commentary on this aside, believe it or not I think some level of newer optimization has come to Premiere just due to pressure from Apple. Apple hasn't built a computer with an nVidia GPU since 2011. Apple has also made the push to 64-bit only in execution. And Metal is the only renderer as Apple continues to essentially not support OpenGL except on legacy programs.
All of those things alone, I believe, have forced Adobe to start optimizing their code for AMD hardware, 64-bit instruction sets, and Metal at minimum. There are some other things as well, like the adoption of ProRes & ProRes RAW, that give some level of credence to these theories. This is all true at least on the Apple side, but I assume it is also true on the Windows side as well.

Truth be told though, I haven't worked directly with Premiere in about 5 years for reasons that I've indirectly expressed. It's a terrible, buggy, slow, program that honestly shouldn't be the standard for small production houses or individual users. Not when other NLE's such as Blackmagic Davinci Resolve are so much faster, better optimized, more powerful, cheaper(!), and easier to use than Premiere. Premiere is literally only used at this point because it had first mover advantage and a lot of people learned that NLE first and more or less are entrenched in Adobe's ecosystem.

Thanks so much for your detailed reply.

One last question: What about future proofing? Would 8 GB of vRAM last 5 or 6 years? What is the pace for doubling vRAM?
My opinion in general is that there is no such thing as future proofing, and you should buy based around what you need to do today. If our conversation truly is about future proofing, then you should just spend $6000+ on two nVidia 3090's, as they give the greatest likelihood of being relevant that far in the future, if you want/need some guarantee of longevity. For everyone else it makes far more sense to spend a fraction of that and then just buy again when it's needed. There is basically zero certainty in any prediction 5 years out - if you can do that, you should be playing the stock market or making bets or something of that nature.
To drive that point home, what if in 5 years it's pointless to use a PC for video editing, because Apple has taken over the whole market with ARM chips that are 5x faster than WinTel chips? Or what if that happens to the entire market, with Microsoft, Apple, Qualcomm, and nVidia all moving to ARM, and your graphics card ceases to be relevant because it's still part of a system-integrator paradigm instead of a fully integrated one? Whether you think that's a silly prediction or a serious one, the point remains the same: there isn't a way you or I could possibly know.

The only thing you can predict maybe with some degree of accuracy is your own uses and use cases. If you plan to move to 8k or 12k workflows in the next 5 years then likely 8GB vRAM won't be enough for that. But then you could tell yourself that if you're making enough money to be either shooting or working with 8k video in the first place you'll have the money to replace your video card. Frankly if this is for any sort of serious business work then $400 2 years from now to edit 8k video as a business expense isn't a big deal.
If that's a big concern, then I would seriously consider looking at AMD options, which generally have more vRAM per card, or moving up to higher-end graphics cards in nVidia's product stack that also have more vRAM. Heck, or even getting a used 2080Ti or something like that (if you can find one... good luck, the market is jammed up right now, to say the least). nVidia for the last 3 gens has tended to be pretty stingy with their vRAM allotment. The only other thing you can do is wait and hope and pray that Ti versions of a lot of their other cards come out with higher vRAM and then try to buy one of those. But if you can predict what those cards will be, what they will cost, and their availability, then your crystal ball works far better than mine (and/or you have insider information).
 

x509

2[H]4U
Joined
Sep 20, 2009
Messages
2,491
UnknownSouljer You convinced me to take another look at AMD 6000 cards. Then I saw the pricing. Worse than Nvidia 3060 Ti, even for a 6800.
 

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
7,029
UnknownSouljer You convinced me to take another look at AMD 6000 cards. Then I saw the pricing. Worse than Nvidia 3060 Ti, even for a 6800.
The price on a lot of things is pretty messed up. The 6800 is designed to go against the 3070. There is basically a 5% difference in speed in Geekbench between a 3060Ti and a 3070. So nVidia priced all of this stuff pretty aggressively to put the squeeze on AMD, but also consequently on themselves. In either case though, if vRAM is the concern - for things that will exceed an 8GB frame buffer, it's worth it to move to a 6800 over the 3060Ti or 3070, and there will be a significant performance improvement, as it has twice the vRAM of nVidia's offerings. Again, as I said in my earlier posts, 8GB of vRAM just isn't enough to edit 8k video. If you're editing 8K RED RAW today, you'd pay the price premium for the 6800 because that vRAM matters. Every time the vRAM runs out and you're feeling the buffering from your HDD, it's going to suck. You can still do it on those 8GB cards, but you'll have zero ability to do real-time playback at native resolution - especially after you've started to add things like color grading, noise reduction, or any form of filtration, on down the line.

I don't believe in future proofing either, as my earlier posts allude, but in theory higher-resolution textures and more geometry are coming. I think nVidia has smartly stayed at 8GB for a long time to save cash, but I also think there is a real possibility that within this generation 8GB won't be enough for games. Only time will tell though.

However, I would say again, honestly if you can buy any of these cards, you should probably just do so. Even if you don't want to use it you'll likely be able to flip it on Craigslist for 30% above cost. Honestly discussing which cards are faster for which purposes is purely academic at this point. Few people can get cards with top tier performance from either this generation or the previous one regardless of used or new without paying some absurd premium.
 

x509

2[H]4U
Joined
Sep 20, 2009
Messages
2,491
The price on a lot of things is pretty messed up. The 6800 is designed to go against the 3070. There is basically a 5% difference in speed in Geekbench between a 3060Ti and a 3070. So nVidia priced all of this stuff pretty aggressively to put the squeeze on AMD, but also consequently on themselves. In either case though, if vRAM is the concern - for things that will exceed an 8GB frame buffer, it's worth it to move to a 6800 over the 3060Ti or 3070, and there will be a significant performance improvement, as it has twice the vRAM of nVidia's offerings. Again, as I said in my earlier posts, 8GB of vRAM just isn't enough to edit 8k video. If you're editing 8K RED RAW today, you'd pay the price premium for the 6800 because that vRAM matters. Every time the vRAM runs out and you're feeling the buffering from your HDD, it's going to suck. You can still do it on those 8GB cards, but you'll have zero ability to do real-time playback at native resolution - especially after you've started to add things like color grading, noise reduction, or any form of filtration, on down the line.

I don't believe in future proofing either, as my earlier posts allude, but in theory higher-resolution textures and more geometry are coming. I think nVidia has smartly stayed at 8GB for a long time to save cash, but I also think there is a real possibility that within this generation 8GB won't be enough for games. Only time will tell though.

However, I would say again, honestly if you can buy any of these cards, you should probably just do so. Even if you don't want to use it you'll likely be able to flip it on Craigslist for 30% above cost. Honestly discussing which cards are faster for which purposes is purely academic at this point. Few people can get cards with top tier performance from either this generation or the previous one regardless of used or new without paying some absurd premium.
I don't do anything like editing 8K RAW. I'll be happy when I can upgrade to a 4K monitor for Lightroom and Photoshop.
 

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
7,029
I don't do anything like editing 8K RAW. I'll be happy when I can upgrade to a 4K monitor for Lightroom and Photoshop.
Nothing really stopping you from doing that now, outside of essentially everything being overpriced.
I've been working with 4k for about 4 years and recently moved to 6k this year. However for straight up LR and PS all of this stuff really doesn't matter at all. As 'controversial' as this might be to say, my machine from 2011 was more than sufficient to deal with all the work I was doing at 20+/- MP in both LR and PS at 4k and only started to be a problem with 42MP files. An RX580 (even a 4GB one) is PLENTY if all you do is work with 50MP and under in PS, LR, and C1. vRAM will generally not be an issue and GPU acceleration also not generally an issue unless you're working with video. PS and LR's requirements have basically plateaued while there have been fairly big gains in the past 2-3 years in compute (see AMD Ryzen and their forcing competition against Intel) and graphics cards from 2010 until now have had huge performance gains. So if all you need is to just do work today with photos, just grab an 8GB RX580 and call it a day. I would argue you won't see enough performance difference between spending $200 and $1000 on a graphics card to justify the cost for just LR and PS.

For most people a laptop is more than sufficient for PS and LR. And at this point, if you don't game at all, the new M1 Mac Mini makes processing and working even with medium format files a joke, and that whole machine can be had for less than the cost of these absurd high-end video cards from AMD and nVidia. It also decodes h.265 far faster than almost any dedicated GPU/CPU and more or less destroys any work that you need to do at up to 4k resolution in video. For our OP, honestly, since he doesn't game, the M1 is probably a much better investment and likely to be more "future proof" than a 3060Ti specifically for photo/video editing (as a side note, I'd actually wait for second-gen Mac ARM hardware if you can; it'll have way more kinks worked out and will of course be much faster, with much better GPU acceleration and more ports). Of course that rendering program is a wild card, and I'm not sure if it's supported on macOS (I checked: it's made by Autodesk, and they haven't chosen to develop it for macOS 'yet' for whatever reason). That's probably the program keeping him on a WinTel machine, other than a possible dislike of Apple hardware/software/ecosystem.
 