Core i9-10980XE Review Roundup

Well, it's more of the same from Intel. The price cut is nice, but you'd have to overclock the hell out of it and need the extra lanes and memory bandwidth for it to make sense over a 3950X. Or you'd have to have an existing X299 setup without a 7980XE or 9980XE processor already.

I think it makes the most sense where you need more lanes than LGA1151 or AM4 provide, need the best single-core performance you can get, and still need great multi-threaded performance.

I'm also surprised that AMD let Intel have this one. Had they built a TR version of the 3950X optimized for overclocking (or efficiency at stock) the 10980XE likely wouldn't have a leg to stand on.


[with respect to cooling, it's not like it's challenging or even expensive to toss a 280mm or 360mm AIO sandwich into any number of inexpensive enthusiast chassis...]
 
When you pull the crap Intel did with the embargo lift and release something that is barely faster than the chip it's supposed to be replacing, they deserve it.

I get that people like to judge with their feelings, most especially when it comes to AMD -- so many are seriously way too sensitive.

Intel's product lineup is tapped out; this is the very best that they can do, and it's actually pretty good. Even in these 'very negative' reviews being celebrated in this thread, you find the reviewers pretty much admitting as much, even as they give Intel its due for the marketing coup.
 
I think it makes the most sense where you need more lanes than LGA1151 or AM4 provide, need the best single-core performance you can get, and still need great multi-threaded performance.

I'm also surprised that AMD let Intel have this one. Had they built a TR version of the 3950X optimized for overclocking (or efficiency at stock) the 10980XE likely wouldn't have a leg to stand on.


[with respect to cooling, it's not like it's challenging or even expensive to toss a 280mm or 360mm AIO sandwich into any number of inexpensive enthusiast chassis...]

That's because they're going to continue producing the 2950X and 2970WX/2990WX processors, so those will fill in the price gap between the 3950X and 3960X instead of spreading their limited supply of 7nm chiplets even thinner than it already is.


I get that people like to judge with their feelings, most especially when it comes to AMD -- so many are seriously way too sensitive.

Intel's product lineup is tapped out; this is the very best that they can do, and it's actually pretty good. Even in these 'very negative' reviews being celebrated in this thread, you find the reviewers pretty much admitting as much, even as they give Intel its due for the marketing coup.

It has nothing to do with feelings, but whatever floats your boat. Guess I'll just put you back on the ignore list.
 
Intel's product lineup is tapped out; this is the very best that they can do, and it's actually pretty good. Even in these 'very negative' reviews being celebrated in this thread, you find the reviewers pretty much admitting as much, even as they give Intel its due for the marketing coup.
No, it's far from pretty good. It's barely better than a cheaper CPU in a few workloads and gets trashed by the more expensive one. It's good in limited scenarios at best.
 
No, it's far from pretty good. It's barely better than a cheaper CPU in a few workloads and gets trashed by the more expensive one. It's good in limited scenarios at best.

...a cheaper CPU on a consumer platform. That cheaper CPU is also faster than its more expensive siblings, core for core.

So yeah, limited number of workloads, but when is HEDT any more than 'limited'?
 
That's because they're going to continue producing the 2950X and 2970WX/2990WX processors, so those will fill in the price gap between the 3950X and 3960X instead of spreading their limited supply of 7nm chiplets even thinner than it already is.

The ones that don't overclock and have NUMA issues for consumer workloads?

Okay, I guess that's a solution.

It has nothing to do with feelings, but whatever floats your boat. Guess I'll just put you back on the ignore list.

Sure it does. You're not interested in performance, you're interested in how said performance is portrayed, and how a company handles their marketing.

Also, have a like for the ignore, some people just can't handle objectivity (y)
 
...a cheaper CPU on a consumer platform. That cheaper CPU is also faster than its more expensive siblings, core for core.

So yeah, limited number of workloads, but when is HEDT any more than 'limited'?
If one requires HEDT features (PCIe lanes, memory bandwidth...) there is limited reason to get the 10980XE over any of the new Threadrippers. The price difference compared to the 3960X isn't enough to justify the performance loss, and the 3970X is just that much better. Not to mention the 3990X upgrade path that is in a league of its own.
 
For 2 times the price.

True, but:

https://www.anandtech.com/show/1504...0x-and-3970x-review-24-and-32-cores-on-7nm/15

"Now the HEDT market is a tricky one to judge. As one might expect, overall sales numbers aren’t on the level of the standard consumer volumes. Still, Intel has reported that the workstation market has a potential $10B a year addressable market, so it is still worth pursuing. While I have no direct quotes or data, I remember being told for several generations that Intel’s best-selling HEDT processors were always the highest core count, highest performance parts that money could buy. These users wanted off-the-shelf hardware, and were willing to pay for it – they just weren’t willing to pay for enterprise features. I was told that this didn’t necessarily follow when Intel pushed for 10 cores to $1979, when 8 cores were $999, but when $1979 became 18 cores, a segment of the market pushed for it. Now that we can get better performance at $1999 with 32 cores, assuming AMD can keep stock of the hardware, it stands to reason that this market will pick up interest again."
 
If one requires HEDT features (PCIe lanes, memory bandwidth...) there is limited reason to get the 10980XE over any of the new Threadrippers. The price difference compared to the 3960X isn't enough to justify the performance loss, and the 3970X is just that much better. Not to mention the 3990X upgrade path that is in a league of its own.

You're not wrong, but there's more to consider, such as board cost and single-core performance, in addition to CPU cost.

Again, highly user dependent.
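
To make the "more to consider" point concrete, here's a rough total-platform sketch. The CPU figures are launch MSRPs; the board figures are placeholder numbers purely for illustration, not actual prices, so swap in real street prices before drawing conclusions.

```python
# Illustrative platform-cost comparison only -- board prices are placeholder
# figures (assumptions), CPU prices are launch MSRPs.

platforms = {
    "10980XE + X299 board": {"cpu": 979,  "board": 300},   # board price: assumed
    "3960X + TRX40 board":  {"cpu": 1399, "board": 450},   # board price: assumed
    "3950X + X570 board":   {"cpu": 749,  "board": 250},   # board price: assumed
}

for name, parts in platforms.items():
    total = parts["cpu"] + parts["board"]
    print(f"{name}: ${total}")
```

Even with made-up board numbers, the point stands: the gap between the options shifts once you price the whole platform rather than the CPU alone.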
 
I think it makes the most sense where you need more lanes than LGA1151 or AM4 provide, need the best single-core performance you can get, and still need great multi-threaded performance.

I'm also surprised that AMD let Intel have this one. Had they built a TR version of the 3950X optimized for overclocking (or efficiency at stock) the 10980XE likely wouldn't have a leg to stand on.


[with respect to cooling, it's not like it's challenging or even expensive to toss a 280mm or 360mm AIO sandwich into any number of inexpensive enthusiast chassis...]

Except, single-threaded performance doesn't favor Intel in this scenario unless clock speeds exceed those of a Ryzen 3000 CPU. The 3950X boosts to 4.7GHz, which is over the 10980XE's standard Turbo Boost 2.0 clocks. Turbo Boost Max 3.0 only kicks in on four cores max, and even then, 4.8GHz is the best you'll see. AMD has slightly better IPC at this point with Zen 2. Also keep in mind that AMD can't just make a Threadripper optimized for overclocking. AMD is already pushing the limits of what can be done currently on its 7nm process. The 3900X and, most likely, the 3950X can't overclock for shit, and that's not likely to change much, if at all, in Threadripper form. AMD isn't letting Intel have squat either. The 3950X and 3960X do leave a gap, but one that in some ways can be filled by AMD's 2nd-generation Threadripper parts. However, those parts have their downsides, leaving Intel a gap where it can be useful.
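
As a back-of-envelope sketch, per-core throughput roughly tracks IPC times clock. The ~5% Zen 2 IPC edge below is an assumed round number for illustration, not a measured figure; the clocks are the boost numbers mentioned above.

```python
# Back-of-envelope single-thread comparison: throughput ~ IPC x clock.
# ZEN2_IPC_ADVANTAGE is an assumed illustrative figure, not a measurement.

ZEN2_IPC_ADVANTAGE = 1.05   # hypothetical: Zen 2 IPC relative to Skylake-X = 1.00

clocks_ghz = {
    "3950X (max boost)":             4.7,
    "10980XE (Turbo Boost Max 3.0)": 4.8,
}
ipc = {
    "3950X (max boost)":             ZEN2_IPC_ADVANTAGE,
    "10980XE (Turbo Boost Max 3.0)": 1.00,
}

for part, ghz in clocks_ghz.items():
    print(f"{part}: {ipc[part] * ghz:.2f} relative single-thread units")

# ~4.94 for the 3950X vs ~4.80 for the 10980XE -- roughly a 3% gap, a wash in practice.
```

So at peak boost the two land within a few percent of each other, which is why neither side has a meaningful single-thread win here.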

A 360mm AIO is also insufficient for a highly overclocked 10980XE. I saw temperatures hit 100C easily many times during my review using a custom loop. My opinion is that the 10980XE would be more appealing and more competitive if it were closer in price to the 3950X. The 10980XE, when overclocked, can be faster in some cases, and it's on an HEDT platform, which is appealing. I also think that Intel should have thrown the 165W TDP out the window, as it's largely a lie anyway, and raised the clocks on these CPUs a bit more.

No need to buy 18 cores when you can get 32

This makes absolutely no sense. The 3970X is nearly double the price of Intel's Core i9 10980XE. The 2990WX isn't really an alternative either. While it's better in some multi-threaded workloads for sure, there are cases where the 10980XE will perform better. Outside of applications that can utilize 32c/64t, the 2990WX is going to get beaten in a lot of areas, as the 10980XE has better IPC and vastly superior clocks without the internal latency penalties the 2990WX incurs from being built the way it is.

Is AMD at Ivybridge performance yet, kappa

AMD has passed this level of performance.
 
Also keep in mind that AMD can't just make a Threadripper optimized for overclocking. AMD is already pushing the limits of what can be done currently on its 7nm process. The 3900X and, most likely, the 3950X can't overclock for shit, and that's not likely to change much, if at all, in Threadripper form. AMD isn't letting Intel have squat either. The 3950X and 3960X do leave a gap, but one that in some ways can be filled by AMD's 2nd-generation Threadripper parts. However, those parts have their downsides, leaving Intel a gap where it can be useful.

I'm thinking more along the lines of releasing a TR with two CCDs instead of four, as AMD's 2nd-gen TR really doesn't provide solid competition to Intel HEDT up to sixteen cores. A hypothetical two-CCD TR3 with sixteen cores would provide the cache and platform advantages while potentially being able to clock higher simply due to less heat. A two-CCD part with one core per CCX disabled would yield a twelve-core part with the same advantages, and in both cases AMD could charge less for them than they do for the 24-core+ parts with four CCDs, which could bridge that US$400 gap.
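
Just to make the core math on that hypothetical explicit, here's a quick sketch assuming the Zen 2 layout of two four-core CCXs per CCD (the function and constants are purely illustrative).

```python
# Core counts for the hypothetical two-CCD TR3 parts described above.
# Assumes the Zen 2 layout: 2 CCXs per CCD, 4 cores per CCX (illustrative only).

CCX_PER_CCD = 2
CORES_PER_CCX = 4

def core_count(ccds: int, disabled_per_ccx: int = 0) -> int:
    """Total enabled cores for a given CCD count and cores fused off per CCX."""
    return ccds * CCX_PER_CCD * (CORES_PER_CCX - disabled_per_ccx)

print(core_count(ccds=2))                      # 16 cores: full two-CCD part
print(core_count(ccds=2, disabled_per_ccx=1))  # 12 cores: one core disabled per CCX
print(core_count(ccds=4))                      # 32 cores: the 3970X-style four-CCD part
```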

I'm really just surprised that AMD left that spot to Intel. AMD now owns the top of the HEDT market, but the ~US$700 gap between the 3950X and 3960X has been filled with Intel HEDT parts that are at least competitive in single-core with the consumer parts from both companies, as well as with TR3, while providing the advantages of HEDT over consumer platforms, in stark contrast to TR1 and TR2, which come with a raft of compromises.
 
I'm thinking more along the lines of releasing a TR with two CCDs instead of four, as AMD's 2nd-gen TR really doesn't provide solid competition to Intel HEDT up to sixteen cores. A hypothetical two-CCD TR3 with sixteen cores would provide the cache and platform advantages while potentially being able to clock higher simply due to less heat. A two-CCD part with one core per CCX disabled would yield a twelve-core part with the same advantages, and in both cases AMD could charge less for them than they do for the 24-core+ parts with four CCDs, which could bridge that US$400 gap.

I'm really just surprised that AMD left that spot to Intel. AMD now owns the top of the HEDT market, but the ~US$700 gap between the 3950X and 3960X has been filled with Intel HEDT parts that are at least competitive in single-core with the consumer parts from both companies, as well as with TR3, while providing the advantages of HEDT over consumer platforms, in stark contrast to TR1 and TR2, which come with a raft of compromises.

What makes you think AMD left that spot for Fat Elvis? It's not even in the Top 50 CPUs on launch day, lol.
 
What makes you think AMD left that spot for Fat Elvis? It's not even in the Top 50 CPUs on launch day, lol.

You're seriously expecting HEDT CPUs to be in the top anything with respect to sales?

Lol yourself.
 
You're seriously expecting Intel's HEDT CPUs to be in the top anything with respect to sales?

Lol yourself.

No. I'm not really expecting Fat Elvis to enter the chart. Who'd be dumb enough to buy it after the scathing reviews?
 
I'm thinking more along the lines of releasing a TR with two CCDs instead of four, as AMD's 2nd-gen TR really doesn't provide solid competition to Intel HEDT up to sixteen cores. A hypothetical two-CCD TR3 with sixteen cores would provide the cache and platform advantages while potentially being able to clock higher simply due to less heat. A two-CCD part with one core per CCX disabled would yield a twelve-core part with the same advantages, and in both cases AMD could charge less for them than they do for the 24-core+ parts with four CCDs, which could bridge that US$400 gap.

I'm really just surprised that AMD left that spot to Intel. AMD now owns the top of the HEDT market, but the ~US$700 gap between the 3950X and 3960X has been filled with Intel HEDT parts that are at least competitive in single-core with the consumer parts from both companies, as well as with TR3, while providing the advantages of HEDT over consumer platforms, in stark contrast to TR1 and TR2, which come with a raft of compromises.

That's an interesting take on it. I have wondered why AMD didn't release something in between the 3950X and 3960X for the same reason. It leaves a gap for Intel that AMD doesn't have to let them have. I didn't think about a TR utilizing only two CCDs specifically, just in more general terms.
 
That's an interesting take on it. I have wondered why AMD didn't release something in between the 3950X and 3960X for the same reason. It leaves a gap for Intel that AMD doesn't have to let them have. I didn't think about a TR utilizing only two CCDs specifically, just in more general terms.

I'm just reminded about Intel's Kaby Lake (I believe) quad-core HEDT CPUs. They didn't make much sense unless you needed HEDT for the platform support but little else, at which point they actually became desirable.

With TR3, yeah, you have a ton of cores and finally Threadripper doesn't appear to have any real solid weaknesses, but at the same time the price of entry to the platform is pretty extreme at around US$2000 -- and Intel managed to release a refresh that drops that price of entry in exchange for cores and little else.

Really, I'm just surprised.
 
I'm just reminded about Intel's Kaby Lake (I believe) quad-core HEDT CPUs. They didn't make much sense unless you needed HEDT for the platform support but little else, at which point they actually became desirable.

With TR3, yeah, you have a ton of cores and finally Threadripper doesn't appear to have any real solid weaknesses, but at the same time the price of entry to the platform is pretty extreme at around US$2000 -- and Intel managed to release a refresh that drops that price of entry in exchange for cores and little else.

Really, I'm just surprised.

Oh no, nothing made sense about Kaby Lake-X. It was the most retarded thing Intel has ever produced. The Core i7 7740X is an LGA 2066 part that limited the X299 platform to 16 PCIe lanes via the CPU and only offered dual-channel memory support. It literally disabled every feature that you jump onto the HEDT platform for. It was pointless. The reason this was done was to produce a part that would overclock better due to the power delivery capabilities and larger heat dissipation area of the LGA 2066 heat spreader. Also, without the iGPU, it afforded more overclocking headroom. Mine will hit 5.1GHz or 5.2GHz on some boards.

It was nothing more than a 7700K in an LGA 2066 package with its iGPU disabled.
 
Well, it's more of the same from Intel. The price cut is nice, but you'd have to overclock the hell out of it and need the extra lanes and memory bandwidth for it to make sense over a 3950X. Or you'd have to have an existing X299 setup without a 7980XE or 9980XE processor already.


Well, I have the 14-core 7940X and I'm not really enticed by anything right now.


However, one thing could easily tip this in anyone's favor...

What will be available:

a. AMD 16-core
b. AMD 24- or 32-core
c. Intel 18-core
d. All of the above
e. None of the above

I have a feeling availability might be king
 
It was nothing more than a 7700K in an LGA 2066 package with its iGPU disabled.

Yup, looks like I should have looked it up first. I know that there were other *lake-based HEDT CPUs that straddled the line at various points for those that needed the connectivity of HEDT but not the cores or the prices that the top models demanded.
 
Yup, looks like I should have looked it up first. I know that there were other *lake-based HEDT CPUs that straddled the line at various points for those that needed the connectivity of HEDT but not the cores or the prices that the top models demanded.

The initial LGA 2066 offerings were BS. They only featured 28 PCIe lanes (or 16 PCIe lanes) on all models priced under $1,000. When the first refresh hit, bringing us the 9xxx-series parts, the neutering of the PCIe controller stopped and all models got the full 44 PCIe lanes. The same is true of Cascade Lake-X, which gives us 48 PCIe lanes for all SKUs.
 
Linus showed the 10980XE to be doing pretty well across the board -- better than AMD in gaming, and by 'better', we mean Intel's lows ahead of AMD's highs, more than a 'margin of error' -- while also being competitive in multicore workloads.

That's versus the 3950X that no one can buy. So Intel has a point: if I had to buy something in that range today, it'd be Intel.

Of course, let's wait to see what TR3 brings. I'm quite interested to find out how well they do in gaming and in the few productivity workloads (ahem, Premiere) that their previous iterations were not optimized for.

Time to throw the blue panties in the hamper, my friend. The crimson tide just rolled in, and it will be here for a minute.
 
Just about the only thing I've gotten out of this thread so far is that Linus posted a huge whiny rant complaining about the literal exact thing his video was about. It was a six hour delay and he had already shot and edited his AMD review. He could have easily done a single video which was a head-to-head review and simply released it 6 hours later than the rant.

Oh right, I also learned that Intel's new stuff beats AMD's new stuff in some ways while losing to it in others - and many comparisons actually could go either way depending on the allegiance of the one doing the benchmarking. Or, in other words, the two brands have achieved parity.
 
Just about the only thing I've gotten out of this thread so far is that Linus posted a huge whiny rant complaining about the literal exact thing his video was about. It was a six hour delay and he had already shot and edited his AMD review. He could have easily done a single video which was a head-to-head review and simply released it 6 hours later than the rant.

Oh right, I also learned that Intel's new stuff beats AMD's new stuff in some ways while losing to it in others - and many comparisons actually could go either way depending on the allegiance of the one doing the benchmarking. Or, in other words, the two brands have achieved parity.

Linus is very popular with the meme crowd and a lot of /r/ groups. Not surprisingly, he drummed up the numbers on YouTube. I suspect a large number of his followers are still living with their parents.

His "Intel’s behavior is PATHETIC – Core i9 10980XE Review" has 1.3 million views so far. Pretty impressive. And lucrative.

In comparison, GN's "Intel Core i9-10980XE CPU Review: Premiere, Blender, Overclocking, & Power" has 100k views.
 
Thanks for posting the reviews; much appreciated.

I'm far from enthused with the 10980XE. If Intel had moved it to 10nm then it might have been a contender. As it sits now, it just seems like an old car with the dents removed and a new coat of paint.
 
But other stuff like Photoshop, Lightroom, and a number of other 'productivity / content creation' suites? We're seriously lacking credible alternatives, so it makes sense to point out when weird architectures like TR1 and TR2, and Zen in general, don't behave as expected.
Agreed. But the question is why? Lazy compiler use (the ole GenuineIntel flag)? Poor optimisation? Lack of fucks? Knowing Adobe, probably all three. I don't know about 24/7 pro workloads, but for semi-pro use (really it is professional use, just simpler stuff and not at the level of some hard-out graphics designer), PaintShop Pro 2018 is more than enough for me. It also costs a fraction of the price; I think it was about 70 USD on sale. I can do everything I used to want out of PS, so Adobe can walk off a plank at that price. However, it may need better threading, as with very, very large images (8-10k px and up) it takes a while for full-res changes. I should monitor the CPU loading next time and report back. But that's on a 2600K, which is hardly new or fast these days; it seemed great on a 2600X, but I wasn't working with images that large.
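
On the "monitor the CPU loading next time" bit, a quick and dirty per-core logger like this would do it. This is just a rough sketch using the psutil package (pip install psutil); run it in a separate window while applying a full-res change and see whether more than a couple of cores actually light up.

```python
# Rough per-core utilisation logger -- run while applying a full-res edit in the
# image editor, then eyeball how many cores the application really uses.
# Requires: pip install psutil

import psutil

DURATION_S = 30   # how long to sample, in seconds
INTERVAL_S = 1.0  # sampling interval, in seconds

for _ in range(int(DURATION_S / INTERVAL_S)):
    per_core = psutil.cpu_percent(interval=INTERVAL_S, percpu=True)
    busy = sum(1 for pct in per_core if pct > 50)
    print(f"{busy} cores over 50% | " + " ".join(f"{pct:4.0f}" for pct in per_core))
```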

I think it's just like people not wanting to leave FCX for video workloads until recent years (but MUH APPLE); look how that turned out... it changed really quickly when they made some tablet-level bullshit for pro use. Most of the industry types I worked with in the past hated the new Final Cut, and I heard them bitching about it unprompted. Enough said. I've heard similar about the Adobe subscription stuff: some who are not heavy users like it, but people working remotely in weird places or without internet are not fans at all. Neither are people who just want to pay up front and actually own their software.
 
Agreed. But the question is why? Lazy compiler use (the ole GenuineIntel flag)? Poor optimisation? Lack of fucks? Knowing Adobe, probably all three. I don't know about 24/7 pro workloads, but for semi-pro use (really it is professional use, just simpler stuff and not at the level of some hard-out graphics designer), PaintShop Pro 2018 is more than enough for me. It also costs a fraction of the price; I think it was about 70 USD on sale. I can do everything I used to want out of PS, so Adobe can walk off a plank at that price. However, it may need better threading, as with very, very large images (8-10k px and up) it takes a while for full-res changes. I should monitor the CPU loading next time and report back. But that's on a 2600K, which is hardly new or fast these days; it seemed great on a 2600X, but I wasn't working with images that large.

I think it's just like people not wanting to leave FCX for video workloads until recent years (but MUH APPLE); look how that turned out... it changed really quickly when they made some tablet-level bullshit for pro use. Most of the industry types I worked with in the past hated the new Final Cut, and I heard them bitching about it unprompted. Enough said. I've heard similar about the Adobe subscription stuff: some who are not heavy users like it, but people working remotely in weird places or without internet are not fans at all. Neither are people who just want to pay up front and actually own their software.

Photoshop and Lightroom are my breadwinners. Standards are standards, and it's hard to break habits and norms.

Fortunately, they are not really demanding on hardware. But they can be picky sometimes.

Agreed that there should be more options for ownership vs subscription. For my situation the sub works fine. For now.
 
What are you using instead of Premiere?

Premiere looks like dog-shit optimisation with such outlying results for a threaded workload, considering the 3950X has no NUMA bullshit this time around. Nothing has changed there in years, same for GPU offloading. Adobe has really slipped since CS6, and now they are not the only game in town, thank fuck. I will not be buying from them again.
 
One last thing regarding Photoshop and Lightroom.

At $190, the i5-9600KF is a solid option atm. You really don't need more cores unless you browse some really massive image libraries on the daily.

Should be fine for most workflows.
 
I'm thinking more along the lines of releasing a TR with two CCDs instead of four, as AMD's 2nd-gen TR really doesn't provide solid competition to Intel HEDT up to sixteen cores. A hypothetical two-CCD TR3 with sixteen cores would provide the cache and platform advantages while potentially being able to clock higher simply due to less heat. A two-CCD part with one core per CCX disabled would yield a twelve-core part with the same advantages, and in both cases AMD could charge less for them than they do for the 24-core+ parts with four CCDs, which could bridge that US$400 gap.

I'm really just surprised that AMD left that spot to Intel. AMD now owns the top of the HEDT market, but the ~US$700 gap between the 3950X and 3960X has been filled with Intel HEDT parts that are at least competitive in single-core with the consumer parts from both companies, as well as with TR3, while providing the advantages of HEDT over consumer platforms, in stark contrast to TR1 and TR2, which come with a raft of compromises.

They have a 16-core Epyc already. It's almost certainly coming to TR3, but it's a matter of yield and production capacity right now; AMD can't build enough. Probably Q1-Q2 2020 is my guess. Maybe I'm wrong and they'll keep the 2000 series for that, which would be disappointing (TR2 is a no-deal for me due to NUMA/etc.), as I know a few people on here are hankering for a 16-core TR3 solution.

Agreed re: thermals too. Taking advantage of the chiplet layout will enable some interesting performance configurations in the future. But Zen 3 looks to go to a single 8-core CCX per die, so we maybe won't be able to see the one-CCX-clocked-high, other-clocked-low trick after Zen 2 :(
I'd pay good money for a 3950X-style T-rex but with two high-bin dies for a 4.4-4.5GHz all-core... that would be a beast in some workloads.

We are already seeing higher clocks on some of the 3600-class stuff; a few people are hitting 4.5GHz+ all-core already, which wasn't the case at launch. So microcode/optimisations/etc. have probably helped a little, but I'd say they are finally getting that few percent extra out of silicon yield improvements... or they simply can't use them all for 3900s... TSMC 7nm was the fastest ramp of anything made for consumer silicon by a large margin, so it wouldn't be impossible to see such gains already. It was roughly 2x faster than anything else!

[Attached chart: TSMC 7nm ramp compared to previous nodes]


What are you using instead of premiere?
At this point I only edit 1080p, so it's not a major issue using older stuff if it works. Still on CS5 (CS6 is massively overpriced due to it being the last 'good' CS, and very hard to find online these days because of that), but in the future I will likely be moving to DaVinci Resolve, Vegas, or one of the other newer contenders. Any suggestions?
I'll be digging into that when the time comes for the 4K jump. I will probably DIY the 4K camera too, with C-mount, to keep costs reasonable; affordable global-shutter 4K/120 BSI sensors from Sony with large pixels (4µm+) are around the corner. That means HDR 60p and excellent light sensitivity when shooting slower (I've seen 4.7µm Starvis sensors doing *amazingly* well in basically zero light, practically better than my eyes), and it will open the door to excellent results for a fraction of the cost of the mid-range 4K stuff, without being as light-dependent as the current crop of mid-range 4K gear is. Sony really has such a large lead in the sensor business now; they make most of the good stuff for other companies. IIRC only Canon still does their own sensors for DSLRs.
I also want to play with the polarisation-interrogating sensors they have for ML/material-processing designs; that could be hellishly good fun. It's a whole 'nother world with the polarisation angle being selectable on-chip. The gratings are integrated into the packaging...
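
For a rough sense of why those big pixels matter in low light: photon collection scales with pixel area, i.e. with the square of the pixel pitch. The 2.0µm baseline below is an assumed figure for a typical small-sensor pixel, purely for illustration.

```python
# Light-gathering advantage of large pixels: area scales with pitch squared.
# The 2.0um baseline is an illustrative assumption, not a specific sensor.

baseline_um = 2.0
large_pixels_um = [4.0, 4.7]

for pitch in large_pixels_um:
    gain = (pitch / baseline_um) ** 2
    print(f"{pitch}um pixel collects ~{gain:.1f}x the light of a {baseline_um}um pixel")

# ~4.0x for a 4.0um pixel and ~5.5x for a 4.7um pixel, before BSI or
# readout-noise advantages are even considered.
```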
 
They have a 16-core Epyc already. It's almost certainly coming to TR3, but it's a matter of yield and production capacity right now; AMD can't build enough. Probably Q1-Q2 2020 is my guess. Maybe I'm wrong and they'll keep the 2000 series for that, which would be disappointing (TR2 is a no-deal for me due to NUMA/etc.), as I know a few people on here are hankering for a 16-core TR3 solution.

Agreed re: thermals too. Taking advantage of the chiplet layout will enable some interesting performance configurations in the future. But Zen 3 looks to go to a single 8-core CCX per die, so we maybe won't be able to see the one-CCX-clocked-high, other-clocked-low trick after Zen 2 :(
I'd pay good money for a 3950X-style T-rex but with two high-bin dies for a 4.4-4.5GHz all-core... that would be a beast in some workloads.

We are already seeing higher clocks on some of the 3600-class stuff; a few people are hitting 4.5GHz+ all-core already, which wasn't the case at launch. So microcode/optimisations/etc. have probably helped a little, but I'd say they are finally getting that few percent extra out of silicon yield improvements... or they simply can't use them all for 3900s... TSMC 7nm was the fastest ramp of anything made for consumer silicon by a large margin, so it wouldn't be impossible to see such gains already. It was roughly 2x faster than anything else!




At this point I only edit 1080p, so it's not a major issue using older stuff if it works. Still on CS5 (CS6 is massively overpriced due to it being the last 'good' CS, and very hard to find online these days because of that), but in the future I will likely be moving to DaVinci Resolve, Vegas, or one of the other newer contenders. Any suggestions?
I'll be digging into that when the time comes for the 4K jump. I will probably DIY the 4K camera too, with C-mount, to keep costs reasonable; affordable global-shutter 4K/120 BSI sensors from Sony with large pixels (4µm+) are around the corner. That means HDR 60p and excellent light sensitivity when shooting slower (I've seen 4.7µm Starvis sensors doing *amazingly* well in basically zero light, practically better than my eyes), and it will open the door to excellent results for a fraction of the cost of the mid-range 4K stuff, without being as light-dependent as the current crop of mid-range 4K gear is. Sony really has such a large lead in the sensor business now; they make most of the good stuff for other companies. IIRC only Canon still does their own sensors for DSLRs.
I also want to play with the polarisation-interrogating sensors they have for ML/material-processing designs; that could be hellishly good fun. It's a whole 'nother world with the polarisation angle being selectable on-chip. The gratings are integrated into the packaging...

I use Vegas a lot; Humble Bundle does a deal on it every so often, and once you have a license I think it was 200 or 250 to upgrade to the newest version when they run their promotions. I've been meaning to switch to Resolve for everything, but the Vegas interface is just too easy and quick, haha. Motion tracking and layers (or should I say nodes) look to be more advanced in Resolve. Older versions of Vegas used to crash a lot, but the newest has some decent stability improvements. I don't think it can make use of the 64 threads on the 3970X yet, though. Color grading in Resolve is way better than in Vegas.
 