Threadripper 3000, who bought it?

I would absolutely have bought a 16-core TR3 for $900-$950, because I don't need 24 cores but I do need the PCIe lanes: one GPU (16 lanes), five M.2 SSDs (20 lanes), and one dual-port 40Gb NIC (8 lanes). Instead I'm stuck getting 24 cores and paying more. Ah well.
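To put numbers on the lane math (a rough Python back-of-the-envelope; the per-device counts are just the ones listed above, and the AM4 comparison is approximate):

# Rough PCIe lane budget for the build described above.
devices = {
    "GPU (x16)": 16,
    "5x M.2 NVMe SSDs (x4 each)": 5 * 4,
    "Dual-port 40Gb NIC (x8)": 8,
}

total = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name:28s} {lanes:3d} lanes")
print(f"{'Total needed':28s} {total:3d} lanes")
# 44 lanes -- well past the ~24 CPU lanes a mainstream AM4 chip exposes,
# which is the whole reason HEDT is on the table despite not needing the cores.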

What's the use case? I can't think of much that would leverage all that and not also love lots of cores.
 

That is silly. Those things are not related at all.

I would have bought an 8 core TR3 if I could.

I want to never run out of expansion slots.

I don't understand how anyone gets away with only 24 PCIe lanes. I consider the 40 on my x79 chip to be the absolute minimum that is usable.
 

Realistically, a GPU and an NVMe drive are all most people put in their PCs (HEDT excluded, to a degree), but most uses that need that much expansion are usually also accelerated by more cores. Multi-GPU is rare these days and, for the most part, isn't limited in performance by dropping to x8 (rough numbers below). NVMe RAID setups are likely database or video-editing scratch space, both of which scale with cores. Multi-GPU is usually scientific compute, which can also leverage all the cores you can throw at it. So I'm wondering what the case is for 40-plus PCIe lanes and limited cores. Straight file serving?

Edit: in both cases I'm genuinely curious what the use case is. Please don't say VMs.
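For the x8 point, here are the rough bandwidth numbers (a quick Python sketch using the standard PCIe 3.0/4.0 per-lane rates after 128b/130b encoding):

# Approximate one-direction bandwidth of a PCIe link after 128b/130b encoding.
transfer_rates = {"PCIe 3.0": 8, "PCIe 4.0": 16}  # GT/s per lane

for gen, gt_s in transfer_rates.items():
    gb_s_per_lane = gt_s * 128 / 130 / 8  # usable GB/s per lane
    for width in (8, 16):
        print(f"{gen} x{width}: ~{gb_s_per_lane * width:.1f} GB/s")
# PCIe 3.0 x8 is still ~7.9 GB/s each way, which is why a single GPU rarely
# loses meaningful performance when a slot drops from x16 to x8.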
 

Being able to add multiple NVMe drives?

Dedicated 10 gig Ethernet to a NAS server?

Various accessories, like additional USB ports on a PCIe card, or that old sound card you don't really use anymore but want to keep around for the occasional recording job?

And then it's also nice to have a few spare slots available just in case you want to do something else.

I never want to be expansion limited again.
 

Yep. The reasons above are why I primarily buy HEDT.
 
What's the use case? I can't think of much that would leverage all that and not also love lots of cores.

I have a basement server that's 2x 8-core E5 v2 Xeons and 384GB of RAM with nine 8TB drives and a 500GB M.2 SSD cache. It runs FreeNAS (the drives are a RAIDZ2) and pfSense. Then I have my gaming/etc. desktop, which is the Threadripper (currently a 1920X). It has 4x 1TB Samsung server M.2 SSDs (not the fastest, but cheap on eBay) in a bifurcation card as my boot drive, and a 16TB iSCSI LUN from the basement FreeNAS box (why put spinning drives in my desktop when I can have spinning drives in a faster RAID with real redundancy in the basement?!). Enter the 40Gb NIC (also cheap on eBay) with a fiber line direct-connected to the basement server to make that iSCSI LUN as quick as possible. Then I also have a fast 1TB MyDigitalSSD M.2 SSD that acts as a client-side cache for the iSCSI LUN via PrimoCache.

Yes, I know probably literally NO ONE ELSE has such a setup... but I'm sure there are a decent number of people with other use cases that need more lanes and not more cores. And it isn't like the Zen 2 chiplets going into TR3s are "special"; they could use the same kind of cut-down chiplets that go into the cheaper Ryzen 3000 parts, put four of them in a TR3 "3955X" or something, and charge a $150-$200 premium.
 

I think those setups are more common on [H]. My 12-core Xeon X79 box that got replaced by the new Threadripper will get moved into a ZFS array with 6x 4TB SATA SSDs in raidz1 (Micron 5100 ECOs that were on sale for $340 for a while) and a bunch of old 3TB/4TB spinners in raidz2. That's why I always run ECC on my desktops: when a box gets replaced, the old one can be used for the ZFS server. Currently running on an old quad-core Xeon and I haven't upgraded to 40G yet. What are the preferred 40G eBay cards these days?
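For anyone doing the capacity math on pools like these, it's just drives-minus-parity times drive size (a rough Python sketch; it ignores ZFS metadata, slop space, and TB-vs-TiB, so treat the results as upper bounds):

def raidz_usable_tb(drives: int, size_tb: float, parity: int) -> float:
    # Approximate usable space of one raidz vdev: (n - parity) * drive size.
    return (drives - parity) * size_tb

# The two pools mentioned in this thread:
print(f"9x 8TB raidz2: ~{raidz_usable_tb(9, 8, 2):.0f} TB usable")  # ~56 TB
print(f"6x 4TB raidz1: ~{raidz_usable_tb(6, 4, 1):.0f} TB usable")  # ~20 TB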
 

I'm rocking dual-port Mellanox ConnectX-3s. They're usually like $30 each, then ~$15 for QSFP+ transceivers if you're going fiber rather than DAC. I don't want to think about the cost of 40G cat6/7 transceivers so I stick with fiber; my office is right above the basement so it's easy anyway, but a bit long for DAC. E5 v2 setups are great right now because DDR3 ECC RDIMMs are literally $1/GB on eBay these days, which is why I went bananas on my server and filled it with 24x 16GB sticks.
 
So what I'm gathering is it's mainly driven by storage, which does make sense. What's curious to me is apparently the need to have a whole mess of high-speed storage available but no need to do much processing on it that 8 or 16 cores couldn't easily handle. For me, file movement speed is a much lower concern than actual processing. I can move files in a few minutes, but the processing can take hours, depending on complexity.

I think [H] is probably the exception on a lot of things, which is why I was genuinely curious, but maybe that's an idea for a different thread.
 
Yeah, with [H] the storage server probably only gets used by one client at a time via direct 10/40G access, so there's no need for the processing power required for hundreds of clients. Also, slots get used by LSI SAS cards too.
 

I only very rarely do anything that requires any level of processing, which is why lots of cores are a waste on me.

On the rare occasion I'll transcode something, but that's only a handful of times per year.
 
Mellanox CX456A ConnectX-4 dual-port 100G cards would be boss to have at home if they weren't $460 each on eBay. A little over $1k for 100G at home, lol. That's [H]ard. I'm looking at two MCX354A-FCBT cards for 40G, and maybe 56GbE if FDR VPI works. However, it looks like I might need to upgrade to Windows 10 Pro for Workstations to get RDMA working properly for 40G speeds instead of just another 10G Ethernet setup, and that license is $290 on Amazon right now.....

Edit: seems like the upgrade from 10 Pro to Workstations is cheaper than buying the standalone license, so maybe I will go with the upgrade.
 

I actually forgo RDMA, because Storage Spaces just... aren't that great. S2D (Storage Spaces Direct) looks pretty good, but I can't find much (any) good documentation about running it on a single node rather than a cluster. With the max possible MTU and a direct link I get about 36Gb line speed with FreeNAS, and that's "good enough".
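If anyone wants a crude way to compare MTU/tuning changes on a direct link without setting up iperf, a plain single-stream TCP blast in Python is enough to see relative differences (a minimal sketch only: one Python stream usually can't saturate 40Gb, so use it for before/after comparisons, not absolute line rate; the filename is just for illustration):

# net_blast.py -- crude single-stream TCP throughput check (Python 3.8+).
import socket, sys, time

BUF = 4 * 1024 * 1024  # 4 MiB chunks to keep per-call overhead low

def serve(port=5001):
    # Receive and discard data from one client, then print the achieved rate.
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while chunk := conn.recv(BUF):
            total += len(chunk)
        secs = time.time() - start
        print(f"received {total / 1e9:.1f} GB in {secs:.1f}s = {total * 8 / secs / 1e9:.1f} Gb/s")

def blast(host, port=5001, seconds=10):
    # Push zero-filled buffers over one TCP connection for a fixed duration.
    payload = bytes(BUF)
    with socket.create_connection((host, port)) as s:
        end = time.time() + seconds
        while time.time() < end:
            s.sendall(payload)

if __name__ == "__main__":
    # Usage: "python net_blast.py serve" on one end,
    #        "python net_blast.py blast <server-ip>" on the other.
    serve() if sys.argv[1] == "serve" else blast(sys.argv[2])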
 

Yeah, looks like RDMA / SMB Direct isn't quite supported by Samba servers yet...
https://samba.plus/fileadmin/proposals/SMB-Direct.pdf
 
I only very rarely do anything that requires any level of processing, which is why lots of cores are a waste on me.

On the rare occasion I'll transcode something, but that's only a handful of times per year.

What do you do with all of these files that you need high-speed access to, then? I'm still not fully understanding, I guess. I'm still on lowly gigabit, as even the times I do need to transfer a lot of data, it's usually to or from a laptop (i.e., capture in the field and do some pre-processing), and those all only have gigabit NICs anyway. If it's over a few GB I'll plug the laptop into the LAN, otherwise I just transfer via 5GHz. The bulk storage is directly attached to my main machine that does all the work anyway.

I guess I'm looking for an excuse to keep my X399 or go to a TRX40, but you guys aren't helping me see why I "need" it other than for the cores. :) My bottom-barrel AM4 can already handle more drives than my case can (granted, that's easy to rectify), and upgrading to X570 would get me M.2 RAID. I have a PCIe USB card (which only takes one lane) installed already for additional USB connectivity. All I can think of is if I want Thunderbolt, but I don't have any TB peripherals, so it'd be an odd flex to add it...
 
I buy HEDT for the cores first,
the lanes second...

I have a 2080 Ti, my gaming and render monster for DaVinci Resolve 16.

I have an x8 10-gig fiber NIC going to a 10-gig switch, which connects to the NAS.

I have a 1x four-port USB 3 card.

I have 3x NVMe drives.

I still have excess lanes pouring out of my ass, lol.

I have so many lanes left over that I thought about putting my old Adaptec SAS HBA in just to play with my old Cheetah 15K RPM SAS drive.

Nothing on X570 or Z390 is even capable of that.
 
Started my 3960X build last night: 1000D, two 480 rads... I knew this thing was big, but Jesus, it's the size of a dishwasher, and I never had a case that didn't look/feel cramped doing a loop in before this bad boy. Doing my leak test today. Going to be a nice upgrade over my 5930K.
 
I don't want to think about the cost of 40G cat6/7 transceivers so I stick with fiber

I'm not aware of any production QSFP (or faster) transceivers for Cat6a or better... The best I've seen over twisted pair is 10GBASE-T. I know they're working on faster; I'm just not aware of anything having been released. Over 10Gbit, it's all DACs and fiber.

What's curious to me is apparently the need to have a whole mess of high-speed storage available but no need to do much processing on it that 8 or 16 cores couldn't easily handle.

Well, it's mostly that I don't want a rack of spinners next to my work desk :D

So they go in a NAS, and it goes someplace else. And then 10Gbit goes wherever the data may be needed.

But in a single-user environment, DAS is fine.

As for the number of cores... my NAS has a 7600K in it. That's four cores, no hyper-threading. A 1050 Ti backs it up for live transcoding where needed, but otherwise, file storage is just not an intense workload. Most nicer NASs have Atom-class CPUs in them.

I actually forgo RDMA, because Storage Spaces just... aren't that great. S2D (Storage Spaces Direct) looks pretty good, but I can't find much (any) good documentation about running it on a single node rather than a cluster. With the max possible MTU and a direct link I get about 36Gb line speed with FreeNAS, and that's "good enough".

I've made it work, but, ZFS on BSD or ZoL is going to be more robust and more flexible as array complexity grows. Microsoft needs to finish fucking around with ReFS and actually get it in the game first. Or, you know, eventually.

That is if they're not just going to roll ZoL into their custom Linux kernel and call it a day, cause that'd probably be better all around.
 

I honestly have more faith in FreeNAS/FreeBSD supporting RDMA/RoCE/iWARP/whatever interruptless direct network memory reading technology than I have in Microsoft making parity ReFS into a nice home solution....

...and I work at Microsoft.
 
I posted pics of the Dallas location that I shot a little over a week ago in that thread for anyone who wants to see that.

Link the thread? I really want to see haha

We have two Fry's in the Atlanta area and I haven't been to either one in about a year or so.
 
I'm using ten year old laptops and desktops at home and at work and... well, with SSDs and more memory, they get the basic stuff done.

For doing work, I'm with you, and I don't think AMD's TR 3000 CPUs are overpriced for what they offer -- just that the base price of entry is quite high, while the utility over a consumer solution is a hard sell for most.

I tell my small business customers NOT to buy new machines. I tell them to go to Amazon Renewed and buy Lenovo T430 laptops and Dell Core i5 SFF desktops with 8GB/250GB SSD/Win10 for around £170 a box. Three machines for the price of one. Anything more than a quad core and a SATA SSD is super overkill.

This level of power appeals to a smaller and smaller group.

Essentially we are going to see a huge price/performance crash in a few years.
 
Alright, so not only did I buy one, but I bought two 3960x's accidentally.

I was holding off on cancelling my B&H pre-order until I had my other 3960x courtesy of tangoseal in hand. It got delayed by USPS and is likely arriving tomorrow, and in the meantime B&H unexpectedly shipped.

Soo.... If anyone needs a 3960x, give me a holler. I'll sell at my cost plus shipping. No scalping here.

I can just return it, but I figured since these are still out of stock everywhere, I'd give the [H] some first dibs.
 

You better hold onto that B&H one, since USPS may give the one from Tango to your neighbor and claim it was delivered... bastards.
 
Ughh Zarathustra[H]

Looks like your chip gets there wednesday. Talk about a bitching delay. Must be some serious weather or something along the route or a plane broke down.

Not quite sure what happened, but in the grand scheme of things a two day delay doesn't seem that bad. That said I am feeling a little bit like a kid at Christmas right now, so I wish it would hurry up and get here :p

Then again I have no idea how long shipping from Atlanta to Boston usually takes, so maybe this is a significant delay.

The weirdest shit happens in logistics, though.

I once worked for a company that had a supplier issue resulting in a recall, putting us on backorder.

We fixed the supplier issue, all worked overtime, and got a full truckload of material out the door to help resolve the backorder. The trucker had an accident somewhere in the Midwest, and when the police showed up it turned out he was drunk. He got arrested, and the entire truck was taken into custody by the police.

So there we were, negotiating with the police to get our material released so we could ship product to our increasingly angry customers :p
 
Then again I have no idea how long shipping from Atlanta to Boston usually takes, so maybe this is a significant delay.

Anything but ground should be overnight, and if ground isn't overnight, it should at most be two days. Shippers don't make money by keeping stuff back. Either there was some sort of external delay, or volume was so high that the load your package was on had to wait for another shift most likely.

- a former UPSer
 

This one is USPS, but a lot of the same stuff probably goes across carriers.

Appreciate the info!
 
Ughh Zarathustra[H]

Looks like your chip gets there wednesday. Talk about a bitching delay. Must be some serious weather or something along the route or a plane broke down.

Well, weird.

It went straight from "in transit" to "attempted delivery failed" at 4:11pm today, never "out for delivery".

Rushed from work to get to the post office before they close at 5:30. I'm here now, but the truck isn't back yet :/

I'm hanging out until they close, but at this point it's not looking promising for today...
 

My lucky day, it came in while I waited.

From the "pics or shens" department:

(attached photo: IMG_20191217_180126.jpg)


Thanks tangoseal!
 
FYI, your name and address are in that, might want to edit =/

Doh

Thanks. Didn't even notice it on the lid.

Funny part is I checked the CPU box to make sure there wasn't something on there, and I completely missed the obvious open box flap right next to it.

The human brain is funny sometimes.

I guess this is how some people miss their background dildos.
 
What kind of clock speeds are y'all getting out of these? Also, with just PBO and everything on auto, my software is reporting a 1.425V vcore and boosting to ~4.35 all-core. Is that safe? Also, my single-core boost is lower, at around 3.8-4.0. What's the key to getting lightly-threaded clocks up?

Temps running Cinebench stay around 50-55C.
 
This CPU is amazing. Just running PBO only, in Prime95 the CPU converges at 68C with 48 threads. In Furmark’s CPU test it runs 4.1 GHz on all cores. This to me is amazing because I’m still running a Corsair H115 AIO that I’m sure represents bottom-of-class in cooling for this platform—and it’s still good for a ~12,900 Cinebench R20 score. Just amazing!
 
What vcore is yours boosting to? Also, what are your single-core boost and Cinebench scores? I think my light-workload boost is not working right: the clock speed is all over the place, from 3 to a 4.5 peak, but it stays in the 3.8-4.0 range and scores about 512. All-core Cinebench holds a rock-stable 4.35 and gets about 13,600.
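For what it's worth, a quick scaling sanity check on those numbers (simple arithmetic in Python using the scores quoted above; the interpretation is just a rule of thumb):

# Multi-threaded scaling check from the Cinebench R20 scores above.
single_core = 512
multi_core = 13600
physical_cores = 24  # 3960X

ratio = multi_core / single_core
print(f"MT/ST ratio: {ratio:.1f}x on {physical_cores} cores")
# ~26.6x. Landing a bit above the physical core count is normal: SMT adds some
# throughput, and here the erratic 3.8-4.0 GHz single-core boost (vs. a steady
# 4.35 GHz all-core) depresses the ST score, which inflates the ratio further.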
 