Help with choosing between TrueOS or FreeBSD, especially for the final result of a Lian-Li D8000 build

I need to do the highest end workstation tasks I can do

Ok, fine so you want something fast/powerful

I don't care about how a 3900x is going to destroy my old ass hardware in every single way possible

Yet .. you don't?

I get it, you have a stupid fascination with dual-processor systems. These days they are pretty much pointless for workstations; you can get MUCH better clock speed (i.e. actual performance) in single-processor workstation systems. Server CPUs are optimized for throughput; desktop/workstation CPUs are optimized for latency and responsiveness.

Everyone is telling you to use a single cpu workstation because it will be better for you.

You insist on using a dual cpu system because you have some child-like infatuation with them. Fine, get ahold of some old crap to build a dual cpu system and get the bug out of your system, but DON'T actually expect that system to be useful in modern tasks.


Oh, to answer your question, TrueOS uses the FreeBSD kernel (that literally took 2 seconds to google) so it will be the exact same as FreeBSD regarding any issues with ACPI, although issues with ACPI are really a thing of the past, definitely not something I'd be worrying about in 2019.
 
How did you guys even read the OP? I tried, I really did. I even copied and pasted the entire post into Word hoping that it would be able to parse through and fix the grammar. When it couldn't, I went through and manually added punctuation. After all that I still couldn't understand whatever the hell the OP was trying to say, so I tried clearing out some chunks of it that were just ramblings... and it literally didn't do anything to help. That post was honest to God the hardest thing I have read so far. I still don't understand what the hell he's asking.
 
Then why is this even a talking point? You also said Epyc boards are only half as good as Xeon scalable boards because of their RAM support. That's simply untrue.

Fine, maximum memory support for AMD EPYC motherboards is only half that of Intel Xeon Scalable from what I've seen, and I've checked the leading server motherboard sites: Supermicro, Tyan, ASUS, Gigabyte, and ASRock.
 
How did you guys even read the OP? I tried, I really did. I even copied and pasted the entire post into Word hoping that it would be able to parse through and fix the grammar. When it couldn't, I went through and manually added punctuation. After all that I still couldn't understand whatever the hell the OP was trying to say, so I tried clearing out some chunks of it that were just ramblings... and it literally didn't do anything to help. That post was honest to God the hardest thing I have read so far. I still don't understand what the hell he's asking.

Yeah, okay, I need to break my words up into sentences, because apparently the first paragraph of my original post in this thread is two sentences, and a paragraph is usually about five sentences.
 
I work on a system that has 24TB of RAM, and we've got a CR to expand that because we're running out of memory, causing the system to dump. So "they" know how to make them. There's just not a market for it outside of the extreme enterprise segment.

I haven't had that problem as far as I know, but I've never had more than 32 GB of ECC installed at once.
 
Fine, maximum memory support for AMD EPYC motherboards is only half that of Intel Xeon Scalable from what I've seen, and I've checked the leading server motherboard sites: Supermicro, Tyan, ASUS, Gigabyte, and ASRock.

I haven't had that problem as far as I know, but I've never had more than 32 GB of ECC installed at once.

Again, as I said, for your purposes the memory support differences between the two platforms are a non-issue.
 
Ok, fine so you want something fast/powerful



Yet .. you don't?

I get it, you have a stupid fascination with dual-processor systems. These days they are pretty much pointless for workstations; you can get MUCH better clock speed (i.e. actual performance) in single-processor workstation systems. Server CPUs are optimized for throughput; desktop/workstation CPUs are optimized for latency and responsiveness.

Everyone is telling you to use a single cpu workstation because it will be better for you.

You insist on using a dual cpu system because you have some child-like infatuation with them. Fine, get ahold of some old crap to build a dual cpu system and get the bug out of your system, but DON'T actually expect that system to be useful in modern tasks.


Oh, to answer your question, TrueOS uses the FreeBSD kernel (that literally took 2 seconds to google) so it will be the exact same as FreeBSD regarding any issues with ACPI, although issues with ACPI are really a thing of the past, definitely not something I'd be worrying about in 2019.

I already have a single-processor workstation using a Lian-Li A71F chassis, a Gigabyte 6PXSV4 motherboard, an Intel Xeon E5-1650 v2 (socket 2011) processor, 32 GB of Wintec ECC memory, two Nvidia EVGA GeForce 760s with 4 GB of GDDR5, and two 2 TB SSHDs in RAID that are almost out of space. The Lian-Li A71F can hold up to 7 hard drives normally, or 10 hard drives if the top back power supply bay isn't used for a second power supply or a watercooling block isn't put in the top. I haven't even maxed that system out and it's plenty fast enough, but you people are telling me I've got to settle or make do with a single-processor system when I already have one, and that's not the point of this build or this thread.

Nothing in the FreeBSD documentation says ACPI issues are a thing of the past; what it actually says is that ACPI is not an exact science. So, according to the documentation, ACPI issues aren't a thing of the past for FreeBSD.
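For reference, chasing ACPI problems on FreeBSD mostly comes down to a few loader tunables from acpi(4). A minimal sketch of what the `/boot/loader.conf` entries look like (the specific values here are illustrative, not a recommendation):

```
# /boot/loader.conf -- illustrative ACPI debugging knobs from acpi(4)
debug.acpi.layer="ACPI_ALL_COMPONENTS"   # which ACPI subsystems to log
debug.acpi.level="ACPI_LV_VERBOSE"       # how verbose the logging is
# Last resort if the machine won't boot with ACPI at all:
# hint.acpi.0.disabled="1"
```

Disabling ACPI entirely is discouraged on modern hardware; the debug tunables plus a look at `dmesg` are usually enough to identify a misbehaving ASL table.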
 
Well there we go... he still hasn't said what he is looking to build, other than it's big and it has to do business.

Still, now we are talking about ECC vs non-ECC and consumer-grade server RAM limits he clearly isn't ever going to get close to maxing. Oh, and somehow we are talking about running System V off floppies again.

Kudos scharfshutze, even when we know we're being trolled at least you keep it entertaining.

I still don't see how I could be trolling my own thread or post, and most of the people here probably don't belong in this thread. Dan_D is trying to shove AMD Fartripper down my throat in every response he makes, and the rest of you are starting to as well.
 
I haven't even maxed that system out and it's plenty fast enough, but you people are telling me I've got to settle or make do with a single-processor system when I already have one, and that's not the point of this build or this thread.

This is an example of your incorrect assumptions about modern HEDT systems and the general performance of modern computing hardware. Simply put, you aren't settling with a modern single-processor HEDT system. A dual-processor system ISN'T always faster. In fact, most of the time it won't be. HEDT processors have so many cores at higher clock speeds that the only time dual-processor workstations compete is when they are running very specialized applications; applications and usage scenarios which, by your own admission, you aren't using. Multiprocessor systems are typically used for server work because they do more tasks in parallel than workstations do. When you have 30 VMs or something going with little utilization each, you don't need higher clock speeds. What you need are cores.

Servers tend to use lower-clocked cores simply to save power, as datacenter electric bills can run hundreds of thousands of dollars or more annually. CPUs clocked at 2.40GHz (base) are a hell of a lot more attractive than CPUs clocked at 3.8GHz with 50% higher TDP. The only reason higher-clocked CPUs even exist in that market is for specialized applications that require higher levels of performance. Those are the CPUs that cost $10k or more. In the HEDT world, you can get 32c/64t (AMD) or 28c/56t (Intel) at reasonably high clocks, because the types of applications workstations run benefit from cores, yes, but they also benefit a great deal from higher clock speeds.

This is why I've been telling you that a $500 Intel Core i9 9900K is faster than your dual Xeon E5-2603 v2 CPUs. You not only get the same core count and twice the thread count, but more than twice the clock speed. Even though it's not intended for the workstation or HEDT markets, it's still way faster than what you are using. More CPUs do not necessarily make a system faster. For workstation applications, you need a balance of core count and clock speed for the best results. I do not know why this concept is so alien to you.
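The core-count-vs-clock-speed tradeoff above can be sketched with a toy Amdahl's-law model. The clock figures and the 80% parallel fraction below are assumptions for illustration, not benchmarks:

```python
def throughput(cores, clock_ghz, parallel_frac):
    """Relative throughput under a simple Amdahl's-law model:
    clock speed scaled by the parallel speedup on `cores` cores."""
    speedup = 1.0 / ((1.0 - parallel_frac) + parallel_frac / cores)
    return clock_ghz * speedup

# Dual Xeon E5-2603 v2: 2 x 4 cores @ 1.8 GHz (no turbo, no Hyper-Threading)
dual_xeon = throughput(cores=8, clock_ghz=1.8, parallel_frac=0.8)

# Core i9-9900K: 8 cores @ ~4.7 GHz all-core (assumed turbo figure)
i9 = throughput(cores=8, clock_ghz=4.7, parallel_frac=0.8)

# With equal core counts, the ratio collapses to the clock ratio: ~2.6x
print(f"dual Xeon: {dual_xeon:.1f}  i9-9900K: {i9:.1f}  ratio: {i9/dual_xeon:.2f}x")
```

The point of the sketch is that once core counts are equal, no realistic parallel fraction rescues the lower-clocked pair; the dual-socket arrangement only wins when it supplies cores the single-socket part doesn't have.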
 
I haven't had that problem as far as I know, but I've never had more than 32 GB of ECC installed at once.

That's kinda the point we've all been making. You (and me for that matter) are NOT the target audience for the parts you're talking about wanting. That 24TB system is running a HUGE in-memory database, running real time financial allocations, and slicing that data a million different ways for 1500+ financial analysts around the world. You and I won't ever need that much capability.
 
That's kinda the point we've all been making. You (and me for that matter) are NOT the target audience for the parts you're talking about wanting. That 24TB system is running a HUGE in-memory database, running real time financial allocations, and slicing that data a million different ways for 1500+ financial analysts around the world. You and I won't ever need that much capability.

Nor could we afford it even if we did, at current pricing. That's another point I've tried to make. He keeps bringing up Intel's 56-core CPUs as having more cores than what AMD offers, but at the same time those CPUs are OEM only and cost tens of thousands of dollars. Those go in systems costing not mere thousands of dollars, but many tens or even hundreds of thousands of dollars all said and done.

Look, scharfshutze009, right now AMD offers more cores and ultimately better performance than Intel does in either the HEDT or workstation/server market at prices that normal working-class people can afford. You do not need a dual-processor system and, for the sake of argument, yeah, you might be able to build one that's faster using two CPUs, but the cost will be way beyond what you can afford anytime soon. Frankly, even if you did build such a thing, it wouldn't necessarily be faster for what you're doing. You continue to disregard clock speeds and shouldn't.

Lastly, your comments about RAM are academic. You don't have anywhere near as much RAM as a desktop or HEDT system can support. The difference between 2TB and 4TB is meaningless when you can't afford 64GB or 128GB in a machine now.
 
Maybe he just doesn't understand that today's CPUs are already multiprocessor. There hasn't been a real single-processor system sold in 10 years (not counting super low-end stuff). Even mobile phones are multi-processor.

In the 1950s through the 1970s, if you needed multiple processors you had to buy multiple system units, each of which was the size of a refrigerator (if not larger).
In the 1980s through the 1990s, if you needed multiple processors you bought a motherboard with multiple sockets.
Today, multiple processors fit on ONE socket.

Nor could we afford it even if we did, at current pricing. That's another point I've tried to make. He keeps bringing up Intel's 56-core CPUs as having more cores than what AMD offers, but at the same time those CPUs are OEM only and cost tens of thousands of dollars. Those go in systems costing not mere thousands of dollars, but many tens or even hundreds of thousands of dollars all said and done.

Look, scharfshutze009, right now AMD offers more cores and ultimately better performance than Intel does in either the HEDT or workstation/server market at prices that normal working-class people can afford. You do not need a dual-processor system and, for the sake of argument, yeah, you might be able to build one that's faster using two CPUs, but the cost will be way beyond what you can afford anytime soon. Frankly, even if you did build such a thing, it wouldn't necessarily be faster for what you're doing. You continue to disregard clock speeds and shouldn't.

Lastly, your comments about RAM are academic. You don't have anywhere near as much RAM as a desktop or HEDT system can support. The difference between 2TB and 4TB is meaningless when you can't afford 64GB or 128GB in a machine now.

I think the project cost for that system was something like $100 million. But that was for dev, test, quality, and production environments, each of which has multiple failovers. I don't even know what the licensing costs for them look like.
 
Maybe he just doesn't understand that today's CPUs are already multiprocessor. There hasn't been a real single-processor system sold in 10 years (not counting super low-end stuff). Even mobile phones are multi-processor.

In the 1950s through the 1970s, if you needed multiple processors you had to buy multiple system units, each of which was the size of a refrigerator (if not larger).
In the 1980s through the 1990s, if you needed multiple processors you bought a motherboard with multiple sockets.
Today, multiple processors fit on ONE socket.




I think the project cost for that system was something like $100 million. But that was for dev, test, quality, and production environments, each of which has multiple failovers. I don't even know what the licensing costs for them look like.

Also, the systems with more than one socket today are basically for specialized use cases in workstation form, or are used for high-density applications where you need more cores than can be had in a single processor today.
 
Look, all I want to know is how ACPI (Advanced Configuration and Power Interface) is a problem for FreeBSD.


I have fixed your post for you. This is ALL you needed to ask to get answers. The rest of what you wrote doesn't matter, and is a distraction.

As far as writing your own operating system (is this the second or third year of this project?), you don't need a high-end server to write it. If you aren't looking to win benchmarks and just want to write an OS, then buy whatever is cheapest. If you want a server, buy an old used Dell server off eBay and call it good.
 
I still don't see how I could be trolling my own thread or post, and most of the people here probably don't belong in this thread. Dan_D is trying to shove AMD Fartripper down my throat in every response he makes, and the rest of you are starting to as well.

You seem to roll around your own topics with 101 extraneous bits of information... it makes it very hard to get at the core of what it is you're looking for help understanding.

Fartripper, as you like to call it, is the best HEDT chip around... and it doesn't sound like you have the funds to build an actual current shipping EPYC or Xeon server. Servers are what EPYC and Xeons are for. I am still not 100% clear on what you want this system for... a server for some unknown reason, or a high-end workstation for, again, some unknown reason. The only thing I know you have tried to do with any of your systems is code a kernel and OS from scratch, which you could do on anything. (In fact, if you are serious about coding a basic x86 kernel, a basic CPU would be preferable. Or even try coding a kernel for ARM hardware, in which case something like a Pi would be a good choice and cost you next to nothing to tinker with.)

If you really just have to have a current dual-CPU machine and you have the funds to make it happen, have fun. You can pick up a:
Supermicro Motherboard MBD-H11DSI-NT-B Dual AMD EPYC 7000-series for around $640
AMD EPYC 7451 24-Core 2.3 GHz... for around $2500 each
NEMIX RAM 128GB 2x64GB DDR4-2666 PC4-21300 4Rx4 ECC... good pricing right now at around $700
AMD Radeon Pro WX 9100 16GB... great pricing right now at $1500

I'm sure you could figure out what you want to put in it for storage... but there ya go. For under $8k you could have a dual EPYC machine with 48 cores, 128GB, and a WX 9100.
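Just to sanity-check the "under 8k" figure against the parts list above (the prices are the poster's ballpark numbers, not quotes):

```python
# Ballpark prices from the parts list above (USD, assumed, before storage)
parts = {
    "Supermicro MBD-H11DSI-NT-B board": 640,
    "2x AMD EPYC 7451 24-core":        2 * 2500,
    "NEMIX 128GB DDR4-2666 ECC kit":   700,
    "AMD Radeon Pro WX 9100 16GB":     1500,
}

total = sum(parts.values())
print(f"total before storage: ${total}")  # $7840, i.e. under $8k
```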

It sounds like it would be serious overkill and thousands more than you want to spend... and no one else anywhere in the world would use that machine as a workstation. But there it is, a dual-CPU EPYC. The truth is (and why Dan and everyone else is suggesting you consider Threadripper, if you really are in the market) a TR machine vs a dual EPYC 7451 will be half the cost and provide noticeably higher performance on anything anyone would be doing on a workstation. You can easily find a quality TR mobo for around $300, and even the 32-core Threadripper is going to be around a thousand dollars cheaper than just one EPYC chip.
 
While I don't usually defend Scharf, not for any personal reason but simply because I struggle beyond comprehension to follow his posts, I've provided examples in the past where gaming under Vulkan does actually benefit from more cores, and that includes dual server-grade Xeons.

Having said that, and I'm not saying this as a direct attack on Windows but simply as the obvious truth as reported by Kyle on [H], the Windows scheduler sucks and does struggle with NUMA multi-core implementations, which is pretty much all modern multi-core processors. My example was running a native Vulkan title under Linux; I've never tried the same scenario under Windows.
 
This is an example of your incorrect assumptions about modern HEDT systems and the general performance of modern computing hardware. Simply put, you aren't settling with a modern single-processor HEDT system. A dual-processor system ISN'T always faster. In fact, most of the time it won't be. HEDT processors have so many cores at higher clock speeds that the only time dual-processor workstations compete is when they are running very specialized applications; applications and usage scenarios which, by your own admission, you aren't using. Multiprocessor systems are typically used for server work because they do more tasks in parallel than workstations do. When you have 30 VMs or something going with little utilization each, you don't need higher clock speeds. What you need are cores.

Look, all HEDT is is a high-end desktop, so even a single-processor high-end desktop isn't a workstation, or at least not a true workstation; I went on about this with you in the AMD processors thread too. I wouldn't even consider Ryzen a true workstation processor, let alone a low-end Intel Core i-series; I'd consider those HEDT (high-end desktop), not workstation. I would consider AMD Ryzen Threadripper and Threadripper WX, along with Intel Xeon E3 115x, Xeon socket 1356, and Xeon W, actual workstation processors for single-processor systems. I didn't say it had to be dual processor, but that's what I'm putting in the Lian-Li D8000 whether you or anyone else on this forum likes it or not.

Servers tend to use lower-clocked cores simply to save power, as datacenter electric bills can run hundreds of thousands of dollars or more annually. CPUs clocked at 2.40GHz (base) are a hell of a lot more attractive than CPUs clocked at 3.8GHz with 50% higher TDP. The only reason higher-clocked CPUs even exist in that market is for specialized applications that require higher levels of performance. Those are the CPUs that cost $10k or more. In the HEDT world, you can get 32c/64t (AMD) or 28c/56t (Intel) at reasonably high clocks, because the types of applications workstations run benefit from cores, yes, but they also benefit a great deal from higher clock speeds.

I know workstation/server processors tend to use lower-clocked cores simply to save power, as datacenter electric bills can be high, but that is why Intel processors have SpeedStep; as for AMD, I don't know what the competing processor feature is. I am not building a high-end desktop (HEDT), so get that out of your head, because I'm only interested in a true workstation/server.


This is why I've been telling you that a $500 Intel Core i9 9900K is faster than your dual Xeon E5-2603 v2 CPUs. You not only get the same core count and twice the thread count, but more than twice the clock speed. Even though it's not intended for the workstation or HEDT markets, it's still way faster than what you are using. More CPUs do not necessarily make a system faster. For workstation applications, you need a balance of core count and clock speed for the best results. I do not know why this concept is so alien to you.

I am not getting a POS Core i9 for the same reason I never bought, and never will buy, a POS Intel socket 2011 Core i-series Extreme processor, regardless of how fast it is: it's not secure enough and has no processor security features. I have an Intel Xeon E5-1650 v2 (socket 2011) clocked at 3.5 GHz with Turbo Boost up to 3.9 GHz, and it's plenty fast enough and secure enough compared to a POS socket 2011 Core i-series Extreme Edition. I plan to get a Xeon W instead of a Core i9, which will have plenty of cores and be fast as well as secure enough. I want six-core E5-2600 v2 chips anyway if I can find them, but in the meantime those 2603s will have to do until I can afford something better.
 
While I don't usually defend Scharf, not for any personal reason but simply because I struggle beyond comprehension to follow his posts, I've provided examples in the past where gaming under Vulkan does actually benefit from more cores, and that includes dual server-grade Xeons.

Having said that, and I'm not saying this as a direct attack on Windows but simply as the obvious truth as reported by Kyle on [H], the Windows scheduler sucks and does struggle with NUMA multi-core implementations, which is pretty much all modern multi-core processors. My example was running a native Vulkan title under Linux; I've never tried the same scenario under Windows.

That's not why I want to try FreeBSD or TrueOS on the Lian-Li D8000 build, though; it's because I actually don't know a better operating system that can handle as many cores, as much RAM, and as much hard drive space as is possible with the Lian-Li D8000 build. But your reply helps in another way. Also, AT&T never did let the BSD community use FreeBSD for production purposes, so it wouldn't surprise me if TrueOS weren't allowed to be used for production purposes either. However, I don't know for sure if I'll end up using FreeBSD or TrueOS for production purposes anyway in the final result of this build.
 
Maybe he just doesn't understand that today's CPUs are already multiprocessor. There hasn't been a real single-processor system sold in 10 years (not counting super low-end stuff). Even mobile phones are multi-processor.

No, because they're not multiprocessors, they're multi-core processors; that's why you can still find dual- or quad-processor motherboards for workstation/server use, even if you can't find them for HEDT (high-end desktop).

In the 1950s through the 1970s, if you needed multiple processors you had to buy multiple system units, each of which was the size of a refrigerator (if not larger).
In the 1980s through the 1990s, if you needed multiple processors you bought a motherboard with multiple sockets.
Today, multiple processors fit on ONE socket.

In the 1970s I wouldn't have wanted the overweight, huge paperweight.
In the 1990s a multiprocessor system seemed attractive, but all you had at first was the Pentium Pro, and later the Pentium II Xeon and Pentium III Xeon.
In the early 2000s you finally had multiprocessor Xeon 603/604, but they cost around $3,000 each, $12,000 or more overall for a quad-processor system with four of them. And you guys said I was nuts for trying to sell an almost-complete socket 603/604 Intel Xeon system using a Supermicro SC850P4 chassis that cost me about $600 for around $900 or best offer, with at most 10 to 30 percent off for the lowest manually accepted best offer.

I think the project cost for that system was something like $100 million. But that was for dev, test, quality, and production environments, each of which has multiple failovers. I don't even know what the licensing costs for them look like.

I don't know what you're talking about here.
 
I am not getting a POS Core i9 for the same reason I never bought, and never will buy, a POS Intel socket 2011 Core i-series Extreme processor, regardless of how fast it is: it's not secure enough and has no processor security features. I have an Intel Xeon E5-1650 v2 (socket 2011) clocked at 3.5 GHz with Turbo Boost up to 3.9 GHz, and it's plenty fast enough and secure enough compared to a POS socket 2011 Core i-series Extreme Edition. I plan to get a Xeon W instead of a Core i9, which will have plenty of cores and be fast as well as secure enough. I want six-core E5-2600 v2 chips anyway if I can find them, but in the meantime those 2603s will have to do until I can afford something better.

First off, I never recommended you buy a Core i9 9900K. Nowhere in my posts have I told you to buy any consumer-level processor. I've simply used them as a basis for performance comparisons. I said a Core i9 9900K was considerably faster than your dual-CPU system and told you why it was faster. It would be faster at any task you can imagine. As for the rest, now you really are out of touch. If you buy any Intel processor-based system for the security, you are doing it wrong. Do Spectre or Meltdown ring a bell? Furthermore, what security features do you think the Xeon has that the Core i7 5960X doesn't have, using your example, that actually apply to you? Just because a processor or platform has a feature, it doesn't mean that you will be in any position to actually use it. Your Xeon E5-1650 v2 is a 6c/12t CPU. That means the Core i7 5960X would stomp the shit out of it and cost less.

The only thing the Xeon has "security wise" that the Core i7/i9 series doesn't is vPro. Chances are you don't know what that is, as you clearly don't understand that some of those features that comprise vPro are in fact in the Core processors anyway. The ones that aren't, are not things you can use in an environment like a home or home office.

vPro and many of the features that make it up are for IT departments. You deploy and run Microsoft SCCM at home? I'll bet you don't. The CPUs in your comparison are pretty much the fucking same thing for your usage. Basically, a Core iX Extreme CPU is a Xeon that's multiplier unlocked, lacks ECC support, and can't be used in dual-processor systems. Otherwise, they are identical. You have no idea what you're talking about when it comes to hardware, and the sooner you accept that, the sooner you can actually educate yourself on the subject matter.
 
You seem to roll around your own topics with 101 extraneous bits of information... it makes it very hard to get at the core of what it is you're looking for help understanding.

Fartripper, as you like to call it, is the best HEDT chip around... and it doesn't sound like you have the funds to build an actual current shipping EPYC or Xeon server. Servers are what EPYC and Xeons are for. I am still not 100% clear on what you want this system for... a server for some unknown reason, or a high-end workstation for, again, some unknown reason. The only thing I know you have tried to do with any of your systems is code a kernel and OS from scratch, which you could do on anything. (In fact, if you are serious about coding a basic x86 kernel, a basic CPU would be preferable. Or even try coding a kernel for ARM hardware, in which case something like a Pi would be a good choice and cost you next to nothing to tinker with.)

Xeon and EPYC can be used for workstations too; that's why the boards I buy say workstation/server. I don't need a basic x86 processor to code a kernel, as it's already obsolete and can't count the date correctly anymore, just as a 32-bit processor won't be able to count the date past the year 2038. Also, I'm doing just fine with two Dell Precision T1700 workstations for coding a kernel, using a live bootable operating system CD such as Ubuntu or Linux Mint and a CRU mobile rack with a drive carrier containing a hard drive with blank partitions, mounted using QEMU.
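On the year-2038 point: the limit comes from a signed 32-bit time_t, not from the CPU as such. A quick sketch of exactly where a 32-bit Unix time counter runs out:

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # largest second count a signed 32-bit time_t can hold

# The last moment a 32-bit Unix time counter can represent:
limit = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
```

One second past that, a signed 32-bit counter wraps negative; systems with a 64-bit time_t (the norm on 64-bit platforms today) push the wraparound far beyond any practical horizon.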

If you really just have to have a current dual-CPU machine and you have the funds to make it happen, have fun. You can pick up a:
Supermicro Motherboard MBD-H11DSI-NT-B Dual AMD EPYC 7000-series for around $640
AMD EPYC 7451 24-Core 2.3 GHz... for around $2500 each
NEMIX RAM 128GB 2x64GB DDR4-2666 PC4-21300 4Rx4 ECC... good pricing right now at around $700
AMD Radeon Pro WX 9100 16GB... great pricing right now at $1500

I'm sure you could figure out what you want to put in it for storage... but there ya go. For under $8k you could have a dual EPYC machine with 48 cores, 128GB, and a WX 9100.

Yes, and that's pretty much what I plan to do if I buy AMD this time instead of Intel for this build, if the used parts I have lying around don't work.

It sounds like it would be serious overkill and thousands more than you want to spend... and no one else anywhere in the world would use that machine as a workstation. But there it is, a dual-CPU EPYC. The truth is (and why Dan and everyone else is suggesting you consider Threadripper, if you really are in the market) a TR machine vs a dual EPYC 7451 will be half the cost and provide noticeably higher performance on anything anyone would be doing on a workstation. You can easily find a quality TR mobo for around $300, and even the 32-core Threadripper is going to be around a thousand dollars cheaper than just one EPYC chip.

Look, if I wanted Fartripper I'd buy it and put it in the Lian-Li A71F I'm using my Xeon E5-1650 v2 in, but not right now, and I would prefer AMD Threadripper WX if I did do something AMD for a single-processor build.
 
Look, if I wanted Fartripper I'd buy it and put it in the Lian-Li A71F I'm using my Xeon E5-1650 v2 in, but not right now, and I would prefer AMD Threadripper WX if I did do something AMD for a single-processor build.

Go the Epyc route then and have fun. I listed one board but Epyc dual cpu boards are easy to find. You can grab them right off newegg. $600-900 depending on the features... and go to it. You can throw one Epyc chip in there for now... or go crazy. Whatever makes you happy I say it's only money.
 
I am not getting a POS Core i9 for the same reason I never bought, and never will buy, a POS Intel socket 2011 Core i-series Extreme processor, regardless of how fast it is: it's not secure enough and has no processor security features. I have an Intel Xeon E5-1650 v2 (socket 2011) clocked at 3.5 GHz with Turbo Boost up to 3.9 GHz, and it's plenty fast enough and secure enough compared to a POS socket 2011 Core i-series Extreme Edition. I plan to get a Xeon W instead of a Core i9, which will have plenty of cores and be fast as well as secure enough. I want six-core E5-2600 v2 chips anyway if I can find them, but in the meantime those 2603s will have to do until I can afford something better.

Dude, those Xeons are Ivy Bridge. They use the SAME CORE and have the SAME SECURITY VULNERABILITIES as the consumer chips. In fact, since it's Ivy Bridge, it's probably got EVEN MORE vulnerabilities because it's so old. Admit it, you just want a dual-processor system because you want one, not for any actual reason other than having your heart set on it. Go ahead and get it then, but stop allowing yourself to be fooled into believing that it is in any way, shape, or form better or more secure than a modern HEDT chip, or even most modern consumer-level chips. Hell, I just built a Ryzen 2200G at work for someone, and that's literally one of their lowest-end chips, and it would STILL spank the crap out of one of those Xeons (maybe even both), and yes, even in security.
 
That's not why I want to try or use FreeBSD or TrueOS on the Lian-Li D8000 build, though; it's because I actually don't know a better operating system that can handle as many cores, as much RAM, and as much hard drive space as the Lian-Li D8000 build makes possible. But your reply helps in another way. Also, AT&T never did let the BSD community use FreeBSD for production purposes, so it wouldn't surprise me if TrueOS weren't allowed to be used for production purposes either. However, I don't know for sure whether I'll end up using FreeBSD or TrueOS for production purposes anyway in the final result of this build.

AT&T has absolutely no say in how FreeBSD is used, much less TrueOS. None of their UNIX code is in either of them.


(Fixing your busted quoting here)
No, because they're not multi-processors, they're multi-core processors; that's why you can still find dual- or quad-processor motherboards for workstation/server use, even if you can't find them for HEDT (high-end desktop).

The term multicore was created so Intel and AMD could keep selling multi-socket systems as high end solutions to SMB and Enterprise segments while still pushing out higher performing single socket systems to the mainstream segment. The first x86 multicore processors were literally two separate and independently complete CPUs placed on one package. To the OS and BIOS, they were indistinguishable from a dual-socket system using two single-core processors.
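That last point is easy to see from software: an OS just enumerates logical CPUs, without caring whether they come from cores on one package or from multiple sockets. A minimal Python sketch (purely illustrative; `os.cpu_count()` is standard library):

```python
import os

# os.cpu_count() returns the number of logical CPUs the OS exposes.
# Whether those CPUs come from one multi-core package or from several
# physical sockets is invisible at this level -- the scheduler just
# sees a flat list of processors, exactly as described above.
logical_cpus = os.cpu_count()
print(f"OS reports {logical_cpus} logical CPUs")
```

Distinguishing packages, cores, and sockets takes platform-specific tools (`lscpu` on Linux, `sysctl hw.ncpu` and the topology sysctls on FreeBSD); the generic interface simply counts processors.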




I don't know what you're talking about here.
I was responding to Dan about the cost of a recent enterprise system I had the pleasure of working on. Not all of us are just mouth-breathing gamer nerds trying to push our fan-faves on unsuspecting victims from our mother's basements. Some of us actually have professional experience with the very products you wish you could have but don't have a clue what to do with them.
 
The only thing the Xeon has "security wise" that the Core i7/i9 series doesn't is vPro. Chances are you don't know what that is, as you clearly don't understand that some of those features that comprise vPro are in fact in the Core processors anyway. The ones that aren't, are not things you can use in an environment like a home or home office.

And ECC; ECC does reduce the attack surface in relation to a couple of the latest Intel security vulnerabilities.

Dude, those Xeons are Ivy Bridge. They use the SAME CORE and have the SAME SECURITY VULNERABILITIES as the consumer chips. In fact, since it's Ivy Bridge, it probably has EVEN MORE vulnerabilities because it's so old. Admit it, you just want a dual-processor system because you want one, not for any actual reason other than having your heart set on it. Go ahead and get it then, but stop allowing yourself to be fooled into believing that it is in any way, shape, or form better or more secure than a modern HEDT chip, or even most of the modern consumer-level chips. Hell, I just built a Ryzen 2200G at work for someone, and that's literally one of their lowest-end chips, and it STILL would spank the crap out of one of those Xeons (maybe even both), and yes, even in security.

Totally this. Since the inception of the i3/i5/i7 series of processors, about the only things that have changed are the IMC and the memory technology used.
 
No, because they're not multi-processors, they're multi-core processors; that's why you can still find dual- or quad-processor motherboards for workstation/server use, even if you can't find them for HEDT (high-end desktop).

Again, you have totally missed the mark. I explained this earlier. You are having trouble with basic concepts, much less advanced ones, so I'll keep this general and simple. You need to understand that from a performance standpoint, having the processors in separate sockets impacts performance negatively in some respects. That is, two quad-core CPUs aren't necessarily better than a single 8-core CPU. There are some situations where that's not true, but they are all in the server world, and those cases depend on memory and PCIe lane configurations. I won't get into the details of that, but for desktop and most workstation applications, having all your cores local in a single CPU package is better.

Furthermore, the reason multi-CPU-socket systems exist is primarily to facilitate I/O and to increase CPU/core density in a given space. With desktop CPUs, the most I can have right now is generally 28c/56t (Intel) or 32c/64t (AMD). This is basically true of the server market as well, but let's go ahead and toss in the 56c/112t Intel example you love throwing out there. In the server world, it's more efficient to have more processing power in a smaller space. Sometimes you need more cores than any single CPU can provide. Therefore, you can use two, four, or eight of them in a single chassis to increase CPU/core density in a given rackmount space. There is also added efficiency and reduced cost from those CPUs sharing some resources rather than being inside two separate computer systems, each with its own PSU, I/O, etc.

In other words, you need to understand that two 6-core CPUs are NOT better than an equivalent single CPU with 12c/24t in one package.
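For intuition, here's a toy model of that point. All of the numbers below (the remote-access penalty, the fraction of accesses that cross the interconnect) are made-up assumptions for illustration, not measurements of any real system:

```python
# Toy NUMA model: compare 12 cores in one package against 2x6 cores
# in two sockets, where some memory accesses on the dual-socket box
# cross the socket interconnect and pay an assumed latency penalty.

CORES = 12
LOCAL_COST = 1.0        # normalized cost of a local memory access
REMOTE_PENALTY = 1.5    # assumed cost multiplier for a cross-socket access
REMOTE_FRACTION = 0.5   # naive assumption: half of accesses hit the far node

def effective_cores(sockets: int) -> float:
    """Effective core count under the toy model's average access cost."""
    if sockets == 1:
        avg_cost = LOCAL_COST  # everything is local
    else:
        avg_cost = ((1 - REMOTE_FRACTION) * LOCAL_COST
                    + REMOTE_FRACTION * LOCAL_COST * REMOTE_PENALTY)
    return CORES * (LOCAL_COST / avg_cost)

print(effective_cores(1))  # 12.0 -- all 12 cores fully effective
print(effective_cores(2))  # 9.6  -- remote accesses eat into throughput
```

Real NUMA-aware schedulers and memory allocators reduce the remote fraction considerably, which is exactly why the dual-socket cases that do win live in the server world, where software is tuned for that topology.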

I was responding to Dan about the cost of a recent enterprise system I had the pleasure of working on. Not all of us are just mouth-breathing gamer nerds trying to push our fan-faves on unsuspecting victims from our mother's basements. Some of us actually have professional experience with the very products you wish you could have but don't have a clue what to do with them.

Indeed.
 
Is this even about Linux any more?

Given the lack of a use case from the user, and the enjoyable tangent about Raspberry Pis, I suggest you buy 10 Raspberry Pi 4s and stick all of them in your Lian-Li D8000 case (does it have a window?). Cluster them and live out your multiprocessor dreams (and on Linux). ACPI concern alleviated.
 
Dude, those Xeons are Ivy Bridge. They use the SAME CORE and have the SAME SECURITY VULNERABILITIES as the consumer chips. In fact, since it's Ivy Bridge, it probably has EVEN MORE vulnerabilities because it's so old. Admit it, you just want a dual-processor system because you want one, not for any actual reason other than having your heart set on it. Go ahead and get it then, but stop allowing yourself to be fooled into believing that it is in any way, shape, or form better or more secure than a modern HEDT chip, or even most of the modern consumer-level chips. Hell, I just built a Ryzen 2200G at work for someone, and that's literally one of their lowest-end chips, and it STILL would spank the crap out of one of those Xeons (maybe even both), and yes, even in security.

What vulnerabilities? Spectre and Meltdown are security hoaxes made up by AMD fans, since Intel doesn't even acknowledge the security vulnerability anywhere on their site, and there are zero, yes zero, official patches for vulnerabilities that do not exist except in the minds of gullible people like you and anyone else who would chime in about it with a spoof website containing spam content or news about it. I don't care about your lame benchmark scores you claim would spank my dual Xeon 2011v2s either, because about all you can do with your computer is benchmark and talk smack.
 
AT&T has absolutely no say in how FreeBSD is used, much less TrueOS. None of their UNIX code is in either of them.

Bull crap, considering that's what I read about why BSD isn't allowed to be used in a production environment. What I read is that AT&T got ideas from the BSD community's creation of BSD UNIX, and Andrew Tanenbaum happened to have taken part of UNIX from Berkeley's mainframe when he created Minix and helped create BSD UNIX. Maybe not in those exact words, but basically that's what he did.

(Fixing your busted quoting here)


The term multicore was created so Intel and AMD could keep selling multi-socket systems as high end solutions to SMB and Enterprise segments while still pushing out higher performing single socket systems to the mainstream segment. The first x86 multicore processors were literally two separate and independently complete CPUs placed on one package. To the OS and BIOS, they were indistinguishable from a dual-socket system using two single-core processors.


Good, I'm still going to build a single-processor system using my Lian-Li A71F and a dual-processor system using my Lian-Li D8000, which I pretty much already have.


I was responding to Dan about the cost of a recent enterprise system I had the pleasure of working on. Not all of us are just mouth-breathing gamer nerds trying to push our fan-faves on unsuspecting victims from our mother's basements. Some of us actually have professional experience with the very products you wish you could have but don't have a clue what to do with them.

I'm not in my mother's basement like you might be. So what if you have professional experience? I've been around since exactly 1980, and I'm a 100 percent disabled United States military veteran who's actually seen a military base and what it's like to work for the government.
 
Is this even about Linux any more?

Given the lack of a use case from the user, and the enjoyable tangent about Raspberry Pis, I suggest you buy 10 Raspberry Pi 4s and stick all of them in your Lian-Li D8000 case (does it have a window?). Cluster them and live out your multiprocessor dreams (and on Linux). ACPI concern alleviated.

I'm not doing that; it sounds stupid.
 
Again, you have totally missed the mark. I explained this earlier. You are having trouble with basic concepts, much less advanced ones, so I'll keep this general and simple. You need to understand that from a performance standpoint, having the processors in separate sockets impacts performance negatively in some respects. That is, two quad-core CPUs aren't necessarily better than a single 8-core CPU. There are some situations where that's not true, but they are all in the server world, and those cases depend on memory and PCIe lane configurations. I won't get into the details of that, but for desktop and most workstation applications, having all your cores local in a single CPU package is better.

I don't care, Dan_D. I'm not buying or building an AMD fartripper system right now, and I'm not making the Lian-Li D8000 build a single-processor AMD fartripper build either.

Furthermore, the reason multi-CPU-socket systems exist is primarily to facilitate I/O and to increase CPU/core density in a given space. With desktop CPUs, the most I can have right now is generally 28c/56t (Intel) or 32c/64t (AMD). This is basically true of the server market as well, but let's go ahead and toss in the 56c/112t Intel example you love throwing out there. In the server world, it's more efficient to have more processing power in a smaller space. Sometimes you need more cores than any single CPU can provide. Therefore, you can use two, four, or eight of them in a single chassis to increase CPU/core density in a given rackmount space. There is also added efficiency and reduced cost from those CPUs sharing some resources rather than being inside two separate computer systems, each with its own PSU, I/O, etc.

In other words, you need to understand that two 6-core CPUs are NOT better than an equivalent single CPU with 12c/24t in one package.

I don't care. I can't afford a 12-core processor anytime soon and haven't been able to yet, so I had to settle for two quad-core 2603 v2s, and I'll have to make do with two six-core 2011 v2s, or v3s if not v4s or Scalables, let alone an AMD EPYC system, considering Opteron is probably out of the question anyway. AMD EPYC and Intel Xeon Scalable seem to be the most appealing products for a multi-processor system anyway. I'm not going to run a multi-core processor with two, four, or eight in a single server chassis out of the home I live in or my apartment near the university I'm attending, because I'd have to rent datacenter space for that, which is expensive. Whatever you meant by the last part, I don't care, because all my hardware is working fine and this thread is not about what processor I want to use or should use.

Indeed.
 
Try both OSes and see which one works best for you. Since there is no cost to you other than your time, I would say you have nothing to lose and experience to gain.
 
What vulnerabilities? Spectre and Meltdown are security hoaxes made up by AMD fans, since Intel doesn't even acknowledge the security vulnerability anywhere on their site, and there are zero, yes zero, official patches for vulnerabilities that do not exist except in the minds of gullible people like you and anyone else who would chime in about it with a spoof website containing spam content or news about it. I don't care about your lame benchmark scores you claim would spank my dual Xeon 2011v2s either, because about all you can do with your computer is benchmark and talk smack.
Ok, you have now gone full "tin foil hat".
 
I'm not in my mother's basement like you might be. So what if you have professional experience? I've been around since exactly 1980, and I'm a 100 percent disabled United States military veteran who's actually seen a military base and what it's like to work for the government.

And your point? I've worked Government IT multiple times, and started using Linux and Unix in 1995. This and $5 gets me a small coffee these days. However I at least know how to ask a question on a forum, and not argue with people trying to help me.
 
And your point? I've worked Government IT multiple times, and started using Linux and Unix in 1995. This and $5 gets me a small coffee these days. However I at least know how to ask a question on a forum, and not argue with people trying to help me.

My point is that being a 100 percent honorably discharged U.S. military veteran is how I can afford all this stuff. By about August, September, or October of this year I'll be out of debt on the credit cards I used to buy everything for my previous single-processor build, which uses the Intel 2011v2 1650v2 and the Gigabyte 6PXSV4 (an Intel socket 2011v2 single-processor workstation/server motherboard) in the Lian-Li A71F; I'm currently paying $518.00 per month to a debt management company to help reduce the interest rates. Also, in about 9 or 10 more months the remaining balance on the Ford C-Max Energi plug-in hybrid, at $450 a month, will be paid off to Ford Credit for what I owed after the repossession. That was the result of me screwing up managing my money and what to spend it on after trying to return a Minuteman 1760-watt E2000RTXL2U the same month: FedEx wanted $408 to return it even with the RMA, and my car payment was $450, so I'm not sure which one went through, except that my car payment didn't go through for August of 2018.
 